Underwater RPi Camera Build

Hi there, I’m working on an underwater camera that can be powered by a smart mooring. Eventually, I’d love to see this project evolve into a small, low-cost, waterproof, Bristlemouth-powered camera that could be used for real-time object detection and send alerts through the smart mooring + Spotter API.

I’m a mechanical engineer with minimal experience in embedded hardware, so I’m using this project to learn about firmware and software development (yikes!). The motivation for this project comes from my passion for free diving along California’s coast and my desire to build hardware that encourages me to spend more time underwater. But my recent dives have highlighted the devastation that the purple sea urchin population has inflicted on the once-plentiful kelp forests.

Perhaps a tool like this could help in the fight to restore these ecosystems by enabling remote monitoring. Imagine a bunch of moorings along the coast with underwater cameras that could classify and count the number of sea urchins in a given region, then send daily reports to a dashboard to quantify changes in urchin population and highlight areas that are responding better to certain restoration techniques. It’s also possible that a camera-based approach is a terrible idea for this application since the water in this region is typically murky and biofouling will be extreme, but hey, I’ll cross that bridge when I get there.

To get things started and keep things simple, I’ve decided to break this project into a few stages:

STAGE 0: learn how to set up a raspberry pi
a. use a raspberry pi zero configured to run headless

STAGE 1: use the dev kit as a proof of concept
a. get something working quickly, build + test + break ⇒ repeat
b. powered by Bristlemouth, but no data transfer over Bristlemouth (yet)
c. focus on the software + electrical bits
d. don’t worry about waterproofing

STAGE 2: build a camera module
a. make it compact and small
b. configure the Ebox to power cycle the raspberry pi
c. waterproof, designed for depths of 25m or less
d. short deployments, 1-2 weeks

STAGE 3: communicate over bristlemouth
a. add BM electronics and firmware
b. get bi-directional data transfer between RPi and Mote
c. have the Ebox tell the camera to turn on, then take a picture
d. have the camera tell the Ebox that it successfully took a picture
e. waterproof, designed for depths of 100m or less

STAGE 4: add image classification
a. create a deep learning data set
b. train a model to detect sea urchins
c. deploy a pre-trained model onto the raspberry pi
d. camera sends stats on number of urchins in a picture

Looking forward to hearing if others are working on similar projects.


Alright, and now some details from Stage 1 of the build.

Here is a sketch of my “functional diagram” to illustrate the plan with the Dev. Kit tube.

Initially I wanted to fit a rechargeable battery pack inside the tube and mount everything to a smart mooring. But the battery pack was too big to fit inside the tube. So I figured this was a good time to try and power the camera from the BM bus.

My first challenge was to set up the Raspberry Pi Zero W with the Raspberry Pi OS Lite (Debian Bullseye) distribution. I used the Raspberry Pi Imager tool to set up the SD card and preconfigure the wifi settings. I set up the RPi as a “headless” install so that I could wirelessly SSH into the RPi to update code and transfer files. This will be helpful in the future when the RPi is inside a pressure housing or fully encapsulated in potting and I can’t access the USB ports.

Pro tip: if you plan to SSH into the RPi over wifi, make sure you have the RPi Zero W or Zero 2 W. I spent two frustrating evenings trying to configure the hardware and get the device to log on to my home wifi network, only to discover that I had an older RPi Zero that didn’t have onboard wifi… doh.

Hook up the USB camera and save a picture

My first test was getting a USB camera to save an image to the local disk. I used an early generation DeepWater exploreHD camera. This is an awesome little waterproof camera rated for depths up to 400m. Later on I will switch over to a Raspberry Pi Camera Module v3 since I want to design my own pressure housing - but for now this is a great starting point.
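For reference, here’s a minimal sketch of the kind of single-frame capture test I ran, using OpenCV (the same library installed for the streaming setup below). The device index and output path are assumptions for illustration; your camera may enumerate differently.

#!/usr/bin/python3
# Minimal sketch: grab one frame from a USB camera with OpenCV and save it.
# Assumes the camera enumerates as /dev/video0 (index 0) and that the
# output folder already exists; adjust both as needed.
import cv2

cap = cv2.VideoCapture(0)      # open the first USB video device
ok, frame = cap.read()         # grab a single frame
if ok:
    cv2.imwrite("/home/pi/images/test.jpg", frame)  # save as JPEG
else:
    print("Failed to read a frame from the camera")
cap.release()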

Setup MJPG-Streamer for a USB Camera

Now that I had an RPi wired up to a USB camera it was time to set up a video stream. I followed this tutorial to set up MJPG-Streamer on the RPi. I ran into a few issues installing the correct versions of all the dependencies, but eventually I got it to work. For me, the trick was installing these versions of NumPy and OpenCV together:

pip install numpy==1.20.0
pip install opencv-python-headless==4.5.3.56

Now I could have the RPi stream video to a web server and access the video stream on my laptop. This was by far the most frustrating AND rewarding part of the project yet. It took me three evenings of reading tutorials + asking ChatGPT for help before I was finally able to get this screenshot, but totally worth it.

Quick power test to see how much juice this unit draws

It was important to quantify how much power the RPi + camera would need because the Bristlefin dev board has limitations on how much power it can supply. To run this test I used a nifty USB power monitor from Adafruit. You plug one side of the power monitor (blue thing) into a power source, then plug the load into the other end, and it will display voltage, current, and mAh while the device is running. It’s a fast + easy way to establish the baseline power consumption for a project if you don’t need micro-amp accuracy. The camera was pulling around 1.4 watts while streaming video (roughly 280 mA at 5 V).

At this point I decided to place the camera inside the Dev. Kit tube behind a clear dome port from Blue Robotics. You could also install this particular camera outside the tube since it is rated for 400m water depth, but I didn’t want to deal with a cable penetration to run the cable from inside the tube to the outside lid. I have access to a 3D printer, so it’s easier for me to 3D print a bracket that holds the camera behind the dome port and keeps all electronics inside the tube.

I ran a quick test with the camera behind the dome port to check the image quality. Here’s a good write up on the differences between flat and dome ports for underwater photography, which includes illustrations. For best results, the camera sensor should be located at the center of curvature of the dome port. My test was in air, so there was no significant difference with and without the dome. That will be different underwater.

Mount the camera to end cap + wire it up

Below is an image of the assembly with the USB camera mounted + wired to a 5V Buck Converter + RPi. During testing I found that the 5V pin on the Dev Kit board could not provide enough current to the RPi to keep it running continuously. To get around this limitation, I wired a Buck Converter to the 12V VOUT pin. The 12V regulator on the Bristlefin PCBA (big orange board) has a higher max current rating than the 5V regulator and was enough to keep the RPi running and happy.

In this iteration the USB camera is NOT at the center of curvature of the dome port. I was limited on space inside the clear tube, so I moved the camera closer to the dome to get everything to fit. Not ideal, as this will further increase image distortion, but it will work for this first test.

Close up the tube

I used plenty of electrical tape to secure the RPi to one side of the Bristlefin and then shoved the wires inside the tube, being careful not to pinch anything when installing the end cap. If this were going to be used for an underwater test, then I’d have lubed the o’rings with grease before closing up the tube.

Note on assembly: The tube has a decent volume of air inside that acts like an air spring and fights against you. Blue Robotics sells a pressure vent to make this assembly easier. Worth a try if you plan to open and close your tubes frequently.

Setup RPi for Ad-Hoc network

I wanted to be able to wirelessly SSH into the RPi from my laptop without a local internet connection, so that I was not limited to testing my hardware indoors. So I set up the RPi to act as a hotspot following this tutorial. I hit a few problems when trying to turn DHCP on/off, which resulted in me bricking the unit, but luckily I could still access the SD card and fix the issue. That was a good lesson to learn BEFORE I potted the assembly and lost access to the SD card.

Power it up + SSH in + take a picture

Back at the Sofar Pier 28 office I set up my Ebox to power the BM bus continuously and then wired up the camera. After a quick boot I logged onto the hotspot and ran the script to set up the video stream.

Here’s what I captured. Beautiful sunny day in our San Francisco office!

By this point I had discovered a few issues with my assembly that prevented me from getting a completely waterproof seal with the end caps. Unfortunately, the 3D printed bracket that I made was too thick and caused an interference between the Bristlefin PCBA and the end cap with the dome port. This made it impossible to get a sufficient seal to keep the tube dry.

But before making any changes to the hardware I decided to review my goals for STAGE 1. Everything was complete, since this stage did not need to be waterproof.

STAGE 1: use the dev kit as a proof of concept
a. [DONE] get something working quickly, build + test + break ⇒ repeat
b. [DONE] powered by Bristlemouth, but no data transfer over Bristlemouth (yet)
c. [DONE] focus on the software + electrical bits
d. [DONE] don’t worry about waterproofing

Now, on to STAGE 2!


Very nice, thanks for your posting! My Angel Sharks group must also develop a camera, so thanks for the Raspberry Pi pro tip; I almost would have used the older Pi Zero too if not for your tip. Also wondering about the urchins: I was planning on counting them in real time with object recognition (pre-trained, of course), maybe using PyTorch: PyTorch object detection with pre-trained networks - PyImageSearch

@AngelSharks glad the tip about the Pi Zero was helpful.

I am not familiar with PyTorch, thanks for sharing the link. I was planning to follow this tutorial to try and build the urchin detector and counter. I suspect one of the tricky parts is developing a lightweight neural network that can be deployed on a Pi Zero to run in near real time. That tutorial walks through developing a collection of images for training, using Keras to train a Convolutional Neural Network, and then deploying the pre-trained algorithm on the Pi for object detection.
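For anyone curious what the “deploy on the Pi” step might look like, here’s a rough sketch of running a converted TensorFlow Lite model with the tflite-runtime package. The model file name and dummy input are placeholders, and converting the Keras model to TFLite is my assumption; the tutorial may deploy it differently.

# Rough sketch of TFLite inference on a Pi Zero. "urchin_model.tflite" is a
# placeholder for a model trained elsewhere and converted to TensorFlow Lite.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="urchin_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy input tensor for illustration; in practice, load a captured image
# and resize it to the model's expected input shape.
image = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])
print("raw model output:", scores)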

I’ve never tried anything like this before, so I’m sure there will be some difficult parts once I dig into the details.

My latest version of the camera is much smaller and incorporates a 3D printed shell + epoxy to keep the electronics waterproof. I want to keep it as small as possible, roughly the size of a GoPro, so it’s easy to install at different locations in the water column. Here are some early concepts for the camera module housing.


For this stage of the project I wanted to take the Dev. Kit hardware and shrink it down into a small form factor about the same size as a GoPro camera. That would make my camera module fairly compact and easy to mount under a spotter, on the anchor, or along the smart mooring. I also wanted to get something built quickly so that I could start testing the camera underwater to see what unforeseen issues would pop up.

Here is a reminder of my goals for this stage of the project:

STAGE 2: build a camera module
a. make it compact and small
b. configure the Ebox to power cycle the raspberry pi
c. waterproof, designed for shallow depths
d. short deployments, 1-2 weeks

Research other projects

With those goals in mind, I did some research to see how other people had developed low cost underwater cameras with a Raspberry Pi. Here are a few references that I found helpful:

  1. PipeCam: the low-cost underwater camera, simple design with RPi camera module + time lapse
  2. Epoxy Pi resin.io, completely encapsulated in epoxy and could use wireless LAN + Bluetooth to communicate with Pi inside
  3. DeepPi, miniaturized camera for deep sea exploration in 3D printed resin enclosure + potting

The DeepPi article is a fantastic reference for anyone trying to encapsulate electronics for underwater use. My camera module is directly inspired by their work and assembly method. The following details are specific to my first build of the underwater camera module.

How does it work

The camera for this project is a Raspberry Pi Camera Module V3 with a 12MP sensor. This was wired to the RPi Zero via the CSI camera connector with a short flat cable. There is a red LED wired to one of the GPIOs on the RPi to function as an indicator light. The LED turns on after the RPi boots up and stays on as long as it has power. The RPi is powered by a buck converter that takes the 24V from the Bristlemouth bus and steps it down to 5V. That’s it!
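As an illustration, the indicator logic can be as small as a few lines of gpiozero. The pin number here is a placeholder, not necessarily the one I wired up.

#!/usr/bin/python3
# Sketch of an "LED ON" boot script using gpiozero. GPIO 17 is a placeholder
# for whichever pin the LED is actually wired to.
from gpiozero import LED
from signal import pause

led = LED(17)   # BCM pin numbering; assumed pin
led.on()        # LED stays lit as long as the script (and power) is running
pause()         # block forever so the process keeps the LED on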

In the future I’ll wire up a Bristlemouth Mote to the RPi (maybe using serial RX/TX or SPI for higher data transfer rates?) to enable bi-directional data transfer.

The 3D printed camera module shell

The exterior shell of the camera module was 3D printed in two halves.

Unlike the DeepPi project, I decided to use FDM 3D printed parts instead of resin prints. A few years ago I purchased an Elegoo Mars resin 3D printer so that I could print my own pressure housing just like DeepPi did. While resin printers are amazing machines and perfectly suited for this type of project, I didn’t like working with the resins and the mess they create in my small shop.

So for this build I’m experimenting with printing plastic parts using ASA filament on a Bambu Lab P1S 3D printer with 100% infill and a 0.08mm layer height to try and achieve a solid shell (as solid as possible). If you don’t want to print ABS or ASA, I have found that PLA boxes filled with potting material will last 2-4 weeks when continuously submerged in sea water - long enough to test a quick prototype if you’re in a pinch. ASA has higher UV resistance and lower water absorption compared to PLA and ABS. I’m not entirely sure how the ASA will do, but the material is marketed as being for “outdoor” DIY projects like 3D printed watering cans, so it should do at least as well as PLA, if not better.

Wire it up + assembly

Solder the wires from the Bristlemouth connectors to the input on the buck converter. Then solder wires from the output on the buck converter to the correct headers on the RPi.

The PCBAs are held in place with small M3 self-tapping screws that bite into the plastic shells. The RPi + buck converter + LED get secured to the “bottom” shell, while the camera gets mounted to the underside of the “top” shell.

I used a dab of super glue to secure the LED in place. You don’t need a lot, just enough to create a temporary seal to prevent potting material from spilling out. I like the ultra gel type of glue for this kind of work, as it is thicker and better at filling gaps.

Here is the assembly all wired up and secured inside the shells.

Add potting material to waterproof electronics

Double check: make sure the assembly is working BEFORE you add potting material, this is your last chance to catch any mistakes!

I’m using a clear epoxy, DP270, to fill the voids around the electronics and keep them waterproof. When cured this is a rigid material and it bonds well with ABS and 3D printed parts. It’s marketed as a potting compound that is safe for electronics and has a low viscosity and low exotherm.

I started by preheating the epoxy cartridge in a small oven at 150F for 15 minutes. This reduced the viscosity and helped the material flow better to eliminate bubbles. Once the RPi was covered I stopped and let this sit overnight to cure.

OK, will need to finish the rest of the story in a follow up post.


Wow, you’ve been busy! We set up our AI computer vision project to recognize marine mammals because our shellfish aquaculture partners must report all marine mammal encounters for compliance with the Marine Mammal Protection Act. We used a basic cell phone camera with a pre-trained model. In your case, you need maybe 100s of different urchin photos and to label them all; a TensorFlow Lite install will fit on the Pi 0: https://arxiv.org/pdf/1712.05877.pdf


Also, get the code working before moving to the Pi, as you need to downsize models to get everything onto the Pi 0. With a smaller number of training photos you get lots of errors; for example, everything jumping out of the water was labeled a bird.

A ‘fast’ C++ implementation of TensorFlow Lite classification on a bare Raspberry Pi zero. GitHub - Qengineering/TensorFlow_Lite_Classification_RPi_zero: TensorFlow Lite on a bare Raspberry Pi Zero


With the bottom half of the camera module cured, it was time to close up the assembly and fill the rest of the volume with potting material. I used electrical tape to secure the two shells together and cover up the seam line between the top and bottom shells. As a last step I added a small square of blue tape over the camera lens to protect it from epoxy drips.

Fill the other half

This second potting step was more difficult than I expected - in the future I’ll do this differently so that the whole interior can be filled in a single step. I had to be careful not to get epoxy onto the camera lens, else that would distort the optics. The trick was to attach a needle tip to the end of the mixing nozzle, then carefully inject epoxy into the gap between the shell and the camera PCBA. Heating the cartridge in the oven at 150F for 15 minutes prior to dispensing reduced the viscosity of the epoxy and made it flow better, which reduced the risk of air bubbles getting trapped.

A note for next time: I’ll try placing the assembly inside a pressure pot to force the epoxy into all the voids and collapse any bubbles during the curing process. In the past I’ve converted a paint spraying kit to build a low cost DIY pressure pot suitable for small projects. They can be pressurized to 40-50 PSI (check the warning labels before trying this!) and I’ve achieved good results.

If you don’t know much about them, here’s a good video that explains the differences between pressure pots and vacuum chambers and how they help to reduce bubbles and voids in your resin and potting projects.

Here is the needle tip that fits onto the end of the mixing nozzle.

I had to carefully dispense epoxy here to finish filling the interior space. It was a slow process.

Once it was filled I let everything cure for 48 hours.

Install the lens and o’ring

The next step was to install the o’ring and glass lens into the housing. This is a critical step as it creates a watertight seal. There is an undercut in the shell that holds a single-turn spiral internal retaining ring. The retaining ring locks the lens in place which compresses the o’ring to create a mechanical seal between the lens and the 3D printed shell. The following photos show an early prototype printed in grey PLA, but the assembly steps are the same.

Using a clamp to compress the lens and o’ring before inserting the spiral retaining ring.

Visual inspection to ensure there is uniform compression of the o’ring.

System level test

With everything assembled it was time for a test. I configured the Ebox to provide continuous power on the Bristlemouth bus and then wired up the camera with a penetrator and some jumpers.

The red light turned on, indicating that the RPi had successfully booted up and run my “LED ON” script. You can see the red LED in the photo below on the bottom left side of the camera.

And here is the first test using the MJPEG streaming demo app. Success!

The next step will be dunking the camera into a pool for a quick shallow water time-lapse test.


This was the first underwater test for the camera module. The camera was submerged 1 meter below the surface in a pool for four hours and set up as a time lapse, taking a picture every 60 seconds.

Set Ebox to provide continuous power

The Ebox was set up to provide continuous power to the Bristlemouth bus for this short test. I used the following commands to configure the Ebox over a serial terminal:

bridge cfg set 0 s u bridgePowerControllerEnabled 0
bridge cfg commit 0 s

You should see the Ebox reply with the following if it works:

...
63925t [BRIDGE_CFG] [INFO] Key 0: bridgePowerControllerEnabled
...
63936t [BRIDGE_CFG] [INFO] Key 0: smConfigurationCrc
63944t [BRIDGE_CFG] [INFO] Node ID: 0 Partition: system Value: 0

You can verify this worked by using a voltage meter to measure 24V between the Bristlemouth sockets.

Python script to take still image + add timestamp

I needed a script to run on the RPi to control the camera and take a picture. The code below uses the Picamera2 library to snap a picture, add a timestamp to the file name, and save the image to a specific folder. This script will get called every time you want to take a picture.

#!/usr/bin/python3
# file name: image_timestamp.py

import datetime
from picamera2 import Picamera2

# -- Set up the camera --
picam2 = Picamera2()
config = picam2.create_still_configuration()
picam2.configure(config)

# -- Set up the datetime stamp --
date = datetime.datetime.now().strftime("%m_%d_%Y_%H_%M_%S")

# -- Start the camera --
picam2.start()

# -- Save the image with a custom file name --
picam2.capture_file("/home/pi/images/" + date + ".jpg")
picam2.stop()

Have the script run on startup + every 60 seconds

Now that we have a script for taking the picture and saving the file, we need to set up the RPi to call that script once the camera module turns on, and continue to call the script every 60 seconds while the power is still on.

One way to accomplish this is by configuring crontab on the RPi. Check to confirm crontab is enabled for your user. Reference the URL below if you have issues; otherwise SSH into the RPi and run the following command.

crontab -l

Note: if you use sudo it edits a different crontab than the one for the standard logged-in user. Use the NON-sudo crontab so the file paths work.

To open crontab and edit the files that get called, type the following:

crontab -e

Set it up to run on boot, and every 1 minute, by adding the following lines to the end of the file.

# Start program on reboot
@reboot python3 /home/pi/image_timestamp.py

# Run program every 1 minute while powered
*/1 * * * * python3 /home/pi/image_timestamp.py

The last step was to run a quick test to make sure it all works and the files were being saved in the correct directory. As a final step I charged up the battery in the Ebox to ensure it was full.

Toss it in the pool

With the code working it was time to submerge the camera one meter below the surface. I let this sit in a friend’s pool for about 4 hours.

After removing the camera from the water I noticed a small amount of moisture behind the glass lens. This indicated that there was a small leak. Nothing too serious - but likely water was leaking past the o’ring because of non-uniform compression.

For this test the camera was simply dangling by the Bristlemouth jumpers.

Here you can see moisture trapped inside the camera behind the lens. Luckily this did not ruin the image quality during the test. I was able to fix the leak issue by reinstalling the lens to get more uniform compression of the entire o’ring before installing the single twist locking ring. This has not been a problem with follow up deployments.

The red LED was helpful for debugging and knowing when power was being applied to the module. There were no other signs of leaks or issues with the module.

Convert still images to a video

There are plenty of ways to take a folder of images and convert them into a video.

For this test I wanted to keep it simple, so I used QuickTime to generate a movie from an image sequence. It was fast and easy if you have a Mac. One downside to this method is that you cannot specify the frame rate - so the video is pretty short.

In the future I’d like to have a script automatically generate a time lapse for me and specify the frame rate. I’m not sure if the RPi Zero has enough processing power and memory to do this - but that’s a project for another day.
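For future reference, something like the following sketch could do it by calling ffmpeg from Python, with the playback frame rate specified. This assumes ffmpeg is installed and hasn’t been tested on the Pi Zero.

#!/usr/bin/python3
# Untested sketch: stitch the timestamped stills into a time lapse with
# ffmpeg. Assumes ffmpeg is installed and the file names sort chronologically.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "10",                # playback frame rate (frames/sec)
    "-pattern_type", "glob",
    "-i", "/home/pi/images/*.jpg",     # input image sequence
    "-c:v", "libx264",                 # H.264 output
    "-pix_fmt", "yuv420p",             # broad player compatibility
    "/home/pi/images/timelapse.mp4",
], check=True)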

Here are the images from the camera module compressed into 12 seconds of video. Each still image was approximately 1.2 MB before being converted to a video file. I was impressed with the image quality and clarity (it helps to be in a pool!). Below you can see the light change as the sun sweeps across the sky. You can also see the refraction patterns on the floor of the pool as the wind blows and forms ripples at the surface. So cool!

The sad lonely pool robot is about 10-15ft away from the camera and you can still see some of the details. Not bad for a $30 camera sensor.

Link to video here

A few still images:


Yikes, it’s been a while since the last project update and a lot has been happening.

After testing the camera module in the pool I decided to mount this under the Spotter Buoy and toss it in the harbor behind our office to see how it would stand up to salt water over a week of testing. Remember, this is an FDM 3D printed PLA housing filled with epoxy, with a Raspberry Pi Zero inside. All images are logged locally on the Pi, and I access the Pi over an ad-hoc network connection to download the files after recovering the buoy.

To my surprise, it worked!!!

The RPi underwater camera successfully gathered over 550 still images over the 8 days it was in the water. No significant water in the housing, but I did see a tiny bit of moisture near the inside surface of the o’ring gland. Likely a leak from the 3D printed housing as this is not a perfect 100% infill.

What worked?

  • Ran continually snapping photos
  • Power was not a problem; even on overcast days the power draw didn’t drain the battery
  • No biofouling or issues with buoy getting tangled for short deployment

What went wrong?

  • Some images are still showing as ZERO bytes
  • Some images show as having a file size but will not display on either Pi or Mac
    • file corruption during saving?
  • I had to delete about 140 of the 730 images due to these two errors.
  • I took photos at night, so more than half the images are just black

A few thoughts for the next test:

  1. program it to NOT take pictures at night (doh!) - see the sketch after this list
  2. have it take video every hour, 5-10 seconds, in addition to stills
  3. increase the frequency of photos; it was taking a photo every 15 min, can we do every 5 instead within the power budget?
  4. Mount this on an anchor and drop it to the bottom of the harbor; you can’t get much more than 1 meter of vis on a good day in San Francisco Bay due to the high volume of water exchange with each tide swing
  5. Could integrate another Pi inside the buoy to get video streaming over the wifi network
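For item 1, a simple clock-based guard in the capture script would probably do. Here’s a minimal sketch; the 07:00-18:00 window is a placeholder to tune for season and location.

# Minimal sketch of a daylight guard for the capture script. The capture
# window hours are placeholders; tune them for the season and location.
import datetime

def is_daylight(start_hour=7, end_hour=18):
    """Return True if the current local time falls inside the capture window."""
    return start_hour <= datetime.datetime.now().hour < end_hour

if is_daylight():
    pass  # ...run the image_timestamp.py capture logic here...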

Below are some images from pre-deployment and during the deployment. I installed a reference card hanging ~1 meter below the buoy to help determine the relative visibility. In air, you can see the card crystal clear, but in the water with all the turbidity swirling around it was hard to see the card most of the time.

Reference image in air to benchmark what the camera is capable of in ideal conditions. (requirement: must always take the reference shot with dog in photo for size calibration)

Day two, one of the more clear images

Last day in the water, murky pea-soup.


A while back I saw that another team was able to successfully transmit an image from a Raspberry Pi over cellular using their dev kit. AND they shared their source code. With a bit of help from ChatGPT and the latest UART support for bm_serial.py, I was able to cobble together a new camera module that transmits a compressed 640 x 480 pixel image over the Bristlemouth bus. The whole process takes about 15 minutes and requires ~70 cellular messages to be sent in succession (currently limited by the transmit message size), but this is good enough for an MVP.

Big thanks to @Kaitlynyau and team for posting their Subsea Imaging Solution and sharing the source code.

I made a few tweaks to their code + the bm_serial.py code to be compatible with the Pi Zero and the Blinka implementation of CircuitPython. I’m still wrapping my head around GitHub and repos (sorry, I’m an ME by training and software is still magic to me), but with the help of ChatGPT I was able to come up with a working proof of concept. I’ll post more details once I get a fully working version of the hardware up and running for time lapse in the next few weeks.
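To give a feel for the approach, here’s an illustrative sketch of the chunking step: splitting a compressed JPEG into fixed-size pieces to send as successive Bristlemouth messages. The chunk size, file name, and send_chunk() helper are hypothetical placeholders, not the actual bm_serial.py API.

# Illustrative only: split a compressed image into fixed-size chunks for
# transmission as a sequence of messages. CHUNK_SIZE and send_chunk() are
# hypothetical placeholders, not the real bm_serial.py interface.
CHUNK_SIZE = 300  # bytes per message; tune to the actual transmit limit

def send_chunk(index, total, payload):
    """Hypothetical stand-in for a bm_serial transmit call."""
    print(f"sending chunk {index + 1}/{total} ({len(payload)} bytes)")

with open("capture_640x480.jpg", "rb") as f:
    data = f.read()

chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
for i, chunk in enumerate(chunks):
    send_chunk(i, len(chunks), chunk)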

3D printed camera module housing, with RPi Zero + mote + 2x sets of Bristlemouth connectors

Fit check with all the PCBAs. I am using a “Bristleback” PCBA, which is a smaller and simpler version of the Dev Kit board that allows this to be a smaller form factor.

And here is one of the first images that I was able to transmit over cellular using modified example code from the Subsea Imaging Solution reference code. Major thanks to the UC Santa Barbara team for sharing the code and documenting everything on GitHub. I’d like to do the same in the near future.

Next steps…get a CNC’ed plastic housing and make this waterproof.


What is better than one Bristlemouth powered camera?

Three BM + Raspberry Pi camera modules transmitting data simultaneously on the same smart mooring!

Since the last post I built up two more modules. These new modules were assembled with shells that were CNC’ed from ABS plastic. These will be tougher than the blue + yellow 3D printed prototype and fully waterproof once filled with a potting compound. The parts were ordered from PCBWay, took ~3 weeks to be delivered, and cost about $150 each. The quality was not the best, so I’ll try JLCPCB for my next order based on a suggestion from @estackpole and compare cost and quality.

Photos below show the CNC’ed shells getting ready for their first underwater pressure test.


Initially I had planned to install a removable lid on the back of the camera with a static o’ring seal for waterproofing. But the lid seal leaked after 8 hrs at 150 PSI (~100m deep) in the hydrostatic pressure chamber, so I opted to encapsulate the electronics with a potting compound instead. I’d like to refine the o’ring lid concept in the future, as it would be handy to be able to access the electronics to modify the wiring for debugging - but that was not a critical requirement at this stage in the project. I suspect the leak was a combination of issues: poor surface finish on the o’ring gland + too small of a corner radius on the inner walls of the shell + insufficient o’ring compression for my assembly tolerance stack. But hey, that’s why it’s good to test.

Picture of the first shell getting pressure tested, and the water inside the housing after 8 hours at depth. This leak coincided with a 50 PSI drop on the pressure gauge. I plan to share more details about the hydrostatic pressure vessel design in a follow up post.

Once the decision was made to ditch the lid and switch to potting, it was time to finish the assembly. I installed all the boards + made the wiring connections, using my first prototype as a guide to create a quick wiring diagram - it’s not an official Altium schematic but it gets the job done. I also used a bit of hot melt glue to keep the wires laying flat against the PCBAs. You don’t want wires to stick out of the potting material, as this will create a water leak path.


It is helpful to perform a pre-potting test and a post-potting test to confirm the modules are working before and after you add the potting. Some potting compounds generate a considerable amount of heat as they cure, and that can cause issues with thermal expansion + warping + internal stress on electrical components that can lead to failures.

I had help from the awesome folks on the Sofar production team who used their whizz-bang mixing + dispensing machine to carefully fill the module and remove any air bubbles that formed at the surface. We filled the modules in one shot using a 2 part urethane with a 60 minute gel time. It has a low viscosity so that it fills all the nooks and crannies in between the circuit boards.

For the next build we plan to fill the modules in two shots as this will reduce the risk of bubbles forming which can create a leak path and allow water to damage the electronics. Even the smallest imperfection in the potting can lead to a failure so it’s critical to take your time, ensure all parts are clean and free of dirt and oils, and allow the potting to cure under the recommended temperature and humidity conditions.

Keep the modules in a safe + clean space and allow them to cure overnight. It’s best if they don’t get moved or bumped for the first 60 minutes while the urethane is gelling. A pair of Panavise holders were repurposed as assembly fixtures for this process.

Modules back at home on my desk for their post-potting testing to verify that all systems were still working after the urethane had cured: camera, RTC, UART to mote, Pi + SSH + wifi, mote to ebox. All systems good to go.


With the modules built and known to be working, it was time to run a “dry soak” test with all the hardware wired up on land. This is a good test to verify that the firmware and software are all working as expected before tossing any hardware into the water. I ran this test with all three modules wired to one ebox in my back yard and plugged into DC power. It was important to ensure the ebox had a clear view of the sky for optimal cell signal strength - past tests with the ebox indoors resulted in some of the messages failing to be transmitted.

And below are example images that were pulled from the API data and stitched back together - all three images were captured + transmitted at the same time. This uses bm_serial to enable comms between the Pi + Mote.
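For the curious, the “stitched back together” step is conceptually just concatenating the chunk payloads in order. Here’s a sketch; the hex encoding and pre-sorted ordering are assumptions about the message format, not the actual Spotter API response structure.

# Conceptual sketch of reassembling an image from downloaded message
# payloads. Hex encoding and chunk ordering are assumptions, not the
# actual API response format.
payloads = ["ffd8", "ffe0"]  # tiny placeholder hex chunks; real payloads
                             # come from the API, in chunk order

jpeg_bytes = b"".join(bytes.fromhex(p) for p in payloads)
with open("stitched.jpg", "wb") as f:
    f.write(jpeg_bytes)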

Next steps, stick the modules in the pressure vessel for a few days and verify that they work while at ~100m of depth.


First successful hydrostatic test of BM cameras, down to 100m of depth.

I ran this test October 24th, here were my notes:

  • Dry soak checks were good.
  • I’ve got a spotter + ebox outside to get better cell reception, running a 10m SM cable into the workshop to the hydrostatic tank
  • Spotter plugged into DC wall power, 100% on for now
  • Now we wait to see if they send data in ~30min

I placed a digital clock + thermometer in front of the tank. This created a time stamp in the image to tell me when each picture was taken.

And here were some of the first images I received over cellular during the test, everything worked as expected.

The cameras stayed under pressure for 5 days and worked the entire time. No leaks - yahoo!

After the hydrostatic pressure test in the workshop, it was time to deploy these in the wild.

Notes are from November 7th

Luckily we have a convenient deck off the back of our office so I could safely lower the two cameras into the San Francisco bay for this test.

I mounted the cameras back to back on a 1/2" thick piece of HDPE then used a pair of 3D printed cable clamps to attach the smart mooring to the frame. The 3D printed parts are robust enough for this afternoon test but I would not trust them for a deployment longer than a week or so.

Lowering the cameras into the water was fast and easy, the Spotter stayed on land with a clear view of the sky to broadcast the cellular data.

One of the two cameras had an issue with the RTC not reporting the correct date and time, so the timestamp encoded in the image name is not correct. In the next hardware rev I’d like to omit the RTC and rely on the GPS timestamp from the Spotter. But both cameras were still able to broadcast the photos without issue for 4 hours. All the photos looked similar.

Yum, check out the murky visibility in the harbor. Looks like pea-soup.

Now that the hardware was tested and confirmed to be working it was time to pack it up and hand them over to @estackpole for a weekend deployment.


[Notes from November 9th]

@estackpole offered to deploy the cameras for a weekend test in a shallow harbor near Alameda, CA. He was able to deploy the system from a canoe - does not get much easier than that - and the weather looked ideal for a day on the water.

Below you can see all the hardware wired up and ready for deployment. He used a small mushroom anchor to keep the system in place. We omitted a surface float since this was a short deployment in a shallow harbor and we did not expect or need the Spotter to measure ocean waves at this location.

If you are curious and want to learn more about Sofar’s recommendations for how to design and install smart moorings check out this video and this guide for more details. The docs are continually being updated with new info and are a great reference for Spotter and smart mooring specific mooring considerations.

You can see the blue tape that Eric added to the Spotter lid. It’s always a good idea to add your contact info somewhere visible on the mooring in case a curious observer paddles by and has questions.

After an hour of being in the water it was time to check the remote data using the API calls and the python parsing script. Here are some of the first few images from the two cameras transmitting the compressed VGA resolution images. You can just barely see the bottom of the harbor from one camera view, and the other has a clear shot of the bundle of cables from the smart mooring.

The mooring was in the water for the weekend. We had hoped we might see a seal in the images - but no luck. Maybe next time.


After the weekend soak Eric recovered the mooring and washed off all the gear before returning it back to Sofar HQ. Mission accomplished, and big thanks to @estackpole for sharing the photos and offering to deploy this system for this field test.


Hi, and thanks for that really awesome description of the work that you’ve been doing. As a retired engineer, I love the detail.

I was wondering if you were using Bristlemouth to actually control the cameras you’re working with, in terms of things like turning them on and off. You can find information on the project I’m working on at Elmhurst University, but in short, I’m trying to see if we can use Bristlemouth not only to gather sensor data, but to control devices as well. It brings up interesting questions, like what you would have to do to a thruster to make it “Bristlemouth aware”.

I don’t actually have a developer’s kit, but from what I’ve been able to glean, controlling a rover in realtime over a tether to a smart buoy is probably not something that’s going to work right now (or maybe ever). But I’ve been considering how I might use a setup kind of like a Mars rover, where, because of the communications delay, the rover is more or less autonomous, but it gets “mission instructions” based on sensor data returned to a controlling system. Maybe Bristlemouth could be used to send a set of instructions to a rover and let the rover’s onboard systems do the rest.

Anyway, I was curious as to whether you’d used Bristlemouth to do any control functions.

If you have any other thoughts on my ideas, I’d love to hear them. Like you, we’re coming at this from a point of zero knowledge, and just finding our way along. We’ve built one rover so far, a very simple manually controlled unit to learn some fundamentals. Our new rover is being made with 3D printed parts and will have a Raspberry Pi as well as a Pixhawk system, which is an open source quadcopter system that’s been modified to control underwater rovers. I would love to use Bristlemouth as part of this. There is a good deal of work being done that involves really expensive ROV systems, and we’re experimenting to see what can be done with much less expensive units.

Thanks. If you made it to Bristlecon, I hope you’re enjoying it. I’ve been able to use the Zoom link to listen in.

Ron