Underwater RPi Camera Build

Hi there, I’m working on an underwater camera that can be powered by a smart mooring. Eventually, I’d love to see this project evolve into a small, low-cost, waterproof Bristlemouth-powered camera that could be used for real-time object detection and send alerts through the smart mooring + Spotter API.

I’m a mechanical engineer with minimal experience in embedded hardware, so I’m using this project to learn about firmware and software development (yikes!). The motivation for this project comes from my passion for free diving along California’s coast and my desire to build hardware that encourages me to spend more time underwater. But my recent dives have highlighted the devastation that the purple sea urchin population has inflicted on the once-plentiful kelp forests.

Perhaps a tool like this could help in the fight to restore these ecosystems by enabling remote monitoring. Imagine a bunch of moorings along the coast with underwater cameras that could classify and count the number of sea urchins in a given region, then send daily reports to a dashboard to quantify changes in urchin population and highlight areas that are responding better to certain restoration techniques. It’s also possible that a camera-based approach is a terrible idea for this application since the water in this region is typically murky and biofouling will be extreme, but hey, I’ll cross that bridge when I get there.

To get things started and keep things simple I’ve decided to break this project into a few Stages:

STAGE 0: learn how to set up a raspberry pi
a. use a raspberry pi zero configured to run headless

STAGE 1: use the dev kit as a proof of concept
a. get something working quickly, build + test + break ⇒ repeat
b. powered by Bristlemouth, but no data transfer over Bristlemouth (yet)
c. focus on the software + electrical bits
d. don’t worry about waterproofing

STAGE 2: build a camera module
a. make it compact and small
b. configure the Ebox to power cycle the raspberry pi
c. waterproof, designed for depths of 25m or less
d. short deployments, 1-2 weeks

STAGE 3: communicate over bristlemouth
a. add BM electronics and firmware
b. get bi-directional data transfer between RPi and Mote
c. have the Ebox tell the camera to turn on, then take a picture
d. have the camera tell the Ebox that it successfully took a picture
e. waterproof, designed for depths of 100m or less

STAGE 4: add image classification
a. create a deep learning data set
b. train a model to detect sea urchins
c. deploy a pre-trained model onto the raspberry pi
d. camera sends stats on number of urchins in a picture

Looking forward to hearing if others are working on similar projects.

11 Likes

Alright, and now some details from Stage 1 of the build.

Here is a sketch of my “functional diagram” to illustrate the plan with the Dev. Kit tube.

Initially I wanted to fit a rechargeable battery pack inside the tube and mount everything to a smart mooring, but the battery pack was too big to fit inside the tube. So I figured this was a good time to try powering the camera from the BM bus.

My first challenge was to set up the Raspberry Pi Zero W with the Debian Bullseye Lite distribution. I used the Raspberry Pi Imager tool to set up the SD card and preconfigure the wifi settings. I set up the RPi as a “headless” install so that I could wirelessly SSH into the RPi to update code and transfer files. This will be helpful in the future when the RPi is inside a pressure housing or fully encapsulated in potting and I can’t access the USB ports.
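The Imager handled the wifi + SSH preconfiguration for me. If you ever need to do it by hand on Bullseye, you can drop a wpa_supplicant.conf into the boot partition before first boot (along with an empty file named ssh to enable the SSH server). The SSID and password below are placeholders:

```
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourWifiPassword"
}
```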

Pro tip: if you plan to SSH into the RPi over wifi, make sure you have the RPi Zero W or 2W. I spent two frustrating evenings trying to configure the hardware and get the device to log on to my home wifi network, only to discover that I had an older RPi Zero that didn’t have onboard wifi…doh.

Hook up the USB camera and save a picture

My first test was getting a USB camera to save an image to the local disk. I used an early generation DeepWater exploreHD camera. This is an awesome little waterproof camera rated for depths up to 400m. Later on I will switch over to a Raspberry Pi Camera Module v3 since I want to design my own pressure housing - but for now this is a great starting point.

Setup MJPG-Streamer for a USB Camera

Now that I had an RPi wired up to a USB camera it was time to set up a video stream. I followed this tutorial to set up MJPG-Streamer on the RPi. I ran into a few issues installing the correct versions of all the dependencies, but eventually I got it to work. For me, the trick was installing these versions of NumPy and OpenCV together:

pip install numpy==1.20.0
pip install opencv-python-headless==4.5.3.56

Now I could have the RPi stream video to a web server and access the video stream on my laptop. This was by far the most frustrating AND rewarding part of the project yet. It took me three evenings of reading tutorials + asking ChatGPT for help before I was finally able to get this screenshot, but totally worth it.

Quick power test to see how much juice this unit draws

It was important to quantify how much power the RPi + camera would need because the Bristlefin dev board has limitations on how much power it can supply. To run this test I used a nifty USB power monitor from Adafruit. You plug one side of the power monitor (the blue thing) into a power source, then plug the load into the other end, and it will display voltage, current, and mAh while the device is running. It’s a fast + easy way to establish the baseline power consumption for a project if you don’t need micro-amp accuracy. The camera was pulling around 1.4 watts while streaming video.

At this point I decided to place the camera inside the Dev. Kit tube behind a clear dome port from Blue Robotics. You could also install this particular camera outside the tube since it is rated for 400m water depth, but I didn’t want to deal with a cable penetration to route the cable through the lid into the tube. I have access to a 3D printer, so it’s easier for me to 3D print a bracket that holds the camera behind the dome port and keeps all the electronics inside the tube.

I ran a quick test with the camera behind the dome port to check the image quality. Here’s a good write-up on the differences between flat and dome ports for underwater photography, which includes illustrations. For best results, the camera sensor should be located at the center of curvature of the dome port. My test was in air, so there was no significant difference with and without the dome. That will be different underwater.

Mount the camera to end cap + wire it up

Below is an image of the assembly with the USB camera mounted + wired to a 5V Buck Converter + RPi. During testing I found that the 5V pin on the Dev Kit board could not provide enough current to the RPi to keep it running continuously. To get around this limitation, I wired a Buck Converter to the 12V VOUT pin. The 12V regulator on the Bristlefin PCBA (big orange board) has a higher max current rating than the 5V regulator and was enough to keep the RPi running and happy.
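As a sanity check on that decision, here’s the rough math, assuming the ~1.4 W draw I measured and a buck converter efficiency of around 85% (the efficiency number is an assumption; check your converter’s datasheet):

```python
# Rough input-current estimate for the 12V -> 5V buck converter.
# The 1.4 W figure is from my USB power-monitor test; the 85%
# efficiency is an assumption, not a measured value.

LOAD_W = 1.4        # measured RPi + camera draw while streaming
EFFICIENCY = 0.85   # assumed buck converter efficiency
V_IN = 12.0         # Bristlefin 12V VOUT rail
V_OUT = 5.0         # RPi supply voltage

i_out = LOAD_W / V_OUT            # current the RPi actually pulls at 5 V
i_in = LOAD_W / EFFICIENCY / V_IN # current drawn from the 12 V rail

print(f"5V side: {i_out:.2f} A, 12V side: {i_in:.2f} A")
```

The takeaway is that the 12 V rail only has to supply a bit over a tenth of an amp, well under its limit, while the 5 V pin was being asked for roughly 0.3 A continuously.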

In this iteration the USB camera is NOT at the center of curvature of the dome port. I was limited on space inside the clear tube, so I moved the camera closer to the dome to get everything to fit. Not ideal, as this will further increase image distortion, but it will work for this first test.

Close up the tube

I used plenty of electrical tape to secure the RPi to one side of the Bristlefin and then shoved the wires inside the tube, being careful not to pinch anything when installing the end cap. If this were going to be used for an underwater test, I’d have lubed the o-rings with grease before closing up the tube.

Note on assembly: the tube has a decent volume of air inside that acts like an air spring and fights against you. Blue Robotics sells a pressure vent to make this assembly easier. Worth a try if you plan to open and close your tubes frequently.

Setup RPi for Ad-Hoc network

I wanted to be able to wirelessly SSH into the RPi from my laptop without needing a local internet connection, so that I was not limited to testing my hardware indoors. So I set up the RPi to act as a hotspot following this tutorial. I hit a few problems when trying to turn DHCP on/off, which resulted in me bricking the unit, but luckily I could still access the SD card and fix the issue. That was a good lesson to learn BEFORE I potted the assembly and lost access to the SD card.
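While debugging the hotspot setup, a quick way to tell whether the Pi’s SSH server is even reachable (before blaming your SSH config) is a plain TCP connection test. A minimal sketch; the 192.168.4.1 address in the usage note is a placeholder for whatever your ad-hoc network uses:

```python
# Check whether a TCP port is reachable before attempting SSH.
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# usage from the laptop (address is a placeholder for your Pi's hotspot IP):
# port_open("192.168.4.1", 22)  # True once the Pi's SSH server is up
```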

Power it up + SSH in + take a picture

Back at the Sofar Pier 28 office I set up my Ebox to power the BM bus continuously and then wired up the camera. After a quick boot I logged onto the hotspot and ran the script to start the video stream.

Here’s what I captured. Beautiful sunny day in our San Francisco office!

By this point I had discovered a few issues with my assembly that prevented me from getting a completely waterproof seal with the end caps. Unfortunately, the 3D printed bracket that I made was too thick and caused an interference between the Bristlefin PCBA and the end cap with the dome port. This made it impossible to get a sufficient seal to keep the tube dry.

But before making any changes to the hardware I decided to review my goals for STAGE 1. Everything was complete since it did not need to be waterproof.

STAGE 1: use the dev kit as a proof of concept
a. [DONE] get something working quickly, build + test + break ⇒ repeat
b. [DONE] powered by Bristlemouth, but no data transfer over Bristlemouth (yet)
c. [DONE] focus on the software + electrical bits
d. [DONE] don’t worry about waterproofing

Now, on to STAGE 2!

3 Likes

Very nice, thanks for your post! My Angel Sharks group needs to develop a camera too, so thanks for the Raspberry Pi pro tip; I almost would have used the older Pi Zero as well. Also wondering about the urchins: I was planning on counting them in real time with object recognition (pre-trained, of course), maybe using PyTorch: PyTorch object detection with pre-trained networks - PyImageSearch

@AngelSharks glad the tip about the Pi Zero was helpful.

I am not familiar with PyTorch, thanks for sharing the link. I was planning to follow this tutorial to try and build the urchin detector and counter. I suspect one of the tricky parts is developing a lightweight neural network that can be deployed on a Pi Zero to run in near real time. That tutorial walks through developing a collection of images for training, using the tool Keras to train a Convolutional Neural Network, and then deploying the pre-trained algorithm on the Pi for object detection.

I’ve never tried anything like this before, so I’m sure there will be some difficult parts once I dig into the details.

My latest version of the camera is much smaller and incorporates a 3D printed shell + epoxy to keep the electronics waterproof. I want to keep it as small as possible, roughly the size of a GoPro, so it’s easy to install at different locations in the water column. Here are some early concepts for the camera module housing.

1 Like

For this stage of the project I wanted to take the Dev. Kit hardware and shrink it down into a small form factor about the same size as a GoPro camera. That would make my camera module fairly compact and easy to mount under a spotter, on the anchor, or along the smart mooring. I also wanted to get something built quickly so that I could start testing the camera underwater to see what unforeseen issues would pop up.

Here is a reminder of my goals for this stage of the project:

STAGE 2: build a camera module
a. make it compact and small
b. configure the Ebox to power cycle the raspberry pi
c. waterproof, designed for shallow depths
d. short deployments, 1-2 weeks

Research other projects

With those goals in mind, I did some research to see how other people had developed low cost underwater cameras with a Raspberry Pi. Here are a few references that I found helpful:

  1. PipeCam: the low-cost underwater camera, simple design with RPi camera module + time lapse
  2. Epoxy Pi resin.io, completely encapsulated in epoxy and could use wireless LAN + Bluetooth to communicate with Pi inside
  3. DeepPi, miniaturized camera for deep-sea exploration in a 3D printed resin enclosure + potting

The DeepPi article is a fantastic reference for anyone trying to encapsulate electronics for underwater use. My camera module is directly inspired by their work and assembly method. The following details are specific to my first build of the underwater camera module.

How does it work

The camera for this project is a Raspberry Pi Camera Module v3 with a 12MP sensor. This is wired to the RPi Zero via the CSI camera connector with a short flat cable. A red LED is wired to one of the GPIOs on the RPi to function as an indicator light; it turns on after the RPi boots up and stays on as long as the board has power. The RPi is powered by a buck converter that takes the 24V from the Bristlemouth bus and steps it down to 5V. That’s it!

In the future I’ll wire up a Bristlemouth Mote to the RPi (maybe using serial RX/TX or SPI for higher data-transfer rates?) to enable bi-directional data transfer.

The 3D printed camera module shell

The exterior shell of the camera module was 3D printed in two halves.

Unlike the DeepPi project, I decided to use FDM 3D printed parts instead of resin prints. A few years ago I purchased an Elegoo Mars resin 3D printer so that I could print my own pressure housings just like DeepPi did. While resin printers are amazing machines and perfectly suited for this type of project, I didn’t like working with the resins and the mess they create in my small shop.

So for this build I’m experimenting with printing parts in ASA filament on a Bambu Lab P1S 3D printer, using 100% infill and a 0.08mm layer height to get as solid a shell as possible. If you don’t want to print ABS or ASA, I have found that PLA boxes filled with potting material will last 2-4 weeks when continuously submerged in sea water, long enough to test a quick prototype if you’re in a pinch. ASA has higher UV resistance and lower water absorption than PLA and ABS. I’m not entirely sure how the ASA will hold up, but the material is marketed for “outdoor” DIY projects like 3D printed watering cans, so it should do at least as well as PLA, if not better.

Wire it up + assembly

Solder the wires from the Bristlemouth connectors to the input on the buck converter. Then solder wires from the output on the buck converter to the correct headers on the RPi.

The PCBAs are held in place with small M3 self tapping screws that bite into the plastic shells. The RPi + buck converter + LED get secured to the “bottom” shell, while the camera gets mounted to the underside of the “top” shell.

I used a dab of super glue to secure the LED in place. You don’t need a lot, just enough to create a temporary seal to prevent potting material from spilling out. I like the ultragel type glue for this type of work as it is thicker and better at filling gaps.

Here is the assembly all wired up and secured inside the shells.

Add potting material to waterproof electronics

Double check: make sure the assembly is working BEFORE you add potting material. This is your last chance to catch any mistakes!

I’m using a clear epoxy, DP270, to fill the voids around the electronics and keep them waterproof. When cured this is a rigid material and it bonds well with ABS and 3D printed parts. It’s marketed as a potting compound that is safe for electronics and has a low viscosity and low exotherm.

I started by preheating the epoxy cartridge in a small oven at 150F for 15 minutes. This reduced the viscosity and helped the material flow better to eliminate bubbles. Once the RPi was covered I stopped and let this sit overnight to cure.

OK, will need to finish the rest of the story in a follow up post.

4 Likes

Wow, you’ve been busy! We set up our AI computer vision project to recognize marine mammals as part of our shellfish aquaculture work; partners must report all marine mammal encounters for compliance with the Marine Mammal Protection Act. We used a basic cell phone camera with a pre-trained model. In your case you may need hundreds of different urchin photos, all labeled. A TensorFlow Lite install will fit on the Pi Zero: https://arxiv.org/pdf/1712.05877.pdf

1 Like

Also, get the code working before moving to the Pi, since you need to downsize models to fit everything onto the Pi Zero. With a smaller number of training photos you get lots of errors; for example, everything jumping out of the water was labeled a bird.

A ‘fast’ C++ implementation of TensorFlow Lite classification on a bare Raspberry Pi zero. GitHub - Qengineering/TensorFlow_Lite_Classification_RPi_zero: TensorFlow Lite on a bare Raspberry Pi Zero

1 Like

With the bottom half of the camera module cured, it was time to close up the assembly and fill the rest of the volume with potting material. I used electrical tape to secure the two shells together and cover up the seam line between the top and bottom shells. As a last step I added a small square of blue tape over the camera lens to protect it from epoxy drips.

Fill the other half

This second potting step was more difficult than I expected; in the future I’ll do this differently so that the whole interior can be filled in a single step. I had to be careful not to get epoxy onto the camera lens, since that would distort the optics. The trick was to attach a needle tip to the end of the mixing nozzle, then carefully inject epoxy into the gap between the shell and the camera PCBA. Heating the cartridge in the oven at 150F for 15 minutes prior to dispensing reduced the viscosity of the epoxy, helping it flow and reducing the risk of trapped air bubbles.

A note for next time: I’ll try placing the assembly inside a pressure pot to force the epoxy into all the voids and collapse any bubbles during the curing process. In the past I’ve converted a paint spraying kit to build a low cost DIY pressure pot suitable for small projects. They can be pressurized to 40-50 PSI (check the warning labels before trying this!) and I’ve achieved good results.

If you don’t know much about them, here’s a good video that explains the differences between pressure pots and vacuum chambers and how they help to reduce bubbles and voids in your resin and potting projects.

Here is the needle tip that fits onto the end of the mixing nozzle.

I had to carefully dispense epoxy here to finish filling the interior space. It was a slow process.

Once it was filled I let everything cure for 48 hours.

Install the lens and o-ring

The next step was to install the o-ring and glass lens into the housing. This is a critical step, as it creates the watertight seal. There is an undercut in the shell that holds a single-turn spiral internal retaining ring. The retaining ring locks the lens in place, which compresses the o-ring to create a mechanical seal between the lens and the 3D printed shell. The following photos show an early prototype printed in grey PLA, but the assembly steps are the same.

Using a clamp to compress the lens and o-ring before inserting the spiral retaining ring.

Visual inspection to ensure there is uniform compression of the o-ring.
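For anyone designing their own gland: the usual sanity check is the o-ring squeeze, the fraction of the cross-section that gets compressed when the lens is clamped down. All dimensions below are hypothetical placeholders, not my actual part dimensions:

```python
# O-ring squeeze sanity check. Both dimensions are hypothetical
# placeholders - measure your own o-ring and gland.
CS = 2.62           # o-ring cross-section diameter, mm
GLAND_DEPTH = 2.00  # gland depth in the 3D printed shell, mm

# Fraction of the cross-section compressed when the lens is seated.
squeeze = (CS - GLAND_DEPTH) / CS
print(f"o-ring squeeze: {squeeze:.1%}")
```

Static face seals are typically designed for somewhere in the rough range of 10-30% squeeze; check the seal manufacturer’s design guide for your specific case.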

System level test

With everything assembled it was time for a test. I configured the Ebox to provide continuous power on the Bristlemouth bus and then wired up the camera with a penetrator and some jumpers.

The red light turned on, indicating that the RPi had successfully booted up and run my “LED ON” script. You can see the red LED in the photo below on the bottom left side of the camera.

And here is the first test using the MJPEG streaming demo app. Success!

The next step will be dunking the camera into a pool for a quick shallow water time-lapse test.

1 Like

This was the first underwater test for the camera module. The camera was submerged 1 meter below the surface in a pool for four hours, set up as a time lapse taking a picture every 60 seconds.

Set Ebox to provide continuous power

The Ebox was set up to provide continuous power to the Bristlemouth bus for this short test. I used the following commands to configure the Ebox over a serial terminal:

bridge cfg set 0 s u bridgePowerControllerEnabled 0
bridge cfg commit 0 s

You should see the Ebox reply with the following if it works:

...
63925t [BRIDGE_CFG] [INFO] Key 0: bridgePowerControllerEnabled
...
63936t [BRIDGE_CFG] [INFO] Key 0: smConfigurationCrc
63944t [BRIDGE_CFG] [INFO] Node ID: 0 Partition: system Value: 0

You can verify this worked by using a voltage meter to measure 24V between the Bristlemouth sockets.

Python script to take still image + add timestamp

I needed a script to run on the RPi to control the camera and take a picture. The code below uses the Picamera2 library to snap a picture, add a timestamp to the file name, and save the image to a specific folder. This script gets called every time you want to take a picture.

#!/usr/bin/python3
# file name: image_timestamp.py

import datetime
from picamera2 import Picamera2

# -- Set up the camera --
picam2 = Picamera2()
config = picam2.create_still_configuration()
picam2.configure(config)

# -- Build the datetime stamp --
date = datetime.datetime.now().strftime("%m_%d_%Y_%H_%M_%S")

# -- Start the camera --
picam2.start()

# -- Save the image with a timestamped file name --
picam2.capture_file("/home/pi/images/" + date + ".jpg")
picam2.stop()

Have the script run on startup + every 60 seconds

Now that we have a script for taking a picture and saving the file, we need to set up the RPi to call that script once the camera module turns on, and to keep calling it every 60 seconds while the power is on.

One way to accomplish this is by configuring crontab on the RPi. First, check that crontab is enabled for your user: SSH into the RPi and run the following command (see the reference URL below if you hit issues).

crontab -l

Note: if you use sudo, it edits a different crontab than the one for the logged-in user. Use the non-sudo version so your scripts run as the standard user.

To open crontab and edit the files that get called, type the following:

crontab -e

Set it up to run on boot and every 1 minute by adding the following lines to the end of the file:

# Start program on reboot
@reboot python3 /home/pi/image_timestamp.py

# Run program every 1 minute while powered
*/1 * * * * python3 /home/pi/image_timestamp.py
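One caveat with the every-minute entry: if a capture ever takes longer than 60 seconds on the Pi Zero, runs could pile up. A simple advisory file lock makes the script skip a cycle instead of overlapping. This is a sketch, and the lock path is just a placeholder:

```python
# Sketch: prevent overlapping cron runs of the capture script using an
# advisory file lock (Linux/POSIX). The lock path is a placeholder.
import fcntl
import sys

LOCK_PATH = "/tmp/image_timestamp.lock"

def acquire_lock(path: str):
    """Return an open, exclusively-locked file, or None if already locked."""
    f = open(path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except BlockingIOError:
        f.close()
        return None

if __name__ == "__main__":
    lock = acquire_lock(LOCK_PATH)
    if lock is None:
        sys.exit(0)  # a previous run is still going; skip this cycle
    # ... take the picture here, then let the lock release on exit ...
```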

The last step was to run a quick test to make sure it all worked and the files were being saved in the correct directory. Finally, I charged the battery in the Ebox to ensure it was full.

Toss it in the pool

With the code working it was time to submerge the camera one meter below the surface. I let it sit in a friend’s pool for about 4 hours.

After removing the camera from the water I noticed a small amount of moisture behind the glass lens, which indicated a small leak. Nothing too serious, but likely water was leaking past the o-ring because of non-uniform compression.

For this test the camera was simply dangling by the Bristlemouth jumpers.

Here you can see moisture trapped inside the camera behind the lens. Luckily this did not ruin the image quality during the test. I was able to fix the leak by reinstalling the lens to get more uniform compression around the entire o-ring before installing the spiral retaining ring. This has not been a problem on follow-up deployments.

The red LED was helpful for debugging and knowing when power was being applied to the module. There were no other signs of leaks or issues with the module.

Convert still images to a video

There are plenty of ways to take a folder of images and convert them into a video.

For this test I wanted to keep it simple, so I used QuickTime to generate a movie from an image sequence. It’s fast and easy if you have a Mac. One downside to this method is that you cannot specify the frame rate, so the video is pretty short.

In the future I’d like to have a script automatically generate a time lapse for me and specify the frame rate. I’m not sure if the RPi Zero has enough processing power and memory to do this - but that’s a project for another day.
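If I do script it later, my current plan would be to shell out to ffmpeg, which does let you set the frame rate. Here’s a sketch that just builds the command; it assumes ffmpeg is installed on whatever machine runs it, and the paths are placeholders:

```python
# Sketch: build an ffmpeg command that turns a folder of timestamped
# JPEGs into a timelapse at a chosen frame rate. Assumes ffmpeg is
# installed; paths are placeholders.
import subprocess

def timelapse_cmd(image_glob: str, fps: int, out_path: str) -> list[str]:
    """Return the ffmpeg argument list for an image-sequence timelapse."""
    return [
        "ffmpeg",
        "-framerate", str(fps),   # input frame rate for the stills
        "-pattern_type", "glob",
        "-i", image_glob,         # e.g. "/home/pi/images/*.jpg"
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",    # broad player compatibility
        out_path,
    ]

# usage (on a machine with ffmpeg installed):
# subprocess.run(timelapse_cmd("/home/pi/images/*.jpg", 12, "timelapse.mp4"),
#                check=True)
```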

Here are the images from the camera module compressed into 12 seconds of video. Each still image was approximately 1.2 MB before being converted to video. I was impressed with the image quality and clarity (it helps to be in a pool!). You can see the light change as the sun sweeps across the sky, and the diffraction patterns on the floor of the pool as the wind blows and forms ripples at the surface. So cool!

The sad lonely pool robot is about 10-15ft away from the camera and you can still see some of the details. Not bad for a $30 camera sensor.

Link to video here

A few still images:

2 Likes

Yikes, it’s been a while since the last project update and a lot has been happening.

After testing the camera module in the pool I decided to mount it under the Spotter Buoy and toss it in the harbor behind our office to see how it would stand up to salt water over a week of testing. Remember, this is an FDM 3D printed PLA housing filled with epoxy, with a Raspberry Pi Zero inside. All images are logged locally on the Pi, and I access the Pi over an ad-hoc network connection to download the files after recovering the buoy.

To my surprise, it worked!!!

The RPi underwater camera successfully gathered over 550 still images over the 8 days it was in the water. No significant water in the housing, but I did see a tiny bit of moisture near the inside surface of the o-ring gland, likely a slow leak through the 3D printed housing itself since the infill is not a perfect 100%.

What worked?

  • Ran continuously, snapping photos
  • Power was not a problem, even on overcast days power draw didn’t drain battery
  • No biofouling or issues with buoy getting tangled for short deployment

What went wrong?

  • Some images showed up as ZERO bytes
  • Some images show as having a file size but will not display on either Pi or Mac
    • file corruption during saving?
  • I had to delete about 140 of the 730 images due to these two errors.
  • I took photos at night, so more than half the images are just black
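To triage those failures, a small script can at least flag the zero-byte files automatically (a decode check, e.g. with Pillow, would catch the second failure mode). A sketch; the folder path is whatever you pull the images into:

```python
# Sketch: find the zero-byte JPEGs in a folder of downloaded images.
import os

def zero_byte_images(folder: str) -> list[str]:
    """Return the .jpg filenames in `folder` that are 0 bytes on disk."""
    bad = []
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith(".jpg"):
            path = os.path.join(folder, name)
            if os.path.getsize(path) == 0:
                bad.append(name)
    return bad

# usage: zero_byte_images("/home/pi/images")
```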

A few thoughts for the next test:

  1. program it to NOT take pictures at night (doh!)
  2. have it take video every hour, 5-10 seconds, in addition to stills
  3. increase the frequency of photos; it was taking a photo every 15 min, can we do every 5 instead with the power budget?
  4. Mount this on an anchor and drop it to the bottom of the harbor; you can’t get much more than 1 meter of vis on a good day in San Francisco Bay due to the high volume of water exchange with each tide swing
  5. Could integrate another Pi inside the buoy to get video streaming over wifi network
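Idea #1 could be as simple as a guard at the top of the capture script. A sketch with a fixed, hypothetical 07:00-19:00 window; a sunrise/sunset library would do this properly:

```python
# Sketch: skip captures outside a fixed daylight window.
# The 07:00-19:00 window is a placeholder, not a real sunrise/sunset calc.
import datetime

def is_daylight(now: datetime.datetime,
                start_hour: int = 7, end_hour: int = 19) -> bool:
    """True if `now` falls within the capture window."""
    return start_hour <= now.hour < end_hour

# usage at the top of the capture script:
# if not is_daylight(datetime.datetime.now()):
#     raise SystemExit(0)  # skip this cycle, it's dark out
```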

Below are some images from pre-deployment and during the deployment. I installed a reference card hanging ~1 meter below the buoy to help determine the relative visibility. In air you can see the card crystal clear, but in the water, with all the turbidity swirling around, it was hard to see the card most of the time.

Reference image in air to benchmark what the camera is capable of in ideal conditions. (requirement: must always take the reference shot with the dog in the photo for size calibration)

Day two, one of the more clear images

Last day in the water, murky pea-soup.

2 Likes

A while back I saw that another team was able to successfully transmit an image from a Raspberry Pi over cellular using their dev kit. AND they shared their source code. With a bit of help from ChatGPT and the latest UART support for bm_serial.py, I was able to cobble together a new camera module that transmits a compressed 640 x 480 pixel image over Bristlemouth. The whole process takes about 15 minutes and requires ~70 cellular messages to be sent in succession (currently limited by the transmit message size), but this is good enough for an MVP.
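As a back-of-envelope check on that ~70 message count: a 640 x 480 JPEG compresses to somewhere around 20 kB, and if each transmit carries a few hundred bytes of payload, the chunk math lands in the same ballpark. Both numbers below are illustrative assumptions, not the real limits:

```python
# Back-of-envelope for the cellular message count. The image size and
# per-message payload below are illustrative assumptions only.
import math

def num_messages(image_bytes: int, payload_bytes: int) -> int:
    """Messages needed to send an image in fixed-size chunks."""
    return math.ceil(image_bytes / payload_bytes)

# e.g. a ~20 kB compressed 640x480 JPEG in ~300-byte chunks:
print(num_messages(20_000, 300))
```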

Big thanks to @Kaitlynyau and team for posting their Subsea Imaging Solution and sharing the source code.

I made a few tweaks to their code + the bm_serial.py code to be compatible with the Pi Zero and the Blinka implementation of CircuitPython. I’m still wrapping my head around GitHub and repos (sorry, I’m an ME by training and software is still magic to me), but with the help of ChatGPT I was able to come up with a working proof of concept. I’ll post more details once I get a fully working version of the hardware up and running for time lapse in the next few weeks.

3D printed camera module housing, with RPi Zero + mote + 2x sets of Bristlemouth connectors

Fit check with all the PCBAs. I am using a “Bristleback” PCBA, which is a smaller and simpler version of the Dev Kit board that allows for a smaller form factor.

And here is one of the first images that I was able to transmit over cellular using modified example code from the Subsea Imaging Solution reference. Major thanks to the UC Santa Barbara team for sharing the code and documenting everything on GitHub. I’d like to do the same in the near future.

Next steps…get a CNC’ed plastic housing and make this waterproof.

4 Likes