
Depth Mapping on Arducam Stereo Camera HAT with OpenCV

Introduction

The Arducam Stereo Camera HAT is a revolutionary product that enables you to use two synchronized Pi camera modules on a single standard Raspberry Pi. Such a stereo camera setup can be used in many applications that were hard to realize in the past. This documentation demonstrates depth mapping with the Arducam Stereo Camera HAT on a Raspberry Pi 3, with the help of our OV5647 Stereo Camera Board.

Credits to StereoPi, on which this tutorial and the code used are based. Thanks to them for the work they have done.

Note

All of the following examples are written for OpenCV novices and are not intended to be used in real production.


Video Tutorial

We also provide a video demo of this tutorial, and you can check that below:

Hardware setup

Arducam Stereo Camera HAT, OV5647 Stereo Camera Board, and Raspberry Pi 3B are used.

Software Prerequisite

  • Raspbian Buster (kernel 4.19.57-v7+), latest version recommended
  • Python 3.5.3
  • OpenCV 3.4.4 (pre-compiled, ‘pip’ from Python Wheels)
  • StereoVision lib 1.0.3 (https://github.com/erget/StereoVision)

Installing Dependencies and Libraries

  • Install the latest Raspbian
  • sudo pip3 install opencv-python
  • sudo pip3 install stereovision
  • sudo pip3 install matplotlib
  • git clone https://github.com/ArduCAM/MIPI_Camera.git

Notice

All of the scripts support keystroke processing: press the Q key to stop them. If Ctrl+C is used to stop a script, the results could be incorrect.

Step 1: Capture Images

The 1_test.py script will be used in this step.

Open the console and go to the examples folder:

cd MIPI_Camera/RPI/stereo_depth_demo

Console Command:

python 1_test.py

Select the camera mode after running this program, and then you will see a preview window. Press Q to stop the process and save the last image captured, which will be used in the next scripts for tuning the parameters.

It helps you make sure the hardware is working and captures the first picture. The camera mode and resolution (width and height) will be recorded for later use.

How the first script works

The video above also shows how the first script works.

Note

By default, the script will scale the width to 1280 while keeping the aspect ratio, regardless of the resolutions chosen.
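The scaling rule in the note above is simple arithmetic: fix the width at 1280 and derive the height from the sensor's aspect ratio. Here is a minimal sketch of that calculation; the function name is ours, not from the Arducam scripts.

```python
# Sketch of the preview scaling described above: the width is fixed at 1280
# and the height follows from the sensor's aspect ratio. The rounding to an
# even number is our assumption, since many video pipelines require even
# frame dimensions.
def scaled_preview_size(sensor_width, sensor_height, target_width=1280):
    scale = target_width / sensor_width
    height = int(round(sensor_height * scale / 2) * 2)
    return target_width, height

print(scaled_preview_size(2592, 1944))  # full 4:3 OV5647 frame -> (1280, 960)
```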

Step 2: Collect Images for Calibration

A perfect depth map sets a high standard for the camera setup: two identical cameras with their optical, vertical, and horizontal axes all in parallel. However, that perfection cannot be reached in real life, so calibration is required. The calibration software takes sample pictures of a specific object from the two cameras, analyzes them, and generates parameters for the correction. A chessboard is used in this demo.

The program starts a 5-second countdown for positioning the object and then takes a picture. By default this process is looped 30 times, and the object should be repositioned between captures.

Note

Try to hold the object steady and keep it within both cameras' sight.

Console command:

python 2_chess_cycle.py
Collect Images for Calibration

See the video above for more on this step.

The 30 stereoscopic photos will then be saved in the /scenes folder.
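The capture cycle described above (count down, capture, repeat) can be sketched as follows. `capture_frame` is a hypothetical stand-in for the actual camera call in 2_chess_cycle.py, and the parameter names are ours.

```python
import time

# Sketch of the capture cycle: a countdown before each shot, repeated a
# fixed number of times. The real script saves each frame under ./scenes;
# here we just collect the captured frames in a list.
def capture_cycle(capture_frame, total_photos=30, countdown=5, tick=time.sleep):
    saved = []
    for _ in range(total_photos):
        for _ in range(countdown):
            tick(1)                    # one-second countdown steps
        saved.append(capture_frame())  # reposition the chessboard between shots
    return saved
```

Injecting `tick` makes the loop testable without actually sleeping.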

Step 3: Separate Captured Images

3_pairs_cut.py separates the photos captured in the last step into left and right halves and saves them in the /pairs folder. The separation could be done on the fly without saving, but saving the pairs is helpful for later use: image pairs from different capture series can be kept, you can process them with your own code or put other stereoscopic images in this folder, and you can go through every stereo pair before the separation to sort out bad frames. Console Command:

python 3_pairs_cut.py
Separate Captured Images

See the video above for more on this step.
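Since a stereo frame from the HAT is a single side-by-side image, the separation is just a slice at half the width. This mirrors what 3_pairs_cut.py does, though the function name here is ours.

```python
import numpy as np

# Split a side-by-side stereo frame into its left and right halves.
def split_pair(frame):
    h, w = frame.shape[:2]
    half = w // 2
    left, right = frame[:, :half], frame[:, half:]
    return left, right

pair = np.zeros((480, 1280, 3), dtype=np.uint8)  # side-by-side RGB frame
left, right = split_pair(pair)                   # two 640-pixel-wide halves
```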

Step 4: Calibration

4_calibration.py loads all previously saved pairs and calculates the correction matrices. It searches for the chessboard in each photo and moves on if the chessboard is not found, so bad photos in the series won't break the script. After the calculation is finished, it rectifies the last image pair and shows the resulting "fixed" images, so you can check the quality of the calibration. This step might take 1-2 minutes.

Calibration

See the video above for more on this step.

Console Command:

python 4_calibration.py 

The calibration script doing its job:

Note

1. If the calibration result is incorrect, delete the pictures under /scenes and /pairs and capture them again.
2. Modify the chessboard parameters in this script if a different chessboard is used.
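The "skip bad frames" behavior described above can be sketched as a simple filtering loop. `find_corners` below is a hypothetical stand-in for OpenCV's `cv2.findChessboardCorners`, which returns a found flag along with the detected corners; pairs where either half lacks a detected chessboard are simply skipped, so bad photos never break the calibration run.

```python
# Keep only the stereo pairs where the chessboard was found in both halves.
# find_corners(img) -> (found: bool, corners) mirrors the shape of
# cv2.findChessboardCorners' return value.
def usable_pairs(pairs, find_corners):
    good = []
    for left, right in pairs:
        ok_left, _ = find_corners(left)
        ok_right, _ = find_corners(right)
        if ok_left and ok_right:
            good.append((left, right))  # feed these to the calibrator
    return good
```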

Step 5: Depth Map Tuning

5_dm_tune.py loads the image saved by the first script and the calibration results from step 4, and shows a depth map with an interface for fine-tuning. Press the Load Settings button to use the default parameters. Before tuning, it is recommended to take a photo with three objects at different distances to help you find the right settings. The closest objects will be shown in red, and the furthest in dark blue.

See the video above for more on this step.

python 5_dm_tune.py 

Here is how it looks:

depth map tuning result

Step 6: Real-Time Depth Map Using Video

6_dm_video.py uses the results from the steps above to generate a depth map in real time.

See the video above for more on this step.

Console Command:

python 6_dm_video.py

The result:

Real-Time Depth Map Using Video Result
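The real-time pipeline ties the previous steps together: grab a side-by-side frame, split it, rectify both halves with the saved calibration, and compute a depth map. A skeleton of that loop is sketched below; `rectify` and `compute_depth` are hypothetical stand-ins for the calibration remap and the block-matching step.

```python
import numpy as np

# Skeleton of the real-time loop: split each side-by-side frame, rectify
# both halves, then compute a depth map. Yields one depth map per frame.
def depth_stream(frames, rectify, compute_depth):
    for frame in frames:
        half = frame.shape[1] // 2
        left, right = frame[:, :half], frame[:, half:]
        yield compute_depth(rectify(left), rectify(right))
```

Writing it as a generator keeps the capture source, the rectification, and the matcher swappable for testing.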
