
How to use Arducam stereo camera to estimate depth on ROS with Visual SLAM

A 2D representation is good enough for most applications. However, some applications require information in three dimensions. An important example is robotics, where 3D information is needed to move the actuators accurately. Depth estimation is likewise a critical task for autonomous driving, which must estimate the distance to cars, pedestrians, bicycles, animals, and obstacles. A popular way to estimate depth is LiDAR; however, the hardware is expensive and LiDAR is sensitive to rain and snow, so there is a cheaper alternative: depth estimation with a stereo camera. This method is also called stereo matching.


Stereo matching aims to identify the corresponding points and retrieve their displacement to reconstruct the geometry of the scene as a depth map. As a passive method, stereo matching does not have to rely on explicitly transmitted and recorded signals such as infrared or lasers, which experience significant problems when dealing with outdoor scenes or moving objects, respectively.
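Once the matcher has recovered a pixel's disparity, depth follows directly from the rectified pinhole model Z = f * B / d. A minimal sketch with assumed values (the function name and the focal length/baseline below are illustrative, not taken from the Arducam driver):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth in meters for one pixel; None where there is no match (d <= 0)."""
    if disparity_px <= 0:
        return None
    # Rectified stereo: Z = f * B / d
    return focal_px * baseline_m / disparity_px

# e.g. a 640 px focal length and 6 cm baseline: a 64 px disparity is ~0.6 m away
depth = disparity_to_depth(64, 640, 0.06)
```

Note that depth resolution degrades with distance: the same one-pixel disparity error corresponds to a much larger depth error for far objects than for near ones.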

In this document, you will learn how to use an Arducam stereo camera to estimate depth on ROS with Visual SLAM.

Table Of Contents

1. Create a ROS workspace folder

Create a folder in the home directory to store the camera-driver-related files.

mkdir ROS_WORKSPACE && cd ROS_WORKSPACE
mkdir -p CAM/src

2. Compile camera-related files

2.1 Install dependencies

wget https://bootstrap.pypa.io/get-pip.py && python get-pip.py
sudo pip install v4l2
sudo apt install ros-melodic-camera-info-manager-py

2.2 Compile files in CAM/src

Enter the camera directory, then download and compile the camera-related files.

cd ~/ROS_WORKSPACE/CAM
git clone -b ov9281_stereo https://github.com/ArduCAM/Camarray_HAT.git
sudo mv Camarray_HAT/Jetson/ROS/arducam_stereo_camera src                
catkin_make

2.3 Source the setup.bash file

The build and devel folders appear after catkin_make; the path of setup.bash inside the devel folder needs to be added to ~/.bashrc.

source devel/setup.bash
gedit ~/.bashrc

After opening ~/.bashrc, add the path of the setup.bash file you just sourced at the end of the file, then save it.

The line looks like source ~/ROS_WORKSPACE/CAM/devel/setup.bash.

Note: When adding the line, make sure to use your own setup.bash path.


After finishing, execute the following command to reload the file.

source ~/.bashrc

3. Create a ROS workspace for depth estimation

cd ~/ROS_WORKSPACE
git clone https://github.com/ArduCAM/Nvidia_Jetson_ROS_SGM.git

4. Modify the compute capability

4.1 View the compute capability of your Jetson

cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
sudo ./deviceQuery

From the deviceQuery output, we can see that, for example, on a Jetson Xavier NX the CUDA driver version is 10.2 and the CUDA compute capability is 7.2.

4.2 Modify compute capability

Open the gpu_stereo_image_proc/CMakeLists.txt file and modify the compute capability to match your device.

Note: The rule is to drop the dot; for example, a CUDA compute capability of 7.2 becomes -arch=sm_72.
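The rule can be sketched as a small helper. The function name is hypothetical, and the lookup table lists the compute capabilities of common Jetson modules for convenience:

```python
# CUDA compute capabilities of common Jetson modules
KNOWN_JETSON_CC = {
    "Jetson Nano": "5.3",
    "Jetson TX2": "6.2",
    "Jetson Xavier NX": "7.2",
    "Jetson AGX Xavier": "7.2",
}

def arch_flag(compute_capability):
    """Build the nvcc flag from a compute-capability string, e.g. "7.2" -> "-arch=sm_72"."""
    return "-arch=sm_" + compute_capability.replace(".", "")
```

Compiling for the wrong architecture typically shows up as "no kernel image is available for execution on the device" at runtime, so it is worth double-checking this value.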

5. Compile and add the setup.bash file to ~/.bashrc

cd ~/ROS_WORKSPACE/Nvidia_Jetson_ROS_SGM/depth_sgm
catkin_make
source devel/setup.bash
gedit ~/.bashrc

Add source ~/ROS_WORKSPACE/Nvidia_Jetson_ROS_SGM/depth_sgm/devel/setup.bash at the end of ~/.bashrc, then reload it.

source ~/.bashrc

6. Run the algorithm

6.1 Run SGM algorithm

roslaunch arducam_stereo_camera arducam_stereo_camera.launch   #access camera
rosrun nodelet nodelet manager __name:=my_manager   #run the plug-in of nodelet
roslaunch gpu_stereo_image_proc libsgm_stereo_image_proc.launch manager:=/my_manager __ns:=/arducam #run SGM algorithm
rosrun image_view stereo_view stereo:=/arducam image:=image_rect #view the depth estimation image
rosrun rqt_reconfigure rqt_reconfigure    #modify the parameter to get better results
rostopic hz /arducam/disparity    #view the frame rate of output data
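rostopic hz estimates the publish rate by averaging the intervals between received messages. Roughly, the computation it reports can be sketched as follows (the function name is assumed):

```python
def mean_rate_hz(stamps):
    """Mean publish rate in Hz from a list of message receive times in seconds."""
    deltas = [b - a for a, b in zip(stamps, stamps[1:])]
    return len(deltas) / sum(deltas)

# e.g. messages received every 0.1 s -> about 10 Hz
rate = mean_rate_hz([0.0, 0.1, 0.2, 0.3])
```

If the reported rate is much lower than the camera frame rate, the disparity computation is the bottleneck; shrinking the input images or reducing the disparity range usually helps.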

Modifiable parameter description

-min_disparity: Disparity to begin the search at [pixels] (may be negative).
-disparity_range: Number of disparities to search [pixels].
-uniqueness_ratio: Filter out if the best match does not sufficiently exceed the next-best match.
-path_type: Scan line directions used during cost aggregation step.
-P1: The first parameter controlling the disparity smoothness. See below.
-P2: The second parameter controlling the disparity smoothness. The larger the values are, the smoother the disparity is. P1 is the penalty on the disparity change by plus or minus 1 between neighbor pixels. P2 is the penalty on the disparity change by more than 1 between neighbor pixels.
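For intuition, a common heuristic for choosing the two penalties (used, for example, by OpenCV's StereoSGBM; not necessarily this node's defaults) scales them with the matching window area, keeping P2 larger than P1 so big disparity jumps cost more:

```python
def sgm_penalties(window_size, channels=1):
    """Heuristic P1/P2 smoothness penalties scaled by the matching window area.

    P1 penalizes +/-1 disparity changes between neighbors; P2 penalizes larger
    jumps, so P2 > P1 makes the disparity map smoother overall.
    """
    p1 = 8 * channels * window_size ** 2
    p2 = 32 * channels * window_size ** 2
    return p1, p2

# e.g. a 5x5 window on grayscale images -> P1 = 200, P2 = 800
```

Raising P2 relative to P1 smooths over depth discontinuities; lowering both preserves edges but lets more noise through.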

6.2 Run SGBM algorithm

roslaunch arducam_stereo_camera arducam_stereo_camera.launch   #access camera
rosrun nodelet nodelet manager __name:=my_manager   #run plug-in of nodelet
roslaunch gpu_stereo_image_proc vx_stereo_image_proc.launch manager:=/my_manager __ns:=/arducam #run SGBM algorithm
rosrun image_view stereo_view stereo:=/arducam image:=image_rect #view the depth estimation image
rosrun rqt_reconfigure rqt_reconfigure    #modify the parameter to get better results
rostopic hz /arducam/disparity    #view the frame rate of output data

Modifiable parameter description

-shrink_scale: Image size will be shrunk by this factor in image processing for accelerating calculation. Must be a power of 2.
-correlation_windows_size: SAD correlation window width [pixels].
-bt_clip_value: Truncation value (must be odd) for pre-filtering algorithm. It first computes x-derivative at each pixel and clips its value to [-bt_clip_value, bt_clip_value] interval.
-ct_win_size: Specifies the census transform window size.
-hc_win_size: Specifies the hamming cost window size.
-min_disparity: Disparity to begin the search at [pixels] (Must be zero or positive).
-max_disparity: Disparity to finish search at [pixels] (Must be divisible by 4).
-path_type: Scan line directions used during cost aggregation step. Can be selected from “Individual”, “SCANLINE_CROSS”, and “SCANLINE_ALL”. 8 individual directional options above are effective only when “Individual” is selected.
-FILTER_TOP_AREA: Extra flags for SGBM algorithm. Filter cost at top image area with low gradients.
-PYRAMIDAL_STEREO: Extra flags for SGBM algorithm. Use pyramidal scheme: lower resolution imagery for nearby objects and the full resolution for far-away objects.
-uniqueness_ratio: Filter out if the best match does not sufficiently exceed the next-best match.
-P1: The first parameter controlling the disparity smoothness. See below.
-P2: The second parameter controlling the disparity smoothness. The larger the values are, the smoother the disparity is. P1 is the penalty on the disparity change by plus or minus 1 between neighbor pixels. P2 is the penalty on the disparity change by more than 1 between neighbor pixels.
-disp12MaxDiff: Maximum allowed difference (in integer pixel units) in the left-right disparity check, only available in SGBM.
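The constraints listed above can be checked before applying values in rqt_reconfigure; a minimal sketch (the function name is assumed):

```python
def check_sgbm_params(shrink_scale, bt_clip_value, min_disparity, max_disparity):
    """Validate the documented constraints on the SGBM parameters."""
    if shrink_scale < 1 or (shrink_scale & (shrink_scale - 1)) != 0:
        return False  # shrink_scale must be a power of 2
    if bt_clip_value % 2 != 1:
        return False  # bt_clip_value must be odd
    if min_disparity < 0:
        return False  # min_disparity must be zero or positive
    if max_disparity % 4 != 0:
        return False  # max_disparity must be divisible by 4
    return True

# e.g. shrink by 2, clip value 15, disparities searched from 0 to 64: valid
ok = check_sgbm_params(2, 15, 0, 64)
```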
