
How to use Arducam stereo camera to perform location with Visual SLAM


1. What is Arducam stereo camera?

Stereo vision systems give robots depth perception, allowing machines and systems to develop an understanding of their environment by estimating the relative distance of objects from visual cues. Hence, stereo vision is used in many areas of robotics, such as self-driving cars, drones for rescue missions, and robots for remote surgery.

Arducam released this Stereo Camera MIPI Module Series for Raspberry Pi and Jetson Nano/Xavier NX. The modules connect directly to the MIPI CSI-2 connectors of those boards and run with a V4L2 camera driver. They offer better flexibility to be integrated into your own hardware design or to run your own algorithms on embedded systems for applications like depth sensing, 3D mapping, SLAM, etc.

Arducam 1MP Stereo Camera MIPI Module is a stereo camera module with two synchronized monochrome global shutter OV9281 image sensors (2x1MP).

Arducam 2MP Stereo Camera MIPI Module is a stereo camera module with two synchronized monochrome global shutter OV2311 image sensors (2x2MP).

In both modules, the monochrome sensors' excellent detail and sensitivity enable higher accuracy and frame rates when extracting depth information.

2. What is Visual SLAM?

Visual simultaneous localization and mapping (SLAM) is quickly becoming an important advancement in embedded vision, with many different possible applications. Commercially speaking, the technology is still in its infancy. However, it's a promising innovation that addresses the shortcomings of other vision and navigation systems and has great commercial potential.

Visual SLAM is a specific type of SLAM system that leverages 3D vision to perform location and mapping functions when neither the environment nor the location of the sensor is known.

If you want to learn more about VSLAM, such as How Does Visual SLAM Technology Work, you can click here!


3. Use Arducam stereo camera to perform location

Note: This document assumes the Melodic version of ROS is installed. The reference platform for the positioning algorithm is the Xavier NX; the steps for other Jetson platforms differ only slightly.

For ROS Melodic installation, refer to: http://wiki.ros.org/melodic/Installation/Ubuntu

3.1. Create a folder of ROS workspace

Create a folder in the Home directory to store the camera driver related files.

mkdir ROS_WORKSPACE && cd ROS_WORKSPACE
mkdir -p CAM/src

3.2. Compile camera related files

3.2.1 Install dependencies

wget https://bootstrap.pypa.io/get-pip.py && python get-pip.py
sudo pip install v4l2
sudo apt install ros-melodic-camera-info-manager-py
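As an optional sanity check (not part of the original steps), you can confirm both dependencies are visible before moving on:

python -c "import v4l2; print('v4l2 OK')"   # the pip-installed v4l2 module
rospack find camera_info_manager_py         # the ROS package installed via apt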

3.2.2 Compile camera files in CAM/src

Enter the camera directory, then download and compile the camera driver files.

cd ~/ROS_WORKSPACE/CAM
git clone -b ov9281_stereo https://github.com/ArduCAM/Camarray_HAT.git 
sudo mv Camarray_HAT/Jetson/ROS/arducam_stereo_camera src
catkin_make

3.2.3 Source the bash file

The build and devel folders appear after catkin_make, and the path of the setup.bash inside the devel folder needs to be added to bashrc.

source devel/setup.bash
gedit ~/.bashrc

After opening bashrc, add the setup.bash path at the end of the file, then save it.

The setup.bash line looks like: source ~/ROS_WORKSPACE/CAM/devel/setup.bash.

Note: When adding the line, make sure to use your own setup.bash path.
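If you prefer a one-liner to editing the file in gedit, the following appends the same line (using the example path above; substitute your own):

echo "source ~/ROS_WORKSPACE/CAM/devel/setup.bash" >> ~/.bashrc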


After finishing, you can execute the following command.

source ~/.bashrc

3.3. View the data released by the camera

roslaunch arducam_stereo_camera arducam_stereo_camera.launch
rostopic list
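rostopic list should include the stereo topics used later in this guide, such as /arducam/left/image_raw and /arducam/right/image_raw. As an optional check that frames are actually flowing, measure the publish rate of one of them:

rostopic hz /arducam/left/image_raw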

3.4. Calibrate the camera

Note: If you choose to use the pre-calibrated default parameters, you can skip to the next chapter. If you choose to calibrate the parameters yourself, follow the steps in this section (3.4. Calibrate the camera), or follow the video.

This section introduces how to use the Kalibr tool to calibrate. You can also refer to here to learn more!

3.4.1 Install Kalibr

Install dependencies

Replace the ROS version in the package names with your own [the tester uses Melodic]:

The Melodic-related dependencies are as follows:

  • ros-melodic-vision-opencv 
  • ros-melodic-image-transport-plugins
  • ros-melodic-cmake-modules

sudo apt-get install python-setuptools python-rosinstall ipython libeigen3-dev libboost-all-dev doxygen libopencv-dev ros-melodic-vision-opencv ros-melodic-image-transport-plugins ros-melodic-cmake-modules software-properties-common libpoco-dev python-matplotlib python-scipy python-git python-pip ipython libtbb-dev libblas-dev liblapack-dev python-catkin-tools libv4l-dev

Install OpenCV 2.4.13

Download OpenCV 2.4.13, unzip the file, and save it in the Home directory.

cd ~/opencv-2.4.13.6
mkdir build && cd build
cmake -D BUILD_NEW_PYTHON_SUPPORT=OFF -D WITH_OPENCL=OFF -D WITH_OPENMP=ON -D INSTALL_C_EXAMPLES=OFF -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D WITH_QT=OFF -D WITH_OPENGL=OFF -D WITH_VTK=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_TESTS=OFF -D WITH_CUDA=OFF -D BUILD_opencv_gpu=OFF ..
make -j4

Download and install eigen and igraph

You can click here to download eigen and save it to the Home directory.

cd ~/eigen-git-mirror-2.0.16
mkdir build && cd build
cmake ..
sudo make install
pip install python-igraph==0.7.1.post2 #(aarch64--Jetson)
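Optionally, verify that the Python bindings import cleanly (if this fails, see the igraph entry in 3.4.3 Troubleshooting):

python -c "import igraph; print(igraph.__version__)"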
Create a ROS workspace and download the code.
mkdir -p ~/kalibr_workspace/src
cd ~/kalibr_workspace/src
git clone https://github.com/ArduCAM/kalibr.git 
Compile Kalibr
cd ~/kalibr_workspace
catkin build -DCMAKE_BUILD_TYPE=Release -j4
Add setup.bash
source devel/setup.bash
gedit ~/.bashrc   # add the line: source ~/kalibr_workspace/devel/setup.bash
source ~/.bashrc

3.4.2 Use Kalibr to calibrate the camera

Record bag files; this step uses 5 terminals.

Download and print the checkerboard at 1:1 scale.

Download the .yaml configuration file.
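If you need to create the target file by hand instead, the sketch below writes a minimal checkerboard description matching the 11x8, 0.02 m board used by the calibration command later in this section. The field names follow Kalibr's checkerboard target format; treat this as a starting point and verify the values against your printed board:

cat > ~/checkerboard.yaml << 'EOF'
target_type: 'checkerboard'   # Kalibr target type
targetCols: 11                # internal corners per row
targetRows: 8                 # internal corners per column
rowSpacingMeters: 0.02        # square height (20 mm)
colSpacingMeters: 0.02        # square width (20 mm)
EOF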

roslaunch arducam_stereo_camera arducam_stereo_camera.launch
rosrun topic_tools throttle messages /arducam/left/image_raw 4.0 /left
rosrun topic_tools throttle messages /arducam/right/image_raw 4.0 /right

If "advertised as /left (/right)" appears in the terminal output, the data is published successfully.


Use the ROS calibration tool for visualization.

rosrun camera_calibration cameracalibrator.py --approximate 0.1 --size 11x8 --square 0.02 right:=/arducam/right/image_raw left:=/arducam/left/image_raw right_camera:=/arducam/right left_camera:=/arducam/left

Point the chessboard at the camera, then run the following command to start recording the .bag data. While recording, move the camera left and right, up and down, and tilt it to ensure a reliable calibration.

rosbag record -O stereo_calibra.bag /left /right
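Optionally, confirm the bag actually contains both throttled topics before calibrating:

rosbag info ~/stereo_calibra.bag   # expect /left and /right with similar message counts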

Calibrate based on the recorded data. For more about calibration targets, refer to here!

You can change the parameter locations according to your own situation; the following reference command assumes the .bag data file and the .yaml configuration file are in the Home directory.

rosrun kalibr kalibr_calibrate_cameras --bag ~/stereo_calibra.bag --topics /left /right --models omni-radtan omni-radtan --target ~/checkerboard.yaml

The wait will be long, so you can take a break and have a cup of coffee before proceeding to the next step.


You have now successfully completed the calibration. The camera's intrinsic parameters can be updated according to the generated .yaml file.


For the camera model and distortion model, refer to here!

3.4.3 Troubleshooting

a. ImportError: cannot import name NavigationToolbar2Wx

You only need to change NavigationToolbar2Wx in PlotCollection.py to NavigationToolbar2WxAgg.
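A one-line equivalent, assuming PlotCollection.py lives somewhere under your kalibr checkout (locate it first if the path differs):

sed -i 's/NavigationToolbar2Wx\b/NavigationToolbar2WxAgg/' $(find ~/kalibr_workspace/src/kalibr -name PlotCollection.py)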

b. ImportError: no module named igraph

You need to execute the following commands to install igraph.

sudo pip install python2.7-igraph
pip install python-igraph==0.7.1.post2

c. ImportError: no module named image

You need to change import Image in kalibr_workspace/aslam_offline_calibration/kalibr/python/kalibr_camera_calibration/MulticamGraph.py to from PIL import Image.

d. ImportError: no matching function for call to 'getOptimalNewCameraMatrix'

Specify the OpenCV version as opencv2 when compiling:
In the files Kalibr/aslam_cv/aslam_cameras_april/CMakeLists.txt, Kalibr/aslam_offline_calibration/ethz_apriltag2/CMakeLists.txt, and Kalibr/opencv2_catkin/cmake/opencv2-extras.cmake.in, change the OpenCV path to the path of the opencv2 you compiled yourself.


e. Extracting calibration target corners gets stuck during calibration

Change multithreading=multithreading to multithreading=False in the kalibr_calibrate_cameras.py file.
Change multithreading=multithreading to multithreading=False in the IccSensors.py file.


3.5. Algorithm-related dependencies installation

3.5.1 Install Eigen 3.3.90

git clone https://github.com/eigenteam/eigen-git-mirror
cd eigen-git-mirror
mkdir build && cd build
cmake ..
make -j4
sudo make install

3.5.2 Install Ceres 2.0.0

You can refer to here to learn more!

sudo apt-get install liblapack-dev libsuitesparse-dev libcxsparse3 libgflags-dev libgoogle-glog-dev libgtest-dev
git clone https://ceres-solver.googlesource.com/ceres-solver
cd ceres-solver
mkdir build && cd build
cmake ..
sudo make install

3.5.3 Install OpenCV 3.4.14

Install OpenCV-related dependencies
sudo apt-get install build-essential pkg-config
sudo apt-get install cmake libavcodec-dev libavformat-dev libavutil-dev libglew-dev libgtk2.0-dev libgtk-3-dev libjpeg-dev libpng-dev libpostproc-dev libswscale-dev libtbb-dev libtiff5-dev libv4l-dev libxvidcore-dev libx264-dev qt5-default zlib1g-dev libgl1 libglvnd-dev pkg-config libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev mesa-utils
sudo apt-get install python2.7-dev python3-dev python-numpy python3-numpy
Resolve OpenGL conflicts
cd /usr/lib/aarch64-linux-gnu/
sudo ln -sf libGL.so.1.0.0 libGL.so
sudo gedit /usr/local/cuda/include/cuda_gl_interop.h
# Comment out lines 62~68 of cuda_gl_interop.h as shown:
//#if defined(__arm__) || defined(__aarch64__)
//#ifndef GL_VERSION
//#error Please include the appropriate gl headers before including cuda_gl_interop.h
//#endif
//#else
 #include <GL/gl.h>
//#endif
Check the compute capability of the Jetson you are using
cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
sudo ./deviceQuery

After execution, deviceQuery reports the installed CUDA version (10.2 on the Jetson NX) and the compute capability (7.2).

Compile and install OpenCV

It is recommended that you install OpenCV in the Home directory.

Click here to download OpenCV, and unzip in the Home directory.

Note: When you execute the cmake command, change CUDA_ARCH_BIN to the compute capability of your own platform; for the NX platform it is 7.2.

cd ~/opencv-3.4.14
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_CUDA=ON -D CUDA_ARCH_BIN=7.2 -D CUDA_ARCH_PTX="" -D ENABLE_FAST_MATH=ON -D CUDA_FAST_MATH=ON -D WITH_CUBLAS=ON -D WITH_LIBV4L=ON -D WITH_GSTREAMER=ON -D WITH_GSTREAMER_0_10=OFF -D WITH_QT=ON -D WITH_OPENGL=ON -D CUDA_NVCC_FLAGS="--expt-relaxed-constexpr" -D WITH_TBB=ON ..
make -j4
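Optionally, confirm the build produced the expected version before continuing (the opencv_version helper binary is built by default):

~/opencv-3.4.14/build/bin/opencv_version   # should print 3.4.14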

3.5.4 Compile and install cv_bridge

cd ~/ROS_WORKSPACE
git clone https://github.com/ArduCAM/Nvidia_Jetson_ROS_SLAM_VINS.git
cd ~/ROS_WORKSPACE/Nvidia_Jetson_ROS_SLAM_VINS/cv_bridge_melodic
catkin_make                               

Source the bash file, and add the setup.bash path to bashrc.

source devel/setup.bash
gedit ~/.bashrc   # add the path of this setup.bash at the end of the file
source ~/.bashrc

3.5.5 Compile the algorithm

cd ~/ROS_WORKSPACE/Nvidia_Jetson_ROS_SLAM_VINS/VINS_GPU
catkin_make

Source the bash file and add the setup.bash path to bashrc.

source devel/setup.bash
gedit ~/.bashrc   # add the setup.bash path
source ~/.bashrc
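If you prefer not to edit the file in gedit, a one-liner can append the same path (assumed from the cd command above; substitute your own):

echo "source ~/ROS_WORKSPACE/Nvidia_Jetson_ROS_SLAM_VINS/VINS_GPU/devel/setup.bash" >> ~/.bashrc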

3.6. Run the location algorithm

Note: The running pose data is saved in the path specified by output_path in JetSonCAM.yaml, which can be changed as required.
The format of the pose data is: time, three-dimensional translation, and rotation as a quaternion.

roslaunch arducam_stereo_camera arducam_stereo_camera.launch
roslaunch vins vins_rviz.launch
rosrun vins vins_node ~/ROS_WORKSPACE/Nvidia_Jetson_ROS_SLAM_VINS/VINS_GPU/src/VINS-GPU/config/JetSonCAM/JetSonCAM.yaml
rosrun loop_fusion loop_fusion_node ~/ROS_WORKSPACE/Nvidia_Jetson_ROS_SLAM_VINS/VINS_GPU/src/VINS-GPU/config/JetSonCAM/JetSonCAM.yaml
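Once the nodes are running, you can watch the pose file being written. The directory is whatever output_path you set in JetSonCAM.yaml, and vio.csv is the file name referenced in section 4.4:

tail -f <your-output_path>/vio.csv   # one line per pose: timestamp, translation, quaternion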

Note: If you use the pre-calibrated default parameters, you only need to execute the commands above. If you calibrated the parameters yourself, you need to do the following indispensable work before running.

Change the parameters in ~/ROS_WORKSPACE/Nvidia_Jetson_ROS_SLAM_VINS/VINS_GPU/src/VINS-GPU/config/JetSonCAM as follows. Open the .yaml generated by the calibration, as shown in Figure 1:

  • Change the distortion_parameters in cam0_mei.yaml and cam1_mei.yaml to the values of the distortion_coeffs matrix in the .yaml file generated by calibration, in order;
  • Change xi in mirror_parameters to the first value in intrinsics;
  • Change the values in projection_parameters to the last four values in intrinsics, as shown in Figure 2 (see the mapping sketch after the figures).
(Figure 1: the .yaml file generated by calibration. Figure 2: the edited configuration values.)
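A schematic of the mapping (the Kalibr field layout shown here is an assumption based on its omni-radtan output; verify it against your own generated file):

# Kalibr calibration output            VINS config (cam0_mei.yaml / cam1_mei.yaml)
# intrinsics: [xi, g1, g2, u0, v0]
#   xi               ->  mirror_parameters: xi
#   g1, g2, u0, v0   ->  projection_parameters
# distortion_coeffs: [k1, k2, p1, p2]
#   k1, k2, p1, p2   ->  distortion_parameters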

3.7. Demo video

4. Troubleshooting

4.1. xxx is neither a launch file

RLException: [arducam_stereo_camera.launch] is neither a launch file 
in package [arducam_stereo_camera] nor is [arducam_stereo_camera] a 
launch file name The traceback for the exception was written to the log file

This is because the path of setup.bash is not recognized; you need to re-source setup.bash.

cd ~/ROS_WORKSPACE/CAM/devel
source setup.bash
gedit ~/.bashrc
source ~/.bashrc
roslaunch arducam_stereo_camera arducam_stereo_camera.launch

4.2. Package xxx not found

You can try re-running catkin_make first, and then add the path of setup.bash.

cd ~/ROS_WORKSPACE/VINS
catkin_make
source devel/setup.bash
gedit ~/.bashrc
source ~/.bashrc
rosrun vins vins_node ~/ROS_WORKSPACE/VINS/src/VINS-Fusion-gpu/config/JetSonCAM/JetSonCAM.yaml

4.3. xxx.so conflicts with xxx.so

libopencv_imgcodecs.so.3.2, needed by /opt/ros/melodic/lib/libcv_bridge.so,
may conflict with libopencv_imgcodecs.so.3.4.14

Recompile the files, and make sure the path is specified as opencv-3.4.14 when compiling.

Change the OpenCV-related path in the CMakeLists.txt under the camera_models, loop_fusion, and vins_estimator folders to the OpenCV path you compiled yourself (OpenCV_DIR must be set before find_package):
set(OpenCV_FOUND 1)
set(OpenCV_DIR ~/opencv-3.4.14/build)
find_package(OpenCV 3 REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})


4.4. The positioning data file vio.csv is not output

Write the output_path in the JetSonCAM.yaml configuration file as an absolute path,
for example: output_path: '/home/wong/ROS_WORKSPACE/VINS' instead of '~/ROS_WORKSPACE/VINS'.
