
Performing Localization with Visual SLAM

Visual simultaneous localization and mapping (SLAM) is quickly becoming an important advancement in embedded vision, with many different possible applications. Commercially, the technology is still in its infancy, but it is a promising innovation that addresses the shortcomings of other vision and navigation systems and has great commercial potential.

Visual SLAM is a specific type of SLAM system that leverages 3D vision to perform localization and mapping when neither the environment nor the location of the sensor is known. If you want to learn more about Visual SLAM, see the article How Does Visual SLAM Technology Work.

More information about OAK can be found in the OAK documentation.

Now let's learn how to use the OAK to perform localization with Visual SLAM.

(Image: OAK-D)

1. Download and Install depthai-core

cd ~
mkdir oak
cd ~/oak
git clone --recursive https://github.com/luxonis/depthai-core.git --branch ros2-main-gen2
cd depthai-core
mkdir build && cd build
cmake -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/usr/local ..
make -j4
sudo make install
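
If you want to make sure depthai-core installed correctly, you can compile a small test program against it. This is only a minimal sketch: it assumes the library was installed to /usr/local as above, that dai::Device::getAllAvailableDevices() is available in this branch, and that your build links against the depthai library (for example via find_package(depthai CONFIG REQUIRED) in a CMake project).

// check_depthai.cpp - minimal sketch to confirm depthai-core is usable.
#include <iostream>
#include "depthai/depthai.hpp"

int main() {
    // Query all OAK devices currently visible to the host.
    auto devices = dai::Device::getAllAvailableDevices();
    std::cout << "depthai-core found " << devices.size()
              << " connected OAK device(s)" << std::endl;
    return 0;
}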

2. Create the ROS Workspace and Download the Code

cd ~/oak
mkdir -p ros_workspace/src
cd ~/oak/ros_workspace
wget https://raw.githubusercontent.com/luxonis/depthai-ros/noetic-devel/underlay.repos
vcs import src < underlay.repos
rosdep install --from-paths src --ignore-src -r -y

3. Upgrade the Version of CMake

cd ~
wget http://www.cmake.org/files/v3.17/cmake-3.17.2.tar.gz
tar xf cmake-3.17.2.tar.gz
cd cmake-3.17.2
./configure
make -j4
sudo make install
sudo update-alternatives --install /usr/bin/cmake cmake /usr/local/bin/cmake 1 --force
cmake --version

4. Compile the Code

cd ~/oak/ros_workspace
sudo apt install python3-vcstool
catkin_make
source devel/setup.bash
# Add the setup.bash path to ~/.bashrc:
# source ~/oak/ros_workspace/devel/setup.bash
gedit ~/.bashrc
source ~/.bashrc

5. Run the VINS_GPU Algorithm

5.1 Install Eigen 3.3.90

git clone https://github.com/eigenteam/eigen-git-mirror
cd eigen-git-mirror
mkdir build && cd build
cmake ..
sudo make install
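
To confirm that the Eigen headers were installed, you can build a small test program. This is a minimal sketch; it assumes the default install location of /usr/local/include/eigen3.

// check_eigen.cpp - small sketch to confirm the Eigen headers are installed.
// Compile with: g++ check_eigen.cpp -I/usr/local/include/eigen3 -o check_eigen
#include <iostream>
#include <Eigen/Dense>

int main() {
    // Print the installed Eigen version.
    std::cout << "Eigen " << EIGEN_WORLD_VERSION << "."
              << EIGEN_MAJOR_VERSION << "." << EIGEN_MINOR_VERSION << std::endl;

    // Solve a small linear system Ax = b as a quick functional check.
    Eigen::Matrix2d A;
    A << 2, 1,
         1, 3;
    Eigen::Vector2d b(1, 2);
    Eigen::Vector2d x = A.colPivHouseholderQr().solve(b);
    std::cout << "Solution: " << x.transpose() << std::endl;
    return 0;
}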

5.2 Install Ceres 2

sudo apt-get install liblapack-dev libsuitesparse-dev libcxsparse3 libgflags-dev libgoogle-glog-dev libgtest-dev
git clone https://ceres-solver.googlesource.com/ceres-solver
cd ceres-solver
mkdir build && cd build
cmake ..
make -j4
sudo make install
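
A quick way to confirm the Ceres installation is the classic one-variable example from the Ceres documentation, shown here as a minimal sketch (compile and link flags depend on where Ceres and its dependencies were installed).

// check_ceres.cpp - minimal sketch: minimize (10 - x)^2 with Ceres.
#include <iostream>
#include <ceres/ceres.h>

// Residual functor: r(x) = 10 - x, so the cost is 0.5 * (10 - x)^2.
struct CostFunctor {
    template <typename T>
    bool operator()(const T* const x, T* residual) const {
        residual[0] = 10.0 - x[0];
        return true;
    }
};

int main() {
    double x = 0.5;  // initial guess

    ceres::Problem problem;
    problem.AddResidualBlock(
        new ceres::AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor),
        nullptr, &x);

    ceres::Solver::Options options;
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);

    std::cout << "x converged to " << x << " (expected 10)" << std::endl;
    return 0;
}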

5.3 Install OpenCV 3.4.14

  • Install the OpenCV-related dependencies
sudo apt-get install build-essential pkg-config
sudo apt-get install cmake libavcodec-dev libavformat-dev libavutil-dev libglew-dev libgtk2.0-dev libgtk-3-dev libjpeg-dev libpng-dev libpostproc-dev libswscale-dev libtbb-dev libtiff5-dev libv4l-dev libxvidcore-dev libx264-dev qt5-default zlib1g-dev libgl1 libglvnd-dev pkg-config libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev mesa-utils
sudo apt-get install python2.7-dev python3-dev python-numpy python3-numpy
  • Resolve OpenGL conflicts
cd /usr/lib/aarch64-linux-gnu/
sudo ln -sf libGL.so.1.0.0 libGL.so
sudo gedit /usr/local/cuda/include/cuda_gl_interop.h
# Comment out lines 62 to 68 of cuda_gl_interop.h as shown below, leaving the #include <GL/gl.h> line active:
//#if defined(__arm__) || defined(__aarch64__)
//#ifndef GL_VERSION
//#error Please include the appropriate gl headers before including cuda_gl_interop.h
//#endif
//#else
 #include <GL/gl.h>
//#endif
  • Check the compute capability of your Jetson
cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
sudo ./deviceQuery

After deviceQuery runs, the output shows that the CUDA version installed on the Jetson NX is 10.2 and that the compute capability is 7.2.
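
If the deviceQuery sample is missing from your image, you can also read the compute capability directly through the CUDA runtime API. The following minimal sketch prints the same information.

// check_cc.cpp - print the compute capability of the first CUDA device.
// Compile with: nvcc check_cc.cpp -o check_cc
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    // Query the properties of device 0 (the Jetson's integrated GPU).
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        printf("No CUDA device found\n");
        return 1;
    }
    printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
    return 0;
}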

  • Compile and install OpenCV

It is recommended that you install OpenCV in the Home directory.

Download the OpenCV 3.4.14 source archive and unzip it in the home directory.

Note: When you run the cmake command, change CUDA_ARCH_BIN to the compute capability of your own platform; for the Jetson NX it is 7.2.

cd ~/opencv-3.4.14
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_CUDA=ON -D CUDA_ARCH_BIN=7.2 -D CUDA_ARCH_PTX="" -D ENABLE_FAST_MATH=ON -D CUDA_FAST_MATH=ON -D WITH_CUBLAS=ON -D WITH_LIBV4L=ON -D WITH_GSTREAMER=ON -D WITH_GSTREAMER_0_10=OFF -D WITH_QT=ON -D WITH_OPENGL=ON -D CUDA_NVCC_FLAGS="--expt-relaxed-constexpr" -D WITH_TBB=ON ..
make -j4
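
Once the build finishes, you can check that OpenCV was compiled with CUDA support using a small test program. This is a minimal sketch; it assumes you point the compiler and linker at the freshly built OpenCV 3.4.14 (for example the headers and libraries under ~/opencv-3.4.14/build).

// check_opencv.cpp - minimal sketch: confirm OpenCV sees the CUDA device.
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>

int main() {
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;
    // Number of CUDA-capable devices OpenCV can use; should be 1 on a Jetson NX.
    int n = cv::cuda::getCudaEnabledDeviceCount();
    std::cout << "CUDA-enabled devices: " << n << std::endl;
    if (n > 0) {
        cv::cuda::printShortCudaDeviceInfo(0);
    }
    return 0;
}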

5.4 Compile and install cv_bridge

cd ~/oak
git clone https://github.com/ArduCAM/OAK_Nvidia_Jetson_ROS_SLAM_VINS.git 
cd ~/oak/OAK_Nvidia_Jetson_ROS_SLAM_VINS/cv_bridge_melodic  
catkin_make
source devel/setup.bash
# Add the setup.bash path to ~/.bashrc:
# source ~/oak/OAK_Nvidia_Jetson_ROS_SLAM_VINS/cv_bridge_melodic/devel/setup.bash
gedit ~/.bashrc
source ~/.bashrc  
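
For reference, cv_bridge is the package that converts ROS image messages into OpenCV matrices for the VINS node. The sketch below only illustrates the typical conversion call; the node name and the /image_raw topic are placeholders for this example and are not part of the VINS_GPU code.

// cv_bridge_example.cpp - illustrative sketch of what cv_bridge is used for:
// converting a ROS sensor_msgs/Image into a cv::Mat.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/core.hpp>

void imageCallback(const sensor_msgs::ImageConstPtr& msg) {
    // Convert the ROS image message into an OpenCV BGR matrix.
    cv_bridge::CvImageConstPtr cvPtr =
        cv_bridge::toCvShare(msg, sensor_msgs::image_encodings::BGR8);
    const cv::Mat& frame = cvPtr->image;
    ROS_INFO("Received %dx%d image", frame.cols, frame.rows);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "cv_bridge_example");
    ros::NodeHandle nh;
    // "/image_raw" is a placeholder topic name for this illustration.
    ros::Subscriber sub = nh.subscribe("/image_raw", 1, imageCallback);
    ros::spin();
    return 0;
}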

5.5 Compile the VINS_GPU Algorithm

cd ~/oak/OAK_Nvidia_Jetson_ROS_SLAM_VINS/VINS_GPU
catkin_make
# Source the workspace
source devel/setup.bash
# Add the setup.bash path to ~/.bashrc:
# source ~/oak/OAK_Nvidia_Jetson_ROS_SLAM_VINS/VINS_GPU/devel/setup.bash
gedit ~/.bashrc
source ~/.bashrc

5.6 Run the VINS_GPU Algorithm

roslaunch vins oak.launch   # Start the OAK camera (run in its own terminal)
rosrun vins vins_node ~/oak/OAK_Nvidia_Jetson_ROS_SLAM_VINS/VINS_GPU/src/VINS-GPU/config/oak/oak.yaml   # Run the VINS estimator in a second terminal
roslaunch vins vins_rviz.launch   # Visualize the result in RViz in a third terminal

6. Troubleshooting

integer_sequence_algorithm.h:64:21: error: ‘integer_sequence’ is not a member of ‘std’

Different versions of CMake use different commands to specify the C++ standard.

With CMake 3.17.2, as used in this document, the corresponding command is: set(CMAKE_CXX_STANDARD 11)
