Depth Mapping on Jetson Nano
After proving the feasibility of using the Arducam Stereo Camera HAT for depth mapping on the Raspberry Pi, we've moved on to the Jetson Nano platform. This blog mainly discusses the application on the Jetson Nano A02, which, like the standard Raspberry Pi models, only offers a single CSI port.
However, NVIDIA has introduced a new revision, B01, with two MIPI CSI-2 slots. We haven't tested the B01 yet, but you can refer to this blog where we set up a quad-camera system on the Raspberry Pi Compute Module, which also comes with two CSI slots.
2 Cameras, 1 Stereo HAT, 1 Nano Dev Board
To build a Jetson Nano depth mapping system, you will first need to set up a stereo camera on the Jetson Nano. Mounted a fixed distance apart, the two cameras capture the scene from slightly different viewpoints, and those differences carry the information needed for the depth calculation.
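The geometry behind this is simple triangulation: the pixel offset (disparity) between the two views shrinks as objects get farther away. Here is a minimal sketch of the relation, with made-up focal-length and baseline numbers standing in for the values a real calibration would produce:

```python
# Illustrative values only -- real ones come out of camera calibration.
focal_px = 1300.0    # focal length in pixels (assumed)
baseline_m = 0.06    # distance between the two lenses in meters (assumed)

def depth_from_disparity(disparity_px):
    """Triangulation: a larger pixel shift means a closer object."""
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(26.0))  # -> 3.0 (meters)
print(depth_from_disparity(13.0))  # -> 6.0 (meters): half the shift, twice the distance
```

Note the inverse relationship: accuracy degrades at long range, where a one-pixel disparity error translates into a large depth error.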
Before the B01 revision came out, you could only use a single camera port on the Jetson Nano, just like on the Raspberry Pi, which made stereo vision a headache to set up on these single-board computers. Arducam first designed a stereo camera HAT for the Raspberry Pi that presents a dual-camera setup as a single connected camera, and built many applications upon this HAT. We've successfully built a depth mapping system on the Raspberry Pi with this HAT, and it only takes a little more work to make it happen on the Jetson Nano platform.
Because the system is so similar to the Raspberry Pi one, it takes the same accessories to build it on the Jetson Nano: a Jetson Nano developer kit, an Arducam Stereo Camera HAT, and an Arducam Stereo Camera Board.
The Calibration Issue
As mentioned above, the two cameras capture different views, and the differences between them are used to calculate the object distance (or depth). Depth mapping is therefore, to some degree, a math problem: the depth mapping system can be regarded as a function that takes the two images as its inputs.
Since the inputs determine the result, we need to calibrate them into the best possible state to get an accurate result, so calibration is an issue we have to face. Because a chessboard is used to assist calibration in this application, taking good pictures of the chessboard becomes the key.
We've done enough tests to conclude that the chessboard should occupy an appropriate proportion of the image frame. It cannot be incomplete or too small, and the camera must be held steady so the images are not blurred. When a bad calibration result occurs, the pictures under the /scenes and /pairs folders also need to be deleted before recalibrating.
Here is a video demo of the best calibration we’ve had so far:
Improved Stereo Vision
Since the release of the Raspberry Pi, many companies have been pushing the specs of embedded systems to the limit, and the Jetson Nano is such a product. It's small, yet powerful enough to run many AI applications, thanks to technologies from the video card industry.
Arducam is dedicated to vision and imaging, and we would like to build powerful peripherals for such a powerful platform. As we march on to embrace the AI world and bring more intelligence to our cameras, we consider the Jetson Nano a reliable and promising platform and will continue to build more camera modules around it.
Back to depth mapping on the Jetson Nano: our engineers are still working on both the hardware and the software for a better depth map. We'll get back to you with our latest breakthroughs and post detailed tutorials and video demos when we're ready.