RealSense D435 Gazebo plugin: aligning depth and color images
Hello! I am trying to simulate an Intel RealSense D435 camera using the pal-robotics/realsense_gazebo_plugin. So far I have it working, and on the Gazebo side of things everything is fine.
However, there is a discrepancy between the color and depth images: the color stream has a resolution of 1920x1080 with a horizontal FOV of 69.5°, while the depth stream has a resolution of 1280x720 with a horizontal FOV of 85.4°.
How would I go about aligning the two streams? Do I have to change my camera intrinsics, and if so, what changes must I make?
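For reference, the pinhole intrinsics implied by those FOVs can be computed directly, since a Gazebo camera is an ideal pinhole: fx = (W/2) / tan(HFOV/2). A quick sketch using the numbers above (assuming the depth width is the D435's native 1280 px):

```python
import math

def fx_from_hfov(width_px, hfov_deg):
    """Pinhole focal length in pixels from image width and horizontal FOV."""
    return (width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)

# Values from the post (D435 defaults in the Gazebo plugin):
fx_color = fx_from_hfov(1920, 69.5)   # ~1384 px
fx_depth = fx_from_hfov(1280, 85.4)   # ~694 px
```

So the two streams really do have very different focal lengths, which is why the images cannot be overlaid by simple resizing.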
The code in the link above is not compatible with the realsense repository by Intel.
Note: in case knowledge of the intended application is useful, here it is. I am trying to do two tasks: first, a simple mapping algorithm using the color and depth images; second, a segmentation algorithm, where I segment the RGB image and apply the same mask to the depth image.
Thanks!
Did you ever find a solution? I am running into the same problem when trying to run RTAB-Map (a SLAM package) with the D435 in Gazebo.
Hi @Sean_Roelofs. Sorry for the late reply. Unfortunately no, I was not able to solve it. However, I raised an issue on the GitHub repository: https://github.com/IntelRealSense/lib.... A method for doing it is given there, but I never got around to trying it, since I didn't have the time.
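For anyone landing here later: the standard method is depth-to-color registration — deproject each depth pixel to a 3D point, transform it into the color camera frame, and re-project it onto the color image grid. A minimal NumPy sketch, assuming pinhole intrinsics `K_d`/`K_c` and a depth-to-color extrinsic `T_cd` (all hypothetical names, not from the plugin):

```python
import numpy as np

def align_depth_to_color(depth, K_d, K_c, T_cd, out_shape):
    """Deproject every valid depth pixel to 3D, transform it into the
    color camera frame, and re-project it onto the color image grid.
    Nearest-pixel splatting only: no occlusion handling or hole filling."""
    h_d, w_d = depth.shape
    us, vs = np.meshgrid(np.arange(w_d), np.arange(h_d))
    z = depth.ravel().astype(np.float64)
    m = z > 0                                  # skip invalid (zero) readings
    u, v, z = us.ravel()[m], vs.ravel()[m], z[m]
    # Pixel -> 3D point in the depth camera frame (pinhole model)
    x = (u - K_d[0, 2]) * z / K_d[0, 0]
    y = (v - K_d[1, 2]) * z / K_d[1, 1]
    pts = T_cd @ np.vstack([x, y, z, np.ones_like(z)])  # depth -> color frame
    # 3D point -> pixel in the color image
    u_c = np.round(K_c[0, 0] * pts[0] / pts[2] + K_c[0, 2]).astype(int)
    v_c = np.round(K_c[1, 1] * pts[1] / pts[2] + K_c[1, 2]).astype(int)
    ok = (u_c >= 0) & (u_c < out_shape[1]) & (v_c >= 0) & (v_c < out_shape[0])
    aligned = np.zeros(out_shape, dtype=np.float64)
    aligned[v_c[ok], u_c[ok]] = pts[2][ok]
    return aligned
```

In ROS you don't have to hand-roll this: the `depth_image_proc/register` nodelet performs the same registration from the two `camera_info` topics and TF, and on real hardware librealsense's `rs2::align` does it on-device data.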
Hey, I am facing the same problem with the D435 Gazebo plugin: RTAB-Map requires the color and depth images to be aligned, but with the plugin there is a discrepancy. If any of you have solved it or found a workaround, please post it.