RealSense ROS Pointcloud and RGB Image alignment
I am working on a dog detection system using deep learning (TensorFlow object detection) and a RealSense D425 camera. I am using the Intel(R) RealSense(TM) ROS wrapper to get images from the camera.
I launch the camera with "roslaunch realsense2_camera rs_rgbd.launch", and my Python code subscribes to the "/camera/color/image_raw" topic to get the RGB image. Using this image and the object detection library, I am able to infer (at 20 fps) the location of a dog in the image as a bounding box (xmin, xmax, ymin, ymax).
I would like to crop the point cloud using the object detection output (xmin, xmax, ymin, ymax) and determine whether the dog is near the camera or far away. I would like to use pixel-by-pixel aligned information between the RGB image and the point cloud.
How can I do it? Is there any topic for that?
Thanks in advance
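For the distance question specifically: if I remember correctly, rs_rgbd.launch also publishes a depth image aligned to the color frame, typically on /camera/aligned_depth_to_color/image_raw (verify the exact name with `rostopic list`). Once you convert that message to a numpy array (e.g. via cv_bridge's `imgmsg_to_cv2` with "passthrough" encoding), cropping by the detection box and estimating distance is a few lines. A minimal sketch, assuming the usual RealSense convention of uint16 depth in millimeters with 0 meaning "no reading":

```python
import numpy as np

def distance_in_box(depth_image, xmin, ymin, xmax, ymax):
    """Median distance (in meters) inside a bounding box of a depth image.

    depth_image: HxW uint16 array in millimeters (the aligned-depth
    convention assumed here); zero pixels mean "no depth reading".
    """
    roi = depth_image[ymin:ymax, xmin:xmax]   # crop to the detection box
    valid = roi[roi > 0]                      # drop invalid (zero) pixels
    if valid.size == 0:
        return None                           # no depth data in the box
    return float(np.median(valid)) / 1000.0   # mm -> m
```

The median is used instead of the mean so that background pixels inside the box and depth noise do not skew the estimate as much; you could then compare the result against your alarm threshold.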
I'm not sure I understand. Do you want to project RGB onto the point cloud, or downsample the point cloud somehow?
@kolya_rage I want to know the association between an RGB pixel and the point cloud. For example, if I crop a zone of an RGB image, I would like to have only the point cloud information for that cropped zone.
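One way to get that association without any projection math: the depth-registered cloud that rs_rgbd.launch produces (typically on /camera/depth_registered/points; check with `rostopic list`) is an *organized* point cloud, i.e. it has the same height and width as the registered image, so pixel (u, v) maps to the flat point index v * width + u. A sketch of the index bookkeeping, under that organized-cloud assumption:

```python
def pixel_to_cloud_index(u, v, cloud_width):
    """Flat index into an organized point cloud for image pixel (u, v).

    Assumes the cloud is organized (cloud.height > 1) and shares the
    registered image's resolution, as with depth-registered clouds.
    """
    return v * cloud_width + u

def crop_indices(xmin, ymin, xmax, ymax, cloud_width):
    """All organized-cloud indices covered by a detection box."""
    return [v * cloud_width + u
            for v in range(ymin, ymax)
            for u in range(xmin, xmax)]
```

In ROS you can then pass those pixel coordinates directly to `sensor_msgs.point_cloud2.read_points(cloud, field_names=("x", "y", "z"), skip_nans=True, uvs=[(u, v), ...])`, which reads only the points for those pixels.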
I see. So you should have the projection matrix from the RealSense camera, shouldn't you?
What do you mean by projection matrix? With the RGB image I am able to detect whether a dog appears in the image, and now, with the point cloud, I would like to determine whether the dog is within a specific distance of the camera in order to set off an alarm.
Your point cloud is produced by the RealSense, right?
Yes, a RealSense D425 camera with the ROS wrapper.
https://github.com/IntelRealSense/lib... Here is the projection and deprojection API.
OK, thanks, I will take a look.