
RealSense ROS Pointcloud and RGB Image alignment

asked 2019-02-13 01:13:11 -0500

aitul

I am working on a dog detection system using deep learning (TensorFlow object detection) and a RealSense D425 camera. I am using the Intel RealSense ROS wrapper in order to get images from the camera.

I am executing "roslaunch rs_rgbd.launch" and my Python code is subscribed to the "/camera/color/image_raw" topic in order to get the RGB image. Using this image and the object detection library, I am able to infer (at 20 fps) the location of a dog in an image as a bounding box (xmin, xmax, ymin, ymax).

I would like to crop the PointCloud information using the object detection output (xmin, xmax, ymin, ymax) and determine whether the dog is far away from or near the camera. I would like to use pixel-by-pixel aligned information between the RGB image and the pointcloud.

How can I do it? Is there any topic for that?

Thanks in advance


Comments


I'm not sure I understand. Do you want to project RGB onto the point cloud, or do you want to downscale the point cloud somehow?

kolya_rage (2019-02-13 03:23:52 -0500)

@kolya_rage I want to know the association between an RGB pixel and the pointcloud. For example, if I crop a zone of an RGB image, I would like to have only the pointcloud information for the cropped zone.

aitul (2019-02-13 03:41:09 -0500)

I see. So you should have the projection matrix from the RealSense camera, right?

kolya_rage (2019-02-13 04:00:35 -0500)

What do you mean by projection matrix? With the RGB image I am able to detect whether a dog appears in the image, and now, with the pointcloud, I would like to determine whether the dog is within a specific distance from the camera in order to set an alarm.

aitul (2019-02-13 04:11:58 -0500)

Your point cloud is produced by the RealSense, right?

kolya_rage (2019-02-13 04:23:45 -0500)

Yes, a RealSense D425 camera and the ROS wrapper.

aitul (2019-02-13 04:32:00 -0500)

https://github.com/IntelRealSense/lib... Here is the projection and deprojection API.

kolya_rage (2019-02-13 04:34:41 -0500)
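For reference, the deprojection in that API reduces to pinhole-camera math. A minimal sketch of what `rs2_deproject_pixel_to_point` computes in the no-distortion case (the intrinsics `fx`, `fy`, `ppx`, `ppy` come from the camera, e.g. the `/camera/color/camera_info` topic in ROS; the numbers below are made-up example values, not real calibration):

```python
# Sketch of the pinhole deprojection that librealsense's
# rs2_deproject_pixel_to_point performs in the no-distortion case.
# fx, fy: focal lengths in pixels; ppx, ppy: principal point.

def deproject_pixel_to_point(u, v, depth, fx, fy, ppx, ppy):
    """Map a pixel (u, v) with depth in meters to a 3D point (x, y, z)."""
    x = (u - ppx) / fx * depth
    y = (v - ppy) / fy * depth
    return (x, y, depth)

# Example: a pixel at the principal point always deprojects to (0, 0, depth).
print(deproject_pixel_to_point(320.0, 240.0, 1.5, 615.0, 615.0, 320.0, 240.0))
# → (0.0, 0.0, 1.5)
```

With the real intrinsics from `camera_info`, the same three lines give the 3D point behind any detected pixel, which is enough to measure distance without touching the pointcloud message.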

OK, thanks I will have a look

aitul (2019-02-13 04:42:42 -0500)

1 Answer


answered 2019-02-13 04:35:23 -0500

aPonza

updated 2019-02-13 04:36:25 -0500

Not sure this will be any good/efficient, but you can directly get the depth for each pixel in the depth image via something like this. (xmin, xmax, ymin, ymax) gives you all the UV coordinates whose depth you need to look up. There must be a better way, but this would work, and you wouldn't really need the pointcloud at all.

Moreover, you could check out q315362, where we were discussing ways of getting a pose on the found object, which effectively also gives you the distance of the dog from the camera frame.
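A rough sketch of that per-pixel depth lookup (hedged: the aligned-depth topic name and the 16-bit millimeter encoding are assumptions based on the realsense2_camera wrapper's usual defaults; here the depth image is just a NumPy array, as you would get from cv_bridge):

```python
# Sketch: given a depth image as a NumPy array (e.g. converted from
# /camera/aligned_depth_to_color/image_raw with cv_bridge, 16UC1 in mm
# -- both topic name and encoding are assumptions to verify) and a
# detection box, estimate the object's distance.
import numpy as np

def distance_in_box(depth_image, xmin, xmax, ymin, ymax):
    """Median depth (meters) inside the box, ignoring invalid pixels."""
    roi = depth_image[ymin:ymax, xmin:xmax]
    valid = roi[roi > 0]              # 0 means "no depth" for RealSense
    if valid.size == 0:
        return None
    return float(np.median(valid)) * 0.001  # millimeters -> meters

# Toy example: a 4x4 depth image where every pixel reads 1500 mm.
depth = np.full((4, 4), 1500, dtype=np.uint16)
print(distance_in_box(depth, 0, 4, 0, 4))  # → 1.5
```

Taking the median rather than the mean keeps background pixels inside the box (and dropout zeros) from skewing the distance estimate.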


Comments

Is there any method or topic to associate each RGB pixel ( "/camera/color/image_raw") with the generated pointcloud?

aitul (2019-02-13 04:41:40 -0500)
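One way to make that pixel-to-point association, assuming the wrapper publishes an *organized* point cloud whose rows and columns match the registered color image (an assumption worth checking against the `width`/`height` fields of your own PointCloud2 messages): pixel (u, v) corresponds to the flat point index `v * width + u`.

```python
# Sketch: mapping image pixels to indices in an organized point cloud.
# Assumes the cloud is organized (height x width, row-major) and
# registered to the color image -- verify this on your own messages.

def pixel_to_point_index(u, v, cloud_width):
    """Flat index into the point array for pixel (u, v)."""
    return v * cloud_width + u

def box_to_point_indices(xmin, xmax, ymin, ymax, cloud_width):
    """All point indices covered by a detection bounding box."""
    return [v * cloud_width + u
            for v in range(ymin, ymax)
            for u in range(xmin, xmax)]

# Example: in a 640-wide cloud, pixel (10, 2) is point 2 * 640 + 10.
print(pixel_to_point_index(10, 2, 640))  # → 1290
```

Cropping the cloud to a detection box is then just selecting those indices from the point array.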

The pointcloud itself is generated; it gets published here. Check this also.

aPonza (2019-02-13 05:16:57 -0500)

Found this and this; I think you'll like them.

aPonza (2019-02-13 05:46:49 -0500)

Thanks a lot, it looks really nice. I will have a look

aitul (2019-02-13 07:35:41 -0500)

A pointcloud and a depth map are not substitutes for each other. This is not an answer to the question: you cannot do everything with a depth map that you can with a pointcloud. This question is still unanswered, to be honest.

Jägermeister (2019-08-10 08:20:32 -0500)

Hey! Did you find a solution? I'm working on the exact same problem. Getting more specific to my problem (also addressing other comments on this thread): I know how to find object distance using the pyrealsense Python wrapper, but when reading the image data from the ROS topic "/camera/color/image_raw", I'm not actually launching the rs pipeline. So there has to be a way to align the color stream from this topic with the depth data being published on another topic (probably "/camera/aligned_depth_to_color", but I'm not sure) and then do the distance calculation. Is there a way to do that? Thanks in advance.

chirag (2019-08-27 10:02:20 -0500)


Question Tools

2 followers

Stats

Asked: 2019-02-13 01:13:11 -0500

Seen: 636 times

Last updated: Feb 13