Robotics StackExchange | Archived questions

Convert filtered PointCloud2 to Image

Hi everyone,

I'm looking for a way to convert a filtered PointCloud2 from an RGBD camera (i.e. background removed, range limited, etc.) back into an image as it might have been seen by the camera. The points that have been removed should be black in the resulting image. I know that I need the camera_info and the exact pose of the camera for this purpose, and I do of course have both.

Is there already a node or (even better) nodelet that can do that?

Thanks a lot

update 1:

I've tried to use convert_pointcloud_to_image and it does not work as expected. I feed in a filtered point cloud (passthrough filter, reduced range along z). In the visualization it's clearly visible that the background is cut away. The reconstructed image, however, still contains those pixels even though they have been removed from the point cloud. The passthrough filter has the parameter keep_organized = true; otherwise the resulting point cloud would be unorganized and thus unsuitable for convert_pointcloud_to_image, which requires an organized point cloud. Internally, convert_pointcloud_to_image does nothing more than call pcl::toROSMsg, which itself just memcpys the point data one after another into a ROS sensor_msgs/Image message. I'm fine with that and it should work as expected... but doesn't. I don't know where it gets the pixel data from even when fed a filtered cloud.
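A likely explanation (an assumption on my part, worth verifying against the PCL source): pcl::PassThrough with keep_organized = true does not drop points at all; it overwrites their x/y/z coordinates with NaN and leaves the rgb field untouched, so pcl::toROSMsg still copies the original colors into the image. A minimal sketch of blackening those points before conversion, using a hypothetical stand-in struct for pcl::PointXYZRGB:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical stand-in for pcl::PointXYZRGB, for illustration only.
struct PointXYZRGB {
    float x, y, z;
    unsigned char r, g, b;
};

// Blacken the color of every point whose coordinates were NaN'ed out by a
// keep_organized passthrough filter, so that a subsequent conversion to an
// image shows filtered points as black instead of their original color.
void blackenFilteredPoints(std::vector<PointXYZRGB>& cloud) {
    for (auto& p : cloud) {
        if (std::isnan(p.x) || std::isnan(p.y) || std::isnan(p.z)) {
            p.r = p.g = p.b = 0;
        }
    }
}
```

Running this over the organized cloud right before pcl::toROSMsg should give the black background pixels described in the question.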

update 2:

Sorry, I haven't been clear enough about what kind of image I want to get out. The mention of camera_info was a bit misleading. I'm fine with a rectified image! It does not have to be distorted! I don't remember why I mentioned camera_info in the first place.

Asked by Hendrik Wiese on 2018-06-22 07:38:40 UTC

Comments

Answers

There's already code for this in the 'convert_pointcloud_to_image' node. This gives you an orthogonal projection, dropping the Z-axis of the pointcloud.

You hint that you want the true 'camera image' (i.e. raw, distorted) using the CameraInfo, which would require a projection using the K matrix in the CameraInfo message.

Asked by paulbovbel on 2018-07-03 12:02:55 UTC

Comments

I've tried that code already on a passthrough-filtered point cloud limited to a certain range along Z. Weirdly, though, it still returned the original image. The points that got removed from the cloud by the passthrough filter were still visible in the image. I expected them to be black or something.

Asked by Hendrik Wiese on 2018-07-03 12:44:24 UTC

I would too. The pass-through filter should remove the data from the pointcloud, making recovery into the original 'image' impossible. Make sure your implementation is functionally correct.

Asked by paulbovbel on 2018-07-03 12:53:29 UTC

I will take a look into it tomorrow and report back. Thanks for the advice.

Asked by Hendrik Wiese on 2018-07-03 13:18:20 UTC

I've checked this and it doesn't work as expected. I can see the filtered point cloud in rviz with the points removed. Running the node on the same topic, though, results in the full image. The node uses pcl::toROSMsg, which only does a memcpy from the PCL cloud to the Image. So it should work, but doesn't...

Asked by Hendrik Wiese on 2018-07-05 08:28:14 UTC

@paulbovbel Just saw your edit. Actually, no, a rectified image is fine. No need to distort it. I want to send it through a skeleton tracking algorithm which needs a rectified image. Sorry, could have been clearer about that. I'll edit my question.

Asked by Hendrik Wiese on 2018-07-05 15:53:42 UTC

That's super strange! If you're willing to share a working example, I'd love to take a look.

Asked by paulbovbel on 2018-07-05 19:07:09 UTC

Appreciated! I've been too busy at work today though to come up with an example... earliest I can do is create one on Wednesday next week. Just so you know that it can take a while and you won't forget me. ;-)

Asked by Hendrik Wiese on 2018-07-06 11:49:43 UTC

For sure. I think at this point, if you have a reproduction, please file a ticket in perception_pcl, and link it here for future spelunkers.

Asked by paulbovbel on 2018-07-06 11:51:36 UTC

I will. Thanks so far!

Asked by Hendrik Wiese on 2018-07-06 12:49:36 UTC

You can always write a node to do this. I have done something similar for LiDAR data, so it should also be quite simple for the RGBD data you have.

What you need to do is the following:
Subscribe to the topic where you have the RGBD data. From the camera_info message you can create an image_geometry::PinholeCameraModel object and get the camera model:

camera_model.fromCameraInfo(info_msg);

Using pcl_ros::transformPointCloud you can transform your point cloud into the camera frame (you can get the camera frame with camera_model.tfFrame()).
Then project those points from 3D to 2D onto the image plane:

pixel2d = camera_model.project3dToPixel(point3d);

You will also need to pass the RGB values for each point to the corresponding pixel.

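For reference, project3dToPixel is just the standard pinhole model: u = fx·x/z + cx, v = fy·y/z + cy, for a point expressed in the camera optical frame (z forward, x right, y down). A self-contained sketch of that math, without the ROS dependencies (the intrinsics used below are made-up example values, not from a real camera):

```cpp
#include <cassert>
#include <utility>

// Standard pinhole projection: the math behind
// image_geometry::PinholeCameraModel::project3dToPixel.
// (x, y, z) must already be in the camera optical frame.
std::pair<double, double> project3dToPixel(double x, double y, double z,
                                           double fx, double fy,
                                           double cx, double cy) {
    double u = fx * x / z + cx;  // column in pixels
    double v = fy * y / z + cy;  // row in pixels
    return {u, v};
}
```

After projecting, round (u, v) to integer pixel coordinates, skip points that fall outside the image bounds, and write each point's RGB value at its pixel; pixels that no point projects to stay black, which gives exactly the black "removed" regions asked for in the question.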
Asked by pavel92 on 2018-07-09 04:59:46 UTC

Comments

Hi, I have followed the steps you mentioned. If I project the point (1,1,1) I get an output like [880, -334]. Can you please help me interpret this, given that the RGB image is of size (640, 480)?

Asked by elemecrobots on 2020-05-25 22:57:35 UTC