
Coloring point clouds from images - how to match 3D point/pixel efficiently?

asked 2011-05-09 02:15:47 -0500

raphael favier

Hello all,

My colleague and I are developing a small node that colors a point cloud using an RGB image.

One of the bottlenecks we found is using OpenCV to project each 3D point back onto the image plane (using the camera model) in order to find which pixel corresponds to which 3D point.

After browsing the ROS packages for a while, I stumbled upon the package pcl_point_cloud2_image_color, which seems to do exactly that. Curious, we opened up the code to see the trick used.

We were surprised to find the following:

  pcl_point_cloud2_image_color::PointCloud2ImageColor::getColorPointCloud2 (const sensor_msgs::PointCloud2ConstPtr &cloud,
                                                                            const sensor_msgs::ImageConstPtr &image,
                                                                            sensor_msgs::Image &img_out,
                                                                            pcl::PointCloud<pcl::PointXYZRGB> &cloud_out)
  {
    pcl::PointCloud<PointRGB> cloud_color;
    cloud_color.width    = cloud->width;
    cloud_color.height   = cloud->height;
    cloud_color.is_dense = true;
    cloud_color.points.resize (cloud_color.width * cloud_color.height);

    // Convert to a templated type
    pcl::PointCloud<pcl::PointXYZ> cloud_xyz;
    pcl::fromROSMsg (*cloud, cloud_xyz);

    img_out = *image;
    // Cast image to IplImage
    sensor_msgs::CvBridge img_bridge;
    cv::Mat img = img_bridge.imgMsgToCv (image);

    // Assign colors from the rectified image
    int i = 0;
    for (size_t u = 0; u < image->height; ++u)        // rows
      for (size_t v = 0; v < image->width; ++v, ++i)  // cols
      {
        // Skip invalid points
        if (isnan (cloud_xyz.points[i].x) || isnan (cloud_xyz.points[i].y) || isnan (cloud_xyz.points[i].z))
          continue;
        // Get the color information (grayscale value copied into R, G and B)
        uint8_t g = img.at<uint8_t> (u, v);
        int32_t rgb = (g << 16) | (g << 8) | g;
        cloud_color.points[i].rgb = *(float*)(&rgb);
      }

    // Concatenate the XYZ information with the RGB one
    pcl::concatenateFields (cloud_xyz, cloud_color, cloud_out);
  }

As one can see, this function does the following processing:

  1. Create a dense PointRGB cloud with the same number of points as in the sensor message.
  2. Convert the sensor message into a PointXYZ cloud.
  3. Iterate over each pixel of the image and assign its color to a point of the PointRGB cloud.
  4. Concatenate the PointRGB and PointXYZ clouds.

What I do not get here is how we can be sure that pixel [u, v] corresponds to point [i]. Why is there no 3D-to-2D projection involved in the pixel/point matching?

I feel I must be missing an important optimization trick here. If anyone can explain, I would be very grateful.

Thanks in advance




Hi Raph, I need to solve the same problem (colorize point clouds from images / match 3D points and pixels). Do you have any pointers/suggestions for me? Thank you.

chukcha2 (2016-08-23 19:26:49 -0500)

2 Answers


answered 2011-05-09 20:52:34 -0500

The underlying assumption seems to be that the RGB image and the (non-RGB) point cloud use the same pixel raster.



What does "the same pixel raster" mean, exactly?

fabbro (2022-02-01 15:29:13 -0500)

answered 2011-05-10 06:35:15 -0500

updated 2011-05-10 06:54:42 -0500

To expand on what Felix said:

The assumption is that there is a 1-to-1 mapping between image pixels and points in the incoming cloud message. Any projection would have been done beforehand, when the incoming message was arranged.

Given that assumption, the image and the cloud have the same number of elements in the same order, so the same index can be used for both.

If you are producing the point cloud in the first place and need to pack the message, you can use the camera parameters and libeigen, though I don't know whether that would be any faster than the OpenCV method you were using.



Seen: 10,397 times

Last updated: May 10 '11