Coloring point clouds from images - how to match 3Dpoint/pixel efficiently?

Hello all,

My colleague and I are developing a small node that colors a point cloud using an RGB image.

One of the bottlenecks we found is using OpenCV to project each 3D point back onto the image plane (using the camera model) in order to find which pixel corresponds to which 3D point.
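For context, the per-point matching we currently do boils down to the standard pinhole projection. Here is a minimal, self-contained sketch of that math (illustrative names, not our actual node's code; in practice the intrinsics come from the CameraInfo message):

```cpp
#include <cassert>
#include <cmath>

// Standard pinhole projection of a 3D point (x, y, z) in the camera frame
// onto the image plane: col = fx * x / z + cx, row = fy * y / z + cy.
// fx, fy, cx, cy are the intrinsics from the CameraInfo K matrix.
struct Pixel
{
  int row;
  int col;
};

Pixel project3dToPixel (double x, double y, double z,
                        double fx, double fy, double cx, double cy)
{
  Pixel p;
  p.col = static_cast<int> (std::round (fx * x / z + cx));
  p.row = static_cast<int> (std::round (fy * y / z + cy));
  return p;
}
```

Doing this (plus bounds checks) for every point in the cloud is what dominates our runtime.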

After browsing the ROS packages for a while, I stumbled upon the pcl_point_cloud2_image_color package, which seems to do exactly the same thing. Curious, we opened up the code to see what trick it uses.

We were surprised to find the following:

  pcl_point_cloud2_image_color::PointCloud2ImageColor::getColorPointCloud2 (const sensor_msgs::PointCloud2ConstPtr &cloud, 
                                                                            const sensor_msgs::ImageConstPtr &image,
                                                                            sensor_msgs::Image &img_out,
                                                                            pcl::PointCloud<pcl::PointXYZRGB> &cloud_out)
  {
    pcl::PointCloud<PointRGB> cloud_color;
    cloud_color.width    = cloud->width;
    cloud_color.height   = cloud->height;
    cloud_color.is_dense = true;
    cloud_color.points.resize (cloud_color.width * cloud_color.height);

    // Convert to a templated type
    pcl::PointCloud<pcl::PointXYZ> cloud_xyz;
    pcl::fromROSMsg (*cloud, cloud_xyz);

    img_out = *image;
    // Cast image to IplImage
    sensor_msgs::CvBridge img_bridge;
    cv::Mat img = img_bridge.imgMsgToCv (image);

    // Assign colors from the rectified image
    int i = 0;
    for (size_t u = 0; u < image->height; ++u)   // rows
      for (size_t v = 0; v < image->width; ++v, ++i)  // cols
      {
        // If the point is invalid, skip it
        if (isnan (cloud_xyz.points[i].x) || isnan (cloud_xyz.points[i].y) || isnan (cloud_xyz.points[i].z))
          continue;

        // Get the color information
        uint8_t g = img.at<uint8_t> (u, v);
        int32_t rgb = (g << 16) | (g << 8) | g;
        cloud_color.points[i].rgb = *(float*)(&rgb);
      }

    // Concatenate the XYZ information with the RGB one
    pcl::concatenateFields (cloud_xyz, cloud_color, cloud_out);
  }
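As an aside, the last lines of the loop pack a grayscale value into PCL's float rgb field as the bit pattern 0x00RRGGBB. A standalone sketch of that packing (my own check, not taken from the package; memcpy replaces the `*(float*)(&rgb)` cast to avoid strict-aliasing trouble):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Pack a grayscale value g into PCL's "float rgb" field: the color is the
// integer bit pattern 0x00RRGGBB reinterpreted as a float. Replicating g
// into the R, G and B bytes yields a gray color.
float packGray (uint8_t g)
{
  int32_t rgb = (g << 16) | (g << 8) | g;
  float f;
  std::memcpy (&f, &rgb, sizeof (f));  // reinterpret bits, no aliasing UB
  return f;
}

// Recover the packed 0x00RRGGBB integer from the float field.
int32_t unpackRGB (float f)
{
  int32_t rgb;
  std::memcpy (&rgb, &f, sizeof (rgb));
  return rgb;
}
```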

As one can see, this function does the following processing:

  1. Create a dense PointRGB cloud with the same number of points as in the sensor message.
  2. Convert the sensor message into a PointXYZ cloud.
  3. Iterate over each pixel of the image and assign its color to a point of the PointRGB cloud.
  4. Concatenate the PointRGB and PointXYZ clouds.
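If I read the loop correctly, the whole trick seems to rely on the cloud being organized (cloud height and width matching the image's rows and columns), so that the point for pixel (row u, col v) simply sits at a row-major flat index. That is my reading only; the package does not state it anywhere:

```cpp
#include <cassert>
#include <cstddef>

// In an organized point cloud (cloud.height == image rows,
// cloud.width == image cols), points are stored row-major, so the
// point matching pixel (row u, col v) is at this flat index:
std::size_t pointIndex (std::size_t u, std::size_t v, std::size_t width)
{
  return u * width + v;
}
```

This is exactly the index `i` that the loop increments in lockstep with (u, v).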

What I do not get is how we can be sure that pixel [u,v] corresponds to point [i]. Why is there no 3D-to-2D projection involved in the pixel/point matching?

I feel I must be missing an important optimization trick here. If anyone can explain, I would be very grateful.

Thanks in advance