
Kinect 2 mapping of 2D pixel to 3D Pointcloud2 point

asked 2017-10-17 09:59:49 -0500

Hendrik Wiese

updated 2017-10-18 04:28:02 -0500

Hi everyone,

I see that this is a question that has been answered a couple of times before but all the answers I have found didn't work out. So alas I'll have to ask once more.

In a ROS Nodelet I'm receiving an (approximately) synchronized dataset of 1. a message of type sensor_msgs::Image and 2. a message of type sensor_msgs::PointCloud2, both from a Kinect 2 (using iai_kinect2 nodelets like kinect2_bridge).

What I'm trying to do now is to find a specific point in the PointCloud2 that corresponds to a certain pixel (u, v) in the image. I want to do it for a number of pixels one by one. I broadcast the recognized points via TF2_ROS. That works so far. I can visualize the PointCloud2 and the generated TF frames in RVIZ.

Unfortunately, only some of the points I look up in the PointCloud2 appear to be located correctly. Although they are correctly located in the 2D image, they are not mapped to the correct points in the cloud.

I'm picking the corresponding point from the (organized) point cloud like this:

void incoming_callback(const sensor_msgs::PointCloud2ConstPtr& pcl,
                       const sensor_msgs::ImageConstPtr& img,
                       int u, int v) {
  // v (row) advances by row_step bytes, u (column) by point_step bytes
  const uint32_t point_index = v * pcl->row_step + u * pcl->point_step;
  const uint32_t point_idx_x = point_index + pcl->fields[0].offset;
  const uint32_t point_idx_y = point_index + pcl->fields[1].offset;
  const uint32_t point_idx_z = point_index + pcl->fields[2].offset;

  float x;
  float y;
  float z;

  memcpy(&x, &pcl->data[point_idx_x], sizeof(float));
  memcpy(&y, &pcl->data[point_idx_y], sizeof(float));
  memcpy(&z, &pcl->data[point_idx_z], sizeof(float));
  // NaN coordinates mean the sensor has no depth reading for this pixel
  .....
}

Where u (col) and v (row) are the coordinates of the pixel in the camera image.

I'm not sure why some of the points get mapped wrongly... maybe someone can give me some insight and/or the royal road to do this with ROS Kinetic Kame.

Thanks a lot!

Update 1 - assumption

Alright, I assume that it is related to the spatial offset between the depth image (from the IR camera) and the RGB image. If I could exactly configure the positions of the RGB cam and the IR cam in TF it should work. I just don't know the exact values for the relative positions of the sensors inside the Kinect 2. Anyone?

Update 2 - manual measurement

Measured it myself and came up with a distance of 50mm (~2") between the sensors. Still no luck. So would still appreciate some help! Thanks!
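For experimenting, a pure translational offset like this could be published as a static transform. The frame names below are hypothetical; check which optical frames your kinect2_bridge setup actually publishes (and as the later updates show, the hand-measured baseline turned out not to be the real fix):

```shell
# Publish a hypothetical 5 cm x-offset between the RGB and IR optical frames.
# Argument order: x y z yaw pitch roll parent_frame child_frame
rosrun tf2_ros static_transform_publisher 0.05 0 0 0 0 0 kinect2_rgb_optical_frame kinect2_ir_optical_frame
```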

Update 3 - calibration done

Okay, I've calibrated my Kinect 2. It's a lot better now, at least on the sd channel. Some of the points are still off but I think I can solve it by means of some prior knowledge, heuristics, sanity checks etc....

However, it's still way off on the qhd and the hd channel and I don't really understand why. From my algorithm's standpoint it's just a bigger point cloud and a bigger image. It ...


1 Answer


answered 2017-10-18 04:27:08 -0500

Hendrik Wiese

Okay, so the problem consisted of the following points. Read my updates for details:

  • Calibration of the Kinect 2 (yes, it's a tedious process but it has to be done... so rather do it now instead of wasting hours on trying to get around it like me. It's definitely worth it!)
  • The frames of the keypoint extraction and the Kinect point cloud have to match, of course. Sounds obvious but it apparently isn't.
