Kinect 2 mapping of 2D pixel to 3D Pointcloud2 point
Hi everyone,
I see that this is a question that has been answered a couple of times before but all the answers I have found didn't work out. So alas I'll have to ask once more.
In a ROS nodelet I'm receiving an (approximately) synchronized dataset of 1. a message of type sensor_msgs::Image and 2. a message of type sensor_msgs::PointCloud2, both from a Kinect 2 (using iai_kinect2 nodelets like kinect2_bridge).
What I'm trying to do now is to find a specific point in the PointCloud2 that corresponds to a certain pixel (u, v) in the image. I want to do it for a number of pixels one by one. I broadcast the recognized points via TF2_ROS. That works so far. I can visualize the PointCloud2 and the generated TF frames in RVIZ.
Unfortunately though, it seems that only some of the points I look up in the PointCloud2 are located correctly. Although they are correctly located in the 2D image, they are not mapped to the correct points in the cloud.
I'm picking the corresponding point from the point cloud like this (roughly pseudo-code, so don't nail me down on syntax errors):
void incoming_callback(const sensor_msgs::PointCloud2ConstPtr& pcl,
                       const sensor_msgs::ImageConstPtr& img,
                       int u, int v) {
    // Assumes an *organized* cloud (pcl->height > 1) that is pixel-registered
    // to img, with fields[0..2] being float32 x, y, z.
    // v is the row, so it scales with row_step; u is the column (point_step).
    const size_t point_index = v * pcl->row_step + u * pcl->point_step;
    float x, y, z;
    memcpy(&x, &pcl->data[point_index + pcl->fields[0].offset], sizeof(float));
    memcpy(&y, &pcl->data[point_index + pcl->fields[1].offset], sizeof(float));
    memcpy(&z, &pcl->data[point_index + pcl->fields[2].offset], sizeof(float));
    // NaN in x/y/z means there is no depth measurement at this pixel.
    .....
}
Where u (col) and v (row) are the coordinates of the pixel in the camera image.
I'm not sure why some of the points get mapped wrongly... maybe someone can give me some insight and/or point me to the royal road for doing this with ROS Kinetic Kame.
Thanks a lot!
Update 1 - assumption
Alright, I assume that it is related to the spatial offset between the depth image (from the IR camera) and the RGB image. If I could exactly configure the positions of the RGB cam and the IR cam in TF it should work. I just don't know the exact values for the relative positions of the sensors inside the Kinect 2. Anyone?
Update 2 - manual measurement
Measured it myself and came up with a distance of 50mm (~2") between the sensors. Still no luck. So would still appreciate some help! Thanks!
Update 3 - calibration done
Okay, I've calibrated my Kinect 2. It's a lot better now, at least on the sd channel. Some of the points are still off but I think I can solve it by means of some prior knowledge, heuristics, sanity checks etc....
However, it's still way off on the qhd and hd channels and I don't really understand why. From my algorithm's standpoint it's just a bigger point cloud and a bigger image. It ...