Correlating RGB Image with Depth Point Cloud? [closed]

asked 2012-06-05 05:27:30 -0600

gstanley

updated 2016-10-24 08:34:03 -0600

ngrennan

We're trying to change the coordinate frame that our robot uses, but first we need a sample of points and their locations before we can determine the transform matrix. We are using checkerboards so we can use the checkerboard corners as our points. However, since OpenCV works with a 2D image, we had to correlate each pixel with the "/camera/depth/points" point cloud (which arrives as a pcl::PointCloud<PointXYZ>). Everything seems to be pretty accurate, except that some points seem to be "off the board". That is, almost all of the points lie on a single plane (the board), and then one point, or an entire row or column of points, will be completely off the board, as if they "missed" it. I will edit this post and include an image, as it might be useful.
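For context, the pixel-to-point correlation we're doing relies on the cloud from "/camera/depth/points" being *organized* (same width and height as the image, row-major). A minimal sketch of that lookup, in plain Python with an illustrative flat list standing in for the cloud (the real code uses pcl::PointCloud<PointXYZ> in C++, and the function name here is hypothetical):

```python
# Sketch: looking up the 3D point behind a pixel (u, v) in an
# organized point cloud, stored row-major with dimensions
# width x height, as the Kinect's depth topic publishes it.

def point_at_pixel(cloud, width, u, v):
    """Return the (x, y, z) tuple at image column u, row v.

    `cloud` is a flat, row-major list of (x, y, z) tuples of
    length width * height; invalid pixels would hold NaNs.
    """
    return cloud[v * width + u]

# Toy 4x2 "cloud": each point encodes its own pixel coordinates,
# so the lookup is easy to check by eye.
width, height = 4, 2
cloud = [(float(u), float(v), 1.0)
         for v in range(height) for u in range(width)]

print(point_at_pixel(cloud, width, 2, 1))  # -> (2.0, 1.0, 1.0)
```

Note that OpenCV returns checkerboard corners at sub-pixel precision, so rounding (u, v) to the nearest integer before indexing can already put a corner on a neighboring depth pixel; near the board's edge that neighbor may lie past the board entirely, which could explain points that "miss" it.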

We think this could be an error in the intrinsic calibration, or a problem with the Kinect's depth-to-RGB registration. However, we could be wrong. Where do you think the problem might lie, and what steps should we take to fix it?


EDIT: Here is an image of the error. The blue circles are the corners, and the mass of red ones are just surrounding points. As you can see, there is one point that is very far from where it should be.

EDIT 2: Additionally (I'm not sure if it matters), the lab I'm in currently uses Diamondback. I'm not sure if this was a problem that was fixed in a later version.


Closed for the following reason question is not relevant or outdated by tfoote
close date 2015-03-03 01:50:52.871105


It might be worth trying on Fuerte to see if the problem was fixed.

Eric Perko  ( 2012-06-05 13:28:46 -0600 )

Can you post code or launch files for how you are doing this? Maybe that will help us ...

Kevin  ( 2012-06-05 16:22:32 -0600 )