How do cloud and image indices correspond in tod_training?

asked 2011-04-25 21:52:21 -0600

Julius

updated 2011-04-25 21:53:10 -0600

Hi, my task is to retrieve the 3d points that correspond to 2d points (guess inliers) in a query cloud. There is tod::PCLToPoints, which does exactly that. I am wondering, though, why there is one common scaling factor for both the x and y indices from image coordinate space:

int u = float(cloud.width) / image.cols * x;
int v = float(cloud.width) / image.cols * y;
// such that cloud.at(u, v) corresponds to image(x, y)

What's the reason behind this? I expected the proportion between y and v to be given by dividing the cloud height by the number of rows in the image, yet experiments show that the tod::PCLToPoints approach is correct (on point clouds and images gathered with the tod_training scripts and a Kinect camera).


1 Answer


answered 2011-06-17 11:03:26 -0600

Vincent Rabaud

The depth image is always 640x480, but the RGB image itself can have a different size/aspect ratio with the Kinect. Hence, you need to rely on cloud.width to get the right dimensions/ratio.

