What is the source of your data? If you have the camera parameters, i.e. the K matrix (or the full projection matrix M), you can simply find the 2D point (image point) by

p = K[R|T] * P

P being your 3D point (in homogeneous coordinates), and R and T your camera rotation and translation. See camera calibration for finding your camera parameters; the viewpoint is usually stored in the point cloud header.
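As a minimal sketch with Eigen (the intrinsics and pose below are placeholder values; replace them with your own calibration and viewpoint):

    // Project a 3D point to a pixel via p = K [R|T] P (homogeneous).
    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      // Placeholder intrinsics; take fx, fy, cx, cy from your camera calibration.
      Eigen::Matrix3d K;
      K << 525.0,   0.0, 319.5,
             0.0, 525.0, 239.5,
             0.0,   0.0,   1.0;

      // Placeholder extrinsics: camera rotation R and translation T in the cloud frame.
      Eigen::Matrix3d R = Eigen::Matrix3d::Identity();
      Eigen::Vector3d T(0.0, 0.0, 0.0);

      Eigen::Vector3d P(0.5, -0.2, 2.0);    // your 3D point
      Eigen::Vector3d p = K * (R * P + T);  // homogeneous image point

      // Dehomogenize to get pixel coordinates.
      std::cout << "pixel: (" << p.x() / p.z() << ", " << p.y() / p.z() << ")\n";
      return 0;
    }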

If the camera parameters are not available but you have a 3D point for every image point (highly unlikely unless it is a synthetic dataset), you could recover the 3D-to-2D relation from the image edges and the corresponding 3D points: given the viewpoint, find the extreme points in your 3D data by joining each 3D point to the camera center; the rays with the maximum angle between them should correspond to the image border points. Then estimate the projection with SVD or any other method (minimum 6 correspondences; known to have some error).
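The standard tool for that last step is the DLT: stack two equations per correspondence and take the right singular vector of the smallest singular value as the 3x4 projection matrix. A rough sketch (the function name and Eigen types are my own choice; the result is only defined up to scale and is sensitive to noise):

    // DLT resection: estimate M (3x4) with x ~ M X from >= 6 correspondences.
    #include <Eigen/Dense>
    #include <vector>

    Eigen::Matrix<double, 3, 4> dlt(const std::vector<Eigen::Vector3d>& pts3d,
                                    const std::vector<Eigen::Vector2d>& pts2d) {
      const int n = static_cast<int>(pts3d.size());  // need n >= 6
      Eigen::MatrixXd A(2 * n, 12);
      for (int i = 0; i < n; ++i) {
        const Eigen::Vector3d& X = pts3d[i];
        const double u = pts2d[i].x(), v = pts2d[i].y();
        A.row(2 * i)     << X.x(), X.y(), X.z(), 1, 0, 0, 0, 0,
                            -u * X.x(), -u * X.y(), -u * X.z(), -u;
        A.row(2 * i + 1) << 0, 0, 0, 0, X.x(), X.y(), X.z(), 1,
                            -v * X.x(), -v * X.y(), -v * X.z(), -v;
      }
      // The solution is the null-space direction: the right singular vector
      // belonging to the smallest singular value.
      Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeFullV);
      Eigen::Matrix<double, 12, 1> m = svd.matrixV().col(11);
      return Eigen::Map<Eigen::Matrix<double, 3, 4, Eigen::RowMajor>>(m.data());
    }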

Also, you could try adding random colors to your points (using pcl::PointXYZRGB) and then converting the cloud to an image; once you have the image, match it against the original image to recover which pixel each point produced (I doubt that will work, but you can give it a try).
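The coloring part would look something like this (a sketch; PCL clouds are iterable and the r/g/b fields are standard on pcl::PointXYZRGB):

    // Tag every point with a random color so points can be traced back after rendering.
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <cstdint>
    #include <cstdlib>

    void colorize(pcl::PointCloud<pcl::PointXYZRGB>& cloud) {
      for (auto& pt : cloud) {
        pt.r = static_cast<std::uint8_t>(std::rand() % 256);
        pt.g = static_cast<std::uint8_t>(std::rand() % 256);
        pt.b = static_cast<std::uint8_t>(std::rand() % 256);
      }
    }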

Good luck!