The 'world coordinates' of a pointcloud are represented in the transform data. If your pointcloud is in the *sensor* frame, and you have a tf transform (link) available to a *map* or *world* frame, you can use the tf library to transform the pointcloud into the desired frame, which will transform the positions of the points in the pointcloud.

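To make the "transform the positions of the points" step concrete, here is a minimal pure-Python sketch of what a tf transform does to each point: rotate, then translate. The yaw angle and offsets are hypothetical values for illustration; in a real ROS node you would look the transform up with a tf listener (e.g. `tf2_ros`) rather than hard-coding it, and let the library apply it to the whole cloud.

```python
import math

def transform_points(points, yaw, tx, ty, tz):
    """Apply a planar (yaw) rotation and a translation to (x, y, z) points.

    This is the same linear algebra a tf transform performs, reduced to a
    rotation about z plus an offset for readability.
    """
    c, s = math.cos(yaw), math.sin(yaw)
    out = []
    for x, y, z in points:
        # Rotate in the x-y plane, then translate into the target frame.
        out.append((c * x - s * y + tx, s * x + c * y + ty, z + tz))
    return out

# Sensor-frame points; hypothetical sensor->map transform:
# 90 degree yaw and a 1 m offset along x.
cloud = [(1.0, 0.0, 0.5), (0.0, 2.0, 0.0)]
world = transform_points(cloud, math.pi / 2, 1.0, 0.0, 0.0)
```

The full 3D case just uses a 3x3 rotation matrix (or quaternion) instead of the planar rotation, which is where Eigen comes in on the C++ side.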
The image you're talking about sounds like a projection of the pointcloud down to a 2D plane, which you can do by iterating over every point in the cloud (which you're already doing) and applying some linear algebra (look into the Eigen library). To actually build the image you'll probably find OpenCV and this conversion useful; alternatively, you could build the sensor_msgs/Image directly.
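As a rough sketch of that per-point projection, the loop below drops z and quantizes (x, y) into pixel cells of a top-down grid. The grid size and resolution are hypothetical; with OpenCV you would write the same values into a `cv::Mat` (or a numpy array on the Python side) instead of nested lists.

```python
def project_to_image(points, resolution=0.5, width=4, height=4):
    """Project (x, y, z) points onto a 2D grid: drop z, quantize (x, y)
    into pixel cells, and mark occupied cells with 255."""
    img = [[0] * width for _ in range(height)]
    for x, y, _z in points:
        col = int(x / resolution)
        row = int(y / resolution)
        # Points that fall outside the image bounds are simply skipped.
        if 0 <= row < height and 0 <= col < width:
            img[row][col] = 255
    return img

# Two points land in the grid; the third is out of bounds and dropped.
cloud = [(0.2, 0.2, 1.0), (1.4, 0.7, -0.3), (9.0, 9.0, 0.0)]
image = project_to_image(cloud)
```

Instead of a binary occupied/free value you could also store height or intensity per cell, which is a common way to turn a cloud into a grayscale image.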

EDIT: Found this projection API in PCL as well which should make it easier.

ROS Answers is licensed under Creative Commons Attribution 3.0. Content on this site is licensed under a Creative Commons Attribution Share Alike 3.0 license.