I think the customary way this is handled is that your URDF for the robot would include the sensor's pose, and the sensor's tf would be published by the robot_state_publisher. Then, when your lidar (or other sensor) publishes its point cloud, the messages carry that frame in their headers, so any consumer can transform the data via tf.
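As a minimal sketch of what that looks like in the URDF (the link/joint names, offsets, and the backwards-facing yaw here are hypothetical, just to illustrate a fixed sensor mount):

    <!-- sensor mounted on base_link, rotated 180 degrees (pointing backwards) -->
    <link name="lidar_link"/>
    <joint name="lidar_joint" type="fixed">
      <parent link="base_link"/>
      <child link="lidar_link"/>
      <!-- xyz offset and yaw of pi are placeholders; measure your real mount -->
      <origin xyz="0.1 0 0.2" rpy="0 0 3.14159"/>
    </joint>

With this in place, robot_state_publisher broadcasts base_link -> lidar_link on /tf_static, and the sensor driver just needs to stamp its clouds with frame_id lidar_link.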

If you don't have a URDF for the robot (or don't want to mess around with editing it), then your idea of setting the target_frame for the pointcloud_to_laserscan node and publishing the corresponding static transform yourself is a good way to achieve what you want. But it is a hack, because only pointcloud_to_laserscan will know the correct orientation. Using the URDF is the correct way, because then all nodes will know the sensor points backwards. In other words, if you add a node in the future that needs that data, you'll have to repeat a similar hack unless you edit the URDF.
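For completeness, a sketch of that workaround as a launch file (assuming ROS 1; the cloud topic /lidar/points and the frame names base_link / lidar_link are hypothetical placeholders):

    <launch>
      <!-- publish the sensor pose yourself: x y z yaw pitch roll parent child -->
      <node pkg="tf2_ros" type="static_transform_publisher" name="lidar_tf"
            args="0.1 0 0.2 3.14159 0 0 base_link lidar_link"/>

      <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node"
            name="pointcloud_to_laserscan">
        <remap from="cloud_in" to="/lidar/points"/>
        <!-- transform the cloud into base_link before building the scan -->
        <param name="target_frame" value="base_link"/>
      </node>
    </launch>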