
Revision history

initial version

I think you might not have understood the tutorial or the purpose of the laser_assembler correctly. All transform information must already be available from some other tf source; the assembler does not solve that problem, it assumes the transforms are already there.

It also does not modify the laser scans themselves. So what you see in rviz (the correctly transformed scans) is actually the input to the laser_assembler. The desired output is: take a bunch of those single scans (the bunch is defined by time stamps) and merge them into one single point cloud in an arbitrary fixed frame.
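To make that concrete, a minimal launch-file sketch for the assembler might look like the one below. The node and parameter names (`laser_scan_assembler`, `fixed_frame`, `max_scans`) come from the laser_assembler package; the input topic name `tilt_scan` and the frame `base_link` are assumptions you would replace with your robot's own.

```xml
<launch>
  <node type="laser_scan_assembler" pkg="laser_assembler" name="my_assembler">
    <!-- Remap the assembler's "scan" input to your laser's topic (assumed name) -->
    <remap from="scan" to="tilt_scan"/>
    <!-- Keep at most this many scans in the rolling buffer -->
    <param name="max_scans" type="int" value="400"/>
    <!-- The arbitrary fixed frame the assembled cloud is expressed in -->
    <param name="fixed_frame" type="string" value="base_link"/>
  </node>
</launch>
```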

Laser scans (the input) should appear in rviz continuously; the assembled point cloud appears as a single message each time the assembler has finished assembling one.
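The assembler only publishes a cloud when something asks for one. A rough Python client sketch, along the lines of the laser_assembler tutorial, calls the `assemble_scans` service with a time range and receives the merged cloud (this assumes a running ROS master and the assembler node from above):

```python
#!/usr/bin/env python
import rospy
from laser_assembler.srv import AssembleScans

rospy.init_node("assemble_scans_client")
rospy.wait_for_service("assemble_scans")
try:
    assemble_scans = rospy.ServiceProxy("assemble_scans", AssembleScans)
    # Merge every scan buffered between time 0 and now into one cloud
    resp = assemble_scans(rospy.Time(0, 0), rospy.get_rostime())
    rospy.loginfo("Got cloud with %u points", len(resp.cloud.points))
except rospy.ServiceException as e:
    rospy.logerr("Service call failed: %s", e)
```

The time stamps in the request are what defines the "bunch" of scans mentioned above.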


Regarding your edit: the input the laser_assembler needs is laser scans plus matching tf, and that is what your robot needs to send. Configure the laser with a frame once at startup and just keep publishing scans stamped with that frame. The tf transform to that laser frame should then represent the sensor's current position.
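If the laser is rigidly mounted on the robot, the fixed part of that tf chain can be published with the `static_transform_publisher` tool from the tf package; the offset and frame names below are assumptions for illustration:

```shell
# Laser mounted (assumed) 0.2 m above base_link, no rotation;
# arguments are: x y z yaw pitch roll parent_frame child_frame period_ms
rosrun tf static_transform_publisher 0 0 0.2 0 0 0 base_link laser 100
```

The moving part of the chain (e.g. odom to base_link) has to come from your odometry or localization, not from the assembler.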