When you move the lidar unit, ROS needs to know where it has been moved to. That is exactly what the transform tells the ROS system: the position and orientation of the link at each point in time. You cannot simply walk around a room with the lidar and expect ROS to build a map; something has to tell ROS where the lidar is at every moment, and that is the transform's job. I think you would benefit from doing some reading on transforms here.
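
For reference, here is a minimal sketch of how a transform gets broadcast with tf2_ros (ROS 1, Python). The frame names odom and laser and the hard-coded pose are placeholders I've chosen for illustration, not anything from your setup; in a real system the pose values would come from odometry or localization, which is exactly the piece that is missing with a hand-carried lidar.

    import rospy
    import tf2_ros
    from geometry_msgs.msg import TransformStamped

    rospy.init_node('lidar_tf_broadcaster')
    br = tf2_ros.TransformBroadcaster()

    t = TransformStamped()
    t.header.frame_id = 'odom'   # assumed fixed frame name
    t.child_frame_id = 'laser'   # assumed lidar frame name

    rate = rospy.Rate(30)
    while not rospy.is_shutdown():
        t.header.stamp = rospy.Time.now()
        # Placeholder pose: in a real system these values would come from
        # odometry or a localization source, not constants.
        t.transform.translation.x = 0.0
        t.transform.translation.y = 0.0
        t.transform.translation.z = 0.0
        t.transform.rotation.x = 0.0
        t.transform.rotation.y = 0.0
        t.transform.rotation.z = 0.0
        t.transform.rotation.w = 1.0
        br.sendTransform(t)
        rate.sleep()

The point is that something has to fill in those pose values for every timestamp; that is the gap an odometry source or a SLAM package has to fill.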

You could look at a SLAM-based solution, but many of these require some odometry from wheel encoders or the like. I know that Hector SLAM doesn't require odometry, but it will only work if you move the lidar in a 2D plane. I'm not aware of any 3D pointcloud-based SLAM solutions that don't require odometry, but they might exist. LOAM Velodyne looks promising, but it does require an IMU.
