
Anyway to use the map data from this website for SLAM?

asked 2020-11-22 00:23:28 -0500

sisaha9

updated 2020-11-22 00:25:02 -0500

I was using RTABMAP before since I wanted to be able to differentiate between the green patch of grass and the actual track (which Lidar wouldn't be able to do as it won't hit anything). But it doesn't look like it does that. Was wondering if there was a more appropriate package that makes use of any of the map data from that website.

Edit: Here are examples of some of the races for this track:


1 Answer


answered 2020-11-28 19:52:10 -0500

matlabbe

updated 2021-08-09 10:26:25 -0500


If you want to play PCAP data on ROS, see this page:

For an example with RTAB-Map (>=0.20.7 built with libpointmatcher support), based on this ouster example, you can try the following:

1) Play a PCAP (I tried the 1.4 GB one):

roslaunch velodyne_pointcloud 32e_points.launch pcap:=$HOME/Downloads/velodyne/2016-05-28hdl32.pcap read_once:=true rpm:=1200
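Before wiring the driver into SLAM, it can be worth verifying that points are actually being published (topic name taken from the velodyne launch file above):

```shell
# Publishing rate should roughly match the configured RPM
rostopic hz /velodyne_points

# One message shows the frame_id and the point fields (x, y, z, intensity, ring)
rostopic echo -n1 --noarr /velodyne_points
```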

2) Add a base_link frame so that the x-axis points forward along the car (the lidar's x-axis points to the right):

rosrun tf static_transform_publisher 0 0 0 -1.570796327 0 0 base_link velodyne 100

3) Start mapping:

roslaunch rtabmap_ros rtabmap.launch \
   use_sim_time:=false \
   depth:=false \
   subscribe_scan_cloud:=true \
   frame_id:=base_link \
   scan_cloud_topic:=/velodyne_points \
   scan_topic:=/disabled \
   scan_cloud_max_points:=34750 \
   scan_cloud_assembling:=true \
   scan_cloud_assembling_time:=0.5 \
   scan_cloud_assembling_voxel_size:=0.5 \
   icp_odometry:=true \
   approx_sync:=false \
   args:="-d \
      --RGBD/CreateOccupancyGrid false \
      --RGBD/ProximityMaxGraphDepth 0 \
      --RGBD/ProximityPathMaxNeighbors 5 \
      --RGBD/LocalRadius 30 \
      --RGBD/ProximityMaxPaths 1 \
      --Rtabmap/DetectionRate 0 \
      --Icp/PM true \
      --Icp/VoxelSize 0.5 \
      --Icp/MaxTranslation 10 \
      --Icp/MaxCorrespondenceDistance 1.5 \
      --Icp/PMOutlierRatio 0.7 \
      --Icp/Iterations 30 \
      --Icp/PointToPlane true \
      --Icp/PMMatcherKnn 3 \
      --Icp/PMMatcherEpsilon 1 \
      --Icp/PMMatcherIntensity true \
      --Icp/Epsilon 0.0001 \
      --Icp/PointToPlaneK 10 \
      --Icp/PointToPlaneRadius 0 \
      --Icp/CorrespondenceRatio 0.2 \
      --Icp/PointToPlaneGroundNormalsUp 0.8 \
      --Icp/PointToPlaneMinComplexity 0.0" \
   odom_args:="--Odom/ScanKeyFrameThr 0.7 \
      --OdomF2M/ScanMaxSize 10000 \
      --OdomF2M/ScanSubtractRadius 0.5 \
      --Icp/Iterations 10 \
      --Icp/CorrespondenceRatio 0.01 \
      --Icp/MaxTranslation 2 \
      --Icp/PMOutlierRatio 0.4 \
      --Icp/PointToPlane false"
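By default the resulting map is saved to ~/.ros/rtabmap.db, and it can be inspected offline afterwards. The tools below ship with rtabmap; the export flag shown is a sketch, so check --help for the exact options:

```shell
# Browse scans, loop closures and the pose graph offline
rtabmap-databaseViewer ~/.ros/rtabmap.db

# Export the assembled point cloud (flag is a sketch; see rtabmap-export --help)
rtabmap-export --cloud ~/.ros/rtabmap.db
```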

Here are some results:

[image]

Colored with intensity from LiDAR:

[image]

LiDAR odometry only (red lines are loop closures):

[image]

With loop closure optimization:

[image]

To distinguish the track from the grass, the velodyne's intensity channel can give a good idea. This can be seen more clearly in this video:

The parameters above are heavily tuned for this dataset in order to close loops without a camera. Combining with a stereo camera could help detect loop closures more robustly.


Integrating a stereo camera is possible (assuming it is already calibrated, the left and right cameras are synchronized, and it provides the common stereo topics). You could add to the rtabmap.launch command above:

stereo:=true \
rgbd_sync:=true \
approx_sync:=true \
approx_rgbd_sync:=false \
left_image_topic:=/stereo_camera/left/image_rect \
right_image_topic:=/stereo_camera/right/image_rect \
left_camera_info_topic:=/stereo_camera/left/camera_info \
right_camera_info_topic:=/stereo_camera/right/camera_info

This will synchronize the stereo data with the lidar data. You could also do stereo visual odometry instead of lidar odometry; I think it would be a little more robust, given the lack of geometry in this kind of environment. To use stereo odometry, remove icp_odometry:=true above and set --Rtabmap/DetectionRate back to 2. As with the lidar, the visual parameters may need some tuning depending on the resolution of the cameras and this kind of environment. If you want to use external odometry from another visual odometry package, add:

visual_odometry:=false
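Putting the pieces together, a stereo-odometry variant of the mapping command could look roughly like this (a sketch only: topic names are assumed, the ICP odometry options are dropped, and the lidar is kept for building the map cloud while odometry comes from stereo):

```shell
roslaunch rtabmap_ros rtabmap.launch \
   stereo:=true \
   rgbd_sync:=true \
   approx_sync:=true \
   approx_rgbd_sync:=false \
   frame_id:=base_link \
   subscribe_scan_cloud:=true \
   scan_cloud_topic:=/velodyne_points \
   left_image_topic:=/stereo_camera/left/image_rect \
   right_image_topic:=/stereo_camera/right/image_rect \
   left_camera_info_topic:=/stereo_camera/left/camera_info \
   right_camera_info_topic:=/stereo_camera/right/camera_info \
   args:="-d --Rtabmap/DetectionRate 2"
```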


Wow, that is amazing. I didn't think the lidar information could actually be useful like that. How do you recommend combining it with a stereo camera? Would RTAB-Map do it automatically, or would I have to change a parameter?

sisaha9 (2020-11-29 13:27:54 -0500)

I edited the answer :)

matlabbe (2020-11-29 16:04:29 -0500)

Thanks! Just curious, but this is likely not possible with a 2D lidar, right? The intensity channel comes from the fact that the Velodyne is a 3D lidar. So would we have to rely more on a stereo map, or is there a way to use this 3D data with a 2D lidar?

sisaha9 (2020-11-29 16:48:01 -0500)

With a tilted 2D lidar, it could be possible, along with stereo odometry, to assemble the scans in 3D. Some 2D lidars (like SICK) also have an intensity (or reflectivity) channel.

matlabbe (2020-11-29 19:25:55 -0500)

I have an Intel RealSense D455 camera. I was wondering if it is possible to use this approach by converting the RealSense depth images to laser scans.

sisaha9 (2020-12-01 12:57:56 -0500)

No, use rtabmap as stereo SLAM directly (or RGB-D SLAM when using the depth image from the camera). Those parameters are really tuned for ring-like lidars such as Velodyne or Ouster.

matlabbe (2020-12-10 12:12:49 -0500)
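For reference, the usual RGB-D pattern for a camera like the D455 is to feed the color and aligned-depth topics straight into rtabmap.launch, with no laser-scan conversion needed (topic names assumed from realsense2_camera with align_depth enabled):

```shell
# Start the camera with depth registered to the color frame
roslaunch realsense2_camera rs_camera.launch align_depth:=true

# RGB-D SLAM directly on the camera topics
roslaunch rtabmap_ros rtabmap.launch \
   rgb_topic:=/camera/color/image_raw \
   depth_topic:=/camera/aligned_depth_to_color/image_raw \
   camera_info_topic:=/camera/color/camera_info \
   approx_sync:=false
```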




Seen: 811 times
