
360 Laser Distance Sensor LDS-01 with two layers above it

asked 2019-10-14 10:16:56 -0500

adel

Hello everyone, I added 3 more layers to my TurtleBot3, mounted above the laser scanner (LDS-01). Navigation and SLAM performance suffered because the scanner sees the support posts holding the layers as tiny obstacles. I think one could perhaps adjust the sensor's minimum range so that these supports are ignored, as if they did not exist, but I am not sure and have no idea how to change it. If anyone has ideas to solve this problem, please share them with me. Here you can see a photo of the robot after adding the 3 layers:

Thanks in advance


2 Answers


answered 2019-10-15 02:17:03 -0500

Delb

You can achieve this by filtering the laser scans to discard returns that are too close. Unfortunately, with the LDS-01 you cannot change the range_min parameter simply through a launch file, so you have two options:

  • Create a simple node subscribing to /scan (or the topic published by the lidar, if you have changed the default name), filter the values manually, and publish the new laser scan data on a new topic, e.g. /scan_filtered.
  • Use the laser_filters package, which does this for you and also lets you apply more complex filters.

In both cases you take the data from the /scan topic as input, publish the filtered data on a new topic such as /scan_filtered, and simply change the SLAM configuration to use this new topic as input.
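The first option can be sketched as below. This is a minimal, hedged example, not the asker's actual code: the topic names /scan and /scan_filtered, the node name scan_filter, and the 0.25 m threshold are all assumptions (the threshold should be tuned to just beyond the support posts). The core masking logic is a plain function; the ROS wiring is shown in comments since it needs a running ROS environment.

```python
MIN_VALID_RANGE = 0.25  # metres; assumed value, tune to clear the support posts

def mask_close_ranges(ranges, min_valid=MIN_VALID_RANGE):
    """Replace returns closer than min_valid with +inf so downstream
    consumers (SLAM, costmaps) treat them as invalid readings."""
    return [r if r >= min_valid else float('inf') for r in ranges]

# ROS 1 wiring (sketch, requires rospy and a roscore):
#
# import rospy
# from sensor_msgs.msg import LaserScan
#
# def callback(msg, pub):
#     out = msg  # header, angle_min/max and increments stay unchanged
#     out.ranges = mask_close_ranges(list(msg.ranges))
#     out.range_min = max(msg.range_min, MIN_VALID_RANGE)
#     pub.publish(out)
#
# if __name__ == '__main__':
#     rospy.init_node('scan_filter')
#     pub = rospy.Publisher('/scan_filtered', LaserScan, queue_size=1)
#     rospy.Subscriber('/scan', LaserScan, callback, callback_args=pub)
#     rospy.spin()
```

Replacing close returns with inf (rather than 0) matters: a 0 reading can be interpreted as an obstacle at the sensor origin, while an out-of-range value is discarded.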


answered 2019-10-15 04:15:40 -0500

adel

updated 2019-10-15 09:35:18 -0500

Thanks for the response. Someone gave me this solution and it works properly: TB3 uses the hls_lfcd_lds_driver package to publish the lidar data, so modify the minimum range in the driver's source code to a value larger than the radius of the robot.

But there is still a small problem: while navigating, the robot still sees a few points around it as obstacles although there is nothing there. This harms navigation performance and causes the robot to get stuck in place and perform its recovery behaviors many times even though nothing around it should block it. Do you have any idea why that happens? You can see an image here:

