Converting raw lidar data to something more useful

I have a small robot (TurtleBot3) with a lidar. For a demonstration I would like to process the data coming from the lidar in the following way:

1. Pay attention only to a range of angles (e.g. -15 degrees to +15 degrees), and reduce that window to:
   1. distance to nearest obstacle
   2. rate of change of distance (approach speed)
   3. rate of change of approach speed (approach acceleration)
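The three quantities above can be sketched in plain Python: take the minimum range over the angular window, then estimate speed and acceleration by finite differences between scans. This mirrors the `ranges`, `angle_min`, and `angle_increment` fields of a `sensor_msgs/LaserScan` message, but doesn't use ROS itself; the function names are illustrative, not an existing API.

```python
import math

def nearest_in_window(ranges, angle_min, angle_increment,
                      lo=math.radians(-15), hi=math.radians(15)):
    """Minimum range among beams whose angle falls in [lo, hi].

    Arguments mirror sensor_msgs/LaserScan fields (hypothetical helper).
    """
    best = float('inf')
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_increment
        if lo <= a <= hi and math.isfinite(r):
            best = min(best, r)
    return best

def derivatives(d_prev, d_curr, v_prev, dt):
    """Finite-difference approach speed and acceleration between two scans."""
    v_curr = (d_curr - d_prev) / dt   # negative = closing on the obstacle
    a_curr = (v_curr - v_prev) / dt
    return v_curr, a_curr
```

In practice `dt` would come from the scan header timestamps, and the raw differences are noisy enough that some smoothing is usually needed before using the acceleration estimate.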

I am assuming that this is elementary and that there are nodes/libraries in ROS that do this. It's so similar or closely related to the computations done in localization and SLAM packages that I almost assume this logic already exists somewhere. Maybe it's just not factored out, though.

Also of interest, if not code, is a paper or writeup giving the "correct" or "best" mathematics for it.

Thanks for your help!


I don't know of an existing node that will do what you want. If you need to write it yourself, you'll need to understand the LaserScan and Odometry messages. Your node could listen to the topics for both those message types and keep track of the distances over various angle ranges, where the total angle range is -15 to +15 degrees, as you indicate. Each bin could cover a range of, say, 5 degrees (or less, if you like).

when a new Odometry message arrives:
save the new robot pose orientation angle

and

when a new LaserScan message arrives:
remember the time of the current scan
associate the latest robot pose orientation angle with the current scan
put the latest scan results into a bin array based on scan angle
adjust the prior scan array to the left or right based on the difference between the current
orientation angle and the orientation angle from the prior scan
calculate the distance difference for each bin between the current scan and the last
calculate the apparent velocities by dividing the distance differences by the time difference
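The scan-callback steps above could be outlined like this in plain Python. It uses NaN for "unknown" slots; the 5-degree bin width, the rounding of the rotation to whole bins, and all function names are illustrative assumptions, not an existing ROS package.

```python
import math

NAN = float('nan')

def bin_scan(ranges, angle_min, angle_increment, bin_width=math.radians(5),
             window=(math.radians(-15), math.radians(15))):
    """Bucket scan returns into fixed angular bins; keep the minimum per bin."""
    lo, hi = window
    n_bins = int(round((hi - lo) / bin_width))
    bins = [NAN] * n_bins
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_increment
        if lo <= a < hi and math.isfinite(r):
            b = int((a - lo) / bin_width)
            if math.isnan(bins[b]) or r < bins[b]:
                bins[b] = r
    return bins

def shift_bins(bins, dtheta, bin_width=math.radians(5)):
    """Shift the prior bin array by the robot's rotation since the last scan.

    Vacated slots are filled with NaN ("unknown"), as described above.
    """
    k = int(round(dtheta / bin_width))
    if k == 0:
        return list(bins)
    if k > 0:
        return [NAN] * k + bins[:-k]
    return bins[-k:] + [NAN] * (-k)

def bin_velocities(prev_bins, curr_bins, dt):
    """Per-bin apparent velocity; NaN wherever either sample is unknown."""
    out = []
    for p, c in zip(prev_bins, curr_bins):
        out.append(NAN if (math.isnan(p) or math.isnan(c)) else (c - p) / dt)
    return out
```

A real node would call `bin_scan` from the LaserScan callback, shift the previous bin array by the orientation difference taken from the latest Odometry message, and then compute the velocities with the scan-to-scan time difference.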

You'll also have to handle these potential problems:

• the angle ranges representing each bin in the arrays should be larger than the difference between adjacent laser scans, to handle the fact that the robot turning angle will not, in general, be a multiple of the scan angle difference
• when you shift a bin array left or right, you'll have to fill in the blank slots with something representing "unknown"
• when calculating distance differences and velocities you'll have to handle those "unknown" values
• the Odometry messages may arrive at a different cadence from LaserScan messages, making the angle differences less accurate; faster odometry will reduce this error
• you may need to use filtering or a probabilistic approach to handle the appearance of "unknown" distances at the edges of your bins because of robot turning; that is, your navigation algorithm should be able to handle the case when you know the distance at a particular angle but cannot calculate the apparent velocity
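For the filtering point, one simple option is an exponential moving average that holds its last estimate whenever an "unknown" (NaN) sample arrives. This is just an illustrative sketch of that idea, not code from any existing package:

```python
import math

class NanTolerantEMA:
    """Exponential moving average that ignores NaN ("unknown") samples.

    alpha in (0, 1]: higher means less smoothing, faster response.
    """
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.value = float('nan')

    def update(self, x):
        if math.isnan(x):
            return self.value          # hold the last estimate on unknown input
        if math.isnan(self.value):
            self.value = x             # first valid sample seeds the filter
        else:
            self.value += self.alpha * (x - self.value)
        return self.value
```

One filter per bin smooths the per-bin velocity estimates, and the held-value behavior means a brief run of "unknown" samples at the window edges doesn't reset the estimate.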

Of course I now realize this isn't exactly what you asked for: keeping track of the distance to the _nearest obstacle_. But the problem with keeping track of only one obstacle is that if the robot turns you may see a new "closest" obstacle, making your other calculations meaningless.


Thanks for the detailed response! That sounds doable. But it's so similar or closely related to the computations done in localization and SLAM packages that I almost assume this logic already exists somewhere. Maybe it's just not factored out, though.