5

How to deal with slow RPLidar scan rate on fast moving robot?

asked 2016-08-29 11:20:06 -0500

nettercm

I'm relatively new to ROS, but I'm not new to Robotics.

I'm trying to understand if and how hector_slam, gmapping, and similar packages typically used for mapping and/or localization deal with the following:

Suppose the robot moves at 1m/sec. Suppose the Lidar is completing a 360deg scan at a rate of 5Hz, i.e. it takes 0.2sec for one full scan.

In the time it takes for one scan to complete, the robot will have moved as much as 0.2 meters (around 8 inches) if it is going straight at full speed. A similar statement can be made about rotation.
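To put numbers on it, the worst-case skew is easy to compute (a quick back-of-the-envelope sketch; the 90 deg/s rotation rate is just an illustrative assumption, the other figures are from above):

```python
# Worst-case distortion of one full revolution on a moving base.
scan_rate_hz = 5.0                 # RPLidar full-scan rate
scan_time_s = 1.0 / scan_rate_hz   # 0.2 s per revolution

linear_speed = 1.0                 # m/s, driving straight
skew_m = linear_speed * scan_time_s
print(f"translation during one scan: {skew_m:.2f} m")

# Same idea for rotation: at 90 deg/s the last beam is taken from a
# heading 18 degrees away from the first beam's heading.
angular_speed_dps = 90.0
skew_deg = angular_speed_dps * scan_time_s
print(f"rotation during one scan: {skew_deg:.1f} deg")
```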

Are the ROS components that consume laser scan messages simply assuming that the robot has not moved during the time it took to complete a scan?

Or is there some logic somewhere that performs some sort of motion compensation?

Before I started working with ROS and RPLidar, I was in the process of putting the various pieces together "from scratch". For example, I was using parts of MRPT, specifically ICP, to localize the robot within an existing map / occupancy grid. That occupancy grid was created by simply updating each grid cell after doing all the calculations necessary to map each laser reading into what ROS calls the "odometry frame". I was using the XV-11 laser sensor instead of RPLidar, but it's the same thing at the end of the day...

In order to build up this occupancy grid while the robot is in motion, I found it to be necessary to use odometry information and transform the incoming laser scan information in such a manner that it effectively is "motion compensated".

What I'm asking, I guess, is: is the equivalent of this "motion compensation" happening anywhere in ROS? Or is the problem simply being ignored, because most people use laser scanners with a scan rate 10x that of the RPLidar?

Thanks.


Comments

Very good question. I would guess that hector_mapping and gmapping do some kind of "motion compensation".

Shay  ( 2016-08-30 02:43:32 -0500 )

2 Answers

4

answered 2020-04-23 15:31:51 -0500

nettercm

updated 2020-04-26 14:48:23 -0500

Updated answer: I think what I'm looking for is LaserProjection::transformLaserScanToPointCloud() from the laser_geometry package.

Old answer: I asked this question about 3.5 years ago. I think the answer is "No, ROS will not perform any kind of motion compensation or de-warping on lidar scans that were taken on a moving base". In my experience, gmapping and hector_slam get confused when the robot is moving at anything beyond a snail's pace, especially when the robot is turning.

Just to illustrate what I'm talking about:

When my robot (red arrow) is stationary in the corner of a hallway, this is what the lidar scan looks like. Very much as expected.

[image: lidar scan with the robot (red arrow) stationary in the corner of a hallway]

Once the robot starts rotating in place, i.e. turning, the data looks like this, however.

[image: the same scene, with the scan badly warped while the robot rotates in place]

Am I the only one having this issue?

Granted, the above example of continuously turning in place is an extreme case, chosen to illustrate the problem and to evaluate solutions. But the fact of the matter is that algorithms like hector_slam and gmapping don't like the distorted scan data at all. Rotational motion is in fact the biggest issue; translation (i.e. just going straight) does not cause nearly as much confusion in those algorithms.

How to de-warp?

The basic idea is to look at the robot's motion, i.e. pose as a function of time, and use this information to de-warp the warped "image".
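That idea can be sketched in plain Python (no ROS dependencies; the function name and the `pose_at(t)` interpolation callback are my own inventions, not part of any package -- `pose_at` is where odometry / tf interpolation would plug in):

```python
import math

def dewarp_scan(ranges, angle_min, angle_increment, time_increment, pose_at):
    """Transform each beam into the fixed (odometry) frame using the
    robot pose at the instant that particular beam was measured.

    pose_at(t) -> (x, y, theta): interpolated robot pose at time t,
    with t = 0 at the first beam of the scan.
    """
    points = []
    for i, r in enumerate(ranges):
        if not math.isfinite(r):
            continue  # skip invalid returns
        beam_angle = angle_min + i * angle_increment
        x, y, theta = pose_at(i * time_increment)
        # beam endpoint in the laser frame, rotated and translated
        # into the fixed frame using the per-beam pose
        px = x + r * math.cos(theta + beam_angle)
        py = y + r * math.sin(theta + beam_angle)
        points.append((px, py))
    return points
```

With a stationary robot, `pose_at` returns a constant pose and this reduces to the usual polar-to-Cartesian projection of the scan.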

The output of my solution looks like this (white is the output, yellow is the input):

[image: de-warped scan; white is the output of my solution, yellow is the raw input]


Comments

"No, ROS will not perform any kind of motion compensation or de-warping on lidar scans that were taken on a moving base".

This may come across as apologetic perhaps, but I still wanted to add some nuance here: the behaviour described comes from the two components you've used. It does not really make sense to say "ROS does (not) do X". The "ROS" is almost meaningless here, as you can only really say something about specific components.

I feel it's important to be mindful of this, as recognising it leads directly to the question: "ok, so these components don't do X, but would there perhaps be other packages which do support X?".

@Martin Günther's response seems to confirm this: there are components which do X, but you do need to alter your application (ie: the set of nodes you use) to enable X in that case.

gvdhoorn  ( 2020-04-24 04:07:46 -0500 )

Thanks. Apologies for my imprecise language. What I meant, and should have asked of course, is "Is there a ROS component or set of components that will help me do X"...

nettercm  ( 2020-04-24 09:40:18 -0500 )

No need to apologise.

I just wanted to add some nuance here.

There are a lot of people and (potential) users who ask questions like "can ROS do X?". Your Q&A was a really good opportunity to provide some reasoning as to why such a question doesn't really work for a component-based system -- it's about the components, not the agglomeration.

gvdhoorn  ( 2020-04-25 04:43:13 -0500 )
3

answered 2020-04-24 03:34:47 -0500

updated 2020-04-24 03:38:52 -0500

You are correct: AFAIK, hector_slam and gmapping don't do any motion compensation and simply assume that the robot's motion is negligible relative to the speed of the laser scanner. In other words, if you have a very slow laser scanner like the RPLidar, your robot must also move very slowly during the mapping process.

Not a direct answer to your question, but related: there's a package called laser_assembler that takes single laser scans (e.g., from a 2D laser on a rotating unit) and builds a 3D point cloud from them. It has a parameter called ignore_laser_skew; when this parameter is set to false, laser_assembler does exactly the kind of motion compensation you're talking about.
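For reference, a minimal launch fragment for that setup might look like this (a sketch, not a tested configuration; the fixed frame and scan topic are placeholders for your own setup, the parameter names are from the laser_assembler documentation):

```xml
<node pkg="laser_assembler" type="laser_scan_assembler" name="scan_assembler">
  <remap from="scan" to="/scan"/>
  <!-- false = de-warp each scan using the motion during the sweep -->
  <param name="ignore_laser_skew" value="false"/>
  <!-- must be a frame the laser actually moves relative to -->
  <param name="fixed_frame" type="string" value="odom"/>
</node>
```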

Also note that a "motion compensated laser scan" (i.e., one where all hits are individually transformed into 3D points based on the motion of the robot) cannot be represented by a sensor_msgs/LaserScan message, but has to be represented as a sensor_msgs/PointCloud2.


Comments

1

Thanks! I figured that there had to be something like this. I recall having looked at some other packages in the laser pipeline section. Not sure if I didn't see this particular package or if I simply ignored it because it was targeting 3D.

I will check it out and compare it against my own solution.

Yes, unfortunately I ended up creating something like this myself. I hate re-inventing the wheel but it was a good exercise.

nettercm  ( 2020-04-24 09:44:26 -0500 )
1

I updated my answer to add an image that shows the output of my own solution. It's not bad; there are still some minor issues / strange artifacts. So before I invest more time in improving it, I will check out laser_assembler. My own solution is currently in Python and not terribly efficient, and I'm starting to run out of steam on my Raspberry Pi 4. I wonder how CPU-intensive laser_assembler will be...

nettercm  ( 2020-04-24 09:57:45 -0500 )
1

So I looked at the laser_assembler package. I ran the provided example, but it did not seem to do any correction. Maybe a setup issue on my end, probably something related to tf. I probably won't try to resolve it, since laser_assembler addresses a different use case. However, after looking at the laser_assembler source code, I realized that it is simply using LaserProjection::transformLaserScanToPointCloud() from the laser_geometry package to de-warp a single laser scan. Therefore, I think what I want to do is simply use that API directly.

nettercm  ( 2020-04-26 14:46:38 -0500 )

Well, for the correction to work you need three things:
1. ignore_laser_skew = false
2. The time_increment field of the LaserScan messages needs to be correct (not 0.0).
3. The TF between your fixed frame and the laser needs to correctly represent the motion during the recording of the laser scan (and the fixed frame really needs to be on the "non-moving" part of the world, not rigidly connected to the laser frame).
But I agree that laser_assembler addresses a different use case. I just wanted to give some pointers in case somebody later runs into the same problem.
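Regarding the time_increment point: if a driver leaves that field at 0.0, the correct value is simply the revolution time divided among the beams (a small sketch; the 5 Hz / 360-beam figures are only an example):

```python
def laser_time_increment(scan_frequency_hz, beams_per_revolution):
    """Seconds between consecutive beams, i.e. the value that belongs
    in LaserScan.time_increment (leaving it 0.0 disables de-warping)."""
    scan_time = 1.0 / scan_frequency_hz      # seconds per revolution
    return scan_time / beams_per_revolution

# e.g. a 5 Hz scanner producing 360 beams per revolution:
dt = laser_time_increment(5.0, 360)          # ~0.000556 s per beam
```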

Martin Günther  ( 2020-04-27 03:28:40 -0500 )
1

@nettercm Could you provide some code for your skew compensation solution please? I think this might help a lot of users here (including myself ;))

Mork  ( 2020-09-21 02:12:23 -0500 )

@Mork the source code is here: link text

It works, but it did not improve the performance of mapping and localization as much as I had hoped.

nettercm  ( 2020-09-22 09:41:11 -0500 )

Thank you for sharing, I'll look into it!

Mork  ( 2020-09-24 03:45:40 -0500 )


Stats


Seen: 1,992 times

Last updated: Apr 26 '20