# Modify position in hector_mapping code?

Hi guys,

As the title suggests, I am trying to modify HectorMappingRos.cpp/.h from the hector_mapping package, and in particular the position used by the mapping function.

My problem is that when my robot goes through long featureless hallways, hector_mapping does not detect any change in position. What I'm trying to achieve is to use odometry from another sensor when this happens. It would be a simple switch: when hector_mapping says the robot isn't moving but the odometry says it is, take the position from the odometry data.

I have been going through the code and my guess is that I have to modify line 289 of HectorMappingRos.cpp: "startEstimate = slamProcessor->getLastScanMatchPose();".

startEstimate is a vector containing the (x, y) coordinates and the yaw angle. My first attempt was to replace the yaw angle with the angle given by an IMU, but this resulted in very bad mapping: some features were duplicated with an angular offset, and one hallway was bent.
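To make the "simple switch" idea above concrete, here is a minimal sketch of the selection logic. Pose2D, chooseStartEstimate and the thresholds are all illustrative names I made up; only getLastScanMatchPose() comes from hector_mapping itself, and the actual wiring into HectorMappingRos.cpp would of course look different:

```cpp
// Hypothetical sketch of the odometry-fallback switch; names and
// thresholds are illustrative, not part of hector_mapping.
struct Pose2D { double x, y, theta; };

// Return the pose to seed the next scan match with: if the scan matcher
// barely moved while odometry reports clear motion (e.g. a featureless
// hallway), trust the odometry-predicted pose instead.
Pose2D chooseStartEstimate(const Pose2D& scanMatchPose,
                           double scanMatchDist,   // distance the scan matcher moved
                           const Pose2D& odomPose, // odometry-predicted pose
                           double odomDist)        // distance odometry moved
{
    const double kStuckThresh  = 0.01; // m: scan matcher considered "not moving"
    const double kMotionThresh = 0.05; // m: odometry considered "clearly moving"
    if (scanMatchDist < kStuckThresh && odomDist > kMotionThresh)
        return odomPose;
    return scanMatchPose;
}
```

The thresholds would need tuning against the scan rate and odometry noise of the actual robot.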

Has anyone got an opinion on the matter?

EDIT 1:

I have tried setting the undocumented "map_with_known_poses" parameter (you won't find it on the official ROS hector_mapping page, but it is there in the code) to "true". The result is that hector_mapping no longer computes the pose of the robot, so the position remains the initial position over time, and every time the map is updated the new map is simply put on top of the previous one because the position hasn't changed.

I can't seem to be able to "feed" a position topic to hector_mapping. I do not think this is currently supported, as I do not see any subscriber to a position topic declared in the code.

EDIT 2:

As suggested, I set both "map_with_known_poses" and "use_tf_pose_start_estimate" to true. I use a first hector_mapping node to publish the map->odom transform, which is then used by a second hector_mapping node for the mapping. Note that the two parameters mentioned above should be set to true only for the second node. Here is the map out of the first node:

And here is the map from the 2nd:

Would anyone know why the two maps are not identical?


Did you set "use_tf_pose_start_estimate" as Stefan said?

( 2014-07-09 04:51:52 -0500 )

Yes I did. I am running two hector_mapping nodes: one for the tf, one for the map. I haven't added odometry yet, so I expected the map to be the same as with only one node running, but that's not the case. The two maps are globally very similar, but the new map seems noisier.

( 2014-07-09 06:39:12 -0500 )

I edited my question to show the two maps.

( 2014-07-09 06:55:51 -0500 )


The (undocumented) "map_with_known_poses" parameter can be used together with the "use_tf_pose_start_estimate" parameter (set to true) to perform mapping with known poses. That is, no localization is performed; instead, a map is created based on the provided tf data. So that's probably not what you want.
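For reference, a minimal launch sketch of the two-node setup discussed in this thread might look like this. The parameter names are taken from the thread itself; topic remappings, frame names, and scan settings that a real setup would need are omitted:

```xml
<launch>
  <!-- First node: normal hector_mapping, provides the pose via tf -->
  <node pkg="hector_mapping" type="hector_mapping" name="hector_localization">
    <param name="pub_map_odom_transform" value="true"/>
  </node>

  <!-- Second node: mapping only, pose taken from the tf published above -->
  <node pkg="hector_mapping" type="hector_mapping" name="hector_map">
    <param name="map_with_known_poses" value="true"/>
    <param name="use_tf_pose_start_estimate" value="true"/>
  </node>
</launch>
```

This is only a sketch; in practice the two nodes would also need distinct map topic names to avoid clashing.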

Featureless corridors are indeed tricky, and motion in them is not observable from scans alone. There is currently no provision to improve the system for this use case. We played around with estimating one covariance for scan matching and one for odometry and fusing those, but discretization artifacts due to the grid map representation backend make this non-trivial. If you have odometry information available, I'd recommend trying gmapping. If not, you could certainly try changing the hector_mapping code.


Thank you for the answer. So if I understand correctly: if I set both of these parameters to true and provide hector_mapping with the map->odom tf or map->base_link tf, then the map will be created according to the position given by the tf?

( 2014-07-09 02:26:50 -0500 )

Then I could potentially use, for example, the laser_scan_matcher node to provide the tf, or even run two hector_mapping nodes at the same time: one would provide the tf and the other just the mapping. In between these two nodes I could add my own node that modifies the transform according to odometry.

( 2014-07-09 02:29:56 -0500 )

Yes, that would probably work.

( 2014-07-09 06:59:10 -0500 )

Is there a reason why you left map_with_known_poses undocumented?

( 2015-08-06 07:40:30 -0500 )

Hi Antoine. Can you please tell me if you tried it this way and whether it works? I've been struggling with this for some time...

( 2017-06-30 08:39:02 -0500 )

Best would be to wait for a proper answer from Stefan.

My feeling from this code is that the correct way would be: take the relative odometry pose between the time of getLastScanMatchPose() and the current scan, then multiply that onto startEstimate to get a new startEstimate.
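To clarify what "multiply" means here: the (x, y, theta) triples are not multiplied component-wise, they are composed as 2D rigid transforms (SE(2)), with the odometry delta expressed in the frame of the start estimate. A small self-contained sketch (Pose2D and compose are my own illustrative names):

```cpp
#include <cmath>

struct Pose2D { double x, y, theta; };

// Compose two SE(2) poses: rotate the delta `b` into the frame of `a`,
// translate, and add the yaw angles. This is the "multiplication" meant
// when applying a relative odometry pose onto startEstimate.
Pose2D compose(const Pose2D& a, const Pose2D& b)
{
    const double c = std::cos(a.theta);
    const double s = std::sin(a.theta);
    return { a.x + c * b.x - s * b.y,
             a.y + s * b.x + c * b.y,
             a.theta + b.theta };
}
```

For example, a robot at (1, 2) facing 90 degrees that drives 1 m forward ends up at (1, 3) with the same heading.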


Thanks dornhege for the quick answer. What do you mean by "multiply that onto the startEstimate to get a new startEstimate"? I cannot multiply two 3-element (x, y, theta) vectors together.

( 2014-07-08 20:31:30 -0500 )

What I have done so far is to compute the odometry deltas and add them to startEstimate, which does not hurt map quality. I also limited the distance the robot can travel between scans (same direction as the gradient, but not too long a jump) to prevent teleportation. The robot still gets stuck in long hallways.

( 2018-10-22 13:03:25 -0500 )
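The jump-limiting step described in the comment above can be sketched as clamping the odometry delta to a maximum translation while keeping its direction. Delta2D, clampDelta, and the threshold are illustrative names, not hector_mapping code:

```cpp
#include <cmath>

struct Delta2D { double dx, dy, dtheta; };

// Clamp an odometry delta to a maximum translation, preserving its
// direction, so one bad reading cannot "teleport" the start estimate.
Delta2D clampDelta(Delta2D d, double maxDist)
{
    const double dist = std::hypot(d.dx, d.dy);
    if (dist > maxDist) {
        const double scale = maxDist / dist;
        d.dx *= scale;
        d.dy *= scale;
    }
    return d;
}
```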