
Getting only the first snapshot of the scans with gmapping [closed]

asked 2016-03-22 12:42:02 -0500 by McKracken82

Hi, I've run into what seems to be a shared problem, as the following links testify.

Some of them didn't receive a satisfactory answer, while the successful replies to the others are not working for me. I'm trying to build a map with an iRobot Create 2, using the irobotcreate2ros node (https://github.com/MirkoFerrati/irobotcreate2ros), and a Kinect as the sensor (from which I take the laser scan using the depth2laser node, https://github.com/mauriliodc/depth2laser). I changed the tf published by the irobotcreate2ros node so that the frames are directly odom and base_link, instead of iRobot_0/odom and iRobot_0/base_link (as in the original version), and therefore I don't need a static_transform_publisher (this shouldn't affect the general behavior, I suppose).

What happens is that when I start gmapping, it seems to take only the first snapshot from the sensor, showing a small segment of the map and not updating it while the robot moves. I tried both viewing the real-time mapping in rviz and replaying a rosbag, but the result is always the same.

The following picture shows a snapshot of what I see in rviz:

Rviz snapshot

And the following picture shows my tf tree (which is supposed to be correct, as far as I know):

View_frames output

I already checked that my odometry and the base_link, xtion and laser tfs are consistent with each other (they move accordingly in rviz). I also have a bag, but I cannot upload it yet; I'm looking for somewhere to host it. As soon as I find one, I'll edit this post.


Closed for the following reason: the question is answered, right answer was accepted by McKracken82 (close date 2016-03-23 18:21:56)

2 Answers


answered 2016-03-23 15:42:15 -0500 by McKracken82

Solution found. The problem was a bug in the depth2laser node: when publishing the laser scan, the time field in the header was always set to 0, so it was impossible to reconstruct the time sequence of the scans. After fixing that, gmapping worked correctly. I have already informed the author of the node about the bug, and it should now be fixed.
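For reference, the kind of fix involved looks roughly like this. This is only a minimal sketch, not the actual depth2laser code: the node name, topic names and the depthCallback function are illustrative. The point is simply that the LaserScan's header.stamp must carry a real time, typically copied from the source depth image, instead of being left at its default value of 0:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/LaserScan.h>

ros::Publisher scan_pub;

// Illustrative callback: convert one depth image into one LaserScan message.
void depthCallback(const sensor_msgs::ImageConstPtr& depth)
{
  sensor_msgs::LaserScan scan;
  scan.header.frame_id = "laser";

  // The bug: leaving scan.header.stamp at its default (time 0) makes every
  // scan look simultaneous, so gmapping cannot order them in time.
  // The fix: propagate the depth image's timestamp to the scan.
  scan.header.stamp = depth->header.stamp;

  // ... fill in angle_min, angle_max, angle_increment, range_min,
  //     range_max and ranges from the depth image ...

  scan_pub.publish(scan);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "depth2laser_sketch");
  ros::NodeHandle nh;
  scan_pub = nh.advertise<sensor_msgs::LaserScan>("scan", 10);
  ros::Subscriber sub = nh.subscribe("camera/depth/image_raw", 1, depthCallback);
  ros::spin();
  return 0;
}
```

With a valid stamp, gmapping (via tf) can look up the robot pose at each scan's time, which is presumably what was failing here.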


Comments

Good work! If it works now, you should close the thread and select this as the correct answer.

Icehawk101 (2016-03-23 17:10:37 -0500)

answered 2016-03-22 15:30:12 -0500

You need a transform from base_link to camera_link.
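If it helps, one way to provide that transform is a tiny broadcaster node like the sketch below. The node name and the 0.3 m offset are placeholders; the real translation and rotation depend on where the Kinect is actually mounted on the Create 2:

```cpp
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "base_to_camera_tf");
  ros::NodeHandle nh;
  tf::TransformBroadcaster br;
  ros::Rate rate(50);  // re-publish regularly so listeners always have a recent transform

  while (ros::ok())
  {
    tf::Transform t;
    // Placeholder offset: adjust to where the Kinect sits relative to base_link.
    t.setOrigin(tf::Vector3(0.0, 0.0, 0.3));
    t.setRotation(tf::Quaternion(0.0, 0.0, 0.0, 1.0));  // identity rotation
    br.sendTransform(tf::StampedTransform(t, ros::Time::now(),
                                          "base_link", "camera_link"));
    rate.sleep();
  }
  return 0;
}
```

The command-line equivalent would be something like `rosrun tf static_transform_publisher 0 0 0.3 0 0 0 base_link camera_link 50`, again with placeholder values.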


Comments

I added a transform as you said, using a static_transform_publisher, but nothing changed. This is the resulting frame scheme: http://imageshack.com/a/img924/2479/9vPNX6.jpg

McKracken82 (2016-03-23 05:08:42 -0500)

Just to cover all the bases: you do get a map->odom link when running gmapping, right?

Icehawk101 (2016-03-23 08:28:33 -0500)

Yes. In addition, the depth2laser node publishes the base_link, xtion and laser tfs, while my robot node publishes odom and base_link, so everything ends up connected. In rviz everything seems consistent. It looks to me like camera_link is effectively replaced by the frames depth2laser publishes.

McKracken82 (2016-03-23 09:35:47 -0500)
