
How does gmapping get odom data?

asked 2014-05-07 22:51:24 -0500 by vdonkey, updated 2014-05-07 23:03:02 -0500

In brief, I am trying to use slam_gmapping with a Kinect, but the result is very bad: however I rotate or move the Kinect, the map in rviz stays a fan-shaped area of no more than 90 degrees.

I am not even sure what question to ask; I feel I don't grasp the whole picture :(

So maybe describing what I did in detail is a good idea. The question will be long, but easier to understand this way:

  1. in ubuntu open a terminal, type

    roscore
    
  2. start kinect driver in another terminal

    rosrun openni_camera openni_node
    
  3. convert depth image into LaserScan

    rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/depth/image_raw
    
  4. run gmapping

    rosrun gmapping slam_gmapping scan:=/scan
    
  5. run rviz to see result "Map", "LaserScan" and "Image"

    rosrun rviz rviz
    

I got warning messages at step 2. I hope they don't matter, because I can still see a fairly good "Image" of the /depth/image_raw topic:

[ WARN] [1399535684.197109851]: ~device_id is not set! Using first device.
[ WARN] [1399535684.562974619]: Camera calibration file /home/vdonkey/.ros/camera_info/rgb_A00366913062050A.yaml not found.
[ WARN] [1399535684.563025039]: Using default parameters for RGB camera calibration.

I also got a warning at step 4:

[ WARN] [1399536805.888066710]: MessageFilter [target=odom ]: Dropped 100.00% of messages so far. Please turn the [ros.gmapping.message_notifier] rosconsole logger to DEBUG for more information.

This warning cannot be ignored, since I cannot see any map in rviz after adding a "Map" display subscribed to the /map topic. So I ran this before step 4:

rosparam set /slam_gmapping/odom_frame camera_depth_frame

camera_depth_frame seems to be the default frame id of the /scan topic published by depthimage_to_laserscan.

But after that, the gmapping warning changed to:

[ WARN] [1399537552.284971741]: Failed to compute laser pose, aborting initialization ("base_link" passed to lookupTransform argument target_frame does not exist. )

So I also ran this before step 4:

rosparam set /slam_gmapping/base_frame camera_depth_frame

This time I got no warnings. After setting base_frame and odom_frame, I have "Map" and "LaserScan" showing in rviz.
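(For reference, the frame lookup that was failing can be sketched in plain Python — this is not tf's actual code, just the connectivity bookkeeping behind lookupTransform; the frame names are the ones used above:)

```python
# Sketch of the frame-tree lookup tf performs for gmapping.
# gmapping asks tf: "where is base_frame relative to odom_frame?"
# The lookup can only succeed if both frames sit in one connected tree.

def can_transform(tree, target, source):
    """tree maps child frame -> parent frame."""
    def root_of(frame):
        while frame in tree:
            frame = tree[frame]
        return frame
    # Two frames are connected iff their chains share the same root.
    return root_of(target) == root_of(source)

# With no odometry publisher, "odom" is not in the tree at all:
tree_without_odom = {"camera_depth_frame": "base_link"}
print(can_transform(tree_without_odom, "odom", "base_link"))  # -> False: 100% dropped

# Setting odom_frame = base_frame = camera_depth_frame asks for the
# identity transform, which always "succeeds" -- but carries no motion:
print(can_transform(tree_without_odom,
                    "camera_depth_frame", "camera_depth_frame"))  # -> True
```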

The problem is, no matter how I rotate or move the Kinect, the resulting map always faces the same direction, and of course so do the laser scan lines. I have some guesses/questions:

  1. Maybe I need wheel encoder hardware?
  2. Where can I buy a proper wheel encoder, and how do I drive it?
  3. What topic will a wheel encoder's driver publish, given that gmapping only subscribes to /tf and /scan?
  4. Topics subscribed to can be regarded as input and topics published as output, but what about parameters? I think base_frame and odom_frame of gmapping affect the /map topic, an output, so they feel like some kind of output, but the two "rosparam set" calls before running gmapping hint that they are inputs.
  5. Why did I get no warning even though I did nothing with the /tf topic? What should I do about /tf? And why do I always get a "no tf data received" PDF from "rosrun tf view_frames", even while gmapping is running?

Thank you for reading this long question. I hope I can understand the ...


1 Answer


answered 2014-05-08 00:07:33 -0500 by dornhege

1-3: Gmapping needs odometry published on /tf. Look into hector_mapping for an approach that doesn't require odometry.

4: Parameters are parameters. If you want to classify them, they are inputs; in particular, something that affects an output is an input. The frames shouldn't be considered something to "tune" as parameters — they need to be set correctly.

5: Because you circumvented the actual odometry by setting the odom frame to the camera frame. This also explains why you don't see the algorithm doing anything.
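To make 1-3 concrete: a wheel-encoder driver integrates wheel motion into an odom → base_link pose and broadcasts it on /tf. A plain-Python sketch of that dead-reckoning math (the rospy/tf broadcaster calls are omitted, and the encoder resolution and wheel base below are made-up numbers):

```python
import math

# Differential-drive dead reckoning: what an odometry node computes
# before broadcasting the odom -> base_link transform on /tf.
TICKS_PER_METER = 1000.0   # made-up encoder resolution
WHEEL_BASE = 0.3           # made-up distance between wheels (m)

def integrate(x, y, theta, left_ticks, right_ticks):
    """Advance the pose by one pair of encoder readings."""
    d_left = left_ticks / TICKS_PER_METER
    d_right = right_ticks / TICKS_PER_METER
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

def yaw_to_quaternion(theta):
    # tf carries rotation as a quaternion; a planar robot only
    # rotates about z, so the x and y components are zero.
    return (0.0, 0.0, math.sin(theta / 2.0), math.cos(theta / 2.0))

# Drive 1 m straight: both wheels advance 1000 ticks.
x, y, theta = integrate(0.0, 0.0, 0.0, 1000, 1000)
print(x, y, theta)               # -> 1.0 0.0 0.0
print(yaw_to_quaternion(theta))  # -> (0.0, 0.0, 0.0, 1.0)
```

This (x, y) translation plus quaternion is exactly what goes into the odom → base_link transform each update.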


Comments

@dornhege, Thank you very much. You showed me the way, like a signpost at a crossroad or a lighthouse at sea.

vdonkey (2014-05-08 15:51:03 -0500)

I tried hector_mapping, but I still get an error: [ERROR] [1399602478.182731899]: Transform failed during publishing of map_odom transform: "base_link" passed to lookupTransform argument source_frame does not exist. — even after I set the output_frame_id param of depthimage_to_laserscan to base_link.

vdonkey (2014-05-08 16:36:26 -0500)

I hate tuning these parameters too, but I just don't know how to provide data via /tf. I read the tf tutorials (one turtle chasing another) and got the idea that tf is used for converting x/y/z in one frame into another frame, but how do I use it as a topic? Please give me some help with tf.

vdonkey (2014-05-08 16:42:23 -0500)
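(On the /tf question: /tf is an ordinary topic carrying tf/tfMessage, i.e. a list of geometry_msgs/TransformStamped, and a broadcaster simply publishes one per update. A plain-Python sketch of the fields a single transform carries — the values here are illustrative only:)

```python
# Shape of a single geometry_msgs/TransformStamped as carried on /tf.
# Field names match the ROS message definition; values are made up.
transform = {
    "header": {
        "stamp": 1399537552.0,      # time this transform is valid for
        "frame_id": "odom",         # parent frame
    },
    "child_frame_id": "base_link",  # child frame
    "transform": {
        "translation": {"x": 1.0, "y": 0.0, "z": 0.0},
        "rotation":    {"x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0},
    },
}
print(transform["header"]["frame_id"], "->", transform["child_frame_id"])
```

In a rospy node you would fill these fields via tf.TransformBroadcaster().sendTransform(translation, rotation, time, child, parent), which is exactly what the turtle tutorial's broadcaster does.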

Do you have the transform from base_link to your camera available? I don't know about your setup. In the meantime, hector_slam might be able to deal with just the camera frame as the base frame.

dornhege (2014-05-08 23:56:36 -0500)

@vdonkey Did you get to the bottom of this? I have a very similar setup and I'm encountering the same issues. Thanks, Mark

MarkyMark2012 (2015-04-23 01:54:49 -0500)
