
robot_localization: time issues

asked 2015-11-06 08:32:10 -0500 by jll

updated 2015-12-09 10:11:16 -0500


I have been trying to use robot_localization on log files recorded on an Android device and have encountered some difficulty.

Android prints the information from IMU/GPS to a file, and my methodology has been to parse the individually logged files, collect the accelerometer/gyroscope/GPS/etc information into Odometry and GPS messages that I publish to ROS topics. At the same time, I am using rosbag record on these topics to save them in a bag file.

I have been getting erratic data from these parsed bag files. I have been taking the timestamps provided by Android, converting them to epoch time, and inserting them into the message headers, because I thought the robot_localization node would rely on these timestamps. However, further experimentation suggests that the node instead relies on the rate at which the bag file is played back and the rate at which its messages were recorded.
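For reference, here is a minimal sketch of the stamp conversion I describe, assuming the Android log records timestamps as epoch milliseconds (the actual field layout of my log isn't shown here):

```python
def android_millis_to_ros_time(millis):
    """Split an epoch-milliseconds timestamp into the (secs, nsecs)
    pair that rospy.Time() and a std_msgs Header stamp expect."""
    secs = millis // 1000
    nsecs = (millis % 1000) * 1000000
    return secs, nsecs

# With rospy available you would then set, for example:
#   msg.header.stamp = rospy.Time(*android_millis_to_ros_time(millis))
```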

Could someone confirm whether this is indeed the case? Moreover, does anyone have suggestions for getting the robot_localization node to use the time intervals indicated in the headers rather than the bag file's recording frequency or replay speed? I have also attached a sample .log file from Android and the code I use to parse the .log file into published ROS messages, which I then record via rosbag.

Log file:

Parser/publisher code:

Thank you!

UPDATE: I was able to resolve the issue by setting use_sim_time to true. In addition, I did not realize that there were additional timestamp header fields in the bagfile that were not being set to the timestamp output by my parser. While the message header timestamps were correct, the bagfile timestamps were not. I resolved this by using the rosbag C++ API to manually write each bag packet's timestamp to match my message header's timestamp.
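For anyone hitting the same problem: the rewrite step can be sketched with the rosbag Python API as below (I actually used the C++ API; the helper names here are my own, and the idea is simply to pass the header stamp as the `t` argument of `Bag.write`):

```python
def record_time(header_stamp, bag_time):
    """Prefer the message's own header stamp; fall back to the bag receive time."""
    return header_stamp if header_stamp is not None else bag_time

def rewrite_bag_times(in_path, out_path):
    """Copy a bag so each record's bag time matches its header stamp."""
    import rosbag  # imported here so the helper above works without ROS installed
    with rosbag.Bag(out_path, 'w') as outbag:
        with rosbag.Bag(in_path) as inbag:
            for topic, msg, t in inbag.read_messages():
                stamp = msg.header.stamp if hasattr(msg, 'header') else None
                outbag.write(topic, msg, record_time(stamp, t))
```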


1 Answer


answered 2015-11-12 19:44:22 -0500 by Tom Moore

robot_localization uses the time stamps in the headers of its input messages. There is no use of input frequency in timing. At a high level, the order of operations is:

  1. Do a ros::spinOnce() and let all the callbacks fire.
  2. Put every message we receive into a priority queue keyed on its timestamp, with earlier stamps coming first.
  3. After all messages are enqueued, go through the queue: predict from the last update time to the measurement's time stamp, then correct; pull the next measurement from the queue, predict up to its time stamp, then correct; repeat until the queue is empty.
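The loop above can be illustrated with a toy sketch (this mirrors only the ordering logic, not the filter's actual math):

```python
import heapq

def process_queue(measurements, last_update_time):
    """Replay measurements in stamp order: predict to each stamp, then correct.

    `measurements` is a list of (stamp, value) pairs; the return value logs
    the predict/correct steps in the order they would run."""
    queue = list(measurements)
    heapq.heapify(queue)          # earlier stamps come out first
    steps = []
    while queue:
        stamp, value = heapq.heappop(queue)
        steps.append(('predict', last_update_time, stamp))
        steps.append(('correct', value))
        last_update_time = stamp
    return steps
```

Note that the playback rate never appears here; only the stamps drive the predict intervals.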

Whether your bag is played back with a rate of 0.1 or 30, the output should be the same. You can verify this by tweaking some of the test launch files in the source code under the test directory and watching the output. Also, see the source code here:

...and here:

Are you sure you're running with use_sim_time set to true? It might be worth double-checking your timing logic for the bag file generation.
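For completeness, the usual replay recipe is to set use_sim_time before any node starts, for example in your launch file:

```xml
<launch>
  <!-- All nodes read time from the /clock topic instead of wall time -->
  <param name="/use_sim_time" value="true"/>
</launch>
```

and then play the bag with `rosbag play --clock your_data.bag` (filename is a placeholder) so that the bag publishes /clock.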




