
TF Lookup would require extrapolation into the future

asked 2014-07-28 14:38:29 -0600 by jcerruti

updated 2014-08-21 18:04:07 -0600 by Murilo F. M.

I would appreciate some guidance to understand what may be going on here.

I am trying to use the very nice robot_localization package to integrate GPS, IMU, Optical Flow and a few other sensors to localize a drone.

When running my configuration with sensor data coming from a recorded bag file, I get the following error from the ekf_localization_node node:

[ WARN] [1406560584.464395417, 1406223582.586091240]: Could not obtain transform from utm to nav. Error was Lookup would require extrapolation into the future. Requested time 1406223583.213000000 but the latest data is at time 1406223582.576022606, when looking up transform from frame [utm] to frame [nav]

I've been trying to understand why this is happening and I've gathered the following information:

  • The CPU usage is pretty low, so I don't think this is happening because of a lack of processing capacity.

  • The code appears to call lookupTransform to look up the transform at the timestamp of the received GPS message.

  • The transformation between [utm] and [nav] is being published at 50Hz. GPS messages are being published at 1Hz.

  • I can see GPS messages being published with timestamps matching the warning messages. I can also see TF publishing transformations matching the timestamp of "the latest data is at time ..." but also many more later messages:

--
      stamp: 
        secs: 1406223582
        nsecs: 576022606
      frame_id: nav
    child_frame_id: utm
    transform: 
      translation: 
        x: -5918718.00919
        y: 1789119.24381
--
      stamp: 
        secs: 1406223582
        nsecs: 596210609
      frame_id: nav
    child_frame_id: utm
    transform: 
      translation: 
        x: -5918718.00919
        y: 1789119.24381
--
      stamp: 
        secs: 1406223582
        nsecs: 616362896
      frame_id: nav
    child_frame_id: utm
    transform: 
      translation: 
        x: -5918718.00919
        y: 1789119.24381
--
      stamp: 
        secs: 1406223582
        nsecs: 636493509
      frame_id: nav
    child_frame_id: utm
    transform: 
      translation: 
        x: -5918718.00919
        y: 1789119.24381
--
...

So if there are other, more recent transforms available, published by TF, why is lookupTransform complaining that the latest data is so old, when in fact there is much newer data being published?

For now, I am using the workaround of requesting the transformation for ros::Time(0) instead of the timestamp for the GPS message, but I am not sure that is the correct thing to do.
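 
For concreteness, here is a minimal sketch of that workaround (the listener variable, function name, and error handling are mine; the frame names are the ones from the warning above):

#include <ros/ros.h>
#include <tf/transform_listener.h>

void lookupUtmToNav(const tf::TransformListener& listener, const ros::Time& gps_stamp)
{
  tf::StampedTransform transform;
  try
  {
    // What the node appears to do: look up the transform at the GPS message stamp.
    // listener.lookupTransform("nav", "utm", gps_stamp, transform);

    // Workaround: ros::Time(0) requests the latest transform in the buffer,
    // so the lookup never needs to extrapolate into the future.
    listener.lookupTransform("nav", "utm", ros::Time(0), transform);
  }
  catch (const tf::TransformException& ex)
  {
    ROS_WARN("Could not obtain transform from utm to nav: %s", ex.what());
  }
}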

Any ideas?


Comments

Would using waitForTransform make sense for your use case?

ahendrix (2014-07-28 16:31:43 -0600)

I am going to try that out next. Thanks!

jcerruti (2014-07-29 08:14:37 -0600)
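 
For reference, a minimal sketch of the waitForTransform approach suggested above, assuming a tf::TransformListener named listener and the stamp of the incoming GPS message in gps_stamp; the 0.5 s timeout is illustrative:

// Block for up to 0.5 s until the utm -> nav transform at the GPS stamp exists,
// then perform the lookup.
tf::StampedTransform transform;
if (listener.waitForTransform("nav", "utm", gps_stamp, ros::Duration(0.5)))
{
  listener.lookupTransform("nav", "utm", gps_stamp, transform);
}
else
{
  ROS_WARN("Timed out waiting for transform from utm to nav");
}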

5 Answers


answered 2014-07-28 19:38:04 -0600 by tfoote

The closest data in time is only relevant for the specific spanning set across the transform tree. My guess is that there is a tf source in your spanning set that is publishing at a low rate, which is causing your issue. If the value is static, you should use a static publisher so that tf can assume the value has not changed and use the data at any time.
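 
If the utm -> nav transform really is constant, the static publisher suggested here could look roughly like the sketch below. This is not part of the original answer: the node name is mine, the translation values are copied from the echo output in the question, and the identity rotation is a placeholder.

#include <ros/ros.h>
#include <tf2_ros/static_transform_broadcaster.h>
#include <geometry_msgs/TransformStamped.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "utm_to_nav_static_broadcaster");
  tf2_ros::StaticTransformBroadcaster broadcaster;

  geometry_msgs::TransformStamped tf_msg;
  tf_msg.header.stamp = ros::Time::now();
  tf_msg.header.frame_id = "nav";
  tf_msg.child_frame_id = "utm";
  tf_msg.transform.translation.x = -5918718.00919;  // values echoed in the question
  tf_msg.transform.translation.y = 1789119.24381;
  tf_msg.transform.rotation.w = 1.0;                // identity rotation as a placeholder

  // Static transforms are latched and treated as valid at any time,
  // so lookups at any stamp will succeed.
  broadcaster.sendTransform(tf_msg);
  ros::spin();
  return 0;
}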

To debug further, I suggest taking a look at the output of tf_monitor or view_frames, both of which will show you the latency per link.

Whether or not you can speed up your sources as described above, your code needs to be robust to this race condition. If it's a high-throughput feed, you should be using a tf MessageFilter; if it's a low-throughput process and you can afford to block, waitForTransform will be adequate.
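 
A rough sketch of the tf MessageFilter pattern applied to the GPS case in this question; the topic name, class name, and queue sizes are illustrative, and the fix's frame_id is assumed to be the frame you need to transform from:

#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <tf/message_filter.h>
#include <message_filters/subscriber.h>
#include <sensor_msgs/NavSatFix.h>
#include <boost/bind.hpp>

class GpsHandler
{
public:
  GpsHandler()
    : gps_sub_(nh_, "gps/fix", 10),
      tf_filter_(gps_sub_, listener_, "nav", 10)
  {
    // The filter queues messages and only fires the callback once the
    // transform into "nav" at the message stamp is available.
    tf_filter_.registerCallback(boost::bind(&GpsHandler::gpsCallback, this, _1));
  }

private:
  void gpsCallback(const sensor_msgs::NavSatFix::ConstPtr& msg)
  {
    // By the time this runs, the transform at msg->header.stamp exists.
    tf::StampedTransform transform;
    listener_.lookupTransform("nav", msg->header.frame_id, msg->header.stamp, transform);
    // ... use the transform ...
  }

  ros::NodeHandle nh_;
  tf::TransformListener listener_;
  message_filters::Subscriber<sensor_msgs::NavSatFix> gps_sub_;
  tf::MessageFilter<sensor_msgs::NavSatFix> tf_filter_;
};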


Comments

Thanks Tully. I am more interested in understanding this in depth than in producing a workaround. My TF tree in this case is very simple: there is only [nav] --> [utm]. view_frames reports an average rate of 50 Hz, and tf_monitor reports an average delay of -4.77186e-05 and a max delay of 0 for utm. Given this set-up, which tf source could be the one publishing at too low a rate? If I understand correctly, there is only one transform in the tree and it is publishing at a very decent rate.

jcerruti (2014-07-29 08:39:46 -0600)

It looks like there isn't a delay in your tree. So where is the timestamp you are querying with coming from? Is it from a machine with a synchronized clock?

tfoote (2014-07-29 14:27:36 -0600)

This is happening entirely on a single computer. The sensor data comes from a bag file which is run through a few nodes (basically localization and helpers). What I am debugging here is not my own code but rather the localization. The timestamp being queried is from the GPS message, which is at 1Hz.

jcerruti (2014-07-29 14:45:30 -0600)

If everything is coming from a bag file, the offset existed at the time of recording and the bag is just playing it back as it arrived. You will need to hold the data until the transforms are available; this is the purpose of the tf MessageFilter.

tfoote (2014-07-29 15:15:12 -0600)

The GPS data comes from the bag file, but the transforms are being published in real time. I guess the solution will be to use a message filter or waitForTransform, but I still don't understand why there is such a big delay in processing the TF messages.

jcerruti (2014-07-29 16:01:35 -0600)

@tfoote is it possible to publish transforms such that a published transform (tf_A) remains valid until a newer transform (tf_A') gets published?

Rufus (2019-11-21 20:28:23 -0600)

Please ask this as its own question; it's much more than should be considered in a comment on a 5-year-old question.

tfoote (2019-11-22 04:24:43 -0600)

answered 2020-11-12 11:56:37 -0600 by jin031214, updated 2020-11-12 11:58:26 -0600

I know it's a bit late but I would like to share with everyone that I FIXED THE ISSUE!!!!

Add the following lines to base_local_planner/src/goal_functions.cpp:

"

#include <tf2_ros transform_listener="">

//on the top of transformGlobalPlan() and getGoalPose() try blocks

tf2_ros::Buffer tfBuffer;

tf2_ros::TransformListener tfListener(tfBuffer);

while(!tfBuffer.canTransform(your frames, ros::Time(), ros::Duration(1.0)));

"

I'M SO FREAKIN HAPPY THAT THIS IS WORKING!!!!!!


answered 2014-09-02 15:54:01 -0600 by Tom Moore, updated 2014-09-10 09:08:22 -0600

Would you happen to have a bag file you can send me? I can take a look and either fix a bug if there is one or help determine what's wrong with your tf data. In the error message you're getting, the requested time is ~0.64 seconds ahead of the most recently available transform.

EDIT: here's the function call I'm using:

tfListener_.lookupTransform(targetFrame, poseTmp.frame_id_, poseTmp.stamp_, targetFrameTrans);

The function this call is in (preparePose) is generically taking in sensor data and trying to convert it to a target frame, which is going to be dependent on your parameters (probably the odom_frame parameter). For @jcerruti, I'm wondering if the issue is the third parameter, which is pulled directly from the message's header (in your case, the GPS data message header). Could your GPS driver be pulling that directly from the GPS unit, and not from your machine? Just a thought.

@mikepurvis, what sensor data message is causing this?
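 
Regarding the driver-timestamp hypothesis above, one way to rule it out is to restamp the fix with the host clock before it is consumed. A rough sketch, not from the original answer; the relay function is hypothetical:

#include <ros/ros.h>
#include <sensor_msgs/NavSatFix.h>

// Republish the incoming fix stamped with the host clock instead of the device time.
void relayFix(const ros::Publisher& pub, sensor_msgs::NavSatFix fix)
{
  fix.header.stamp = ros::Time::now();
  pub.publish(fix);
}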


Comments

In my case, it's just ~3ms in the future (scroll right on the error message to see). The larger issue that I'm not understanding is why the node is looking this up at all, given that it's the publisher of it.

mikepurvis (2014-09-02 16:07:02 -0600)

answered 2022-10-10 09:20:25 -0600

I FIXED THE ISSUE simply by changing ros::Time::now() to ros::Time(0). The tf2 wiki tutorial (http://wiki.ros.org/tf2/Tuto...) explains that Time(0) means the latest available frame transform. It may also be solvable by adding a ros::Duration timeout to the call (the blocking approach) if ros::Time::now() is required. In my case I use this for a robot control simulation and can tolerate the timing error. Hope it works!
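 
A sketch of the two options described in this answer, assuming a tf2_ros::Buffer named tf_buffer fed by a tf2_ros::TransformListener; the frame names are from the question and the 0.1 s timeout is illustrative:

#include <ros/ros.h>
#include <tf2_ros/buffer.h>
#include <geometry_msgs/TransformStamped.h>

geometry_msgs::TransformStamped lookupNavToUtm(const tf2_ros::Buffer& tf_buffer)
{
  // Option 1: ros::Time(0) means "the latest transform available in the buffer".
  return tf_buffer.lookupTransform("nav", "utm", ros::Time(0));

  // Option 2 (blocking): keep ros::Time::now() but let the buffer wait up to
  // 0.1 s for the transform to arrive before throwing.
  // return tf_buffer.lookupTransform("nav", "utm", ros::Time::now(), ros::Duration(0.1));
}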


answered 2023-07-20 02:39:45 -0600 by bhomaidan, updated 2023-07-20 02:40:05 -0600

In my case I had to synchronize the time between the ROS_MASTER and the ROS_CLIENT using ntpdate (note that -q only queries and reports the offset; run without -q to actually set the clock):

sudo ntpdate -q <ROS_MASTER IP>

