
TF Lookup would require extrapolation into the future

asked 2014-07-28 14:38:29 -0500 by jcerruti

updated 2014-08-21 18:04:07 -0500 by Murilo F. M.

I would appreciate some guidance to understand what may be going on here.

I am trying to use the very nice robot_localization package to integrate GPS, IMU, Optical Flow and a few other sensors to localize a drone.

When running my configuration with sensor data coming from a recorded bag file, I get the following error from the ekf_localization_node node:

[ WARN] [1406560584.464395417, 1406223582.586091240]: Could not obtain transform from utm to nav. Error was Lookup would require extrapolation into the future. Requested time 1406223583.213000000 but the latest data is at time 1406223582.576022606, when looking up transform from frame [utm] to frame [nav]

I've been trying to understand why this is happening and I've gathered the following information:

  • The CPU usage is pretty low. I don't think this could be happening because of lack of processing capacity

  • The code appears to be calling lookupTransform to look up the transform at the timestamp of the received GPS message

  • The transformation between [utm] and [nav] is being published at 50Hz. GPS messages are being published at 1Hz.

  • I can see GPS messages being published with timestamps matching the warning messages. I can also see TF publishing transforms matching the timestamp in "the latest data is at time ...", as well as many later ones:

--
      stamp: 
        secs: 1406223582
        nsecs: 576022606
      frame_id: nav
    child_frame_id: utm
    transform: 
      translation: 
        x: -5918718.00919
        y: 1789119.24381
--
      stamp: 
        secs: 1406223582
        nsecs: 596210609
      frame_id: nav
    child_frame_id: utm
    transform: 
      translation: 
        x: -5918718.00919
        y: 1789119.24381
--
      stamp: 
        secs: 1406223582
        nsecs: 616362896
      frame_id: nav
    child_frame_id: utm
    transform: 
      translation: 
        x: -5918718.00919
        y: 1789119.24381
--
      stamp: 
        secs: 1406223582
        nsecs: 636493509
      frame_id: nav
    child_frame_id: utm
    transform: 
      translation: 
        x: -5918718.00919
        y: 1789119.24381
--
...

So if there are other, more recent transforms available, published by TF, why is lookupTransform complaining that the latest data is so old, when in fact there is much newer data being published?

For now, I am using the workaround of requesting the transformation for ros::Time(0) instead of the timestamp for the GPS message, but I am not sure that is the correct thing to do.
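To illustrate the semantics involved, here is a minimal, self-contained sketch of a transform buffer (a toy model, not the real tf API) showing why a lookup at a stamp newer than the latest stored transform fails, while a ros::Time(0)-style lookup returns the latest data:

```python
from bisect import bisect_right

class ToyTfBuffer:
    """Minimal model of a tf buffer for a single transform chain.
    Not the real tf API -- just enough to show the two lookup modes."""

    def __init__(self):
        self._stamps = []   # sorted timestamps (seconds)
        self._values = []   # transform "values" parallel to _stamps

    def set_transform(self, stamp, value):
        i = bisect_right(self._stamps, stamp)
        self._stamps.insert(i, stamp)
        self._values.insert(i, value)

    def lookup(self, stamp):
        if not self._stamps:
            raise LookupError("buffer is empty")
        if stamp == 0:                      # like ros::Time(0): latest available
            return self._values[-1]
        if stamp > self._stamps[-1]:        # newer than any stored data
            raise LookupError(
                "Lookup would require extrapolation into the future. "
                "Requested time %.9f but the latest data is at time %.9f"
                % (stamp, self._stamps[-1]))
        # real tf interpolates between bracketing entries; the nearest
        # earlier entry is close enough for this sketch
        return self._values[bisect_right(self._stamps, stamp) - 1]

buf = ToyTfBuffer()
buf.set_transform(1406223582.576022606, "T_latest")
try:
    buf.lookup(1406223583.213)   # GPS stamp ahead of the newest transform
except LookupError as e:
    print(e)                     # reproduces the warning's wording
print(buf.lookup(0))             # the Time(0) workaround returns the latest
```

Note the trade-off the sketch makes visible: Time(0) always succeeds, but it silently pairs the GPS fix with a transform that may be stale relative to the fix's actual stamp.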

Any ideas?


Comments

Would using waitForTransform make sense for your use case?

ahendrix ( 2014-07-28 16:31:43 -0500 )

I am going to try that out next. Thanks!

jcerruti ( 2014-07-29 08:14:37 -0500 )

2 Answers


answered 2014-07-28 19:38:04 -0500 by tfoote

The closest data in time is only relevant for the specific spanning set across the transform tree. My guess is that there is a tf source in your spanning set that is publishing at a low rate, which is causing your issue. If the value is static, you should use a static publisher so that tf can assume the value has not changed and use the data at any time.

To debug further, I suggest taking a look at the output of tf_monitor or view_frames, both of which will show you the latency per link.
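For reference, both tools ship with the tf package and can be run against the live system (the frame names here match this question's tree):

```shell
# Print publishing rate and delay statistics for the chain between two frames
rosrun tf tf_monitor nav utm

# Generate frames.pdf showing the full tree with per-link rates and buffer times
rosrun tf view_frames
```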

Whether or not you can speed up your sources as above, your code needs to be robust to this race condition. If it's a high-throughput field, you should use a tf MessageFilter; if it's a low-throughput process and you can afford to block, waitForTransform will be adequate.
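The blocking pattern described here can be sketched as follows (again a toy model, not the real tf API): wait until the buffer has caught up to the requested stamp, up to a timeout, then do the lookup.

```python
import time

class StampedBuffer:
    """Toy stand-in for the tf listener's buffer: keeps stamped values."""
    def __init__(self):
        self._data = []                     # (stamp, value) pairs, in arrival order

    def set_transform(self, stamp, value):
        self._data.append((stamp, value))

    def latest_stamp(self):
        return self._data[-1][0] if self._data else None

    def lookup(self, stamp):
        # return the newest entry not newer than `stamp`
        candidates = [v for s, v in self._data if s <= stamp]
        if not candidates:
            raise LookupError("no data at or before %.3f" % stamp)
        return candidates[-1]

def wait_and_lookup(buf, stamp, timeout, poll=0.01):
    """Toy version of the waitForTransform-then-lookupTransform pattern:
    block until the buffer has data at or after `stamp`, or raise after
    `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while True:
        latest = buf.latest_stamp()
        if latest is not None and latest >= stamp:
            return buf.lookup(stamp)
        if time.monotonic() >= deadline:
            raise TimeoutError("transform at %.3f not available in time" % stamp)
        time.sleep(poll)   # in ROS, this polling happens inside waitForTransform
```

This trades latency for correctness: the GPS callback blocks briefly instead of failing, which is acceptable at 1Hz but would not be in a high-rate callback, hence the MessageFilter suggestion for that case.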


Comments

Thanks Tully. I am more interested in understanding this in depth than in producing a workaround. My TF tree in this case is very simple: there is only [nav] --> [utm]. view_frames reports an average rate of 50Hz, and tf_monitor reports an Average Delay of -4.77186e-05 and a Max Delay of 0 for utm. Given this set-up, which tf source could be the one publishing at too low a rate? If I understand correctly, there is only one transform in the tree and it is publishing at a very decent rate.

jcerruti ( 2014-07-29 08:39:46 -0500 )

That looks like there's not a delay in your tree. So where is your timestamp that you are querying with coming from? Is it on a machine with a synchronized clock?

tfoote ( 2014-07-29 14:27:36 -0500 )

This is happening entirely on a single computer. The sensor data comes from a bag file which is run through a few nodes (basically localization and helpers). What I am debugging here is not my own code but rather the localization. The timestamp being queried is from the GPS message, which is at 1Hz.

jcerruti ( 2014-07-29 14:45:30 -0500 )

If everything is coming from a bag file the offset existed at the time of recording and the bag is just playing it back as it arrived. You will need to hold the data until the transforms are available. This is the purpose of the tf MessageFilter.

tfoote ( 2014-07-29 15:15:12 -0500 )

The GPS data comes from the bag file, the transforms are being published in real-time. I guess the solution will be to use a message filter or waitForTransform, but I still don't understand why there is such a big delay in processing the TF messages.

jcerruti ( 2014-07-29 16:01:35 -0500 )

answered 2014-09-02 15:54:01 -0500 by Tom Moore

updated 2014-09-10 09:08:22 -0500

Would you happen to have a bag file you can send me? I can take a look and either fix a bug if there is one or help determine what's wrong with your tf data. In the error message you're getting, the requested time is ~0.64 seconds ahead of the most recently available transform.
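That offset can be checked directly against the timestamps quoted in the original warning:

```python
requested = 1406223583.213000000   # GPS message stamp in the warning
latest    = 1406223582.576022606   # newest transform in the tf buffer
print("offset: %.3f s" % (requested - latest))   # offset: 0.637 s
```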

EDIT: here's the function call I'm using:

tfListener_.lookupTransform(targetFrame, poseTmp.frame_id_, poseTmp.stamp_, targetFrameTrans);

The function this call is in (preparePose) is generically taking in sensor data and trying to convert it to a target frame, which is going to be dependent on your parameters (probably the odom_frame parameter). For @jcerruti, I'm wondering if the issue is the third parameter, which is pulled directly from the message's header (in your case, the GPS data message header). Could your GPS driver be pulling that directly from the GPS unit, and not from your machine? Just a thought.

@mikepurvis, what sensor data message is causing this?


Comments

In my case, it's just ~3ms in the future (scroll right on the error message to see). The larger issue that I'm not understanding is why the node is looking this up at all, given that it's the publisher of it.

mikepurvis ( 2014-09-02 16:07:02 -0500 )



Stats

Asked: 2014-07-28 14:38:29 -0500

Seen: 12,435 times

Last updated: Sep 10 '14