
TF_OLD_DATA error when I publish to a new topic

asked 2019-10-17 04:27:27 -0500 by lotfishtaine

updated 2019-10-18 05:28:02 -0500

Hello,

I'm trying to create a map of the environment using pcl_ndt, with data from a simulated Velodyne in Gazebo. I'm subscribing to a PointCloud2 message and publishing the generated map as a PointCloud2 message on a new topic, /ndt_map.

However, after a few seconds of publishing the /ndt_map topic, I get this warning:

Warning: TF_OLD_DATA ignoring data from the past for frame base_link at time 36,221 according to authority /ndt_map
Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained
         at line 277 in /tmp/binarydeb/ros-melodic-tf2-0.6.5/src/buffer_core.cpp

I already used this command that should have solved the problem:

rosparam set use_sim_time true

but the error still persists.
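For reference, a minimal sketch (not part of the original setup; the node name is illustrative) that can help confirm a node is really running on simulated time, since mixing wall time and sim time is a common cause of TF_OLD_DATA:

#include <ros/ros.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sim_time_check");  // illustrative node name
  ros::NodeHandle nh;

  bool use_sim_time = false;
  nh.param("/use_sim_time", use_sim_time, false);

  // With use_sim_time=true, ros::Time only becomes valid once /clock is
  // published (e.g. by Gazebo). Wait up to 5 s for a non-zero time.
  const bool time_valid = ros::Time::waitForValid(ros::WallDuration(5.0));

  ROS_INFO("use_sim_time=%s, time valid=%s, now=%.3f",
           use_sim_time ? "true" : "false",
           time_valid ? "true" : "false",
           ros::Time::now().toSec());
  return 0;
}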

I saw that this can happen when replaying a bag file, but in my case I'm using live data from the Gazebo simulator.

Thanks for helping shed light on this issue!

UPDATE: this is my callback:

void pts_callback(const sensor_msgs::PointCloud2::ConstPtr& input){
    // Convert the sensor_msgs/PointCloud2 data to pcl/PointCloud
    pcl::fromROSMsg(*input, cloud);

    ndt.align(*outputCloud, init);

    // Get the transformation matrix
    t = ndt.getFinalTransformation();

    // Transform the cloud with the transformation matrix
    pcl::transformPointCloud(cloud, *transformed_cloud, t);

    tf::TransformBroadcaster br;
    tf::Transform transform;

    // Create a tf transform from t
    transform = get_tf_transform(t);

    br.sendTransform(tf::StampedTransform(transform, input->header.stamp, "world", "base_link"));

    sensor_msgs::PointCloud2::Ptr map_msg(new sensor_msgs::PointCloud2);
    pcl::toROSMsg(*transformed_cloud, *map_msg);

    ndt_pub.publish(*map_msg);
}

I'm subscribing to the velodyne_points topic with:

points_sub_ = nd.subscribe("velodyne_points", 100000, &pts_callback, this);

and I publish the map with:

ndt_pub = nd.advertise<sensor_msgs::PointCloud2>("ndt_map", 1000);

Comments

Do you update the timestamp of the PointCloud2 message before publishing it every time?

Jari (2019-10-17 12:26:56 -0500)

Yes, for the broadcasting I use input->header.stamp. I updated the question with the callback that I use; see the edit above.

lotfishtaine (2019-10-18 05:23:47 -0500)

I'm unsure about C++, but in Python when I create a new message it doesn't set the timestamp (so secs and nsecs both start at 0). That's probably what happens when you initialize map_msg. Try setting the stamp explicitly for map_msg as well (equal to input->header.stamp, like you do for the transform).

Jari (2019-10-18 12:08:59 -0500)
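A minimal sketch of that suggestion in C++ (the helper name, point type, and the "world" frame_id are assumptions, not from the original code):

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>  // pcl::toROSMsg
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Convert and publish the transformed cloud, copying the stamp from the
// input message so the map is never published with a zero or stale stamp.
void publish_map(const pcl::PointCloud<pcl::PointXYZ>& transformed_cloud,
                 const sensor_msgs::PointCloud2::ConstPtr& input,
                 ros::Publisher& ndt_pub)
{
  sensor_msgs::PointCloud2::Ptr map_msg(new sensor_msgs::PointCloud2);
  pcl::toROSMsg(transformed_cloud, *map_msg);
  map_msg->header.stamp = input->header.stamp;  // reuse the source stamp
  map_msg->header.frame_id = "world";           // assumed frame of the map
  ndt_pub.publish(*map_msg);
}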

I am not 100% sure, but I think what is happening is an error caused by your queue size. If you reduce this size, my bet is that you are not going to have the issue anymore:

points_sub_ = nd.subscribe("velodyne_points", 1, &pts_callback, this);

Let me try to explain why I think that is.

Let's imagine that your callback function is slower than the rate at which messages arrive. When this happens, you will be receiving data on which you cannot actually run the callback function (because it is already running). This data will be put on a queue until the callback function is available to be called again. The queue will keep accumulating more and more data, until it reaches a point where you are using data with a stamp far enough in the past to be considered "actually past", and this is why it ...(more)

Solrac3589 (2019-10-24 02:21:12 -0500)

... If you reduce the queue size, the accumulation is bounded (to the number you give; in your case you can accumulate up to 100000!) and all the extra messages are discarded. Let me know if this was the issue!

Solrac3589 (2019-10-24 02:22:16 -0500)
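A hedged diagnostic (not from the thread's code) that could go at the top of the callback in the question: log how old each incoming cloud is relative to the current ROS time. A steadily growing age would confirm the queue is backing up as described above.

const ros::Duration age = ros::Time::now() - input->header.stamp;
ROS_INFO_THROTTLE(1.0, "incoming cloud age: %.3f s", age.toSec());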

1 Answer


answered 2019-10-18 18:47:00 -0500 by tfoote

updated 2019-10-23 15:58:27 -0500

TF_OLD_DATA has nothing to do with your callback. It means that you're receiving old data on the /tf topic that's getting dropped.

Please see: http://wiki.ros.org/tf/Errors%20explained

Updates:

OK, your structure is a little unusual. You'll need to make sure that your timestamps are always valid. You might be mixing sim time and real time, which would cause this issue. You should also check whether something else is publishing conflicting transforms for the same frame.

You should also make sure that your TransformBroadcaster is not created anew in each callback. That's a recipe for failure: the connections are established in the background after the constructor returns, and it's a race condition whether they are complete before you call sendTransform. If you take a look at the tutorials, we always recommend creating the publishers outside of your callback, usually as a member, so that they persist and don't need to be reinitialized.
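A hedged sketch of that structure (class name, member names, and queue sizes are illustrative; the NDT details are elided): the broadcaster and publisher are members, created once, and reused in the callback.

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <tf/transform_broadcaster.h>

class NdtMapper
{
public:
  NdtMapper()
  {
    ndt_pub_ = nh_.advertise<sensor_msgs::PointCloud2>("ndt_map", 1000);
    // Small queue so stale clouds are dropped (see the comments above).
    points_sub_ = nh_.subscribe("velodyne_points", 1,
                                &NdtMapper::ptsCallback, this);
  }

private:
  void ptsCallback(const sensor_msgs::PointCloud2::ConstPtr& input)
  {
    // ... NDT alignment as in the question ...
    tf::Transform transform = tf::Transform::getIdentity();  // placeholder for get_tf_transform(t)

    // The broadcaster is a long-lived member, so its connections are
    // already established by the time sendTransform is called.
    br_.sendTransform(tf::StampedTransform(transform, input->header.stamp,
                                           "world", "base_link"));
    // ... build and publish the map message with the same stamp ...
  }

  ros::NodeHandle nh_;
  ros::Publisher ndt_pub_;
  ros::Subscriber points_sub_;
  tf::TransformBroadcaster br_;  // created once, persists across callbacks
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "ndt_mapper");
  NdtMapper mapper;
  ros::spin();
  return 0;
}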


Comments

On the contrary, if you look at the error, the TF_OLD_DATA is related to the base_link frame and the /ndt_map is generated in the callback.

lotfishtaine (2019-10-23 07:42:53 -0500)
