Tf lookup would require extrapolation into the past
Hello,
I am running ROS Melodic on a Jetson TX2. I have been getting the following error while trying to perform autonomous navigation and obstacle avoidance:
tf exception that should never happen for sensor frame: , cloud frame: chassis, lookup would require extrapolation into the past. requested time 1628682262.496646643 but the earliest data is at time 1628682262.665975198, when looking up transform from frame [chassis] to frame [odom]
I have tried the usual suggestions such as ros::Time(0), but I haven't written any custom code that requests this kind of transform. I am attaching my costmap common params and both my local and global costmap params. I get this error when I use the Non-Persistent Voxel Layer in the local costmap; when I use the Spatio-Temporal Voxel Layer instead, the error does not appear. I can't attach files because I don't have enough karma, so I am pasting them below:
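The timestamps in the error message are themselves informative: the costmap requested the chassis→odom transform about 169 ms *before* the oldest transform available in the tf buffer. A quick check of the gap (pure Python, numbers copied verbatim from the error):

```python
# Timestamps copied from the tf error message.
requested = 1628682262.496646643  # time the costmap asked for
earliest  = 1628682262.665975198  # oldest transform in the tf buffer

gap_ms = (earliest - requested) * 1000.0
print(f"transform requested {gap_ms:.1f} ms before the buffer's earliest data")
```

Note that the buffer's *earliest* data being newer than the request usually points to the tf buffer having been reset (e.g. a node restart) or to clocks on different machines disagreeing; a lookup that is merely waiting for fresh data produces "extrapolation into the future" instead.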
costmap commons:
footprint: [[0.5,0.5], [0.5,-0.5], [-0.5,-0.5], [-0.5,0.5] ]
footprint_padding: 0.01
map_type: voxel
global_frame: odom
robot_base_frame: chassis #chassis
transform_tolerance: 10.0
always_send_full_costmap: true
inflation_layer:
  enabled: true
  inflation_radius: 0.55 #0.43 circumradius
  cost_scaling_factor: 5.0
non_persistent_obstacle_layer:
  enabled: true
  max_obstacle_height: 2
  obstacle_range: 5.0
  origin_z: 0.0
  z_resolution: 0.2
  z_voxels: 10
  unknown_threshold: 10
  mark_threshold: 0
  publish_voxel_map: true
  track_unknown_space: true
  footprint_clearing_enabled: false
  observation_sources: point_cloud_sensor
  point_cloud_sensor:
    data_type: PointCloud2
    topic: /voxel_grid/output
    raytrace_range: 10.0
    observation_persistence: 0.0
    marking: true
    min_obstacle_height: 0.5
    max_obstacle_height: 1.5
rgbd_obstacle_layer:
  enabled: true
  voxel_decay: 1
  decay_model: 0
  voxel_size: 0.05
  track_unknown_space: true
  observation_persistence: 0.0
  max_obstacle_height: 1.0
  unknown_threshold: 15
  mark_threshold: 5
  update_footprint_enabled: true
  combination_method: 1
  obstacle_range: 8
  origin_z: 0.0
  publish_voxel_map: false
  transform_tolerance: 0.2
  mapping_mode: false
  map_save_duration: 60
  observation_sources: pc2_clear pc2_mark
  pc2_mark:
    data_type: PointCloud2
    topic: voxel_grid/output
    marking: true
    clearing: false
    min_obstacle_height: 0.1
    max_obstacle_height: 1.0
    expected_update_rate: 0.0
    observation_persistence: 0.0
    inf_is_valid: false
    clear_after_reading: true
    voxel_filter: false
  pc2_clear:
    data_type: PointCloud2
    topic: voxel_grid/output
    marking: false
    clearing: true
    min_z: 0
    max_z: 10
    vertical_fov_angle: 1.1
    horizontal_fov_angle: 1.6
    decay_acceleration: 15
    model_type: 0
local_costmap:
  global_frame: odom
  robot_base_frame: chassis
  update_frequency: 20
  publish_frequency: 5
  rolling_window: true
  static_map: false
  width: 20
  height: 20
  resolution: 0.5
  transform_tolerance: 10.0
  plugins:
    #- {name: static_layer, type: "costmap_2d::StaticLayer"}
    #- {name: rgbd_obstacle_layer, type: "spatio_temporal_voxel_layer/SpatioTemporalVoxelLayer"}
    - {name: non_persistent_obstacle_layer, type: "costmap_2d::NonPersistentVoxelLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}
global_costmap:
  global_frame: odom
  robot_base_frame: chassis
  update_frequency: 5
  publish_frequency: 5
  rolling_window: false
  static_map: true
  transform_tolerance: 10.0
  resolution: 0.5
  height: 500
  width: 500
  plugins:
    - {name: static_layer, type: "costmap_2d::StaticLayer"}
    - {name: rgbd_obstacle_layer, type: "spatio_temporal_voxel_layer/SpatioTemporalVoxelLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}
Please let me know how to solve this error, as I want to use the Non-Persistent Voxel Layer in my local costmap instead of STVL. Thanks in advance.
How are you generating the odom->chassis transform? How often is it published to the /tf topic?

I am using the robot_localization package to publish the transform between odom and chassis, and it is running at 30 Hz.
That should be more than enough. Do you use multiple hosts in this ROS system? Are their time-of-day clocks all properly synchronized to an NTP server?
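One way to check clock agreement without installing anything extra is a minimal SNTP query from each host using only the Python standard library. This is a sketch under assumptions: `pool.ntp.org` and the 2-second timeout are placeholders, substitute your own NTP server. Run it on every machine in the ROS graph and compare the reported offsets; a disagreement on the order of the ~169 ms gap in the error would explain the lookups failing.

```python
import socket
import struct
import time

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_DELTA = 2208988800

def ntp_to_unix(ntp_seconds):
    """Convert an NTP timestamp (seconds since 1900) to Unix time."""
    return ntp_seconds - NTP_DELTA

def query_ntp_offset(server="pool.ntp.org", timeout=2.0):
    """Return (server_time - local_time) in seconds via a single SNTP request.

    Assumption: 'server' is reachable on UDP port 123.
    """
    packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    local = time.time()
    # Transmit timestamp lives in bytes 40-47: 32-bit seconds + 32-bit fraction.
    secs, frac = struct.unpack("!II", data[40:48])
    server_time = ntp_to_unix(secs + frac / 2**32)
    return server_time - local

# Example (requires network access):
# print(f"clock offset vs NTP: {query_ntp_offset():+.3f} s")
```

If the offsets differ between hosts, syncing them all to the same NTP server (e.g. via chrony or ntpd) is the usual fix before touching any costmap parameters.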
@cjoshi17 did you find any solution to this? Lately I have been facing the exact same issue on my stack with the same config files.