Messages Dropped in tf while running a Gmapping node

asked 2021-08-09 07:01:20 -0500

Tahir M.

updated 2021-08-10 04:16:27 -0500

I have mounted an Ouster lidar (OS1-32) on a Clearpath Husky and changed the startup launcher on the Husky. The whole system, including the Ouster, starts fine, but the Ouster driver takes some time to launch. I also included a launch file for pointcloud_to_laserscan so that I can use gmapping. But when I launch gmapping it gives me this warning:

[ WARN] [1628510026.166979944]: MessageFilter [target=odom ]: Dropped 100.00% of messages so far. Please turn the [ros.gmapping.message_filter] rosconsole logger to DEBUG for more information.

[ WARN] [1628510026.167108311]: MessageFilter [target=odom ]:   The majority of dropped messages were due to messages growing older than the TF cache time.  The last message's timestamp was: 1521.345187, and the last frame_id was: os_sensor

Also, I can't see any map being built.

Can anyone give me an idea how to solve this problem?

My guess is that the Ouster node is launched a little late and this time delay is creating the issue (so maybe republishing the lidar data with the current timestamp would solve it?), or that the pointcloud_to_laserscan node only publishes on the /scan topic when there is a subscriber (maybe that is causing it?).
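For reference, the drop condition in the second warning is essentially an age check against the tf buffer's cache window (10 seconds by default in tf2). A minimal sketch of that check, using the stamp from the warning and the wall-clock time from the log header as made-up inputs:

```python
# Sketch of the MessageFilter drop condition: a message is dropped when its
# stamp has fallen outside the tf cache window (default 10 s in tf2).
CACHE_TIME = 10.0  # seconds, tf2 default cache_time

def is_dropped(msg_stamp, now, cache_time=CACHE_TIME):
    """True if a message stamped msg_stamp is older than the tf cache."""
    return (now - msg_stamp) > cache_time

# Hypothetical numbers: the sensor stamps with an unsynchronized clock
# (e.g. seconds since power-on), while tf data uses wall-clock time.
sensor_stamp = 1521.345187   # like the stamp in the warning
wall_now = 1628510026.167    # like the wall-clock time in the log header

print(is_dropped(sensor_stamp, wall_now))  # → True: every message is "too old"
```

With a gap of roughly 50 years between the two clocks, 100% of messages fail this check, which matches the warning.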

Launch file for pointcloud_to_laserscan:

<?xml version="1.0"?>
<launch>

  <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="ouster_laserscan_node" output="screen" required="true">
    <remap from="/cloud_in" to="/ouster/points"/>
    <remap from="/scan" to="/ouster/scan"/>

    <param name="~/min_height"        value="0.0"/>
    <param name="~/max_height"        value="5.0"/>
    <param name="~/angle_min"         value="-0.3926991"/>
    <param name="~/angle_max"         value="0.3926991"/>
    <param name="~/angle_increment"   value="0.006135"/>
    <param name="~/scan_time"         value="0.3333"/> 
    <param name="~/range_min"         value="0.3"/>
    <param name="~/range_max"         value="45"/>       
    <!-- <param name="~/target_frame"      value=""/> # Leave disabled to output scan in pointcloud frame -->
    <param name="~/concurrency_level" value="0"/>
    <param name="~/use_inf"           value="true"/>
  </node>

</launch>
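As a sanity check on the parameters above: the angle range and increment determine how many beams the output LaserScan will have (pointcloud_to_laserscan sizes the ranges array roughly as ceil((angle_max - angle_min) / angle_increment)). A quick calculation with the values from this launch file:

```python
import math

# Values taken from the launch file above (about ±22.5 degrees)
angle_min = -0.3926991
angle_max = 0.3926991
angle_increment = 0.006135

# Approximate ranges-array size produced by pointcloud_to_laserscan
n_beams = math.ceil((angle_max - angle_min) / angle_increment)
print(n_beams)  # → 129
```

So the published scan should contain about 129 range values; if `rostopic echo /ouster/scan` shows something very different, the parameters are not being picked up.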

Any help/suggestions are highly appreciated.

Edit 1:

Just to check what the problem could be, I used a topic_tools transform node to republish the data with updated timestamps, and the error is gone. I am not sure whether this is the right thing to do, though, since I don't know what problems it might cause.

Edit 2:

Republishing the data:

<?xml version="1.0"?>
<launch>

  <arg name="topic_in"  default="/ouster/points" />
  <arg name="topic_out" default="/ouster/points/filtered" />

  <node name="ouster_points_filter" pkg="topic_tools" type="transform"
        args="$(arg topic_in) $(arg topic_out)
              sensor_msgs/PointCloud2                                            
              'sensor_msgs.msg.PointCloud2(header=std_msgs.msg.Header(stamp=rospy.Time.now(), frame_id=m.header.frame_id), 
                                           height=m.height, width=m.width, fields=m.fields, is_bigendian=m.is_bigendian,
                                           point_step=m.point_step, row_step=m.row_step, data=m.data, is_dense=m.is_dense)'
                                           --import sensor_msgs std_msgs rospy"/>

</launch>
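For comparison, the core of that transform expression is just copying the message and replacing the header stamp. Sketched below with plain dataclasses standing in for std_msgs/Header and sensor_msgs/PointCloud2 (a real relay node would use the generated rospy message classes and rospy.Time.now() instead):

```python
import time
from dataclasses import dataclass, replace

# Stand-ins for std_msgs/Header and sensor_msgs/PointCloud2;
# only the fields relevant to restamping are modeled here.
@dataclass
class Header:
    stamp: float
    frame_id: str

@dataclass
class PointCloud2:
    header: Header
    data: bytes

def restamp(msg: PointCloud2, now: float) -> PointCloud2:
    """Return a copy of msg whose header.stamp is set to 'now',
    keeping the frame_id and payload untouched."""
    return replace(msg, header=Header(stamp=now, frame_id=msg.header.frame_id))

old = PointCloud2(Header(stamp=1521.345187, frame_id="os_sensor"), data=b"\x00")
new = restamp(old, time.time())
print(new.header.frame_id, new.header.stamp > old.header.stamp)
```

Note that, as pointed out in the comments below, restamping like this only masks the underlying clock problem; it does not fix it.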

Comments

If pointcloud_to_laserscan takes too much time to compute, the messages could become too old and then gmapping can't use them to build the map. You should check how long the node takes to make the conversion.

Alrevan (2021-08-09 09:20:10 -0500)

This is happening with the pointcloud as well. I have had a look in RViz, and there it also reports the same issue in the odom frame. The same is also true if I configure this to work with hdl_graph_slam.

Tahir M. (2021-08-09 09:32:15 -0500)

@Alrevan I've updated the question. Maybe you can say something about that?

Tahir M. (2021-08-10 03:14:34 -0500)

The majority of dropped messages were due to messages growing older than the TF cache time. The last message's timestamp was: 1521.345187, and the last frame_id was: os_sensor

1521.345187 doesn't seem like a timestamp I'd expect to see on real hardware. Are you running Gazebo somewhere at the same time? Is use_sim_time set to true on the parameter server?

Is the clock on the Husky perhaps reset to 1970?

gvdhoorn (2021-08-10 03:33:59 -0500)

Neither is Gazebo running, nor is use_sim_time set to true. I am curious: if the timestamp is not the cause, then why does the error disappear when I republish the data using topic_tools transform with the current ROS time?

I have added the code for republishing.

Tahir M. (2021-08-10 04:14:34 -0500)

I am curious: if the timestamp is not the cause, then why does the error disappear when I republish the data using topic_tools transform with the current ROS time?

the cause is the timestamp. I'm not sure what made you conclude I'm saying something else.

But republishing with now() is very much not what you should do.

You should find out why the timestamps are incorrect.

I would check the clocks on all involved PCs.
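A stamp of roughly 1521 seconds looks more like time since sensor power-on than wall-clock time, which fits the clock-mismatch theory. A quick way to compare the clocks of the machines involved (the hostname below is a placeholder, not something from this thread):

```shell
# Print the UTC clock as epoch seconds on each machine; the values are
# easy to diff across hosts:
date -u +%s

# If an NTP server is reachable, query the clock offset without
# actually setting the clock (time.example.com is a placeholder):
# ntpdate -q time.example.com
```

If the epoch values differ by more than a few seconds, tf lookups between data from those machines will fail exactly as in the warning.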

gvdhoorn (2021-08-10 05:02:34 -0500)

You mentioned a strong point I hadn't thought of. I changed the Clearpath Husky's time to the local time zone, but I don't think the Ouster's clock was ever set, and since that is the sensor causing the problem, I think the issue lies there.

the cause is the timestamp. I'm not sure what made you conclude I'm saying something else.

Sorry for misunderstanding.

But republishing with now() is very much not what you should do.

Yes, I know it's not good, but I just wanted to confirm that this is the cause; that's the only reason I did it.

Tahir M. (2021-08-10 06:26:14 -0500)