
High CPU usage for AMCL and move_base under namespace

asked 2016-06-21 10:11:41 -0600

amiller27

I am trying to run the ROS navigation stack on a TurtleBot, with everything under a namespace (in this case, the namespace is robot_0). The netbook that everything runs on has a dual-core Intel Atom processor with 2 GB of RAM, and is running ROS Indigo on Ubuntu 14.04. Everything is fine when I run the sample navigation launch files that do not use namespaces (i.e. minimal.launch from turtlebot_bringup and amcl_demo.launch from turtlebot_navigation). In this configuration, move_base uses approximately 22% of one core and amcl uses under 10% of one core. amcl then publishes the transform from the /map frame to the /odom frame at 30 Hz, as expected.

However, when I switch to my custom configuration that runs everything under a namespace, the CPU usage for amcl and for move_base each jumps to approximately 80%, pushing the total usage of each core to 100%. amcl is only able to publish the transform at around 5 Hz, and the most recent transform available from /map to /robot_0/odom (the equivalent of /odom under the namespace) is over 2 seconds old. I tried the commonly suggested solution of turning down the costmap publishing frequency, and that didn't help (I also don't think it should be necessary, because everything runs fine with the default parameters in the non-namespaced case). A rough sketch of the namespaced setup follows, and after it the warnings this configuration produces.
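For context, the namespaced bringup looks roughly like the sketch below. The file paths, parameters, and frame names here are illustrative, not my exact files (those are in the repository linked at the end of this question):

    <launch>
      <!-- Illustrative sketch: run the navigation stack under the robot_0
           namespace. Frame ids must carry the namespace prefix so amcl
           can find the namespaced odom and base_footprint frames. -->
      <group ns="robot_0">
        <include file="$(find turtlebot_bringup)/launch/minimal.launch"/>

        <node pkg="amcl" type="amcl" name="amcl">
          <param name="odom_frame_id" value="robot_0/odom"/>
          <param name="base_frame_id" value="robot_0/base_footprint"/>
          <param name="global_frame_id" value="map"/>
        </node>

        <node pkg="move_base" type="move_base" name="move_base"/>
      </group>
    </launch>

With this configuration, I get the following warnings from the costmap: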

[ WARN] [1466518518.525473823]: Costmap2DROS transform timeout. Current time: 1466518518.5254, global_pose stamp: 1466518518.0204, tolerance: 0.5000
[ WARN] [1466518518.525571024]: Could not get robot pose, cancelling pose reconfiguration

This warning is published at approximately 2 Hz. If I increase the transform_tolerance on the costmaps to 3.5 s (as I did in the configurations included below), the warnings become much less frequent, but they don't always disappear completely.
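For reference, the tolerance bump can be expressed as costmap parameters like the sketch below (I actually set it in the costmap YAML files; the parameter paths assume the standard move_base costmap namespaces):

    <!-- Relax the costmap transform timeout from the 0.5 s default to 3.5 s.
         Sketch only; normally these values live in the costmap YAML files. -->
    <node pkg="move_base" type="move_base" name="move_base">
      <param name="global_costmap/transform_tolerance" value="3.5"/>
      <param name="local_costmap/transform_tolerance" value="3.5"/>
    </node>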

However, even if I increase the transform tolerance, I still occasionally get the following warnings:

[ WARN] [1466518719.455966268]: MessageFilter [target=map ]: Dropped 100.00% of messages so far. Please turn the [ros.costmap_2d.message_notifier] rosconsole logger to DEBUG for more information.
[ WARN] [1466518719.713827484]: MessageFilter [target=robot_0/odom ]: Dropped 100.00% of messages so far. Please turn the [ros.costmap_2d.message_notifier] rosconsole logger to DEBUG for more information.

The percentage of messages dropped varies from around 90% to 100%.

I also sporadically get errors like the following, whether or not I increase the transform_tolerance:

[ERROR] [1466518772.987647336]: Extrapolation Error looking up robot pose: Lookup would require extrapolation into the past.  Requested time 1466518762.913725848 but the earliest data is at time 1466518762.988183070, when looking up transform from frame [robot_0/base_footprint] to frame [map]

I assume that all of these problems are caused by amcl not publishing the transform as quickly as it should, due to some performance issue, but I have not found any difference between my launch configuration and the default that would explain the increased CPU load.

All of the configuration files are located in the testing branch of the git repository here. The tf trees ...


1 Answer


answered 2017-03-10 09:51:55 -0600

gavran

The problem was a cycle in the tf tree. I found it by running roswtf, which reported that there were cycles and that both robot_state_publisher and camera_base_link were publishing the same transforms.

After setting the publish_tf argument of 3dsensor.launch to false, the cycle was removed and the problem went away.
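A minimal sketch of that change, assuming 3dsensor.launch is included from turtlebot_bringup and exposes the publish_tf argument (check the exact path and argument name in your TurtleBot version):

    <!-- Include the 3D sensor pipeline but disable its own TF publisher,
         so it no longer duplicates transforms already published by
         robot_state_publisher. -->
    <include file="$(find turtlebot_bringup)/launch/3dsensor.launch">
      <arg name="publish_tf" value="false"/>
    </include>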

