
amiller27's profile - activity

2022-03-07 11:03:12 -0500 received badge  Favorite Question (source)
2022-03-07 09:00:46 -0500 received badge  Nice Question (source)
2019-06-04 14:30:29 -0500 answered a question Could not load panel in rviz -- PluginlibFactory: The plugin for class ...

For posterity: I was having this same problem (also on ROS Indigo and Ubuntu 14.04); it was caused by the fact that AUT

2017-03-13 12:25:42 -0500 received badge  Supporter (source)
2017-03-13 12:25:32 -0500 received badge  Scholar (source)
2016-09-12 11:09:54 -0500 received badge  Famous Question (source)
2016-08-03 04:49:32 -0500 received badge  Notable Question (source)
2016-07-26 12:09:04 -0500 received badge  Popular Question (source)
2016-06-29 05:31:17 -0500 received badge  Enthusiast
2016-06-22 06:54:32 -0500 received badge  Student (source)
2016-06-21 11:17:01 -0500 asked a question High CPU usage for AMCL and move_base under namespace

I am trying to run the ROS navigation stack on a turtlebot, with everything under a namespace (in this case, the namespace is robot_0). The netbook that everything is running on has a dual-core Intel Atom processor with 2GB of RAM, and is running ROS Indigo on Ubuntu 14.04. Everything is fine when I run the sample navigation launch files that do not use namespaces (i.e. minimal.launch from turtlebot_bringup and amcl_demo.launch from turtlebot_navigation). In this configuration, move_base uses approximately 22% of one core, and amcl uses under 10% of one core. Amcl then publishes the transform from the /map frame to the /odom frame at 30Hz, as expected.
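
For concreteness, the namespaced setup is along these lines. This is a stripped-down sketch with placeholder package and file names (my_maps, my_nav_config), not my exact launch files, which are in the repository linked at the end of the question:

    <launch>
      <group ns="robot_0">
        <!-- map server, localization, and planning all under the robot_0 namespace -->
        <node pkg="map_server" type="map_server" name="map_server"
              args="$(find my_maps)/maps/map.yaml"/>

        <node pkg="amcl" type="amcl" name="amcl">
          <rosparam file="$(find my_nav_config)/param/amcl_params.yaml" command="load"/>
        </node>

        <node pkg="move_base" type="move_base" name="move_base">
          <rosparam file="$(find my_nav_config)/param/costmap_common_params.yaml"
                    command="load" ns="global_costmap"/>
          <rosparam file="$(find my_nav_config)/param/costmap_common_params.yaml"
                    command="load" ns="local_costmap"/>
          <rosparam file="$(find my_nav_config)/param/global_costmap_params.yaml" command="load"/>
          <rosparam file="$(find my_nav_config)/param/local_costmap_params.yaml" command="load"/>
          <rosparam file="$(find my_nav_config)/param/move_base_params.yaml" command="load"/>
        </node>
      </group>
    </launch>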

However, when I switch to my custom configuration that runs everything under a namespace, amcl and move_base each jump to approximately 80% CPU usage, pushing the total usage of each core to 100%. Amcl is then only able to publish the transform at around 5 Hz, and the most recent transform available from /map to /robot_0/odom (the equivalent of /odom under the namespace) is over 2 seconds old. I tried the commonly suggested solution of turning down the costmap publishing frequency, but that didn't help (I also don't think it should be necessary, because everything runs fine with the default parameters). My configuration causes the following warnings from the costmap:

[ WARN] [1466518518.525473823]: Costmap2DROS transform timeout. Current time: 1466518518.5254, global_pose stamp: 1466518518.0204, tolerance: 0.5000
[ WARN] [1466518518.525571024]: Could not get robot pose, cancelling pose reconfiguration

This warning is published at approximately 2 Hz. If I increase the transform_tolerance on the costmaps to 3.5 s (as you can see I did in the configurations included below), the warnings become much less frequent, but they don't always disappear completely.
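
For reference, the transform_tolerance change is just a parameter override along these lines inside the move_base block of the sketch above (again, not my exact file):

    <node pkg="move_base" type="move_base" name="move_base">
      <!-- costmap transform tolerance, in seconds -->
      <param name="global_costmap/transform_tolerance" value="3.5"/>
      <param name="local_costmap/transform_tolerance" value="3.5"/>
    </node>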

However, even if I increase the transform tolerance, I still occasionally get the following warnings:

[ WARN] [1466518719.455966268]: MessageFilter [target=map ]: Dropped 100.00% of messages so far. Please turn the [ros.costmap_2d.message_notifier] rosconsole logger to DEBUG for more information.
[ WARN] [1466518719.713827484]: MessageFilter [target=robot_0/odom ]: Dropped 100.00% of messages so far. Please turn the [ros.costmap_2d.message_notifier] rosconsole logger to DEBUG for more information.

The percentage of messages dropped varies from around 90% to 100%.

I also sporadically get errors like the following, whether or not I increase the transform_tolerance:

[ERROR] [1466518772.987647336]: Extrapolation Error looking up robot pose: Lookup would require extrapolation into the past.  Requested time 1466518762.913725848 but the earliest data is at time 1466518762.988183070, when looking up transform from frame [robot_0/base_footprint] to frame [map]

I have assumed that all of these problems are caused by amcl not publishing the transform as quickly as it should, due to some performance issue, but I have not found any difference between my launch configuration and the default that would explain the increased CPU load.
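
In case it is relevant, the frame-related parameters under the namespace look roughly like this. This is a sketch using the frame names that appear in the warnings above, not my exact parameter files:

    <group ns="robot_0">
      <node pkg="amcl" type="amcl" name="amcl">
        <!-- point amcl at the namespaced odom and base frames -->
        <param name="odom_frame_id"   value="robot_0/odom"/>
        <param name="base_frame_id"   value="robot_0/base_footprint"/>
        <param name="global_frame_id" value="map"/>
      </node>
      <node pkg="move_base" type="move_base" name="move_base">
        <!-- costmap frames: global costmap in map, local costmap in the namespaced odom frame -->
        <param name="global_costmap/global_frame"     value="map"/>
        <param name="global_costmap/robot_base_frame" value="robot_0/base_footprint"/>
        <param name="local_costmap/global_frame"      value="robot_0/odom"/>
        <param name="local_costmap/robot_base_frame"  value="robot_0/base_footprint"/>
      </node>
    </group>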

All of the configuration files are located in the testing branch of the git repository here. The tf trees ... (more)