Environment server intermittently fails to sync planning scene. [closed]

asked 2012-05-31 07:03:47 -0600

Adolfo Rodriguez T

Using arm_navigation under ROS electric in a single-machine setup, I've been experiencing intermittent delays of 5, 10, 15, ... seconds (on top of the usual plan-generation time), along with log messages like:

Did not get reply from planning scene client /ompl_planning. Incrementing counter to 1

I'd like to ask whether other people have encountered this issue as well, on the PR2 or other robots. Any feedback would help me determine whether this deserves a bug report or whether the problem lies on my side.


Gory details:

Apparently what's going on is that the environment server attempts to sync the planning scene through action clients connected to servers living in nodes like ompl_planning, trajectory_filter_server, right_arm_kinematics, ... and then times out waiting for a result (hence the 5 x n second delays).
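The retry pattern I infer from the logs looks roughly like the sketch below. This is only a minimal illustration, assuming the SyncPlanningSceneAction interface from arm_navigation_msgs and a fixed 5-second result timeout; it is not the actual environment server code.

    // Minimal sketch of the sync/retry pattern described above -- an
    // assumption based on the observed logs, not the real implementation.
    #include <ros/ros.h>
    #include <actionlib/client/simple_action_client.h>
    #include <arm_navigation_msgs/SyncPlanningSceneAction.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "sync_client_sketch");

      // spin_thread=true: the client spins its callbacks in its own thread.
      actionlib::SimpleActionClient<arm_navigation_msgs::SyncPlanningSceneAction>
          client("/ompl_planning/sync_planning_scene", true);
      client.waitForServer();

      arm_navigation_msgs::SyncPlanningSceneGoal goal;
      // goal.planning_scene would be filled in with the current scene here.
      client.sendGoal(goal);

      // Each missed result adds another 5 s wait, which would produce the
      // 5 x n second delays observed.
      int counter = 0;
      while (!client.waitForResult(ros::Duration(5.0)))
      {
        ++counter;
        ROS_WARN("Did not get reply from planning scene client. "
                 "Incrementing counter to %d", counter);
      }
      return 0;
    }

If something like this is what runs inside the environment server, one plausible reading of the symptoms is that under heavy CPU load the client's callback thread is starved and misses the result message, so waitForResult times out even though the result was published; that would be consistent with the client remaining in WAITING_FOR_RESULT below.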

The odd thing is that, as far as I could test, the planning scene sync operation does succeed, but the environment server is not aware of it. The following is an excerpt from running a very simple arm_navigation setup:

Timestamp |  Node          |  Message
------------------------------------------------------------------------------------------------------------------
719.729   | /ompl_planning | Accepting goal, id: /environment_server...
719.729   | /ompl_planning | Syncing planning scene
719.729   | /ompl_planning | Reverting planning scene
...       |                |
719.740   | /ompl_planning | Setting status as succeeded on goal, id: /environment_server...
719.740   | /ompl_planning | Publishing result for goal with id: /environment_server...
719.742   | /ompl_planning | Setting took 0.0138
724.765   | /envir_server  | Did not get reply from planning scene client /ompl_planning. Incrementing counter to 1

I verified with rostopic echo that the result is indeed published on /ompl_planning/sync_planning_scene/result at timestamp 719.740, and that the associated action client inside the environment server remains in the WAITING_FOR_RESULT state.
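For reference, the check was essentially:

    rostopic echo /ompl_planning/sync_planning_scene/result

The result message shows up at timestamp 719.740, yet the client never transitions out of WAITING_FOR_RESULT.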

Finally, this behavior seems to be associated with computer load: I need to add CPU stress to make it happen. Other stress sources (I/O, memory, high-frequency localhost pinging) seem to have a negligible effect.
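One way to generate the kind of CPU load that triggers the behavior (assuming the stress utility is available; any method of saturating the cores should do) is:

    stress --cpu 8    # spawn 8 CPU-bound workers; match to the machine's core count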

If you made it this far, thanks for reading :).

NOTE: The issue reported in this question exhibits similar symptoms, but the underlying problem seems to be a different one.


Closed for the following reason: question is not relevant or outdated (by tfoote)
close date 2015-03-03 01:48:16.455025