
Navigation for a Humanoid robot

asked 2013-09-09 21:05:44 -0600 by Johannes

updated 2014-01-28 17:17:53 -0600 by ngrennan

Hello, after being quite confused about how to navigate my biped robot in a partly unknown environment (I do have a floorplan, but I do not know where obstacles are), I want to share my thoughts and ask you for feedback on whether my approach makes sense.

What I do have:

  • a static OccupancyGrid of the floorplan

  • odometry data from the robot

  • PointCloud from Kinect

  • a desired Goal

My plan now is to use:

  • amcl (with kinect_to_laser, tf, and the static floorplan) to get /map -> /odom

  • costmap_2d (with static floorplan, tf and PointCloud2) to create a map for the global planner

  • navfn (or any other global planner) to get a global path for the robot (with CostMap2D and tf)

  • octomap_server (with PointCloud2 and tf) to get an OccupancyGrid for the step planner (local planner)

  • footstep_planner (with the occupancy grid, a local goal on the global path, and tf) to finally get footsteps.

I think the setup should be quite fine, but I'm not sure how to implement this in ROS.

I had a look at the move_base node and the BaseLocalPlanner interface from nav_core and thought about adding the octomap_server and the footstep_planner as a local planner, but the interface defined in nav_core (roughly reproduced below) does not fit the needs of the footstep planner.
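For reference, this is roughly what the nav_core::BaseLocalPlanner plugin interface looks like in the Groovy/Hydro-era navigation stack (reconstructed from memory, so check your installed nav_core headers). The key mismatch: computeVelocityCommands() has to fill in a geometry_msgs::Twist, while a footstep planner produces discrete step placements rather than velocities.

```cpp
// Sketch of the nav_core local planner plugin interface (ROS Groovy/Hydro era).
// The constraint that causes trouble for footstep planning:
// computeVelocityCommands() must output a velocity command.
#include <string>
#include <vector>
#include <geometry_msgs/Twist.h>
#include <geometry_msgs/PoseStamped.h>
#include <costmap_2d/costmap_2d_ros.h>
#include <tf/transform_listener.h>

namespace nav_core {

class BaseLocalPlanner {
public:
  // Called by move_base on every control cycle; must produce a Twist.
  virtual bool computeVelocityCommands(geometry_msgs::Twist& cmd_vel) = 0;

  // move_base uses this to decide when to stop calling the local planner.
  virtual bool isGoalReached() = 0;

  // Receives the global plan (e.g. from navfn) as a list of poses.
  virtual bool setPlan(const std::vector<geometry_msgs::PoseStamped>& plan) = 0;

  // Wired to move_base's tf listener and local costmap at load time.
  virtual void initialize(std::string name,
                          tf::TransformListener* tf,
                          costmap_2d::Costmap2DROS* costmap_ros) = 0;

  virtual ~BaseLocalPlanner() {}

protected:
  BaseLocalPlanner() {}
};

}  // namespace nav_core
```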

My second approach would be to use costmap_2d and navfn as standalone nodes and write a new node, similar to move_base, that coordinates the planning (a rough sketch of such a coordinator is below). But I'm not sure whether running all the nodes from the navigation stack standalone wouldn't be quite a lot of work.
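For what it's worth, here is a minimal sketch of what such a coordinating node might look like, under two assumptions that are not part of any particular setup: the global planner is exposed through a nav_msgs/GetPlan service (move_base offers one as make_plan; the /planner/make_plan name below is a placeholder), and the footstep planner accepts a geometry_msgs/PoseStamped local goal on a topic (here assumed to be /footstep_goal). The node periodically requests a global path, picks the pose a fixed distance ahead along it, and forwards that pose as the local goal.

```cpp
// Minimal coordinator sketch: request a global plan, pick a local goal on it,
// and hand that goal to a footstep planner. Service/topic names are placeholders.
#include <cmath>
#include <vector>
#include <ros/ros.h>
#include <nav_msgs/GetPlan.h>
#include <geometry_msgs/PoseStamped.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "footstep_coordinator");
  ros::NodeHandle nh;

  // Assumed: the global planner offers a nav_msgs/GetPlan service.
  ros::ServiceClient plan_client =
      nh.serviceClient<nav_msgs::GetPlan>("/planner/make_plan");

  // Assumed: the footstep planner listens for a PoseStamped local goal.
  ros::Publisher local_goal_pub =
      nh.advertise<geometry_msgs::PoseStamped>("/footstep_goal", 1);

  const double lookahead = 1.0;  // [m] distance along the path to the local goal
  ros::Rate rate(0.5);           // re-plan every 2 seconds

  while (ros::ok())
  {
    nav_msgs::GetPlan srv;
    // In a real node, fill srv.request.start from localization (amcl / tf)
    // and srv.request.goal from the user-specified goal.
    srv.request.start.header.frame_id = "map";
    srv.request.goal.header.frame_id = "map";
    srv.request.tolerance = 0.2;

    if (plan_client.call(srv) && !srv.response.plan.poses.empty())
    {
      // Walk along the global path until the accumulated distance exceeds
      // the lookahead, then use that pose as the local goal.
      const std::vector<geometry_msgs::PoseStamped>& poses = srv.response.plan.poses;
      double dist = 0.0;
      size_t idx = 0;
      for (size_t i = 1; i < poses.size(); ++i)
      {
        const double dx = poses[i].pose.position.x - poses[i - 1].pose.position.x;
        const double dy = poses[i].pose.position.y - poses[i - 1].pose.position.y;
        dist += std::sqrt(dx * dx + dy * dy);
        idx = i;
        if (dist >= lookahead)
          break;
      }
      local_goal_pub.publish(poses[idx]);
    }

    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```

Progress monitoring, filling the start pose from amcl/tf, and re-planning on failure are left out; the point is only that the glue between a standalone global planner and the footstep planner can be a fairly small node.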

It would be great if anyone could give me some suggestions and feedback on the approach itself and what would be the best way to implement this in ROS.

Best Wishes

Johannes


2 Answers


answered 2013-10-17 02:44:53 -0600 by AHornung

If you already use OctoMap for a 3D map, then you could also have a look at the humanoid_localization package. Since you already use a 3D sensor, you could obtain more accurate estimates than with amcl, which throws information away by converting the point cloud to a fake laser scan.

Using the footstep_planner is a good starting point. Be aware, however, that you will get sub-optimal results if you rely on navfn as a global planner instead of using just the footstep planner: navfn will route around obstacles even where footsteps over or across them would be possible. Depending on your robot, this may not be a problem.

In general, I found the ROS nav stack not really suited to a walking humanoid and too complicated to set up; there were too many things that didn't work.


Comments

Here's an example of using a static OctoMap, humanoid_localization, and a dynamically built OctoMap for obstacle avoidance with a humanoid: http://www.youtube.com/watch?v=srcx7lPoIfw

AHornung  ( 2013-10-17 02:46:32 -0600 )

answered 2013-09-30 19:29:02 -0600

updated 2013-10-15 21:09:08 -0600

Hi Johannes,

This question is not trivial, and I am just trying my best to give you some suggestions, as I have some experience with both humanoid robots and mobile navigation; however, I have never combined them. May I ask what platform you are using now?

Your plan looks fine. First you need amcl to get your current position, and you can definitely use navfn to get your global path. When you get to the local planner, however, I think it is designed for a mobile-base robot, and you did not mention what the interface to your humanoid is. If your robot accepts v_x, v_y, and v_theta, you can try to implement your first plan. A mobile base has only two command DOF while a humanoid has three; since the humanoid's command space covers the mobile base's, that should be fine. If your humanoid does not accept velocity as input, I suggest you write a small piece of code to convert it (a rough sketch of one way to do this follows).
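A minimal sketch of such a conversion, assuming the walking controller accepts a relative displacement per step as a geometry_msgs/Pose2D (the /cmd_vel and /step_command topic names and the fixed step period are assumptions, not part of any particular robot's interface): it simply integrates the latest commanded velocity over one step cycle.

```cpp
// Rough sketch: turn a velocity command (as a local planner would publish on
// /cmd_vel) into a relative step displacement for a humanoid that only accepts
// discrete steps. Topic names, message choice (Pose2D) and step period are assumptions.
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <geometry_msgs/Pose2D.h>

class VelToStep
{
public:
  explicit VelToStep(ros::NodeHandle& nh) : step_period_(0.8)
  {
    step_pub_ = nh.advertise<geometry_msgs::Pose2D>("/step_command", 1);
    vel_sub_  = nh.subscribe("/cmd_vel", 1, &VelToStep::velCallback, this);
    // Trigger one step every step_period_ seconds; tune for the robot's gait.
    timer_ = nh.createTimer(ros::Duration(step_period_), &VelToStep::stepCallback, this);
  }

private:
  void velCallback(const geometry_msgs::Twist::ConstPtr& msg)
  {
    last_cmd_ = *msg;
  }

  void stepCallback(const ros::TimerEvent&)
  {
    // Integrate the commanded velocity over one step period to get a
    // relative displacement in the robot frame.
    geometry_msgs::Pose2D step;
    step.x     = last_cmd_.linear.x  * step_period_;
    step.y     = last_cmd_.linear.y  * step_period_;
    step.theta = last_cmd_.angular.z * step_period_;
    step_pub_.publish(step);
  }

  ros::Publisher step_pub_;
  ros::Subscriber vel_sub_;
  ros::Timer timer_;
  geometry_msgs::Twist last_cmd_;  // zero-initialized until the first /cmd_vel arrives
  double step_period_;             // [s] assumed duration of one step cycle
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "vel_to_step");
  ros::NodeHandle nh;
  VelToStep node(nh);
  ros::spin();
  return 0;
}
```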

For your second plan, you really need to spend a lot of time, especially if you want to use only part of the navigation stack. The last time we wanted to replace the costmap module with our own, we eventually gave up and rewrote a simple navigation module ourselves (that also took one or two months).

Above all, it depends on your focus. If navigation (just moving to the goal) is the main focus, you should try the first plan and let your humanoid accept velocity input. If your focus is footstep planning (placing every step at a specific location), you may need to take on the second plan.


Comments

I can not use velocity as input; I need to use footstep locations, e.g. StepTarget2D from the humanoid_navigation stack. I have already started to write a new local planner that uses the footstep planner from the humanoid_navigation stack, but as you mentioned, it will take time until it is stable.

Johannes  ( 2013-10-15 21:52:26 -0600 )
