Navigation stack in 3D

asked 2011-02-17 19:30:43 -0500 by Homer Manalo

updated 2011-02-17 21:27:31 -0500 by Eric Perko

Are the following things possible with the navigation stack:

  • Instead of going left or right when an obstacle is detected in front, the robot just stops and uses the vertical (z) space to move. Is it possible to modify the stack to do this?
  • The robot is not wheeled (it uses thrusters instead), but it moves like a differential drive.
  • Is it also possible to replace the laser with a stereo camera?

If not, what are the alternatives?


1 Answer


answered 2011-02-17 21:56:18 -0500 by Eric Perko

updated 2011-02-18 06:01:30 -0500

I'll address each of your points individually:

Instead of going left or right when an obstacle is detected in front, the robot just stops and uses the vertical (z) space to move. Is it possible to modify the stack to do this?

In order to do that, you would have to write at least two new chunks of code. One would implement the BaseLocalPlanner API and replace base_local_planner; the second would implement the BaseGlobalPlanner API and replace navfn. navfn and base_local_planner are the two parts of the navigation stack responsible for creating and executing path plans, so they are the chunks you would need to replace (or extend) first in order to realize motion in the z coordinate. Perhaps one of the more experimental algorithms such as sbpl_lattice_planner or ompl_planner_base could help you out with this (or at least serve as a better starting point). However, I'm not sure that the costmaps currently expose any way to query free space in the z coordinate.
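For illustration, here is a minimal sketch of what a replacement global planner plugin looks like against the nav_core::BaseGlobalPlanner interface. The package/class names and the planning logic are hypothetical; only the interface itself is real, and a genuinely z-aware planner would also need a 3D map source that costmap_2d does not provide:

    // Hypothetical skeleton of a replacement global planner plugin.
    // The nav_core::BaseGlobalPlanner interface is real; the planning
    // logic here is only a stub.
    #include <string>
    #include <vector>
    #include <nav_core/base_global_planner.h>
    #include <costmap_2d/costmap_2d_ros.h>
    #include <geometry_msgs/PoseStamped.h>
    #include <pluginlib/class_list_macros.h>

    namespace vertical_planner
    {
    class VerticalGlobalPlanner : public nav_core::BaseGlobalPlanner
    {
    public:
      void initialize(std::string name, costmap_2d::Costmap2DROS* costmap_ros)
      {
        costmap_ros_ = costmap_ros;
        // Read parameters and set up any 3D map representation here.
      }

      bool makePlan(const geometry_msgs::PoseStamped& start,
                    const geometry_msgs::PoseStamped& goal,
                    std::vector<geometry_msgs::PoseStamped>& plan)
      {
        plan.clear();
        plan.push_back(start);
        // A z-aware planner would insert intermediate poses here that climb
        // over obstacles instead of going around them. costmap_2d cannot
        // answer "is this z free?", so that query needs another data source.
        plan.push_back(goal);
        return true;
      }

    private:
      costmap_2d::Costmap2DROS* costmap_ros_;
    };
    }  // namespace vertical_planner

    // Register the plugin so move_base can load it by name.
    PLUGINLIB_EXPORT_CLASS(vertical_planner::VerticalGlobalPlanner,
                           nav_core::BaseGlobalPlanner)

A local planner replacement is analogous, except it implements nav_core::BaseLocalPlanner and produces velocity commands instead of a pose path.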

The robot is not wheeled (it uses thrusters instead), but it moves like a differential drive.

As long as it still behaves like a differential drive robot, navigation should be able to move it. In its current form, the nav stack would only be able to move it in the x-y plane, as it has no concept of moving up or down. You ought to be able to test the 2D case just by following the navigation stack setup tutorial.
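Concretely, "behaves like a differential drive" means your base controller consumes geometry_msgs/Twist messages on cmd_vel and turns the linear.x and angular.z fields into actuator commands. A sketch of that mapping for a thruster pair is below; the topic names, the assumed thruster separation, and the mixing formula are illustrative assumptions, not part of the nav stack:

    // Hypothetical base controller: maps cmd_vel (geometry_msgs/Twist) to
    // left/right thruster efforts using standard differential mixing.
    #include <ros/ros.h>
    #include <geometry_msgs/Twist.h>
    #include <std_msgs/Float64.h>

    ros::Publisher left_pub, right_pub;
    const double SEPARATION = 0.5;  // assumed thruster separation in meters

    void cmdVelCallback(const geometry_msgs::Twist::ConstPtr& cmd)
    {
      // Forward speed plus/minus half the rotation term per side.
      std_msgs::Float64 left, right;
      left.data  = cmd->linear.x - cmd->angular.z * SEPARATION / 2.0;
      right.data = cmd->linear.x + cmd->angular.z * SEPARATION / 2.0;
      // cmd->linear.z is ignored: the 2D nav stack never commands vertical motion.
      left_pub.publish(left);
      right_pub.publish(right);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "thruster_base_controller");
      ros::NodeHandle nh;
      left_pub  = nh.advertise<std_msgs::Float64>("left_thruster/command", 1);
      right_pub = nh.advertise<std_msgs::Float64>("right_thruster/command", 1);
      ros::Subscriber sub = nh.subscribe("cmd_vel", 1, cmdVelCallback);
      ros::spin();
      return 0;
    }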

Is it also possible to replace the laser with a stereo camera?

Yes, if your stereo camera pipeline eventually outputs a PointCloud in which each point is either inserted as an obstacle or raytraced to clear out obstacles between the sensor origin and that point. See the costmap_2d docs for more details on what sorts of sensors you can use with it. As @fergs pointed out, you will need a new global localization source, or you will need to operate the navigation stack in a frame equivalent to the "odom" frame used on ground robots. AMCL cannot currently function with a sensor such as a stereo camera unless you turn the PointCloud messages into LaserScans.
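Feeding the cloud into the costmap is just a configuration change on the observation source. Something like the following sketch, where the topic and frame names are assumptions for your setup:

    # Hypothetical costmap_2d configuration: mark and clear obstacles from a
    # stereo point cloud instead of a LaserScan.
    observation_sources: stereo_cloud
    stereo_cloud:
      sensor_frame: stereo_camera_link
      data_type: PointCloud2
      topic: /stereo/points2
      marking: true
      clearing: true
      min_obstacle_height: 0.05   # drop points on the floor
      max_obstacle_height: 2.0    # ignore points above the robot

For the AMCL workaround, the pointcloud_to_laserscan package is one existing way to produce a fake LaserScan from a cloud.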


Comments

One additional requirement -- you will probably need to find a localization replacement for AMCL, as it assumes planar scans from a laser.
fergs (2011-02-18 00:50:00 -0500)
My response to this question turned into a question of its own instead. Maybe it will help in the short term: http://answers.ros.org/question/189/navigation-planning-based-on-kinect-data-in-25d
evanmj (2011-02-27 06:18:13 -0500)
