As per @billy's answer, you can use map_server to load an existing map into your environment. Assuming you know the details of the environment you are trying to describe, you can use an image editor to draw the occupancy grid map (OGM) yourself. To accompany it you will need a .yaml file that tells map_server how to load and interpret your map. A typical YAML file looks like this:

image: testmap.png
resolution: 0.1
origin: [0.0, 0.0, 0.0]
occupied_thresh: 0.65
free_thresh: 0.196
negate: 0

The fields, in order, define:

  • The location of the image file
  • The resolution in meters/pixel (this depends on the scale at which you drew the image)
  • The pose [x, y, yaw] of the map origin, i.e. the lower-left corner of the image; set this to wherever your robot will consistently start in your drawn map (or initialize the robot's pose at a point other than the map origin at runtime)
  • The threshold above which a pixel is considered occupied (leave at the default if your map is black and white)
  • The threshold below which a pixel is considered free (leave at the default if your map is black and white)
  • Whether to invert the occupied/free interpretation of pixel values; leave at the default of 0
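To make the resolution and origin fields concrete, here is a small sketch of how a pixel in the drawn image maps to world coordinates, following map_server's convention that the origin is the pose of the map's lower-left corner. The function name and the 100-pixel map height are just illustrative, not a ROS API:

```python
# Sketch: converting an image pixel to world coordinates, assuming
# `origin` is the pose of the map's lower-left corner and yaw is 0.
# Names here are illustrative, not part of the ROS API.

def pixel_to_world(row, col, resolution, origin, image_height):
    """Return the world (x, y) of the centre of pixel (row, col).

    Image rows count down from the top, while world y grows upward,
    so the row index is flipped against the image height.
    """
    ox, oy, _yaw = origin  # yaw assumed 0 for an axis-aligned map
    x = ox + (col + 0.5) * resolution
    y = oy + (image_height - row - 0.5) * resolution
    return x, y

# With the example YAML above (resolution 0.1, origin [0, 0, 0]),
# the top-left pixel (0, 0) of a 100-pixel-tall map sits at the
# centre of the cell near (0.05, 9.95).
print(pixel_to_world(0, 0, 0.1, (0.0, 0.0, 0.0), 100))
```

Once the image and YAML agree, you can serve the map with `rosrun map_server map_server testmap.yaml`.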

The restriction of having no environmental sensor data affects navigation in that you won't have dynamic obstacle avoidance: you are restricted to whatever map you draw. Make sure you represent your environment accurately, and that nothing appears in the real environment that is missing from your map, or you might collide with it.

Using a global planner, your path will be generated to navigate the environment you have specified in your OGM. Local control is where sensor data is usually important: if you use a blank costmap, the local planner will follow a local path toward the global plan without dynamic avoidance. Without dynamic obstacle information, navigation assumes there are no unmapped obstacles and controls the robot to achieve the global plan, which has already accounted for the static obstacles you defined in your OGM.
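As a sketch of what "no dynamic avoidance" looks like in configuration: a costmap that relies only on the drawn static map would list just the static and inflation layers and omit the obstacle layer. The layer types below are the standard costmap_2d plugins, but treat the exact file layout as an assumption for your own navigation setup:

```yaml
# costmap_common_params.yaml (sketch): no obstacle layer, since there
# is no sensor data; the costmap comes entirely from the static map,
# inflated around the drawn obstacles.
plugins:
  - {name: static_layer,    type: "costmap_2d::StaticLayer"}
  - {name: inflation_layer, type: "costmap_2d::InflationLayer"}
```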

I can understand that a LIDAR might be expensive; I have personally chosen to use an Xbox Kinect, as it is fairly cheap and can sense the environment in 3D. If you go this route you do not have to work in 3D: you can squash everything down to 2D.
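If you do go the Kinect route and want a flat 2D scan, the depthimage_to_laserscan package converts a depth image into a LaserScan. A minimal launch sketch follows; the depth topic name is an assumption and depends on your camera driver:

```xml
<!-- Sketch: flatten Kinect depth data into a 2D LaserScan.
     The "image" remap target depends on your camera driver. -->
<launch>
  <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan"
        name="depth_to_scan">
    <remap from="image" to="/camera/depth/image_raw"/>
  </node>
</launch>
```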

Please consider choosing an answer :)