Could use some guidance on how to navigate while avoiding obstacles

asked 2017-10-27 18:10:53 -0500

Auton0mous

I have a physical robot that needs to navigate from its current location A to some point B defined by a GPS coord while avoiding all physical obstacles.

The robot is equipped with a GPS of its own and has a LIDAR. Just not sure how to approach this.

The LIDAR has limited range, so the robot can't see very far; this can cause it to get stuck in certain situations. If the robot were building a map of the environment as it went along, using something like SLAM, this could be avoided.

I'm just not sure how I can do SLAM while simultaneously navigating to a specific goal and avoiding obstacles.

I looked at this tutorial: http://wiki.ros.org/cob_tutorials/Tut... but didn't learn much from it.


2 Answers


answered 2017-10-30 05:58:29 -0500

R. Tellez

Your problem requires a lot of work.

First, let's separate your questions into two problems:

  1. How to navigate using SLAM
  2. How to navigate using GPS

Those are two different things and need to be dealt with separately.
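For problem 2, the key building block is converting the GPS goal into the robot's local metric frame. Here is a minimal sketch using the equirectangular flat-earth approximation; the function name and constant are mine, not from any ROS package (the `navsat_transform_node` in `robot_localization` does this properly):

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def gps_to_local_xy(ref_lat, ref_lon, lat, lon):
    """Approximate a GPS coordinate as local (x, y) metres east/north of a
    reference point. The equirectangular approximation is fine over the few
    hundred metres a small outdoor robot typically covers."""
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon)
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(ref_lat))  # east
    y = EARTH_RADIUS_M * d_lat                                    # north
    return x, y
```

With that, a GPS goal becomes an (x, y) goal in a locally fixed frame, and the rest of the problem looks like ordinary goal-driven navigation.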

In this answer I'll cover how to solve the first problem using the ROS navigation stack (that is what the tutorial you linked uses, but there everything is already prepared for the cob robot).

  • In order to navigate using a laser for localization and obstacle avoidance, you first need to build a map of the environment you want to move through. For that, use the gmapping package: launch it, then drive the robot around with the keyboard or a joystick so it can build the map. Below is an example gmapping launch file for the Kobuki robot that you can adapt to your own robot:

    <launch>
      <arg name="scan_topic" default="kobuki/laser/scan"/>
      <arg name="base_frame" default="base_footprint"/>
      <arg name="odom_frame" default="odom"/>
      <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
        <remap from="scan" to="$(arg scan_topic)"/>
      </node>
    </launch>
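Note that the launch file above declares `base_frame` and `odom_frame` args but never passes them to the node, so gmapping will fall back to its defaults. If your frames differ, wire them through, and consider tuning gmapping while you are at it. The fragment below goes inside the `slam_gmapping` `<node>` tag; the tuning values are illustrative starting points of mine, not from the original answer:

```xml
<param name="base_frame" value="$(arg base_frame)"/>
<param name="odom_frame" value="$(arg odom_frame)"/>
<param name="maxUrange" value="10.0"/>          <!-- usable laser range (m) -->
<param name="map_update_interval" value="5.0"/> <!-- seconds between map updates -->
<param name="linearUpdate" value="0.2"/>        <!-- process a scan every 0.2 m travelled -->
<param name="angularUpdate" value="0.2"/>       <!-- ...or every 0.2 rad turned -->
<param name="particles" value="80"/>            <!-- particle filter size -->
```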

  • Once the map is done, save it with the following command (it writes name_of_map.pgm and name_of_map.yaml into the current directory):

    rosrun map_server map_saver -f name_of_map
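The saved .yaml file is the metadata that map_server loads in the next step; it looks roughly like this (values here are illustrative):

```yaml
image: name_of_map.pgm
resolution: 0.050000         # metres per pixel
origin: [-10.0, -10.0, 0.0]  # pose of the lower-left pixel in the map frame
negate: 0
occupied_thresh: 0.65        # cells above this probability count as obstacles
free_thresh: 0.196           # cells below this count as free space
```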

  • Then you are ready to use that map for localization and for sending the robot to different locations on the map while avoiding obstacles. Kill the gmapping node and launch the localization package (amcl). Together with amcl you need to launch the map_server (to serve the map you created in the previous step) and move_base (to make the robot move around while avoiding obstacles). An example launch file for that would be the following:

    <launch>
      <arg name="map_file" default="$(find turtlebot_navigation_gazebo)/maps/my_map.yaml"/>
      <node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)"/>
      <arg name="initial_pose_x" default="0.0"/>
      <arg name="initial_pose_y" default="0.0"/>
      <arg name="initial_pose_a" default="0.0"/>
      <include file="$(find turtlebot_navigation)/launch/includes/amcl/amcl.launch.xml">
        <arg name="initial_pose_x" value="$(arg initial_pose_x)"/>
        <arg name="initial_pose_y" value="$(arg initial_pose_y)"/>
        <arg name="initial_pose_a" value="$(arg initial_pose_a)"/>
      </include>
      <include file="$(find turtlebot_navigation)/launch/includes/move_base.launch.xml"/>
    </launch>

Here is the amcl.launch.xml file:

<launch>
<arg name="use_map_topic"   default="false"/>
<arg name="scan_topic"      default="kobuki/laser/scan"/> 
<arg name="initial_pose_x"  default="0.0"/>
<arg name="initial_pose_y"  default="0.0"/>
<arg name="initial_pose_a"  default="0.0"/>
<arg name="odom_frame_id"   default="odom"/>
<arg name="base_frame_id"   default="base_footprint"/>
<arg name="global_frame_id" default="map"/>
<node pkg="amcl" type="amcl" name="amcl">
<param name="use_map_topic"             value="$(arg use_map_topic)"/>
<!-- Publish scans from best pose at a max of 10 Hz -->
<param name="odom_model_type"           value="diff"/>
<param name="odom_alpha5"               value="0.1"/>
<param name="gui_publish_rate"          value="10.0"/>
<param name="laser_max_beams"             value="60"/>
<param name="laser_max_range"           value="12.0"/>
<param name="min_particles"             value="500"/>
<param name="max_particles"             value="2000"/>
<param name="kld_err"                   value="0.05"/>
<param name="kld_z"                     value="0.99"/>
<param name="odom_alpha1"               value="0.2"/>
<param name="odom_alpha2"               value="0.2"/>
<!-- translation parameters (answer truncated on the original page) -->
</node>
</launch>
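Once amcl and move_base are running, you can send the robot a goal from RViz (the 2D Nav Goal tool) or from the command line by publishing a PoseStamped on move_base's simple-goal topic. The coordinates below are placeholders:

```shell
rostopic pub /move_base_simple/goal geometry_msgs/PoseStamped \
  '{header: {frame_id: "map"}, pose: {position: {x: 1.0, y: 0.5, z: 0.0}, orientation: {w: 1.0}}}' --once
```

move_base will then plan a global path on the map and steer around obstacles the laser sees along the way.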

answered 2017-10-28 21:05:05 -0500

Deep

I am also working on a similar problem. What I have done is break the whole task down into three smaller problems:

1) Goal calculation - I use the LIDAR-based SRT algorithm. It is a sensor-based random tree algorithm that uses heuristics to calculate a goal position in free space.

2) Obstacle avoidance - I use the potential field method to decide the robot's velocity and steering direction so that it can move to the desired position calculated in 1).

3) Line/edge SLAM - Based on odometry and LIDAR data, I do SLAM to build an edge map of the environment. This is in turn fed back into algorithm 1) to further refine the goal calculation.
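The potential-field step in 2) can be sketched like this. This is a toy 2-D version; the function name, gains, and influence radius are all made up for illustration, not taken from the answer:

```python
import math

def potential_field_step(pos, goal, obstacles,
                         k_att=1.0, k_rep=100.0, influence=2.0):
    """One potential-field step: an attractive pull toward the goal plus a
    repulsive push away from each obstacle inside the influence radius.
    Returns a unit direction vector (dx, dy) to steer along."""
    # Attractive force: proportional to the vector toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive force from each nearby obstacle (classic 1/d - 1/d0 form).
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy)
    return (fx / norm, fy / norm) if norm > 1e-9 else (0.0, 0.0)
```

The well-known weakness of this method (and one likely source of integration trouble) is local minima: the attractive and repulsive forces can cancel before the goal is reached, which is exactly where the goal-recalculation layer in 1) has to step in.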

Individually, the algorithms work as intended. However, I am facing some problems integrating 1) and 2).

I hope this helps. Let me know if you want to know more information.


Stats

Seen: 2,781 times

Last updated: Oct 30 '17