Obstacle Avoidance while sending goals
Hello,
Is there a way to improve obstacle avoidance while the robot is executing a goal? I am using fiducial-based localization and I have an algorithm that enables object avoidance, but between sending a goal and the robot reaching it, the robot may encounter obstacles and drive straight into them. Is there a way to guarantee obstacle avoidance while sending goals through the actionlib server? It might be a stupid question, but I am new to ROS and still figuring out how all of this works.
Looking forward to your reply and thanks in advance.
Catarina
It would help if you could describe your setup a bit more. Which packages are you using? Which nodes? How are they configured? What sort of robot? What does "I have an algorithm that enables object avoidance" mean exactly? Have you implemented your own local planner for instance? Etc.
Obstacle avoidance during execution -- in the context of move_base -- is typically performed by local planners. If that is not working for you, then either you are using one which doesn't support this (but afaik all the standard ones do, as it's one of the main tasks of local planners), or there is a configuration issue. It will help forum members here if you could provide more information; otherwise it's going to be difficult to help you.
Thank you so much for your quick response.
I am running move_basic to accept commands to move to a given X,Y with a given rotation. This is done using an actionlib simple action client that sends MoveBaseGoal messages to the /move_base/goal topic (a simplified sketch of the goal-sending script is below). The robot is a differential drive robot, and I developed a very simple obstacle avoidance algorithm that reads the ranges of the robot's 5 sonars and rotates depending on the lowest sonar range. The robot only has a raspicam and 5 sonars, so I am using fiducial-based localization to get a more accurate pose.
At one point I tried to use AMCL and map_server, but because I am using fiducials I was not able to combine these packages. If you need any more info, please ask.
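For context, this is roughly what my goal-sending script does (a simplified sketch; the 'move_base' action name, the 'map' frame, and the goal values are placeholders for my actual setup):

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')

# move_basic exposes a move_base-style SimpleActionServer in my setup
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'       # placeholder frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0         # placeholder X
goal.target_pose.pose.position.y = 0.5         # placeholder Y
goal.target_pose.pose.orientation.w = 1.0      # placeholder rotation (identity)

client.send_goal(goal)
client.wait_for_result()
```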
It would still help if you could tell/show us how you've configured move_base. Also: where does your avoidance algorithm run? How does it interact with move_base? At what point in the control stack does it intervene? Please edit your original question text with all the additional information; use the edit button/link for that.
I am running a script that sends goals to move_basic, and before sending a goal the obstacle avoidance algorithm is run (roughly as in the sketch below). move_basic implements a SimpleActionServer that takes goals containing geometry_msgs/PoseStamped messages. I would use move_base, but I don't have a lidar or anything else to combine with the fiducial localization to create a global costmap (or maybe I just haven't thought of something yet). The only sensors the robot has are a raspicam, 5 sonars, and the wheel encoders.
If you could suggest an implementation or other packages that would work better than this one, keeping this setup in mind, it would be very much appreciated.
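To be concrete, the avoidance step that runs before sending a goal is roughly this (a simplified sketch; the '/sonars' and '/cmd_vel' topic names and the 0.3 m threshold are placeholders for my actual values):

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Range
from geometry_msgs.msg import Twist

ranges = {}  # latest reading per sonar, keyed by frame_id

def sonar_cb(msg):
    ranges[msg.header.frame_id] = msg.range

rospy.init_node('simple_avoidance')
rospy.Subscriber('/sonars', Range, sonar_cb)           # placeholder topic
cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    if ranges and min(ranges.values()) < 0.3:          # placeholder threshold
        twist = Twist()
        # rotate in place; in my real script the direction depends on
        # which sonar reports the lowest range
        twist.angular.z = 0.5
        cmd_pub.publish(twist)
    rate.sleep()
```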
According to wiki/move_basic - Obstacle Avoidance, it does support avoiding obstacles, but only by stopping when it detects an obstacle in its direction of travel; it does not plan a path around obstacles.
At that point: have you made sure to provide it with your sonar sensor data?
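A quick way to check that Range messages are actually arriving on the topic move_basic listens to would be something like this (the '/sonars' topic name here is just a guess; check which topic your move_basic instance is configured for):

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Range

def cb(msg):
    # print which sonar reported and the measured range
    rospy.loginfo('%s: %.2f m', msg.header.frame_id, msg.range)

rospy.init_node('sonar_check')
rospy.Subscriber('/sonars', Range, cb)   # placeholder topic name
rospy.spin()
```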
Yes I have, and it does work, but only about 20% of the time. Do you know a better approach, or how to improve the accuracy?
Sonar sensors, especially if you only have 5, give a very incomplete and noisy view of the surroundings. I'm not surprised it's not working reliably. There is definitely a reason for the use of $10k+ laser scanners on 'commercial' solutions.
Get a (cheap) laser scanner. Even an inexpensive one will most likely already significantly improve performance.
Alternatively: get a depth sensor (ie: a "3d camera") and use depthimage_to_laserscan to convert the depth image to a laser scan, then feed that to move_basic.
Sorry to ask this, I am still trying to figure out all these layers, but can I change the behavior of move_basic? Perhaps have it rotate in place instead of just stopping? Do I have to clone the git repository into my workspace and change the move_basic source?
Well, I was trying to avoid buying more hardware, but it's starting to feel inevitable. Thanks!