# Mutual robot recognition for collision avoidance in navigation

Hi, I am posting this question since I haven't found a complete answer anywhere else. I'll briefly go over my needs.

I need two TurtleBot3 robots to recognize each other as obstacles during navigation, so that one doesn't crash into the other. With the standard configuration of the navigation stack, that doesn't work out of the box.
This led me to two possible solutions:

• add a costmap layer to my navigation stack. Unfortunately, I understand that custom layers can only be written in C++, and since I don't know the language well and, for the purposes of my project, don't intend to learn it, I see this as a possible but less viable solution
• use a computer vision package that recognizes a robot by its shape. This lets me detect the robot, but not avoid it during the computation of a navigation plan. If I'm mistaken and there's a way to do that, please let me know.

I don't know if there are any other possible solutions to this problem; these are the only ones that come to mind.

EDIT: To clarify: I am using the standard global planner with the DWA local planner, my robots are equipped with laser scanners, and I'm using AMCL to estimate their positions. The map is not fixed: it may be known or unknown (in the latter case I launch gmapping instead of AMCL).


Can you please clarify: do you have a laser scanner that can detect the other robots? Have you considered sending the robots' localization poses to each other?

( 2019-11-21 08:41:43 -0500 )

Yes, I have; I'm using a TurtleBot3, so I thought that was implied.

( 2019-11-21 08:51:08 -0500 )

And why does it not pick up the other robots in the laser scanner data?

( 2019-11-21 09:04:20 -0500 )

It does, but it doesn't put them (their positions) into the costmaps.

( 2019-11-21 09:08:23 -0500 )


Usually, this is achieved through local perception alone; aren't you using any range sensors? That said, here is a potential approach:

I assume that you know the poses of the turtlebots, i.e. you can subscribe to their odometry or AMCL poses.

Indeed, you could write your own costmap layer plugin. However, you could also try the navigation stack in combination with the teb_local_planner, since it supports additional custom obstacles. You could write a small script, as described here, that publishes the positions of the other turtlebots as obstacle messages to each turtlebot. You will have a minor communication delay; however, it might be negligible.
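For illustration, such a relay script could look roughly like the sketch below. It subscribes to one robot's AMCL pose and republishes it to the other robot's teb_local_planner as a circular point obstacle. The namespaces (`tb3_0`, `tb3_1`), the topic remappings, and the robot radius are assumptions; adapt them to your own launch files.

```python
# Sketch of a pose-to-obstacle relay (assumed namespaces tb3_0 / tb3_1).

def pose_to_obstacle(x, y, obstacle_id=99, radius=0.15):
    """Describe a circular obstacle centred on the peer robot's pose.

    The dict mirrors the fields of costmap_converter/ObstacleMsg:
    a one-point polygon plus a radius yields a circular obstacle.
    The 0.15 m radius is a rough TurtleBot3 footprint (assumption).
    """
    return {"id": obstacle_id, "points": [(x, y)], "radius": radius}


def main():
    # ROS imports are kept inside main() so the helper above can be
    # tested without a ROS installation.
    import rospy
    from costmap_converter.msg import ObstacleArrayMsg, ObstacleMsg
    from geometry_msgs.msg import Point32, PoseWithCovarianceStamped

    rospy.init_node("peer_obstacle_relay")
    # teb_local_planner listens for custom obstacles on this topic
    # when it runs inside move_base.
    pub = rospy.Publisher("/tb3_1/move_base/TebLocalPlannerROS/obstacles",
                          ObstacleArrayMsg, queue_size=1)

    def on_pose(msg):
        p = msg.pose.pose.position
        data = pose_to_obstacle(p.x, p.y)
        arr = ObstacleArrayMsg()
        arr.header.stamp = rospy.Time.now()
        arr.header.frame_id = "map"
        obs = ObstacleMsg()
        obs.id = data["id"]
        obs.radius = data["radius"]
        obs.polygon.points = [Point32(x=px, y=py, z=0.0)
                              for px, py in data["points"]]
        arr.obstacles.append(obs)
        pub.publish(arr)

    rospy.Subscriber("/tb3_0/amcl_pose", PoseWithCovarianceStamped, on_pose)
    rospy.spin()

# main()  # run on a machine with ROS and the packages above installed
```

For two-way avoidance you would run a second instance with the namespaces swapped, so each robot sees the other as an obstacle.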


I have had great success with teb_local_planner, sending robot poses as dynamic obstacles to all other robots. I will publish the code for that soon.

( 2019-11-21 08:40:31 -0500 )

Hi, thanks for the answer. I'm using DWA as my local planner and I don't plan to change it for now, but I'll keep TEB in mind.
I hadn't thought about working with the different AMCL poses, but I think there's an issue. I might be wrong, but once a move_base_simple/goal has been set, the robot will drive towards it. If I run a separate node that says "if you're going to cross paths with another robot (based on its AMCL pose), steer away from it", I'm not sure that will work 100% of the time, because if you send concurrent messages to cmd_vel to steer away, the original trajectory will be modified. Am I wrong about this? I think it would be like using turtlebot_teleop while the robot is navigating.

( 2019-11-21 09:07:40 -0500 )
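The concern in the last comment is reasonable: move_base owns cmd_vel, and a second node publishing there concurrently will fight with it, just like teleoperating during navigation. One common workaround, not discussed in this thread and offered here only as a sketch, is a velocity multiplexer such as twist_mux, which forwards exactly one prioritized velocity source at a time. A hypothetical configuration, where every topic name is an assumption:

```yaml
# twist_mux topic configuration (sketch). move_base's output is remapped
# to navigation/cmd_vel; a hypothetical avoidance node publishes on
# avoidance/cmd_vel with a higher priority and wins while it is active.
topics:
  - name: navigation
    topic: navigation/cmd_vel   # remapped move_base output (assumed name)
    timeout: 0.5
    priority: 10
  - name: avoidance
    topic: avoidance/cmd_vel    # hypothetical override node
    timeout: 0.5
    priority: 100
```

twist_mux's output would then be remapped to the robot's actual cmd_vel topic, so only the winning source ever reaches the base.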

## Stats

Seen: 121 times

Last updated: Nov 21 '19