
Accuracy of Autonomous navigation using navigation stack in ROS

asked 2013-12-07 18:06:15 -0500 by RB

Hi, for a complex maze or unstructured indoor environment (without any inclined floors), is there any possibility that the localization and obstacle detection mechanisms of ROS will fail while a robot moves autonomously using high-level navigation?

Thanks in advance


1 Answer


answered 2013-12-07 18:47:51 -0500 by mirzashah (updated 2013-12-07 18:53:25 -0500)

Oh yeah, failure can happen. It tends to happen in real life more often than one would like. It's not so much a factor of the algorithms (as in the ROS part of it) but rather how good your sensors are... i.e. how good your 3D sensor is (in terms of both range and accuracy) and how good your odometry is. Lasers work better than Kinect/PrimeSense as they're more accurate and give longer range (I think you can only get about 10-15 meters with a Kinect). If you're in a really big, sparse area and you're not detecting any features with your 3D sensor, then it all comes down to how good your dead reckoning is... the localization algorithms have nothing else to work with, so you lose localization fast. The Kinect also suffers in very sunny rooms and cannot detect close objects. If you're in a tight maze where the walls are only a meter apart, a Kinect would probably be useless unless you sort of wiggled while you drove forward.

Regarding odometry, if you have good wheel encoders and an IMU, you'll get a good estimate. You also have to consider the type of floor your robot is driving on... slipping wheels mess up the wheel encoder readings. You can also use tricks like "navigational markers", e.g. special tags or beacons that help correct the position of the robot automatically.
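To make the wheel-slip point concrete: here's a minimal dead-reckoning sketch (plain Python, not ROS code — the function name and numbers are purely illustrative) showing how a differential-drive pose estimate integrated from encoder ticks drifts once the encoders report motion that didn't match what the wheels actually did:

```python
import math

def dead_reckon(ticks, ticks_per_meter=1000.0, wheel_base=0.3):
    """Integrate differential-drive encoder ticks into a pose (x, y, theta)."""
    x = y = theta = 0.0
    for left_ticks, right_ticks in ticks:
        d_left = left_ticks / ticks_per_meter
        d_right = right_ticks / ticks_per_meter
        d_center = (d_left + d_right) / 2.0
        x += d_center * math.cos(theta)
        y += d_center * math.sin(theta)
        theta += (d_right - d_left) / wheel_base
    return x, y, theta

# Robot commanded to drive 1 m straight, in 10 encoder updates.
clean = [(100, 100)] * 10
# Same drive, but slip corrupts the left encoder reading on half the steps.
slipping = [(100, 100), (80, 100)] * 5

print(dead_reckon(clean))     # ends near (1.0, 0.0) with heading 0
print(dead_reckon(slipping))  # position and heading have drifted
```

Note there is nothing here to pull the estimate back: every corrupted step bends the heading a little more, and the position error compounds from then on — which is exactly why pure dead reckoning degrades so fast without features for the localizer to latch onto.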

To add to the problems, neither lasers nor the Kinect can deal with glass/mirrors... the only sensor that can help you with that is acoustic (sonar). Bump sensors work too, but you have to actually hit the obstacle. The ROS nav stack currently doesn't have support for sonars... but it's straightforward to add as a "costmap plugin" if you're down with C++.
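A real costmap plugin would be a C++ class implementing `costmap_2d::Layer` (overriding `updateBounds()`/`updateCosts()`), but the core marking logic such a plugin would perform can be sketched framework-free. The sketch below is illustrative, not the actual plugin API — the grid layout, cone width, and function name are all assumptions. The key sonar quirk it captures: a sonar reports only a range, not a bearing, so the obstacle must be marked along the whole arc inside the sensor's cone:

```python
import math

LETHAL = 254  # costmap_2d's "lethal obstacle" cost value

def mark_sonar_cone(grid, resolution, sx, sy, heading, rng,
                    half_angle=math.radians(15)):
    """Mark grid cells covered by a sonar return as lethal obstacles.

    The echo could have come from anywhere on the arc at distance `rng`
    within the sensor's cone, so every cell on that arc gets marked.
    """
    for gy in range(len(grid)):
        for gx in range(len(grid[0])):
            dx = gx * resolution - sx
            dy = gy * resolution - sy
            dist = math.hypot(dx, dy)
            if abs(dist - rng) > resolution / 2.0:
                continue  # cell is not on the reported range arc
            bearing = math.atan2(dy, dx) - heading
            bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap
            if abs(bearing) <= half_angle:
                grid[gy][gx] = LETHAL

# 1 m x 1 m grid at 5 cm resolution; sonar at (0.5, 0.5) facing +x,
# reporting an echo at 0.4 m.
grid = [[0] * 20 for _ in range(20)]
mark_sonar_cone(grid, resolution=0.05, sx=0.5, sy=0.5, heading=0.0, rng=0.4)
```

In the real plugin you'd do this inside `updateCosts()` against the master costmap, and `updateBounds()` would report just the cone's bounding box so the whole map isn't re-swept every cycle.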


Comments


Thanks for all the information you have provided, @mirzashah. Using rviz and selecting the goal through the rviz window made me believe that ROS autonomous navigation is error-free, since in lots of videos I have seen most robots move precisely through the environment.

RB  ( 2013-12-08 05:55:39 -0500 )

Again, it will work error-free in the right environment... but unexpected things can sometimes cause you to lose localization. Also, never believe what you see in videos, especially in the fields of robotics and artificial intelligence! Oftentimes they leave out the parts where the robot goes out of control and starts driving over children and puppies ;)

mirzashah  ( 2013-12-08 13:00:25 -0500 )

Can the ROS autonomous navigation stack give two different completion times for two experiments in the same environment with the same starting and ending positions?

RB  ( 2014-10-13 08:39:26 -0500 )

Theoretically, I think it is possible. The reason is that many path-planning algorithms are non-deterministic, which means you are likely to get different routes planned even with the same start and finish points. But that doesn't mean it's always going to be different.

Ammar Albakri  ( 2022-09-16 01:59:57 -0500 )
