
best way for odometry

asked 2020-08-08 07:54:37 -0600

dinesh

We are trying to implement hector mapping, path planning, and autonomous navigation on our ground and aerial robots. My questions are:

1. What is the best way to get good odometry data for a ground vehicle? Are encoders enough? Do encoders plus an IMU give the best odometry for localization? What is the role of GPS, and is there any benefit to using GPS for localization?

2. How good is it to use only the AMCL package for localizing a robot? Is it accurate enough on its own, without encoders and other sensors?


Comments

All of your questions could be correctly answered in different ways depending on how you are approaching the issue and what your use case is. I suggest you research these on answers.ros.org and elsewhere; if you have a more specific question after doing some research, you will get more meaningful input.

billy  ( 2020-08-08 15:00:28 -0600 )

1 Answer


answered 2020-08-11 02:19:47 -0600

KalanaR

updated 2020-08-11 02:23:57 -0600

It depends on your approach, your environment, and the resources you have. I'll try to give a rough idea of this. Buckle up.

First, as I understand it, localization is finding where the robot is in its environment. For humans, that's counting steps; for robots, it's calculating distance travelled using encoders (as far as I know, most robots come equipped with them). The issue with this approach is that, due to mechanical play, wheel slip, and what not, the encoder reading drifts. Put formally, "world frame -> robot frame" becomes "world frame -> drifted frame -> robot frame". The main thing to remember is that the encoder values are continuous (the value does not jump from time to time).
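The "counting steps" idea can be sketched for a differential-drive robot: wheel encoder ticks are integrated into a pose. This is a toy sketch, not any particular package; the tick count, wheel radius, and wheel base constants are my own made-up numbers.

```python
import math

# Hypothetical robot parameters (not from the original post):
TICKS_PER_REV = 1000   # encoder ticks per wheel revolution
WHEEL_RADIUS = 0.05    # wheel radius in metres
WHEEL_BASE = 0.30      # distance between the two wheels in metres

def update_odometry(x, y, theta, d_ticks_left, d_ticks_right):
    """Integrate one step of differential-drive odometry from encoder deltas."""
    # Convert tick deltas into wheel travel distances
    dist_per_tick = 2.0 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = d_ticks_left * dist_per_tick
    d_right = d_ticks_right * dist_per_tick
    # The robot advances by the average distance and turns by the difference
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    # Midpoint integration of the pose
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

# Driving straight: both wheels advance one full revolution
x, y, theta = update_odometry(0.0, 0.0, 0.0, 1000, 1000)
```

Any wheel slip here silently corrupts `x`, `y`, and `theta`, which is exactly the drift described above.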

So now, if you want to calculate the real position of the robot, you need to calculate the "world frame -> drifted frame" transform separately (you can still find "drifted frame -> robot frame" using the encoders). To find it, you use an independent input to evaluate the environment: the AMCL algorithm uses laser scans, SLAM uses images and point clouds, Kalman filters use GPS, IMU, and other sensors, etc. With these you can calculate the correction, which is "world frame -> drifted frame". Unlike the encoder values, since you are calculating a correction, this is not continuous; the value jumps suddenly.
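The frame chain above can be sketched by composing 2D poses: the continuous encoder transform and the jumpy correction multiply together to give the true pose. The numeric values are arbitrary illustrations.

```python
import math

def compose(a, b):
    """Compose two 2D poses (x, y, theta): apply b in the frame of a."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

# Continuous "drifted frame -> robot frame" from the encoders
# (in ROS terms: odom -> base_link)
odom_to_base = (2.0, 0.0, 0.0)

# Discontinuous correction "world frame -> drifted frame"
# (map -> odom); this is what can jump between updates
map_to_odom = (0.5, -0.1, 0.0)

# True pose in the world frame = correction composed with odometry
map_to_base = compose(map_to_odom, odom_to_base)
```

When the localizer revises `map_to_odom`, the final pose jumps, but the encoder transform itself stays smooth.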

When you are capturing sensor readings, it's usually done against the encoder values, since they are continuous; later on, you can transform them to corrected values after calculating the error. That covers ground robots. For flying robots, an IMU and GPS combination is usually used. Now, to come to your questions:
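The Kalman-filter style of blending a drifting encoder estimate with a jumpy but drift-free fix can be shown in one dimension. This is a minimal sketch with invented variances, not a substitute for a real filter such as `robot_localization`.

```python
def kalman_update(x_pred, var_pred, z, var_meas):
    """One scalar Kalman measurement update: blend the prediction and the
    measurement in proportion to their uncertainties."""
    k = var_pred / (var_pred + var_meas)   # Kalman gain
    x_new = x_pred + k * (z - x_pred)      # corrected estimate
    var_new = (1.0 - k) * var_pred         # uncertainty shrinks after fusing
    return x_new, var_new

# Encoder odometry predicts 10.0 m with variance 4.0 (it drifts over time);
# a GPS fix reads 10.8 m with variance 1.0 (noisy but drift-free)
x_est, var_est = kalman_update(10.0, 4.0, 10.8, 1.0)
```

The result lands closer to the GPS reading because the odometry variance is larger, which is the fusion behaviour described above.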

  1. For a ground vehicle, encoders alone are generally not enough; you would need additional support from AMCL, SLAM, or a Kalman filter. But since you are using hector mapping (which is a SLAM approach), you would be fine with just encoders for odometry. (The hector_mapping documentation says it doesn't need odometry at all, but I'm not sure whether that's just the package or the algorithm itself; you would have to dig a bit into that.) You might be able to combine GPS with the encoders for more accuracy, but I'm not sure whether that would be overkill.

  2. AMCL is a support system for localization: it identifies many possible locations based on probability and odometry, then picks out the best candidate location using laser scans (as far as I know). So you would still need the encoders for AMCL.
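The "many candidate locations weighted by laser scans" idea in point 2 can be illustrated with a toy one-dimensional particle filter. Everything here is a made-up example (the wall position, sensor noise, and particle count are mine), and real AMCL resamples the particle set and propagates it with odometry between scans rather than just picking one winner.

```python
import math
import random

random.seed(0)  # deterministic for illustration

# Hypothetical 1D world: a wall sits at 10.0 and the robot is really
# at 3.0, so an ideal range sensor would read 7.0
WALL = 10.0
TRUE_POSITION = 3.0
measurement = WALL - TRUE_POSITION

# 1. Spread candidate positions ("particles") across the world
particles = [random.uniform(0.0, 10.0) for _ in range(500)]

# 2. Weight each particle by how well it explains the sensor reading,
#    using a Gaussian sensor model with 0.5 m standard deviation
def weight(p):
    error = (WALL - p) - measurement
    return math.exp(-error * error / (2.0 * 0.5 ** 2))

weights = [weight(p) for p in particles]

# 3. Keep the best-supported hypothesis
best = max(zip(weights, particles))[1]
```

The winning particle lands near the true position of 3.0: the scan likelihood does the selecting, while odometry (not shown) would move every particle between scans, which is why AMCL still wants odometry as input.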

Hope this clarifies your questions.



Stats

Asked: 2020-08-08 07:54:37 -0600

Seen: 768 times

Last updated: Aug 11 '20