It depends on your approach, your environment, and the resources you have. I'll try to give a rough idea of this. Buckle up.

First, the way I understand it, localization is finding where the robot is in the environment. For humans, that's counting steps. For robots, it's calculating distance travelled using encoders (most robots come equipped with encoders, as far as I know). The issue with this approach is that due to mechanical play, wheel slip, and so on, the encoder readings drift. So if you state it formally, "world frame -> robot frame" becomes "world frame -> drifted frame -> robot frame". But the main thing to remember is that encoder values are continuous (the value does not jump from one moment to the next).
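As a rough sketch of what "calculating distance using encoders" means, here is a minimal dead-reckoning update for a differential-drive robot. This is illustrative Python, not ROS code; the wheel travel distances and wheel base are made-up numbers:

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """One dead-reckoning step from per-wheel travel distances (meters)."""
    d = (d_left + d_right) / 2.0            # distance moved by robot center
    dtheta = (d_right - d_left) / wheel_base  # change in heading
    # Advance along the average heading of the step.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Ten equal steps straight ahead (both wheels travel 0.1 m each step).
x, y, th = 0.0, 0.0, 0.0
for _ in range(10):
    x, y, th = update_pose(x, y, th, 0.1, 0.1, 0.3)
```

Each step adds a small, smooth increment, which is why the resulting pose is continuous; but any wheel slip gets integrated into the pose and never goes away, which is exactly the drift described above.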

So now, if you want to calculate the real position of the robot, you need to calculate the new "world frame -> drifted frame" transform separately (you can still find "drifted frame -> robot frame" using encoders). To find it, you use a different approach: an independent input that evaluates the environment. The AMCL algorithm uses laser scans, SLAM uses images and point clouds, Kalman filters fuse GPS, IMU, and other sensors, etc. Using these, you can calculate the correction, which is "world frame -> drifted frame". Unlike the encoder values, since you are calculating an error this is not continuous; the value can jump suddenly.
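The frame chain above can be sketched with plain 2D pose composition. This is illustrative Python with made-up numbers (not ROS/tf code): the continuous encoder estimate gives "drifted frame -> robot frame", the jumpy correction gives "world frame -> drifted frame", and composing them yields the corrected "world frame -> robot frame":

```python
import math

def compose(a, b):
    """Compose two 2D poses a ∘ b, each given as (x, y, theta)."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + bx * math.cos(ath) - by * math.sin(ath),
            ay + bx * math.sin(ath) + by * math.cos(ath),
            ath + bth)

# Encoders say the robot is at (2.0, 0.0, heading 0) in the drifted frame.
drifted_to_robot = (2.0, 0.0, 0.0)

# A correction (e.g. from laser scan matching) says the drifted frame
# itself sits at (0.1, -0.05, 0.02 rad) in the world frame.
world_to_drifted = (0.1, -0.05, 0.02)

world_to_robot = compose(world_to_drifted, drifted_to_robot)
```

When a new scan match arrives, only `world_to_drifted` jumps; the encoder-based `drifted_to_robot` keeps changing smoothly in between.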

When you are capturing a sensor reading, it is usually stamped with the encoder values, since they are continuous; later on you can transform it to the corrected value once the error has been calculated. This is for ground robots. For flying robots, usually an IMU and GPS combination is used. Now, to come to your questions:

  1. For ground vehicles, encoders alone are generally not enough; you would need additional support from AMCL, SLAM, or a Kalman filter. But since you are using hector mapping (which is a SLAM approach), you should be fine with just encoders for odometry. (Its documentation says that hector mapping doesn't need odometry at all, but I'm not sure if that's just that package or the algorithm itself; you would have to dig a bit into that.) You might be able to use GPS combined with encoders for more accuracy, but I'm not sure whether that would be overkill.

  2. AMCL is a support system for localization. What it does is identify many possible locations based on probability and odometry, then pick out the best candidate location using laser scans (as far as I know), so you would need the encoders for AMCL.
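To make point 2 concrete, here is a toy, 1D version of the "many candidates, weight by laser scan" idea. Everything here is made up for illustration (a single wall, a perfect range sensor, Gaussian sensor noise); real AMCL works in 2D with full scans and a resampling step:

```python
import math
import random

random.seed(0)  # deterministic for the example

WALL = 5.0    # hypothetical wall position along a 1D corridor (m)
TRUE_X = 2.0  # actual robot position, unknown to the filter
NOISE = 0.1   # assumed sensor noise (standard deviation, m)

# Candidate locations ("particles") spread over the corridor.
particles = [random.uniform(0.0, 5.0) for _ in range(500)]

# The range sensor reports the distance to the wall.
measurement = WALL - TRUE_X  # 3.0 m

def weight(p):
    """Likelihood of the measurement if the robot were at p."""
    expected = WALL - p
    return math.exp(-((measurement - expected) ** 2) / (2 * NOISE ** 2))

# Pick the candidate whose predicted reading best matches the sensor.
best = max(particles, key=weight)
```

The odometry's role (not shown) is to shift every particle by the robot's commanded motion between sensor updates, which is why AMCL still needs the encoder-based odometry as input.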

Hope this clarifies your questions.