
Localization and mapping on an open field

asked 2019-02-23 06:26:34 -0500

the3kr

updated 2019-02-23 06:36:49 -0500

How can a robot be localized on an open field? One option I've considered is placing landmarks at the four corners of the field and having the robot localize itself relative to these landmarks. Are there other approaches?

Also, while moving on this pitch, I'd like the robot to drop off certain items it's carrying, keep track of the locations of these items on the pitch, and at the end have a map with the item locations.

Any suggestions or ideas are welcome.

Thanks.


1 Answer


answered 2019-02-24 01:20:03 -0500

renangm

Apart from GPS, localization in a featureless environment is difficult. If you don't have strict positioning requirements, and your field has a clear sky view, I would recommend an RTK GPS receiver such as the EMLID Reach.

Assuming you have a wheeled robot, the simplest solution is to measure the wheel rotations (with encoders) and integrate them into an odometry estimate. In this case, the localization precision depends on the encoders' quality and the terrain's roughness (among other factors). Typically, odometry measurements are fused with other sources of localization such as GPS, accelerometers, and gyroscopes. Have a look at Extended/Unscented Kalman filtering and the robot_localization package. However, odometry and Kalman-filter estimators provide dead-reckoning localization - i.e., the current estimate depends on the past estimates, which accumulates error over time and causes the robot's position estimate to drift. To avoid that, we fuse them with a non-dead-reckoning sensor, such as GPS.
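To make the dead-reckoning idea concrete, here is a minimal sketch of encoder-based odometry for a differential-drive robot. The wheel radius, wheel base, and encoder resolution are made-up example values; substitute your robot's parameters:

```python
import math

# Example parameters - replace with your robot's actual values.
WHEEL_RADIUS = 0.05   # meters
WHEEL_BASE = 0.30     # distance between the wheels, meters
TICKS_PER_REV = 1024  # encoder resolution

def update_pose(x, y, theta, dticks_left, dticks_right):
    """Integrate one pair of encoder readings into the pose (x, y, theta)."""
    meters_per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = dticks_left * meters_per_tick
    d_right = dticks_right * meters_per_tick
    d_center = (d_left + d_right) / 2.0          # distance traveled by the center
    d_theta = (d_right - d_left) / WHEEL_BASE    # change in heading
    # Midpoint integration: assume the heading during the step was theta + d_theta/2.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2 * math.pi)
    return x, y, theta
```

Because each call adds on top of the previous estimate, any per-step error (wheel slip, uneven terrain) compounds - which is exactly why this needs to be fused with an absolute sensor like GPS.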

Your idea of using landmarks sounds like beacon-based localization, where we typically trilaterate the robot's position using the distances from it to each beacon. However, one of the problems with this approach is obtaining those distances accurately. The most basic solution I can think of is to use ultrasound emitters/receivers, measure the sound travel time between the robot and the beacons, and compute the distances using the speed of sound in air. In this case, be careful of sound reflection/absorption and interference. Also, as with GPS, this does not give you the robot's orientation (pose is 3D position + orientation), so you would need another sensor to capture that information.
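As a sketch of how the beacon distances could be turned into a position, here is a 2D trilateration example: subtracting the first range equation from the others gives a linear system that can be solved by least squares. The beacon layout (one per field corner) and field size are assumptions for illustration:

```python
import numpy as np

def trilaterate(beacons, distances):
    """Estimate (x, y) from distances to >= 3 beacons at known positions.

    Each range equation is (x - bx)^2 + (y - by)^2 = d^2; subtracting the
    first equation from the rest cancels the quadratic terms, leaving a
    linear system A p = b solved by least squares.
    """
    bx0, by0 = beacons[0]
    d0 = distances[0]
    A, b = [], []
    for (bx, by), d in zip(beacons[1:], distances[1:]):
        A.append([2 * (bx - bx0), 2 * (by - by0)])
        b.append(d0**2 - d**2 + bx**2 - bx0**2 + by**2 - by0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # estimated (x, y)

# Example: beacons at the corners of a hypothetical 100 m x 100 m field.
beacons = [(0, 0), (100, 0), (100, 100), (0, 100)]
true_pos = np.array([30.0, 40.0])
distances = [np.hypot(true_pos[0] - bx, true_pos[1] - by) for bx, by in beacons]
print(trilaterate(beacons, distances))  # ~ [30. 40.]
```

With real (noisy) range measurements the least-squares solution becomes an estimate rather than an exact fix, and using all four beacons instead of the minimum three helps average out the noise.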

If you have a horizon with distinctive features (e.g., buildings, a city skyline), you could try using computer vision methods such as visual odometry, though the localization quality may vary, depending on the available features. There are also motion tracking systems, but as with beacon-based solutions, these require sensors external to the robot.

With regard to locating the items on the field: if you know the dimensions of your robot, you can record the robot's pose when it drops an item and transform that position from the robot's center to the tip of the manipulator, where the item is being held. If you save the item locations in a list, it's easy to plot them later on a map.
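That transform is just a rotation of the manipulator-tip offset by the robot's heading, plus the robot's position. A small sketch, with a made-up tip offset of 0.4 m in front of the robot's center:

```python
import math

# Manipulator tip in the robot frame (forward, left), meters - example value.
TIP_OFFSET = (0.40, 0.0)

def item_position(robot_x, robot_y, robot_theta):
    """Field-frame position of the held item, given the robot's 2D pose."""
    ox, oy = TIP_OFFSET
    # Rotate the offset by the robot's heading, then translate by its position.
    wx = robot_x + ox * math.cos(robot_theta) - oy * math.sin(robot_theta)
    wy = robot_y + ox * math.sin(robot_theta) + oy * math.cos(robot_theta)
    return wx, wy

dropped_items = []
dropped_items.append(item_position(12.0, 5.0, math.pi / 2))  # robot facing +y
# -> item recorded at (12.0, 5.4): 0.4 m ahead of the robot along its heading
```

In ROS you would normally let tf handle this by publishing a static transform from the robot's base frame to the manipulator tip, but the arithmetic is the same.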

Sorry for the wall of text; localization is a broad subject, especially on an open field. If you want to learn more, I recommend the Introduction to Autonomous Mobile Robots book, as well as the Probabilistic Robotics book for Kalman filters (and much more).


Comments


Thanks for the detailed answer, it's really helpful. I'll explore the GPS and sensor-fusion approach as well. I'm currently taking the SLAM course by Cyrill Stachniss and reading the recommended sections from Probabilistic Robotics.

the3kr ( 2019-02-24 14:10:57 -0500 )
