
Autonomous docking

asked 2019-11-18 07:07:21 -0500

Yehor


As part of my thesis I want to perform autonomous docking for my robot. I saw that the Xiaomi vacuum cleaner performs autonomous docking with only a LIDAR. I am trying to do the same but can't work out the logic. I bought a docking station and have a LIDAR. Has anybody done something like this who could help me or give any suggestions?

Thanks in advance, Regards




Maybe some kind of scan matching method? Does your docking station have any unique features that you could detect with the lidar? Maybe use something like RANSAC to fit the lidar data to a model of the docking station.

It is really hard to help you if you don't provide more information about your use case. If the location of the docking station is static and your environment is not very dynamic, you could also use AMCL to determine your robot's pose in a map and define where the docking station is on that map.

MCornelis  (2019-11-18 10:28:30 -0500)

Yes, the docking station has a pattern; it has something like pits. Can you please describe what RANSAC is and how I can apply a scan matching method? Thank you.

AMCL alone is not enough. I want to use AMCL to reach the region of the station and then perform the docking.

Yehor  (2019-11-19 00:55:00 -0500)

There are many methods to fit/match laser data to features/models. RANSAC (RANdom SAmple Consensus) works by assuming your docking station is in a certain position and then counting how many of your data points (laser points) lie on top of, or close to, your model. This gives you inliers (points that agree with your guess) and outliers (points that don't). You do this for n guesses and then pick the one with the greatest number of inliers (better if you also introduce some threshold, e.g. inliers / expected inliers > 0.8). It is a really crude, brute-force type of method, but since you already have an initial guess from AMCL it could work. I'm not saying this is the best solution or that it is easy to implement, but it is one way of doing things …
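A minimal sketch of that brute-force idea, assuming the scan has already been converted to Cartesian points and you have a point model of the dock outline (the model points, tolerances, and guess count below are made-up illustrations, not values from this thread):

```python
import numpy as np

def ransac_dock_pose(scan_xy, dock_model, n_guesses=200, inlier_dist=0.03, seed=0):
    """Crude RANSAC: propose dock poses, count scan points near the model.

    scan_xy: (N, 2) laser points in the robot frame.
    dock_model: (M, 2) point model of the docking-station outline.
    Returns ((x, y, theta), inlier_count) for the best-scoring guess.
    """
    rng = np.random.default_rng(seed)
    best_pose, best_inliers = None, -1
    for _ in range(n_guesses):
        # Seed each guess from a random scan point; an AMCL prior could
        # restrict this to points near the mapped dock location.
        x, y = scan_xy[rng.integers(len(scan_xy))]
        theta = rng.uniform(-np.pi, np.pi)
        c, s = np.cos(theta), np.sin(theta)
        model = dock_model @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        # Inliers: scan points within inlier_dist of any model point.
        d = np.linalg.norm(scan_xy[:, None, :] - model[None, :, :], axis=2)
        inliers = int((d.min(axis=1) < inlier_dist).sum())
        if inliers > best_inliers:
            best_pose, best_inliers = (float(x), float(y), theta), inliers
    return best_pose, best_inliers
```

In practice you would compare best_inliers against the number of returns you expect the dock to produce before trusting the pose.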

MCornelis  (2019-11-19 02:26:44 -0500)

Additionally, if you are free to change the world/docking station/environment as you please (not sure if this is a constraint in your project), then you could consider adding a feature to the docking station yourself. If you are allowed to add something that is very easily detected by a lidar, why go through the effort of coming up with and implementing a fancy algorithm? Don't solve a problem that you created! If there is an easier solution, go for it!

MCornelis  (2019-11-19 02:28:33 -0500)

Thank you for your suggestion. I will try to implement a laser scan matching method, and if I fail I will try something else.

Thank you for the idea!

Yehor  (2019-11-20 01:03:58 -0500)

Hi Yehor. Did you find a good solution? I have the same project: using a lidar and reflectors to estimate our robot's pose and, if possible, writing navigation code for autonomous docking with this pose data.

bfdmetu  (2020-12-28 05:03:10 -0500)

@bfdmetu Hi. First I simply move with the navigation stack to a goal in front of the docking station on the map. Then I look for a cluster with higher intensity in front of the robot, because I have attached reflective material to the docking station, which almost always has a higher intensity than anything else around it.
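A rough sketch of that intensity-cluster step, operating on plain lists as they might come out of a LaserScan message (the intensity threshold and minimum cluster size are made-up values; real thresholds depend on the lidar and the reflective tape):

```python
import numpy as np

def find_reflector(ranges, intensities, angle_min, angle_increment,
                   intensity_thresh=150.0, min_points=3):
    """Return (bearing, distance) of the largest contiguous run of
    high-intensity returns, or None if no run is long enough."""
    hi = np.asarray(intensities) > intensity_thresh
    best, best_len, start = None, 0, None
    for i, flag in enumerate(list(hi) + [False]):  # sentinel closes last run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start > best_len:
                best, best_len = (start, i), i - start
            start = None
    if best is None or best_len < min_points:
        return None
    i0, i1 = best
    bearing = angle_min + angle_increment * (i0 + i1 - 1) / 2.0
    distance = float(np.mean(np.asarray(ranges)[i0:i1]))
    return bearing, distance
```

With rospy this would run in the /scan callback, fed from msg.ranges, msg.intensities, msg.angle_min, and msg.angle_increment.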

Yehor  (2020-12-28 05:08:33 -0500)

1) How can you find the higher-intensity cluster? How do you use the lidar data? I echo the /scan topic, but the values are meaningless to me :) Is there a package or something for this?

2) Can you get the robot pose info, or update amcl_pose, this way?

bfdmetu  (2020-12-28 05:24:44 -0500)

3 Answers


answered 2019-11-20 12:21:49 -0500

duck-development

updated 2019-11-20 12:22:32 -0500

The Xiaomi uses reflectors inside the dock.

Here is a video I made to explain it.

The trick is that you get not only distances from the sensor, but also something like remission/intensity values.

With the tape you create a unique pattern. That way you could derive the distance from the remission values and the angle alone, without needing the range measurement.
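One way to read that: if the physical width of the reflective tape is known, the distance can be recovered from the angle the tape subtends in the scan, so the range channel isn't strictly needed. A small sketch (the 10 cm tape width in the example is made up):

```python
import math

def dist_from_angular_width(tape_width_m, angular_span_rad):
    """Estimate range to a reflective patch of known width from the
    angle it subtends in the scan: d ~= w / (2 * tan(span / 2))."""
    return tape_width_m / (2.0 * math.tan(angular_span_rad / 2.0))
```

For example, a 0.10 m strip subtending 0.1 rad sits roughly 1 m away.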



Yes, I saw your video, and thank you for that! I also took a station apart and saw it.

However, as you know, the lidar's LaserScan msg provides intensity as well as distance, and I have already tried checking it. The intensity was a constant 2.0 with my lidar.

Yehor  (2019-11-20 14:42:48 -0500)

Which LDS do you use?

duck-development  (2019-11-20 14:44:47 -0500)

I am using the YDLIDAR X4; it is quite a simple lidar.

Yehor  (2019-11-20 14:48:16 -0500)

You may close this question if you know how the dock works.

duck-development  (2019-11-20 16:07:33 -0500)

I looked at the driver; there seems to be something like intensity.

duck-development  (2019-11-20 16:12:53 -0500)

Yes, it is there; the lidar publishes intensity within the LaserScan msg, but the intensity is constant.

Yehor  (2019-11-26 00:43:53 -0500)

answered 2019-11-19 18:37:04 -0500

billy

I have a DIY robot that docks itself on the charger. I use AMCL and the navigation stack to put the robot in front of the charger, then switch to a camera that can see the charger; the robot drives straight towards it, with steering adjustments based on the charger's location in the camera image.

You could also do it using laser feedback to control steering, by having a feature at the height of the laser that stands out from the wall behind the charger. Your node then finds the feature closest to the laser and adjusts steering to keep that feature straight ahead while moving forward.
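That feedback loop can be sketched as a simple proportional controller on the detected feature's bearing (the gain, speed, and stop distance below are made-up illustrations, not values from this answer):

```python
def docking_cmd(feature_bearing, feature_dist,
                k_ang=1.5, max_turn=0.5, approach_speed=0.08, stop_dist=0.05):
    """One control step: angular velocity proportional to the bearing
    error, constant slow forward speed, full stop at the dock.
    Returns (linear_velocity, angular_velocity)."""
    if feature_dist <= stop_dist:
        return 0.0, 0.0  # docked: stop
    turn = max(-max_turn, min(max_turn, k_ang * feature_bearing))
    return approach_speed, turn
```

The returned pair would typically be published as a geometry_msgs/Twist (linear.x, angular.z).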

Additionally, you could use mechanical features that force the robot and dock into alignment as the robot approaches. I haven't needed that, but it may help with something like a laser that has lower spatial resolution than a camera.



Thank you, it is a nice idea! However, this approach requires good odometry and mapping, and in my case the odometry is not so good. I will have to improve it before I can implement this.

Thank you

Yehor  (2019-11-20 01:08:22 -0500)

It really doesn't require perfect odom. In fact, it doesn't require encoders at all. It uses the laser to get in front of the charger and then either sensor (laser or camera) for docking. For the nav stack, laser_scan_matcher could be used for odom if the native encoder-based odom is suffering. It does require that you know where the charger is, though.

billy  (2019-11-20 20:59:14 -0500)

I have two questions. First, how did you hardcode the absolute position of the docking station on the map? Second, how can you use only the lidar to simulate odometry? To reach a position on the map you need distance information.

Yehor  (2019-11-26 00:47:13 -0500)

answered 2019-11-18 07:31:09 -0500

pmuthu2s

Maybe the above link helps!





Thank you for your link, but the Kobuki robot's autonomous docking approach is based on IR emitters and receivers. I want to do it with a LIDAR.

Yehor  (2019-11-18 07:33:54 -0500)


Seen: 480 times

Last updated: Dec 28 '20