ROS navigation with combined odometry
Hello,
I have been trying to find the best approach for building a robot with move_base + AMCL + combined sensor odometry, and I have a few questions.
I will be using a robot with a Kinect depth sensor and wheel encoders.
Now I know that AMCL uses /odom and a LaserScan/PointCloud as input to localize the robot.
What I am worried about is that the robot is going to be navigating slippery terrain such as pea gravel.
My questions are:
1) Would the robot "realise", using the LaserScan/PointCloud and encoder odometry alone, if for example the wheels turned on a slippery floor without the robot actually moving? I fear this could make the navigation map show the robot moving forward when in reality it stayed put.
I was thinking this problem could be tackled by introducing an IMU and visual odometry, which can be fused with the wheel odometry using robot_pose_ekf to produce a more accurate odometry estimate (if my assumption is correct).
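For context, a minimal robot_pose_ekf launch fragment for fusing wheel odometry with an IMU might look like the sketch below. The parameter names (odom_used, imu_used, vo_used, output_frame, freq, sensor_timeout) are the documented robot_pose_ekf parameters; the values are placeholders:

```xml
<!-- Sketch: fuse wheel odometry + IMU with robot_pose_ekf.
     VO is disabled here; set vo_used to true if visual odometry is added. -->
<node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf">
  <param name="output_frame" value="odom_combined"/>
  <param name="freq" value="30.0"/>
  <param name="sensor_timeout" value="1.0"/>
  <param name="odom_used" value="true"/>
  <param name="imu_used" value="true"/>
  <param name="vo_used" value="false"/>
</node>
```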
2) However, I have read that this is somehow a bad idea because move_base and robot_pose_ekf don't work well together. Is that the case?
3) Wouldn't something like this work to override the odometry input that move_base consumes (including IMU or VO)?
<remap from="odom" to="odometry/filtered"/>
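To make the intent concrete, here is a sketch of where that remap would sit, inside the move_base node element (assuming the fused estimate is published on odometry/filtered, which is the default output topic of robot_localization's EKF node):

```xml
<!-- Sketch: point move_base's odometry subscription at the fused estimate.
     The local planner subscribes to "odom" by default, so the remap goes
     inside the move_base node element. -->
<node pkg="move_base" type="move_base" name="move_base" output="screen">
  <remap from="odom" to="odometry/filtered"/>
</node>
```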
I would appreciate any clarification you can provide on this matter.
Thanks.
Edit: Thanks very much to both of you for your input. I have ordered an IMU sensor and will give it a try in the coming days.
Would you happen to know of an online example which combines odometry from an IMU and encoders and uses move_base?
I have had difficulty finding ready-made examples of the desired setup, which is why I was thinking such a thing might not work.
In any case, I will post feedback once my sensor arrives and I have tried this out.
I reorganized your question because you actually have three of them.
Check the robot_localization package for examples of sensor fusion.
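As a starting point, a hedged sketch of a robot_localization EKF node fusing wheel encoder velocities with IMU yaw data might look like this. The parameters (frequency, two_d_mode, odom0, imu0, and the 15-element *_config matrices covering x/y/z, roll/pitch/yaw, their velocities, and linear accelerations) are from the robot_localization documentation; the topic names are placeholders for your own drivers:

```xml
<!-- Sketch: EKF fusing wheel odometry velocities and IMU orientation.
     Topic names (wheel/odometry, imu/data) are assumptions; substitute
     whatever your encoder and IMU drivers actually publish. -->
<node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization">
  <param name="frequency" value="30"/>
  <param name="two_d_mode" value="true"/>

  <!-- Wheel encoders: trust the x/y linear and yaw velocities only,
       so wheel slip does not directly corrupt the pose estimate. -->
  <param name="odom0" value="wheel/odometry"/>
  <rosparam param="odom0_config">[false, false, false,
                                  false, false, false,
                                  true,  true,  false,
                                  false, false, true,
                                  false, false, false]</rosparam>

  <!-- IMU: fuse yaw, yaw velocity, and x acceleration. -->
  <param name="imu0" value="imu/data"/>
  <rosparam param="imu0_config">[false, false, false,
                                 false, false, true,
                                 false, false, false,
                                 false, false, true,
                                 true,  false, false]</rosparam>
</node>
```

The node publishes the fused estimate on odometry/filtered by default, which is what move_base's odom subscription can then be remapped to.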