Navigation of a mobile robot in a 3D map
Hello,
I'm working on a project where we want to navigate a robot through a field with uneven terrain, using a laser scanner and/or a Kinect (version 1). I am looking for the best solution to do this. Our plan is to map the field before we start driving. Next, we want to set goals for the robot to drive to.
Available hardware for navigation:
1. 360-degree 2D lidar scanner
2. Microsoft Kinect
3. 6-DOF IMU
Of course, we have a complete URDF file and drivers for publishing the data into ROS.
What would be the best way to make the robot navigate successfully? Is there a stack or package available that takes all this data and outputs the desired velocity commands for the robot?
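To make "desired speeds" concrete: what I'm after is something that, given the robot's pose and a goal, produces the linear and angular velocities to drive. A toy go-to-goal proportional controller sketches the idea (function name and gains are invented; a real stack would add planning and obstacle avoidance on top of this):

```python
import math

def velocity_toward_goal(pose, goal, k_lin=0.5, k_ang=1.5,
                         v_max=0.5, w_max=1.0):
    """Toy differential-drive controller: given the robot pose (x, y, yaw)
    and a goal (x, y), return (linear, angular) velocity commands.
    Only illustrates the kind of output a navigation node would publish
    as velocity commands; gains and limits are made up."""
    x, y, yaw = pose
    gx, gy = goal
    dx, dy = gx - x, gy - y
    dist = math.hypot(dx, dy)
    heading_err = math.atan2(dy, dx) - yaw
    # normalize the heading error to [-pi, pi]
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
    # proportional control, clamped to the robot's velocity limits
    v = max(-v_max, min(v_max, k_lin * dist))
    w = max(-w_max, min(w_max, k_ang * heading_err))
    return v, w
```

So the question is whether an existing package already closes this loop from sensor data to velocity commands.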
I hope to get a reaction soon!
Kind regards, Jesse
Edit: Hello @M@t! Thank you for this quick and extensive response! I never expected that! Thanks to you, I have enough information to get started for now. If you'd like to know more about my project, read on below.
Project
We are going to compete in the FieldRobotEvent in Britain this year. The task is to design a robot that is able to drive through a set of lanes of corn plants, such as in the image below.
The event is divided into 5 tasks, each harder than the last. We mainly want to focus on tasks 1 and 2, because we cannot do tasks 3, 4 and 5 without the basis of tasks 1 and 2.
What we have
A team member is currently designing the hardware. In a couple of weeks, we will have a robot equipped with encoders on all wheels, a 2D LiDAR scanner, a 6-DOF IMU and a Kinect (360). I am fully responsible for the software.
Problems to solve
Tasks 1 and 2 require us to drive through the lanes of corn as quickly and accurately as possible, fully autonomously. The robot thus needs to know where it is and how to drive through the lanes. Applying a RANSAC algorithm or something similar is a so-so option, but it does not hold up in practice (I saw that last year with another team's robot).
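For illustration, the naive reactive row-following idea (without a map) looks roughly like this: take the 2D scan, estimate the lateral clearance to the corn rows on each side, and steer toward the lane centre. A sketch with invented names and thresholds, no ROS dependencies:

```python
import math

def row_centering_turn(scan_angles, scan_ranges, max_row_dist=1.0, k=1.0):
    """Given a 2D scan (angles in radians, 0 = straight ahead, left
    positive) return an angular-velocity command that steers the robot
    toward the centre of the corn lane. Hypothetical helper, not from
    any existing ROS package."""
    left, right = [], []
    for a, r in zip(scan_angles, scan_ranges):
        y = r * math.sin(a)  # lateral offset of this return
        # keep only sideways hits close enough to be a corn row
        if abs(y) <= max_row_dist and abs(a) > math.radians(30):
            (left if y > 0 else right).append(abs(y))
    if not left or not right:
        return 0.0  # a row is missing on one side: drive straight
    # positive command turns left, i.e. away from the closer right row
    return k * (min(left) - min(right))
```

This kind of purely reactive scheme is exactly what breaks down when plants are missing or the robot tilts, which is why I want a map-based approach instead.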
Why ROS
I figured that if we can obtain the map beforehand, we can drive through it autonomously. The next logical step would be to use ROS to solve the navigation, goal and pose problems.
The problem here is that localization algorithms like AMCL assume stable, flat ground. In practice, and in the competition, the robot can tilt up to 10 degrees sideways (or forward, for that matter). This would, for instance, cause problems for the laser scanner, because it would no longer scan a flat plane.
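To make the tilt problem concrete: projecting a scan return through the roll/pitch reported by the IMU shows how quickly a tilted beam dips toward the ground. A geometric sketch only (function name invented; in a real system this transform would be handled by tf from the IMU-driven orientation):

```python
import math

def scan_point_in_base(r, beam_angle, roll, pitch):
    """Project one 2D-lidar return (range r at beam_angle in the laser
    plane) into a gravity-aligned frame, given roll/pitch from the IMU.
    Returns (x, y, z) relative to the lidar; z well below zero means the
    beam is heading into the ground. Assumes no lever-arm offset and the
    convention that positive pitch tips the scanner forward/down."""
    # point in the (tilted) laser frame, which scans its own z = 0 plane
    px = r * math.cos(beam_angle)
    py = r * math.sin(beam_angle)
    pz = 0.0
    # rotate by pitch about y, then by roll about x (order is a choice;
    # for a 10-degree sketch the difference is negligible)
    x1 = px * math.cos(pitch) + pz * math.sin(pitch)
    z1 = -px * math.sin(pitch) + pz * math.cos(pitch)
    y2 = py * math.cos(roll) - z1 * math.sin(roll)
    z2 = py * math.sin(roll) + z1 * math.cos(roll)
    return x1, y2, z2
```

At 10 degrees of pitch, a forward beam drops about 0.35 m per 2 m of range, so a lidar mounted low will see mostly ground instead of plants, which is what worries me about a pure 2D approach.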
Again, thank you for your elaborate response!
Reading "field" and "uneven terrain" reminded me of the questions by @M@t. Perhaps he can share his experiences. I would definitely check out some of his questions & answers here on ROS Answers.
One more suggestion:
Try to start creating a simulation model of your robot and the target environment now, or as soon as possible. That will allow you to start working immediately, and it also makes experimentation really easy.