What tf frames are required for autonomous navigation?
I am new to ROS and am trying to implement autonomous navigation on my 2WD robot. I am acquiring laser scans from an RPLIDAR using rplidar_ros, and I am using the scans with hector_mapping to get odometry as well. There is no other source of odometry.
I want to know which tf frames are necessary to run the navigation stack. Currently my tf tree looks like this:
map -> odom -> base_footprint -> base_link -> laser
I see that the TurtleBot and others have very complicated robot_state_publisher setups. Is that absolutely necessary for what I want to do? I'm using ROS Kinetic on Ubuntu 16.04.
Asked by parzival on 2019-08-11 07:43:02 UTC
Answers
What you have will work.
The essential transforms are the map to odom transform:
map -> odom
the odom to robot transform:
odom -> base_footprint
and the transform from the robot to the published frames of any sensor data, which in your case is just:
base_link -> laser
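Since you get the map and odometry transforms from hector_mapping, the one you typically publish yourself is the fixed base_link -> laser link. A minimal sketch using tf2_ros (available in Kinetic); the frame names match your tree, but the 10 cm z-offset and identity rotation are assumptions, so substitute your actual mounting:

    #!/usr/bin/env python
    # Publish a fixed base_link -> laser transform once; tf2 latches static transforms.
    import rospy
    import tf2_ros
    from geometry_msgs.msg import TransformStamped

    rospy.init_node('laser_tf_publisher')

    broadcaster = tf2_ros.StaticTransformBroadcaster()
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'base_link'
    t.child_frame_id = 'laser'
    t.transform.translation.x = 0.0  # assumed: lidar centered on the base
    t.transform.translation.y = 0.0
    t.transform.translation.z = 0.1  # assumed: 10 cm above base_link
    t.transform.rotation.w = 1.0     # identity rotation (no tilt)

    broadcaster.sendTransform(t)
    rospy.spin()

The same thing can be done with a one-line static_transform_publisher from a launch file, which is what most small robots use.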
Most robots end up with a lot of static transforms, because there are usually many small parts separating base_footprint from whatever sensors the robot has. So instead of base_link -> laser, you end up with transforms that have more degrees of separation, like this:
base_link -> chassis -> laser_mount -> laser
But those degrees of separation aren't necessary; all that matters is that navigation has a way to transform from the sensor frame to your robot's base frame.
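Since tf composes every link in a chain automatically, a consumer never needs to know about the intermediate frames; it just asks for the end-to-end transform, which is essentially what the navigation stack does when it transforms laser scans into the robot frame. A sketch of that lookup, assuming the frame names above:

    #!/usr/bin/env python
    # Look up the composed sensor-to-base transform; tf2 walks the whole chain
    # (base_link -> chassis -> laser_mount -> laser, or base_link -> laser directly).
    import rospy
    import tf2_ros

    rospy.init_node('tf_lookup_demo')
    buf = tf2_ros.Buffer()
    listener = tf2_ros.TransformListener(buf)

    # Block up to 2 s for the transform to become available, then print it.
    t = buf.lookup_transform('base_link', 'laser', rospy.Time(0), rospy.Duration(2.0))
    print(t.transform)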
Answered by johnconn on 2019-08-11 10:51:07 UTC
Comments
Thanks! I had this doubt because I saw some robots with very detailed tf trees that even include transforms for the wheels.
Commented by parzival on 2019-08-12 03:59:43 UTC
There are several REPs with relevant information, such as REP 105 (Coordinate Frames for Mobile Platforms).
Commented by tfoote on 2019-08-12 04:06:01 UTC