I'll try to explain this in two cases: first a simulation in Gazebo, and second a real-world robot. This should clear up exactly what ROS is and isn't doing.
In the case of Gazebo, it is running a full physics simulation of the world, so it is working out how the body of the robot moves as the wheels turn. However, Gazebo can't (or rather shouldn't) tell the ROS TF system exactly where the robot is, because that would be cheating. For this you need some sort of perception system. This could be as simple as wheel odometry, which integrates the robot's movement over time to estimate its location, or it could be a full visual SLAM system that builds a map of the environment and localises the robot within it. Either way, Gazebo (the simulator) produces the sensor outputs that are used by other ROS nodes to estimate the robot's location.
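To give a feel for what "integrating movement over time" means, here is a minimal dead-reckoning sketch for a differential-drive robot in plain Python. The function name, the midpoint-of-arc approximation, and the 0.3 m wheel base are all illustrative assumptions, not part of any ROS API; a real odometry node would do this with encoder ticks and publish a `nav_msgs/Odometry` message.

```python
import math

def integrate_odometry(pose, d_left, d_right, wheel_base):
    """One dead-reckoning step for a differential-drive robot.

    pose:       (x, y, theta) estimate in metres/radians
    d_left/d_right: distance each wheel travelled since the last update (m)
    wheel_base: distance between the two wheels (m)
    """
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0        # forward distance travelled
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Midpoint approximation: assume the heading during this step was
    # halfway between the old and new heading.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return (x, y, theta)

# Drive straight ahead 1 m in ten 0.1 m steps (both wheels move equally).
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_odometry(pose, 0.1, 0.1, 0.3)
print(pose)  # → roughly (1.0, 0.0, 0.0)
```

Note that because each step only *adds* an estimated displacement, any error in the wheel measurements accumulates over time, which is exactly why odometry alone drifts and why SLAM or other corrections are needed.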
In the case of an actual robot, the localisation nodes work in exactly the same way, except they're receiving data from real sensors as opposed to a simulation.
In summary, there are two key points. First, in the simulation the robot moves because Gazebo is simulating the physics. Secondly, and separately, ROS uses some sort of perception algorithm to estimate the location of the robot from the sensor outputs.
Hope this makes sense.