Gazebo in the hardware loop
Hi there, I tried hard to find questions on a similar topic and could not, which makes me believe that the case I am looking for is rather uncommon. We would like to use Gazebo in the hardware loop with our outdoor robot. For instance, let's say we want to test a pose estimation algorithm that combines measurements from wheel odometry and an IMU. Instead of going out in the field looking for different surfaces, we would simulate the wheel odometry in Gazebo and get the IMU measurements from the actual sensor/robot. In general, trips to the field are very costly in development time, and we would like to minimize them.
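To make the fusion side of this concrete: something like the toy sketch below is what we have in mind, where the odometry samples would come from Gazebo and the gyro samples from the real IMU. The filter choice (a simple complementary filter) and all names and numbers here are made up for illustration, not anything Gazebo-specific:

```python
import math

def fuse_pose(odom_samples, imu_samples, dt, alpha=0.98):
    """Dead-reckon (x, y, yaw) by blending a yaw integrated from the
    real IMU gyro rate (weight alpha) with the yaw reported by the
    simulated wheel odometry (weight 1 - alpha), then advancing the
    pose with the odometry's linear velocity."""
    x = y = yaw = 0.0
    for (v, odom_yaw), gyro_z in zip(odom_samples, imu_samples):
        gyro_yaw = yaw + gyro_z * dt              # integrate real gyro rate
        yaw = alpha * gyro_yaw + (1 - alpha) * odom_yaw
        x += v * math.cos(yaw) * dt               # advance with simulated odometry speed
        y += v * math.sin(yaw) * dt
    return x, y, yaw

# Toy check: straight-line drive at 1 m/s for 1 s, no rotation from
# either source, so the fused pose should end up at roughly (1, 0, 0).
pose = fuse_pose([(1.0, 0.0)] * 100, [0.0] * 100, dt=0.01)
```

In the real setup the two input streams would of course arrive on separate ROS topics at different rates, so time synchronization between the simulated and the real measurements is one of the things we would still have to sort out.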
So the question is: does anyone have a similar use case or experience using Gazebo this way? If not, any suggestions on how to start are welcome. Please note that we are still working our way deep into Gazebo.
EDIT1: That was a good answer, Lenz. Though I have to defend my case. The idea is that the Gazebo world and the actual world are practically identical in terms of geometry but different in terms of ground material. We would then drive the robot in the real world, take the IMU measurements from the actual robot pose, but take the odometry from Gazebo, where we would simulate surfaces with different friction coefficients. This way we could also simulate different lighting conditions, different textures for vision-based localization (it is hard, now or ever, to get snow in California :) ), etc.
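For reference, the ground friction we would vary lives in the SDF `<surface>` element of the world's collision geometry, roughly like this (the values are placeholders, not measured coefficients):

```xml
<collision name="ground_collision">
  <geometry>
    <plane>
      <normal>0 0 1</normal>
      <size>100 100</size>
    </plane>
  </geometry>
  <surface>
    <friction>
      <ode>
        <mu>0.4</mu>   <!-- placeholder: friction coefficient, first direction -->
        <mu2>0.4</mu2> <!-- placeholder: friction coefficient, second direction -->
      </ode>
    </friction>
  </surface>
</collision>
```

Swapping these values (and the visual textures) per world would let us emulate asphalt, gravel, or snow without leaving the lab.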
Thx, D.