Gazebo in the hardware loop

asked 2012-09-23 20:38:49 -0500

dejanpan

updated 2012-09-24 11:00:30 -0500

Hi there, I tried really hard to find questions on a similar topic and could not, which makes me believe that the case I am looking for is rather uncommon. We would like to use Gazebo in the hardware loop with our outdoor robot. For instance, let's say we want to test a pose estimation algorithm that combines measurements from wheel odometry and an IMU sensor. Instead of going out into the field looking for different surfaces, we would like to simulate the wheel odometry in the simulator and get the IMU measurements from the actual sensor/robot. In general, trips to the field turn out to be very time-costly in the development process, and we would like to minimize them.
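To make this more concrete, here is a minimal sketch (in rospy) of the kind of relay node we have in mind: it takes simulated odometry from Gazebo and real IMU data from the hardware driver and feeds both into a pose estimator such as robot_pose_ekf. The topic names /gazebo_odom and /imu/raw are just placeholders for whatever the Gazebo plugin and the IMU driver actually publish.

    #!/usr/bin/env python
    # Hypothetical relay: republish simulated odometry and real IMU data on the
    # topics a pose estimator such as robot_pose_ekf listens to.
    import rospy
    from nav_msgs.msg import Odometry
    from sensor_msgs.msg import Imu

    rospy.init_node('hil_relay')

    odom_pub = rospy.Publisher('odom', Odometry)   # odometry input of robot_pose_ekf
    imu_pub = rospy.Publisher('imu_data', Imu)     # IMU input of robot_pose_ekf

    # /gazebo_odom and /imu/raw are placeholder topic names.
    rospy.Subscriber('/gazebo_odom', Odometry, odom_pub.publish)
    rospy.Subscriber('/imu/raw', Imu, imu_pub.publish)

    rospy.spin()

Plain topic remapping in a launch file would of course do the same; the node is only meant to show how the two data streams would be wired together.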

So the question is: does anyone have a similar use case or experience using Gazebo this way? If not, any suggestions on how to start are welcome. Please note that we are still working our way into Gazebo.

EDIT1: That was a good answer, Lenz. Though I have to defend my case. The idea is that the Gazebo world and the actual world are practically identical in terms of geometry but different in terms of the ground material. We would then drive the robot in the real world, take the IMU measurements based on the actual robot pose, but take the odometry from Gazebo, where we would simulate surfaces with different friction coefficients. This way we could also simulate different lighting conditions, different textures for vision-based localization (it is hard to get snow in California right now, or ever :)), etc.

Thx, D.

1 Answer

answered 2012-09-23 22:27:05 -0500

Lorenz

updated 2012-09-24 22:28:36 -0500

While I think that combining Gazebo and real sensors and actuators should definitely be possible, I'm not sure it would really help, in particular in the example you gave us.

The IMU sensor and odometry are not independent of each other: whenever the robot moves, the odometry will be updated, but the IMU will also generate data based on that exact movement. In other words, you would need to move the IMU sensor the same way the robot moves in simulation, and the two need to be absolutely synchronized.
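As a rough way to see how far the two streams drift apart in time, you could pair the messages by their stamps and log the remaining offset. A sketch, assuming placeholder topic names /gazebo_odom and /imu/raw and a message_filters version that ships the Python ApproximateTimeSynchronizer:

    #!/usr/bin/env python
    # Pair simulated odometry with real IMU messages whose stamps are within
    # `slop` seconds of each other and log the remaining offset.
    import rospy
    import message_filters
    from nav_msgs.msg import Odometry
    from sensor_msgs.msg import Imu

    def cb(odom, imu):
        offset = (imu.header.stamp - odom.header.stamp).to_sec()
        rospy.loginfo("IMU/odometry stamp offset: %.3f s", offset)

    rospy.init_node('sync_check')
    odom_sub = message_filters.Subscriber('/gazebo_odom', Odometry)
    imu_sub = message_filters.Subscriber('/imu/raw', Imu)
    sync = message_filters.ApproximateTimeSynchronizer([odom_sub, imu_sub], 10, 0.05)
    sync.registerCallback(cb)
    rospy.spin()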

Time synchronization and a simulation that runs too slowly or too quickly might cause other problems. For instance, you need to make sure that your sensors use sim time, not wall time; otherwise the time stamps would never match.
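One way to at least get comparable stamps would be to restamp the hardware messages with the simulation clock. A sketch, assuming /use_sim_time is set to true, Gazebo publishes /clock, and /imu/raw and /imu/sim_stamped are placeholder topic names:

    #!/usr/bin/env python
    # Republish real IMU messages stamped with sim time so they can be compared
    # against simulated data. This hides, but does not remove, any latency
    # between the real sensor and the simulation.
    import rospy
    from sensor_msgs.msg import Imu

    def restamp(msg):
        msg.header.stamp = rospy.Time.now()  # sim time when /use_sim_time is true
        pub.publish(msg)

    rospy.init_node('imu_restamp')
    pub = rospy.Publisher('/imu/sim_stamped', Imu)
    rospy.Subscriber('/imu/raw', Imu, restamp)
    rospy.spin()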

Edit Thanks for the more detailed explanation, Dejan. I think, though, that you might run into synchronization problems (the simulation might run slower than reality). You need to make sure that the pose of the real robot stays aligned with the simulated robot; I'm not sure how hard that is. Maybe you can use another method to localize the robot absolutely, e.g. GPS or markers. But then the question is what you do when the two versions of the robot drift apart. Another concern would be the realism of the simulation. In particular, friction might not be modeled perfectly in physics engines such as ODE or Bullet (to be tested, though). Tuning the parameters right could help, but could be quite some effort.
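If you do have an absolute reference such as GPS or markers, one option would be to periodically snap the simulated robot to the externally measured pose so the two copies cannot drift far apart. A sketch using the set_model_state service of gazebo_ros; the model name 'my_robot' and the topic /ground_truth_pose are placeholders, and the exact service and message package names may differ between Gazebo releases:

    #!/usr/bin/env python
    # Reset the simulated robot's pose to an externally measured ground truth
    # whenever a new measurement arrives.
    import rospy
    from geometry_msgs.msg import PoseStamped
    from gazebo_msgs.msg import ModelState
    from gazebo_msgs.srv import SetModelState

    def correct(msg):
        state = ModelState()
        state.model_name = 'my_robot'          # placeholder model name
        state.pose = msg.pose
        state.reference_frame = 'world'
        try:
            set_state(state)
        except rospy.ServiceException as e:
            rospy.logwarn("set_model_state failed: %s", e)

    rospy.init_node('pose_corrector')
    rospy.wait_for_service('/gazebo/set_model_state')
    set_state = rospy.ServiceProxy('/gazebo/set_model_state', SetModelState)
    rospy.Subscriber('/ground_truth_pose', PoseStamped, correct)
    rospy.spin()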

Comments

Thx Lenz, all very valid thoughts. In our company we have experience with this approach, using a much more simplistic simulator, and the general opinion is that it paid off big time.

dejanpan (2012-09-25 13:42:39 -0500)

Stats

Asked: 2012-09-23 20:38:49 -0500

Seen: 1,943 times

Last updated: Sep 24 '12