
How can I dynamically get the coordinate of a model from gazebo?

asked 2017-03-27 07:22:01 -0600

Turtle

I'm using Gazebo/ROS with TurtleBot. My robot needs to find the distance from itself to all the obstacles that exist in the environment. I believe Laserscan has this information, but I am not sure how can I get the position of the models from Gazebo and reuse them in ROS with rospy. Can someone help me with some examples/tutorials?


2 Answers


answered 2017-03-27 12:36:49 -0600

tfoote

The Gazebo ROS interface is designed to simulate, as closely as possible, the interface that the robot would see in the real world. The general goal is that the robot software should not be able to tell the difference between the real world and the simulated world.

Because of that, the positions of obstacles are not made available by the simulator. You can use sensors such as laser scanners to estimate the positions of the observable obstacles. Building a model of the world around the robot so that it can operate is one of the fundamental problems of robotics, and there are many different approaches for different application areas; some, such as 2D navigation or plain object recognition, are more mature than others.
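To make the laser-scanner approach concrete, here is a minimal sketch of the underlying geometry: a sensor_msgs/LaserScan message carries an array of ranges plus angle_min and angle_increment, from which you can recover obstacle points in the sensor frame. The function below uses only the Python standard library so it can run anywhere; in a live system you would call it from a rospy.Subscriber callback on the scan topic (the topic name, often /scan, depends on your setup).

```python
import math

def ranges_to_points(ranges, angle_min, angle_increment, range_max):
    """Convert a LaserScan-style range array to (x, y) obstacle points
    in the sensor frame. Invalid beams (inf/NaN or at/beyond range_max)
    are skipped."""
    points = []
    for i, r in enumerate(ranges):
        if math.isinf(r) or math.isnan(r) or r >= range_max:
            continue  # no obstacle detected on this beam
        angle = angle_min + i * angle_increment
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# Synthetic 3-beam scan: obstacles at 1 m (straight ahead, angle 0)
# and 2 m (half a turn later); the middle beam sees nothing.
pts = ranges_to_points([1.0, float('inf'), 2.0],
                       angle_min=0.0,
                       angle_increment=math.pi / 2,
                       range_max=10.0)
print(pts)
```

The distance from the robot to each obstacle point is then just math.hypot(x, y); in a real node you would transform the points from the laser frame into the map or odom frame (e.g. with tf) before using them.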

If you really want all the obstacles with ground truth from the simulator, you need to write a plugin and give it a ROS interface. To make that happen, please read through some of the Gazebo tutorials on that topic.



Thank you for the answer! You say I can "use tools like laser scanners to make estimates of the observable obstacles". I have searched for this but have not found a good solution yet. Do you have any code examples of how I can use a laser scanner for this problem?

Turtle  ( 2017-03-28 05:21:23 -0600 )

answered 2017-03-27 17:05:02 -0600

ElizabethA

updated 2017-03-27 17:11:29 -0600

Like tfoote said, there are two different ideas going on here.

  1. If YOU need to know the positions of the obstacles in Gazebo (for testing or something), there's the get_model_state service in the Gazebo ROS API that you can call from a plugin or from a script with rospy or roscpp (C++) to get the world/global/ground-truth coordinates of models in the Gazebo simulation.

  2. However, it appears you need your ROBOT to find the positions of the obstacles. That's more difficult.
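For route 1, a hedged sketch of calling Gazebo's ground-truth service from rospy follows. The /gazebo/get_model_state service and the gazebo_msgs/GetModelState type are provided by the gazebo_ros package; the model names in the usage comment (mobile_base, unit_box_1) are placeholders that depend on your .world file. The ROS imports are kept inside the function so the distance helper can run without a ROS installation.

```python
import math

def distance_2d(a, b):
    """Planar distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def get_model_xy(model_name):
    """Query Gazebo's ground-truth pose for one model.
    Requires a running ROS/Gazebo system; imports are deferred so the
    rest of this file loads without ROS installed."""
    import rospy
    from gazebo_msgs.srv import GetModelState
    rospy.wait_for_service('/gazebo/get_model_state')
    get_state = rospy.ServiceProxy('/gazebo/get_model_state', GetModelState)
    state = get_state(model_name, '')  # '' = pose relative to the world frame
    p = state.pose.position
    return (p.x, p.y)

# Example usage (model names here are placeholders - check your .world file):
#   robot = get_model_xy('mobile_base')
#   box   = get_model_xy('unit_box_1')
#   print('distance to box: %.2f m' % distance_2d(robot, box))
```

Remember that this is simulator ground truth, so per tfoote's answer it is only appropriate for testing and evaluation, not as something the robot itself should rely on.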

I use the TurtleBot with Gazebo, and when I need to drive from one point to the next, I first build a map of the environment (AMCL itself only localizes; the map is typically made with the gmapping SLAM package), then plug that map into AMCL and use the RViz GUI to set a goal, and the navigation stack drives the TurtleBot to where I want it. In this case, you're making a map of a room by collecting enough laser scan data to outline entire objects.

AMCL also publishes its estimate of where the robot is in the environment, as you drive, on the /amcl_pose topic.
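If it helps, here is a small sketch of consuming that pose estimate. /amcl_pose carries a geometry_msgs/PoseWithCovarianceStamped; the yaw-extraction helper below is plain Python, and the rospy wiring is left in comments because it needs a running ROS system.

```python
import math

def yaw_from_quaternion(x, y, z, w):
    """Extract the planar heading (yaw, in radians) from a quaternion,
    as carried in a geometry_msgs/PoseWithCovarianceStamped message."""
    return math.atan2(2.0 * (w * z + x * y),
                      1.0 - 2.0 * (y * y + z * z))

def amcl_pose_callback(msg):
    """Sketch of a callback for /amcl_pose
    (msg is a geometry_msgs/PoseWithCovarianceStamped)."""
    p = msg.pose.pose.position
    q = msg.pose.pose.orientation
    yaw = yaw_from_quaternion(q.x, q.y, q.z, q.w)
    print('robot at (%.2f, %.2f), heading %.2f rad' % (p.x, p.y, yaw))

# In a running ROS system you would wire this up with:
#   import rospy
#   from geometry_msgs.msg import PoseWithCovarianceStamped
#   rospy.init_node('pose_listener')
#   rospy.Subscriber('/amcl_pose', PoseWithCovarianceStamped,
#                    amcl_pose_callback)
#   rospy.spin()
```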
Here's more on AMCL:

The tough part - and what I don't know the answer to, but the object recognition link from tfoote might help - is figuring out that the blob at (3, 4) is a couch, or a chair, etc.

Here are links to the TurtleBot/Gazebo/AMCL/RViz tutorials. To be clear, I'm not suggesting that they are a solution to your problem, but they're a good demonstration of how localization in Gazebo - using the Kinect data as a makeshift laser scan - works.



Thank you for the answer! I have done the navigation part, and yes, as you say, I need to know how the robot can find the positions of the obstacles around it.

Turtle  ( 2017-03-28 05:16:29 -0600 )




Asked: 2017-03-27 07:22:01 -0600

Seen: 5,962 times

Last updated: Mar 27 '17