Like tfoote said, there are two different ideas going on here.

  1. If YOU need to know the positions of the obstacles in Gazebo (for testing or something), the Gazebo ROS API exposes a get_model_state service that you can call from a plugin or from a rospy/roscpp script to get the world/global/ground-truth coordinates of any model in the simulation (see the sketch after this list).

  2. However, it appears you need your ROBOT to find the positions of the obstacles. That's more difficult.
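
For case 1, here's a minimal rospy sketch of calling the /gazebo/get_model_state service. It assumes gazebo_ros is running; the model name 'unit_box' is just a placeholder for whatever obstacle is in your world:

    #!/usr/bin/env python
    # Minimal sketch: query Gazebo for a model's ground-truth pose.
    # Assumes ROS 1 with gazebo_ros running; 'unit_box' is a placeholder.
    import rospy
    from gazebo_msgs.srv import GetModelState

    rospy.init_node('obstacle_pose_query')
    rospy.wait_for_service('/gazebo/get_model_state')
    get_model_state = rospy.ServiceProxy('/gazebo/get_model_state', GetModelState)

    # Ask for the model's pose relative to the world frame.
    state = get_model_state(model_name='unit_box', relative_entity_name='world')
    if state.success:
        p = state.pose.position
        rospy.loginfo('unit_box is at (%.2f, %.2f, %.2f)', p.x, p.y, p.z)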

I use the Turtlebot with Gazebo, and when I need to drive from one point to the next, I build a map of the environment with SLAM (gmapping, in the first tutorial below), plug that map into the AMCL package, and then use the RViz GUI to set a goal, and the navigation stack drives the Turtlebot to where I want it. In this case, you're making a map of a room by collecting enough laserscan data to outline entire objects.
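
If you'd rather send that goal from code instead of clicking in RViz, here's a minimal sketch using the standard move_base action. It assumes the navigation stack from the tutorials below is up and the global frame is 'map'; the goal coordinates are made up:

    #!/usr/bin/env python
    # Minimal sketch: send a navigation goal to move_base from code
    # instead of clicking '2D Nav Goal' in RViz. Assumes the nav stack
    # is running and the global frame is 'map'.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node('send_nav_goal')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 3.0  # hypothetical goal coordinates
    goal.target_pose.pose.position.y = 4.0
    goal.target_pose.pose.orientation.w = 1.0  # no rotation

    client.send_goal(goal)
    client.wait_for_result()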

AMCL also publishes where it thinks the robot is in the environment on the amcl_pose topic as you drive.
Here's more on AMCL: http://wiki.ros.org/amcl
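
Listening to that estimate is just a subscriber; here's a minimal sketch (amcl publishes geometry_msgs/PoseWithCovarianceStamped on amcl_pose):

    #!/usr/bin/env python
    # Minimal sketch: print AMCL's pose estimate as the robot drives.
    import rospy
    from geometry_msgs.msg import PoseWithCovarianceStamped

    def pose_cb(msg):
        p = msg.pose.pose.position
        rospy.loginfo('AMCL thinks the robot is at (%.2f, %.2f)', p.x, p.y)

    rospy.init_node('amcl_pose_listener')
    rospy.Subscriber('amcl_pose', PoseWithCovarianceStamped, pose_cb)
    rospy.spin()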

The tough part, and the part I don't know the answer to (though the object recognition link from tfoote might help), is figuring out that the blob at (3,4) is a couch, a chair, etc.

Here are links to the Turtlebot/Gazebo/AMCL/RViz tutorials. To be clear, I'm not suggesting they're a solution to your problem, but they're a good demonstration of how localization in Gazebo works, using the Kinect data as a makeshift laserscan.

http://wiki.ros.org/turtlebot_navigation/Tutorials/Build%20a%20map%20with%20SLAM

http://wiki.ros.org/turtlebot_navigation/Tutorials/Autonomously%20navigate%20in%20a%20known%20map