
How to create semantic maps using ROS and Kinect?

asked 2011-04-30 01:10:28 -0500

Sudarshan P

updated 2014-01-28 17:09:36 -0500

ngrennan

I wish to create a robotic "Seeing Eye Dog" to assist the blind. It will perceive the world using a Kinect sensor and will be based on TurtleBot, Bilibot, or something similar. The bot will initially create:

  1. A semantic map of the environment: walls, doors, floors, windows, switchboards, trees, etc.

  2. A database of frequently encountered structures such as furniture, gadgets, people, and so on.
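To make the two goals concrete, here is a toy sketch (plain Python, not any ROS API — the class and its fields are invented for illustration) of the kind of data a semantic map and an object database would hold: grid cells carrying labels like "wall" or "door", plus recognized objects with rough positions.

```python
# Toy illustration only (not a ROS message or API): a 2-D grid whose
# cells carry semantic labels, plus a small database of seen objects.
from collections import Counter

class SemanticMap:
    def __init__(self):
        self.cells = {}      # (x, y) -> semantic label, e.g. "wall"
        self.objects = []    # recognized objects with rough positions

    def label_cell(self, x, y, label):
        # Later observations overwrite earlier ones for the same cell.
        self.cells[(x, y)] = label

    def add_object(self, name, x, y):
        self.objects.append({"name": name, "x": x, "y": y})

    def label_counts(self):
        return Counter(self.cells.values())

m = SemanticMap()
for x in range(5):
    m.label_cell(x, 0, "wall")
m.label_cell(2, 0, "door")   # a door embedded in the wall segment
m.add_object("chair", 1, 3)
print(m.label_counts())      # Counter({'wall': 4, 'door': 1})
```

A real implementation would of course attach these labels to a metric map produced by SLAM rather than a bare dictionary, but the separation into "labeled structure" and "object database" is the same.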

It will probably construct the maps and models using RGBDSLAM, possibly with the octomap_mapping stack, and then do further analysis.

I have gone through the introductory tutorials of ROS and tried out the new rgbdslam on a PC using a Kinect.

It appears as though ROS is evolving fast, especially in the area of semantic mapping. From the papers by Radu Bogdan Rusu, it appears that such analysis, especially in kitchens, has already been coded.

I want to avoid reinventing the wheel. What I would like to know is: what are the ready-made building blocks (stacks) that would be useful to my project? How do I get started? I am a novice in ROS, so all help is appreciated.

In return, I will help ROS by documenting my explorations as a tutorial on "Creating Semantic Maps using ROS and Kinect".


1 Answer


answered 2011-06-17 10:42:02 -0500

Mac

You've listed some excellent starting points (RGBDSLAM, Octomap, etc.). For object detection, you should look at TOD; object recognition is still very much a work in progress, so it's going to be changing quickly.
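Recognition pipelines of this kind generally work by matching feature descriptors from the current view against a database of known objects. The following is a hedged, pure-Python sketch of that matching step (not the TOD API; the descriptors and labels are made up for illustration), using a simple nearest-neighbor search by Euclidean distance.

```python
# Illustrative nearest-neighbor descriptor matching (not the TOD API).
# A real pipeline would use high-dimensional descriptors (e.g. SIFT)
# and an approximate-nearest-neighbor index instead of a linear scan.
import math

def nearest_object(query, database):
    """database: list of (label, descriptor) pairs; descriptors are
    equal-length tuples of floats. Returns the closest label."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return min(database, key=lambda entry: dist(query, entry[1]))[0]

db = [("mug",   (0.9, 0.1, 0.2)),
      ("chair", (0.1, 0.8, 0.7)),
      ("door",  (0.5, 0.5, 0.5))]
print(nearest_object((0.85, 0.15, 0.25), db))  # mug
```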

