
How to detect objects using kinect sensor?

asked 2019-04-04 08:31:19 -0600 by GLV

updated 2022-08-28 10:57:41 -0600 by lucasw

Hi There,

I want to implement the Spanning Tree Coverage (STC) algorithm using a TurtleBot. For that, I need to detect obstacles in the TurtleBot's path. What are the methods to detect obstacles using a Kinect sensor?

Thanks in Advance.


1 Answer


answered 2019-04-07 23:05:47 -0600 by TharushiDeSilva

updated 2019-04-07 23:07:13 -0600

You can use the aid of a map to identify obstacles. Since you have a 3D sensor, I can recommend OctoMap. OctoMap is a 3D occupancy grid, and it is lightweight. So if you need to identify obstacles, you can simply look for occupied cells in the grid. If you are only interested in identifying obstacles in the robot's 2D navigation plane, you can refer to the robot's projected map.
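The occupancy-grid idea can be sketched in plain Python. This is only an illustration of the concept (a cell is an obstacle when its occupancy probability exceeds a threshold); the grid structure, resolution, and threshold here are hypothetical and not OctoMap's actual API.

```python
# Minimal occupancy-grid lookup, mimicking how a 3D map such as OctoMap
# classifies space. Cell values are occupancy probabilities; a cell is
# treated as an obstacle when its probability exceeds a threshold.

def world_to_cell(x, y, resolution=0.05, origin=(0.0, 0.0)):
    """Convert a world coordinate (metres) to integer grid indices."""
    return (int((x - origin[0]) / resolution),
            int((y - origin[1]) / resolution))

def is_obstacle(grid, x, y, threshold=0.5, resolution=0.05):
    """Return True if the cell containing (x, y) is occupied."""
    i, j = world_to_cell(x, y, resolution)
    return grid.get((i, j), 0.0) > threshold  # unknown cells count as free

# Example: one occupied cell at world position (0.12, 0.07)
grid = {world_to_cell(0.12, 0.07): 0.9}
print(is_obstacle(grid, 0.12, 0.07))  # True
print(is_obstacle(grid, 1.0, 1.0))    # False
```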

There are several other mapping mechanisms used in ROS (RTABMap, gmapping, etc.). Using a map can also be useful in future implementations such as navigation.

You can get more ideas from the ROS navigation stack too. See how it identifies obstacles within a map.



Thanks @TharushiDeSilva. Will the OctoMap and RTABMap methods help me find obstacles in an unknown area (without knowing the map)?

GLV (2019-04-07 23:42:35 -0600)

No. Those techniques are originally mapping techniques. They are popularly used in the navigation stack to identify obstacles. They process the point cloud generated by the Kinect sensor and arrange it into a 3D (solid) representation of the environment. Basically, they can identify obstacles and free space and represent them in a map.

If you are not looking for a map, you can process the point cloud output from the Kinect sensor using the PCL library. I think the given tutorial has instructions for a simple program to handle a point cloud. You can study the fields included in a point cloud (hopefully RGB values, distance, angle, etc.).
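As a rough illustration of that kind of point cloud processing, here is a plain-Python analogue of a pass-through filter (a common PCL operation): keep only points whose depth falls inside a band of interest, then take the nearest one as the closest obstacle. The field layout and range limits are illustrative assumptions, not PCL's API.

```python
import math

# Plain-Python analogue of a pass-through filter on a depth point cloud:
# keep points whose depth z lies within a distance band, which is a simple
# way to flag candidate obstacle points in the Kinect's data.

def obstacle_points(cloud, min_z=0.4, max_z=3.0):
    """Return points (x, y, z) whose depth z falls inside [min_z, max_z]."""
    return [p for p in cloud if min_z <= p[2] <= max_z]

def nearest_obstacle_distance(cloud):
    """Euclidean distance from the sensor to the closest kept point."""
    kept = obstacle_points(cloud)
    if not kept:
        return None
    return min(math.sqrt(x * x + y * y + z * z) for x, y, z in kept)

cloud = [(0.0, 0.0, 0.2),   # too close: likely sensor noise
         (0.1, 0.0, 1.0),   # obstacle at about 1 m
         (0.0, 0.5, 5.0)]   # beyond the range of interest
print(len(obstacle_points(cloud)))                # 1
print(round(nearest_obstacle_distance(cloud), 3)) # 1.005
```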

I suggested the mapping techniques because they easily lead you to the next level of the system. But if you are not looking for navigation using a map, then they will not be useful.

TharushiDeSilva (2019-04-07 23:56:19 -0600)

Once again, thanks for the quick reply @TharushiDeSilva. Actually, my aim is for the TurtleBot to cover an entire area. If the TurtleBot encounters any obstacle in its path, it has to recognise the obstacle, mark it as an occupied cell, and move in another direction. In that way it has to cover the entire area (all unoccupied cells). Please mention any methods to do this. Also, is it possible to get the coordinate values (x, y) of obstacles using the PCL library?
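The coverage behaviour described above can be sketched as a toy grid walk: visit free cells, and when a neighbouring cell is blocked, mark it occupied and try another direction. This is a naive depth-first coverage over a hypothetical grid, not a full STC implementation.

```python
# Toy grid-coverage loop: visit every reachable free cell, marking any
# blocked neighbour as an occupied cell, in the spirit of the question.

def cover(free_cells, obstacles, start):
    """Visit every reachable free cell; return visited cells and discovered obstacles."""
    visited, found_obstacles = set(), set()
    stack = [start]
    while stack:
        cell = stack.pop()
        if cell in visited:
            continue
        visited.add(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in obstacles:
                found_obstacles.add(nxt)   # mark as occupied, turn elsewhere
            elif nxt in free_cells and nxt not in visited:
                stack.append(nxt)
    return visited, found_obstacles

# 3x3 area with one obstacle in the centre
free = {(x, y) for x in range(3) for y in range(3)} - {(1, 1)}
visited, obs = cover(free, {(1, 1)}, (0, 0))
print(len(visited), obs)  # 8 {(1, 1)}
```

A real STC planner builds a spanning tree over coarse cells and circumnavigates it, but the bookkeeping (visited cells vs. occupied cells) is the same idea.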

GLV (2019-04-08 00:59:18 -0600)

I have not directly used PCL. Since it's a very popular library, there should be a lot of tutorials and examples available. As you are going to detect and record the obstacle cells yourself, you should remember to mark them relative to your world coordinate frame. For this you have to understand the coordinate frames in your system.

  1. The position values obtained from the point cloud are relative to the Kinect sensor; they are in the camera's coordinate frame.
  2. The robot's pose should be updated relative to the world coordinate frame, which means the starting position of the TurtleBot. The robot's pose in the world coordinate frame can be obtained from odometry data.
  3. Consider the physical x, y, z distance and rotation between the robot's base and the camera.

Use tf transforms between frames.
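As a worked example of that frame chain, here is a 2D version with hypothetical numbers: a point measured in the camera frame is offset into the robot's base frame, then rotated and translated by the robot's world pose from odometry. In ROS, tf performs exactly this composition for you.

```python
import math

# Camera frame -> base frame -> world frame, in 2D for clarity.
# Assumes the camera is mounted without rotation relative to the base.

def camera_to_world(point_cam, cam_offset, robot_pose):
    """point_cam: (x, y) in the camera frame; cam_offset: camera position
    in the base frame; robot_pose: (x, y, yaw) of the base in the world frame."""
    # camera frame -> base frame (pure translation under our assumption)
    bx = point_cam[0] + cam_offset[0]
    by = point_cam[1] + cam_offset[1]
    # base frame -> world frame (rotate by yaw, then translate)
    x, y, yaw = robot_pose
    wx = x + bx * math.cos(yaw) - by * math.sin(yaw)
    wy = y + bx * math.sin(yaw) + by * math.cos(yaw)
    return wx, wy

# Obstacle 1 m ahead of a camera mounted 0.1 m in front of the base,
# with the robot at (2, 3) facing +90 degrees:
wx, wy = camera_to_world((1.0, 0.0), (0.1, 0.0), (2.0, 3.0, math.pi / 2))
print(round(wx, 3), round(wy, 3))  # 2.0 4.1
```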

TharushiDeSilva (2019-04-08 01:31:01 -0600)

Thank you so much; this helps me.

GLV (2019-04-08 01:39:13 -0600)
