
There are at least two ways to use the depth data from the Kinect. First, you mention OpenCV. If you want to use OpenCV with the depth data, you would probably want to subscribe to the /camera/depth/image topic, which provides a "depth" image. Each pixel value is the distance measurement in meters (as a 32-bit float). You would probably want to use cv_bridge to convert the ROS message into an OpenCV cv::Mat -- there is a very good tutorial on cv_bridge. A simple piece of example code can be found in our depth_viewer package, which includes a C++ node that uses cv_bridge, converts the 32-bit float image to a nicely scaled 8-bit grayscale image, and displays it using OpenCV functions. You can use just about any OpenCV function on the depth image, and of course segmentation is considerably easier on a depth image than on an RGB one.
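
As a rough sketch of that approach (not the actual depth_viewer code), something like the following ROS C++ node subscribes to /camera/depth/image, converts the message to a cv::Mat with cv_bridge, scales the float depths into an 8-bit image, and shows it with OpenCV. The 0-5 m display range is an arbitrary choice for illustration:

// depth_listener.cpp -- minimal sketch, not the depth_viewer node itself
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui/highgui.hpp>

void depthCallback(const sensor_msgs::ImageConstPtr& msg)
{
  // Convert the ROS image message to an OpenCV Mat of 32-bit floats (meters).
  cv_bridge::CvImageConstPtr cv_ptr;
  try {
    cv_ptr = cv_bridge::toCvShare(msg, sensor_msgs::image_encodings::TYPE_32FC1);
  } catch (cv_bridge::Exception& e) {
    ROS_ERROR("cv_bridge exception: %s", e.what());
    return;
  }

  // Scale to 8-bit grayscale for display; here 0-5 m maps to 0-255.
  cv::Mat depth_8u;
  cv_ptr->image.convertTo(depth_8u, CV_8UC1, 255.0 / 5.0);

  cv::imshow("depth", depth_8u);
  cv::waitKey(3);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "depth_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/camera/depth/image", 1, depthCallback);
  ros::spin();
  return 0;
}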

The other possibility is to use the point cloud representation, on the /camera/depth/points or /camera/rgb/points topics (the latter having RGB data in addition to the standard XYZ data). Rather than using OpenCV, you would want to use the Point Cloud Library (PCL), which has many great features and accompanying tutorials.
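
A minimal sketch of that route, assuming the standard PCL ROS bindings (pcl_conversions / pcl_ros), would subscribe to /camera/rgb/points, convert the PointCloud2 message to a pcl::PointCloud, and iterate over the XYZRGB points before handing them to whatever PCL filter or segmentation you need:

// cloud_listener.cpp -- minimal sketch of consuming the Kinect point cloud
#include <cmath>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  // Convert the ROS message into a PCL point cloud with XYZ + packed RGB.
  pcl::PointCloud<pcl::PointXYZRGB> cloud;
  pcl::fromROSMsg(*msg, cloud);

  // Each point carries x, y, z in meters; NaN z means no depth return there.
  for (size_t i = 0; i < cloud.points.size(); ++i) {
    const pcl::PointXYZRGB& pt = cloud.points[i];
    if (!std::isnan(pt.z)) {
      // ... feed the point into a PCL filter, segmentation, etc.
    }
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/camera/rgb/points", 1, cloudCallback);
  ros::spin();
  return 0;
}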
