
Getting point cloud data from the Kinect

asked 2011-05-23 18:12:55 -0500 by lakshmen

updated 2016-10-24 08:33:19 -0500 by ngrennan

Hi guys, I am new to the Kinect. I need point cloud data from the depth camera in the Kinect. Does any of you have an idea how to go about doing it? I am using ROS Diamondback and Ubuntu.


6 Answers


answered 2011-05-23 20:13:53 -0500

updated 2011-05-24 00:16:22 -0500

Here and here are descriptions of how to install the driver and its ROS wrapper and how to run them. Then listen to one of the .../points topics, as described in steps 10 and 11 here.

The driver publishes:

* /camera/depth/camera_info : Camera parameters for the IR (depth) camera
* /camera/depth/image : single channel floating point (float32) depth image, containing the depth in meters.
* /camera/depth/points : point cloud without color information
* /camera/rgb/camera_info : Camera parameters for the RGB camera
* /camera/rgb/image_color : RGB image
* /camera/rgb/image_mono : Grayscale image
* /camera/rgb/points : point cloud containing RGB value for each point

Comments

lakshmen (2011-05-24 02:30:24 -0500): How do I get only the /camera/depth/image from the picture?

Felix Endres (2011-05-24 03:23:08 -0500): I think you should go through the tutorials I linked to, to get a foundation in how ROS works.

lakshmen (2011-05-24 04:54:51 -0500): Go through all the tutorials?

lakshmen (2011-05-24 05:18:19 -0500): What would be a good choice for converting the point cloud into a 3D model? Would MeshLab be a good choice?

Felix Endres (2011-05-25 01:37:25 -0500): Yes, all. You could omit those on services/clients, but it won't do any harm. You can also omit either Python or C++. If you want to process the point clouds, I'd use C++, though. What kind of 3D model do you want to convert to?

answered 2011-05-24 06:17:59 -0500 by Aslund

A good introduction to ROS can be found here. When you reach step 11, you learn how to publish and subscribe in ROS. For point clouds from the Kinect camera, you need to set up your subscriber to receive point clouds, whose message type is found in the sensor_msgs documentation. Below is a simple schematic for a class that reads the point cloud data from the Kinect.

// processPoint.hpp
#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>
#include <sensor_msgs/PointCloud2.h>

class processPoint {
    ros::NodeHandle nh;
    ros::Subscriber sub;

public:
    processPoint() {
        // Subscribe to the Kinect's colored point cloud topic;
        // a queue size of 1 keeps only the latest cloud.
        sub = nh.subscribe<sensor_msgs::PointCloud2>(
            "/camera/rgb/points", 1, &processPoint::processCloud, this);
    }

    ~processPoint() {
    }

    void processCloud(const sensor_msgs::PointCloud2ConstPtr& cloud);
};

// processPoint.cpp
#include "processPoint.hpp"

void processPoint::processCloud(const sensor_msgs::PointCloud2ConstPtr& cloud) {
    // Process the incoming cloud here.
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "processPoint");

    processPoint processPoint;

    ros::Rate spin_rate(1);
    while (ros::ok()) {
        ros::spinOnce();  // dispatch pending callbacks
        spin_rate.sleep();
    }
}

answered 2011-06-19 15:20:06 -0500 by lakshmen

I have been working with the depth pointer (the pointer that stores the Kinect's depth data). If you are using the OpenKinect driver without ROS and writing the code yourself, the function you should be looking for is

void depth_cb(freenect_device *dev, void *v_depth, uint32_t timestamp)

The data is stored in the v_depth pointer, and you can read it from there. I hope this is useful for those starting out with the depth data. Thanks to those who helped; I appreciate it.


answered 2011-05-24 00:05:31 -0500 by lakshmen

Hi, thanks, I have done that. Now, how do I get the data? Am I right that /camera/depth/image carries the depth data? How do I read from /camera/depth/image, or otherwise get the depth data from the Kinect driver?


answered 2011-05-25 20:19:18 -0500 by lakshmen

Hi friend, have you used OpenKinect instead of OpenNI (the one you suggested)? Will the code you suggested be different? Does this code apply to all drivers?


Comments

Martin Günther (2011-05-25 23:13:07 -0500): Please do your homework (i.e., the tutorials) first. If you still have problems after that, you should be able to post much more specific questions. Explain exactly what you did, what you expected to happen, and what didn't work.

answered 2011-05-26 01:20:59 -0500 by lakshmen

Let me explain in more detail. After the post by Aslund, I did go through the tutorials; I reached step 8. My supervisor stopped me and wanted me to use OpenKinect to get the point cloud data; he told me to do the tutorials later, not right now. I need the data for further processing, and I am using cppview to get it. I urgently wanted the point cloud data, so I was looking for ways to get it, and thought of using this code to convert the depth values into distances:

float gamma[2048];

for (size_t i = 0; i < 2048; i++) {
    const float k1 = 1.1863;
    const float k2 = 2842.5;
    const float k3 = 0.1236;
    gamma[i] = k3 * tan(i / k2 + k1);
}

Now you have a lookup table for all possible depth values. The second step is to reproject the depth data into Euclidean space.

uint16_t depth_data[640 * 480]; // depth data from the Kinect in a 1-D array
float x[640 * 480]; // point cloud x values
float y[640 * 480]; // point cloud y values
float z[640 * 480]; // point cloud z values... you may want to put these into a vector or something

// camera intrinsic parameters, representative values; see
// http://nicolas.burrus.name/index.php/Research/KinectCalibration for more info
float cx = 320.0; // center of projection
float cy = 240.0; // center of projection
float fx = 600.0; // focal length in pixels
float fy = 600.0; // focal length in pixels

for (size_t v = 0, n = 0; v < 480; v++) {
    for (size_t u = 0; u < 640; u++, n++) {
        // note that values will be in meters, with the camera at the origin,
        // and the Kinect has a minimum range of ~0.5 meters
        x[n] = (u - cx) * gamma[depth_data[n]] / fx;
        y[n] = (v - cy) * gamma[depth_data[n]] / fy;
        z[n] = gamma[depth_data[n]];
    }
}

Would you suggest I use this code?


Comments

dornhege (2011-05-26 03:26:07 -0500): The data you get from the Kinect via ROS is already 3D data. Just go through the basic tutorials, which shouldn't take long, and use the code snippet provided above.

dornhege (2011-05-26 22:21:49 -0500): You can just use the raw library without ROS, but obviously you will lose the ROS interface and thus the multitude of software that comes with it! I would say that learning the ROS basics and getting the Kinect to work there is probably less complicated than using the raw library.


Stats

Asked: 2011-05-23 18:12:55 -0500

Seen: 15,798 times

Last updated: Jun 19 '11