
How to process pointclouds remotely, on workstation?

asked 2014-04-14 07:04:07 -0500 by oswinium

updated 2014-04-23 07:31:21 -0500

I would like to run the PCL People Detection code on the Kinect data from my TurtleBot. However, I want to do most of the 'work' on my (more capable) workstation rather than on the TurtleBot's netbook, to speed things up considerably.

  1. How do I run openni_launch/openni.launch (or rgbd_launch/kinect_frames.launch?) so that it does no processing and just publishes raw data? I have looked into the launch files and noticed that a lot of the processing args are set to true by default, like here and here for example. Do I just set the ones that I don't need to false?

  2. Having set the above args to false, and wanting to move the processing to my workstation, I assume I now have to subscribe to this 'raw topic' (published by the Kinect) from my workstation, retrieve the raw data, and do the processing (whatever those args enable) there. How would I go about doing that? I can't find the source code where these args are defined and used!
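As a sketch of what question 1 might look like, the processing switches can be passed as launch args in a small wrapper launch file run on the TurtleBot's netbook. The arg names below (rgb_processing, ir_processing, depth_processing, depth_registered_processing, disparity_processing) are the ones exposed by openni_launch/rgbd_launch of that era; check your installed openni.launch before relying on them:

```xml
<launch>
  <!-- Bring up the OpenNI driver but disable the on-robot processing
       nodelets, so only the raw RGB/depth images and camera_info
       topics are published. Arg names may differ between versions. -->
  <include file="$(find openni_launch)/launch/openni.launch">
    <arg name="rgb_processing"              value="false" />
    <arg name="ir_processing"               value="false" />
    <arg name="depth_processing"            value="false" />
    <arg name="depth_registered_processing" value="false" />
    <arg name="disparity_processing"        value="false" />
  </include>
</launch>
```

With these set, only the raw image and camera_info topics should be published on the robot, which keeps the on-netbook load to the driver itself.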

Thank you.

EDIT: Based on my research this is what I now run:

  1. roslaunch openni_launch openni.launch with all processing modules turned off (launch files are run with roslaunch, not rosrun). The following are the relevant (not all) active camera topics:


  2. ROS_NAMESPACE=camera/rgb rosrun image_proc image_proc

    ROS_NAMESPACE=camera/depth_registered rosrun image_proc image_proc

The above two, according to this, should give me the topics I need (rgb/image_rect_color and depth_registered/image_rect) for converting to RGB pointclouds according to this.
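The two image_proc invocations above could equivalently be written as a small launch file on the workstation. This is only a sketch; the node names are arbitrary, and the namespaces simply mirror the ROS_NAMESPACE values from the commands above:

```xml
<launch>
  <!-- Rectify the raw colour image:
       publishes camera/rgb/image_rect_color -->
  <node ns="camera/rgb" pkg="image_proc" type="image_proc"
        name="rgb_proc" />

  <!-- Rectify the registered depth image:
       publishes camera/depth_registered/image_rect -->
  <node ns="camera/depth_registered" pkg="image_proc" type="image_proc"
        name="depth_proc" />
</launch>
```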

  3. Before converting to a pointcloud, though, I convert the depth image to metric units with rosrun nodelet nodelet load depth_image_proc/convert_metric camera/camera_nodelet_manager --no-bond

  4. Finally, I run rosrun nodelet nodelet load depth_image_proc/point_cloud_xyzrgb camera/camera_nodelet_manager --no-bond.

Now here's the problem: the required /camera/depth_registered/points topic is active, but nothing is ever published on it, as shown by a rostopic echo. Am I converting incorrectly?
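For reference, a sketch of the convert_metric and point_cloud_xyzrgb steps as a workstation-side launch file, using a local nodelet manager instead of the robot's camera_nodelet_manager. The node names and the intermediate _metric topic are invented for illustration; the subscribed/published topic names follow the depth_image_proc wiki documentation. Note that point_cloud_xyzrgb can usually consume the raw uint16 depth encoding directly, so the convert_metric step may not be needed at all:

```xml
<launch>
  <!-- A standalone nodelet manager on the workstation -->
  <node pkg="nodelet" type="nodelet" name="cloud_manager" args="manager" />

  <!-- Optional: convert uint16 millimetre depth to float32 metres -->
  <node pkg="nodelet" type="nodelet" name="metric"
        args="load depth_image_proc/convert_metric cloud_manager --no-bond">
    <remap from="image_raw" to="/camera/depth_registered/image_rect" />
    <remap from="image"     to="/camera/depth_registered/image_rect_metric" />
  </node>

  <!-- Combine rectified colour + registered depth into an XYZRGB cloud -->
  <node pkg="nodelet" type="nodelet" name="cloudify"
        args="load depth_image_proc/point_cloud_xyzrgb cloud_manager --no-bond">
    <remap from="rgb/image_rect_color"        to="/camera/rgb/image_rect_color" />
    <remap from="rgb/camera_info"             to="/camera/rgb/camera_info" />
    <remap from="depth_registered/image_rect" to="/camera/depth_registered/image_rect_metric" />
    <remap from="depth_registered/points"     to="/camera/depth_registered/points" />
  </node>
</launch>
```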

EDIT 2: I have realized that running image_proc for camera/rgb 'works' because the topics it publishes actually have information being published on them; however, for camera/depth_registered, the published topic, /camera/depth_registered/image_rect, carries no information. As a result, the next step, running depth_image_proc/point_cloud_xyzrgb, does not work, because it subscribes to an empty topic.

P.S. @bhaskara I know you are familiar with this ( ) so could you help me? @derekjchow you too please :)

Thanks guys!


2 Answers


answered 2014-04-15 07:56:35 -0500 by derekjchow

The preferred method for sending Kinect data over WiFi is to send the depth image, then convert to a point cloud on the processing PC.

In your situation, set up distributed ROS ( ), then run roslaunch openni_launch openni.launch on the turtlebot.

On your host PC, you can run depth_image_proc/point_cloud_xyz to convert from a depth image to a point cloud message.
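A minimal sketch of this on the host PC, as a launch file. It assumes distributed ROS is already configured (ROS_MASTER_URI on the workstation pointing at the turtlebot's master); the node names are arbitrary and the remappings assume the default /camera namespace from openni.launch:

```xml
<launch>
  <node pkg="nodelet" type="nodelet" name="standalone_manager" args="manager" />

  <!-- Convert the depth image streamed from the robot into a
       sensor_msgs/PointCloud2 locally on the workstation -->
  <node pkg="nodelet" type="nodelet" name="depth_to_cloud"
        args="load depth_image_proc/point_cloud_xyz standalone_manager">
    <remap from="camera_info" to="/camera/depth/camera_info" />
    <remap from="image_rect"  to="/camera/depth/image_rect" />
    <remap from="points"      to="/camera/depth/points" />
  </node>
</launch>
```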



So do I have to set the processing modules to "false" when I run openni.launch on the turtlebot?

oswinium (2014-04-17 06:02:46 -0500)

answered 2014-04-14 08:24:28 -0500 by paulbovbel

You will have to do some minimal amount of processing on the Turtlebot to turn the Kinect's data into a pointcloud that can be serialized and transferred to your workstation; that is what those modules in openni_launch are doing.

You might also find that pointclouds are very large, so you may want to run them through a voxel filter before sending them over your network.
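For example, a voxel filter could be inserted with the pcl/VoxelGrid nodelet from pcl_ros. This is a sketch only: the node names are arbitrary, and the leaf size and filter limits below are illustrative values, not recommendations:

```xml
<launch>
  <node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager" />

  <!-- Downsample the cloud before it goes over the network;
       output appears on ~output (here /voxel_grid/output) -->
  <node pkg="nodelet" type="nodelet" name="voxel_grid"
        args="load pcl/VoxelGrid pcl_manager">
    <remap from="~input" to="/camera/depth_registered/points" />
    <rosparam>
      leaf_size: 0.02
      filter_field_name: z
      filter_limit_min: 0.0
      filter_limit_max: 5.0
    </rosparam>
  </node>
</launch>
```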



Agreed (about having to do *some* processing on the TBot). So a couple of questions, if you don't mind: 1. Do I need to run *all* those modules? 2. What's the rawest form of data I can transfer from the TBot to the Workstation? And which topic would this be published to? Thank you :)

oswinium (2014-04-15 04:18:16 -0500)

That magic happens in the launch files. Take a look through them, as well as openni_driver in the openni_camera package and openni.launch in openni_launch. You will see that only the raw depth and color images are published by the driver.

paulbovbel (2014-04-15 06:34:15 -0500)

You could try launching the nodes under rgbd_launch's processing.launch.xml on your remote workstation instead of on the turtlebot, but you will probably run into a few issues, namely synchronization and bandwidth.

paulbovbel (2014-04-15 06:40:22 -0500)

If you look at the nodes in depth_image_proc, you'll see that they require the images to be synchronized (for good reason). If you start streaming raw images over the network, you will find you have pretty low throughput, and it's unlikely the images will stay synchronized.

paulbovbel (2014-04-15 06:41:23 -0500)

Your best bet may be to run republish nodes to stream compressed images from the turtlebot and decompress them on the workstation.

paulbovbel (2014-04-15 06:43:06 -0500)
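The republish suggestion might be sketched like this on the workstation side, using image_transport's republish node. The node names and the _decompressed output topics are invented for illustration, and the robot side needs the compressed_image_transport / compressed_depth_image_transport plugins available:

```xml
<launch>
  <!-- Decompress the colour stream coming from the robot -->
  <node pkg="image_transport" type="republish" name="rgb_republish"
        args="compressed raw">
    <remap from="in"  to="/camera/rgb/image_raw" />
    <remap from="out" to="/camera/rgb/image_raw_decompressed" />
  </node>

  <!-- Depth uses its own transport plugin (compressedDepth) -->
  <node pkg="image_transport" type="republish" name="depth_republish"
        args="compressedDepth raw">
    <remap from="in"  to="/camera/depth_registered/image_raw" />
    <remap from="out" to="/camera/depth_registered/image_raw_decompressed" />
  </node>
</launch>
```

The downstream processing nodes would then subscribe to the _decompressed topics instead of the originals.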

What do you think of the method suggested by @derekjchow and @bhaskara (see edited portion of my question)... thanks!

oswinium (2014-04-22 10:46:00 -0500)

image_rect is inactive because you have disabled rgb processing

paulbovbel (2014-04-23 07:50:35 -0500)

@oswinium were you able to figure out a good way to do this?

eric_cartman (2019-01-31 17:33:34 -0500)


Asked: 2014-04-14 07:04:07 -0500

Seen: 2,052 times

Last updated: Apr 23 '14