Ask Your Question

streaming pointcloud to remote workstation

asked 2013-06-11 19:57:55 -0600

jihoonl

Hello,

I am trying to use a point cloud generated by a robot on a remote workstation. However, I noticed that subscribing to the PointCloud2 topic is eating too much bandwidth.

Could anybody tell me the best way to get point cloud data from a robot and use it on the remote workstation?

My current idea is, instead of subscribing to the PointCloud2 data directly, to set up a compressed version of the pointcloud_xyzrgb image_pipeline on the remote workstation and reproduce the PointCloud2 data there.

Thanks, Jihoon


4 Answers


answered 2013-06-11 22:18:13 -0600

Philip

updated 2013-06-11 22:19:16 -0600

In our setup, we've split the openni_launch launch file into two separate launch files:

  • The one running on the machine connected to the Kinect (in your case: the robot) only loads the driver and publishes the raw images and the tf data
  • On a remote machine, image_proc is launched, which computes and publishes the actual point clouds

This way, only the raw 2d data (depth/image_raw, rgb/image_raw) is transmitted over the network.
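For reference, a rough sketch of such a split. The file names, topic names, and argument names below are my assumptions based on openni_launch and depth_image_proc, not the exact files from this setup:

```xml
<!-- robot.launch (sketch): run only the driver so that just the raw 2D
     topics (depth/image_raw, rgb/image_raw, camera_info) and tf cross the network -->
<launch>
  <include file="$(find openni_launch)/launch/openni.launch">
    <arg name="depth_registration"          value="true"/>
    <!-- turn off local processing so the robot never builds the point cloud -->
    <arg name="rgb_processing"              value="false"/>
    <arg name="depth_processing"            value="false"/>
    <arg name="depth_registered_processing" value="false"/>
  </include>
</launch>

<!-- workstation.launch (sketch): rebuild the point cloud locally -->
<launch>
  <node pkg="nodelet" type="nodelet" name="cloud_manager" args="manager"/>
  <node pkg="nodelet" type="nodelet" name="points_xyzrgb"
        args="load depth_image_proc/point_cloud_xyzrgb cloud_manager">
    <remap from="rgb/camera_info"             to="/camera/rgb/camera_info"/>
    <remap from="rgb/image_rect_color"        to="/camera/rgb/image_rect_color"/>
    <remap from="depth_registered/image_rect" to="/camera/depth_registered/image_rect"/>
  </node>
</launch>
```

Note that depth_image_proc/point_cloud_xyzrgb wants rectified images, so the rectification nodelets also have to run somewhere on the workstation side if the robot only publishes raw images.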


Comments

Looks good, thanks! Have you tried compressed image_transport too?

jihoonl ( 2013-06-11 23:37:05 -0600 )

Would it be possible to post your split launch files? Thanks! :)

Ben_S ( 2013-06-12 00:57:20 -0600 )

I have a similar but slightly different launch file created for TurtleBot point cloud streaming. Please refer to this: https://gist.github.com/jihoonl/5770238

jihoonl ( 2013-06-12 14:16:45 -0600 )

answered 2020-02-11 09:38:05 -0600

bmegli

Outside of ROS, for software encoding/decoding it is possible to use the 3D-HEVC reference implementation.

For lossy, hardware-accelerated encoding/decoding you may have a look at this example, which directly maps RealSense camera 16-bit depth data to the HEVC Main10 profile (Intel and Linux only at the moment).


answered 2013-06-11 21:58:48 -0600

tfoote

A common approach is to use a throttle node, which simply downsamples the rate at which the point clouds are published over the network to save bandwidth over the wifi. It is convenient because you can add it to a running system without changing much.
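For example, with topic_tools (the point cloud topic name below is an assumption for a typical openni_launch setup):

```xml
<!-- republish the cloud at 2 Hz on /cloud_throttled; remote subscribers
     then read the throttled topic instead of the full-rate one -->
<node pkg="topic_tools" type="throttle" name="cloud_throttle"
      args="messages /camera/depth_registered/points 2.0 /cloud_throttled"/>
```

The same can be done on a running system from the command line: `rosrun topic_tools throttle messages /camera/depth_registered/points 2.0`.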


Comments

Thank you for the suggestion. I agree that throttling makes things easier, but I wanted to keep throttling as a backup option. Since the webtools group has developed depthcloud_encoder (http://www.ros.org/wiki/depthcloud_encoder) to stream point clouds on the web without downsampling, I wished to

jihoonl ( 2013-06-11 23:31:25 -0600 )

find a similar but easier solution for a robot-to-desktop setting.

jihoonl ( 2013-06-11 23:32:44 -0600 )

answered 2013-06-11 21:02:26 -0600

I'm not sure if this is the best way, but it is easy to implement: you can subscribe to the compressed version of the depth image (and the color image if necessary) and construct the point cloud on the workstation.
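A sketch of that idea for the depth-only case. The topic names are my assumptions for a typical openni_launch setup; the lossless compressedDepth transport comes from compressed_depth_image_transport:

```xml
<!-- workstation side (sketch): decompress the depth stream, then rebuild
     the cloud locally with depth_image_proc -->
<node pkg="image_transport" type="republish" name="depth_republish"
      args="compressedDepth in:=/camera/depth_registered/image_rect raw out:=/depth_registered/image_rect"/>
<node pkg="nodelet" type="nodelet" name="cloud_manager" args="manager"/>
<node pkg="nodelet" type="nodelet" name="points_xyz"
      args="load depth_image_proc/point_cloud_xyz cloud_manager">
  <remap from="camera_info" to="/camera/depth_registered/camera_info"/>
  <remap from="image_rect"  to="/depth_registered/image_rect"/>
</node>
```

For a colored cloud, the RGB image can be received over the (lossy) compressed transport the same way and fed to depth_image_proc/point_cloud_xyzrgb instead.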


Comments

Yeah, that's my current plan: use the depth_image_proc/pointcloud_xyzrgb image pipeline on the workstation and reproduce the point cloud there.

jihoonl ( 2013-06-11 21:06:26 -0600 )

I wondered how other people do it, to see if there is a better solution.

jihoonl ( 2013-06-11 21:07:03 -0600 )



Stats

Asked: 2013-06-11 19:57:55 -0600

Seen: 631 times

Last updated: Jun 11 '13