
RGB and Depth sensor_msgs::Image vs XYZRGB sensor_msgs::PointCloud2

asked 2016-10-17 05:11:06 -0500

jpgc


So I have ROS running across multiple machines on a limited network, and I have two options: transfer an XYZRGB sensor_msgs::PointCloud2 directly, or transfer two sensor_msgs::Image messages (one RGB and one depth) and assemble the cloud remotely from the RGBD data and the camera intrinsics. The assembled cloud is identical to the original sensor_msgs::PointCloud2; I checked. From the data I collected, I noticed that transferring the two sensor_msgs::Image messages was 85% faster even in the worst-case scenario, and I don't know why.
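To make the "assemble remotely" step concrete, here is a minimal sketch of the standard pinhole back-projection, assuming intrinsics fx, fy, cx, cy taken from a typical sensor_msgs/CameraInfo K matrix and a metric depth image (the function name and parameters are illustrative, not from the original post):

```python
import numpy as np

def depth_rgb_to_cloud(depth_m, rgb, fx, fy, cx, cy):
    """Back-project a metric depth image into an XYZRGB cloud using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth_m.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    xyz = np.dstack([x, y, depth_m]).reshape(-1, 3)
    return xyz, rgb.reshape(-1, 3)
```

With fx = fy = 1 and cx = cy = 0, a pixel at (u, v) with depth 1 m maps to the point (u, v, 1), which makes the model easy to sanity-check.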

Is there any reason to why it would be faster?
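A back-of-envelope size comparison may be relevant here. This sketch assumes raw (uncompressed) transport of one 640x480 frame, 16UC1 depth (2 bytes/pixel), rgb8 color (3 bytes/pixel), and a PointCloud2 using PCL's padded 32-byte XYZRGB point layout; actual message sizes also include headers and depend on the transport:

```python
# Approximate per-frame payload sizes under the stated assumptions.
W, H = 640, 480
image_bytes = W * H * (2 + 3)   # 16UC1 depth + rgb8 color
cloud_bytes = W * H * 32        # organized XYZRGB cloud, 32-byte point_step
ratio = cloud_bytes / image_bytes
print(image_bytes, cloud_bytes, ratio)
```

Under these assumptions the cloud carries several times more bytes per frame than the image pair, which alone could explain a large throughput difference on a limited network.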

Thanks in advance.


1 Answer


answered 2018-09-12 02:52:55 -0500

harderthan

Hello, @jpgc.

I didn't understand this part of your question: "assemble the cloud remotely from the RGBD data and the camera intrinsics." Could you give me more information?

Also, have you checked the range of the data that was transferred?

Generally, the FOVs of a camera and a point sensor are different. If the "assemble" step you describe is a process of projecting points from the sensor coordinate frame into the image coordinate frame, this is a normal issue.





Seen: 515 times

Last updated: Sep 12 '18