RGB and Depth sensor_msgs::Image vs XYZRGB sensor_msgs::PointCloud2
Hello,
So I have ROS running across multiple machines on a limited network, and I have two possibilities: transfer an XYZRGB sensor_msgs::PointCloud2 directly, or transfer two sensor_msgs::Image (one RGB and one depth) and assemble the cloud remotely from the RGBD data and the camera intrinsics. The assembled cloud is equal to the original sensor_msgs::PointCloud2, I checked. From the data I collected, I noticed that transferring the two sensor_msgs::Image was 85% faster in the worst-case scenario, and I don't know why.
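For context, the remote assembly step is roughly the following. This is just a minimal sketch, assuming a pinhole camera model, a 32FC1 depth image in metres already registered to the RGB frame, and intrinsics (fx, fy, cx, cy) taken from the camera's CameraInfo; the helper name assembleCloud is mine, not from any library:

    #include <cmath>
    #include <limits>
    #include <opencv2/core.hpp>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    // Assemble an organized XYZRGB cloud from a registered RGB image and a
    // depth image (32FC1, metres) using pinhole intrinsics fx, fy, cx, cy.
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr assembleCloud(
        const cv::Mat& rgb, const cv::Mat& depth,
        double fx, double fy, double cx, double cy)
    {
      pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(
          new pcl::PointCloud<pcl::PointXYZRGB>);
      cloud->width    = depth.cols;
      cloud->height   = depth.rows;
      cloud->is_dense = false;
      cloud->points.resize(cloud->width * cloud->height);

      for (int v = 0; v < depth.rows; ++v) {
        for (int u = 0; u < depth.cols; ++u) {
          pcl::PointXYZRGB& pt = cloud->at(u, v);
          const float z = depth.at<float>(v, u);
          if (z > 0.0f && std::isfinite(z)) {
            // Back-project the pixel through the pinhole model.
            pt.x = static_cast<float>((u - cx) * z / fx);
            pt.y = static_cast<float>((v - cy) * z / fy);
            pt.z = z;
          } else {
            // Invalid depth: mark the point as NaN, as in organized clouds.
            pt.x = pt.y = pt.z = std::numeric_limits<float>::quiet_NaN();
          }
          const cv::Vec3b& c = rgb.at<cv::Vec3b>(v, u);  // assumes bgr8
          pt.b = c[0];
          pt.g = c[1];
          pt.r = c[2];
        }
      }
      return cloud;
    }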
Is there any reason why it would be faster?
Thanks in advance.
Asked by jpgc on 2016-10-17 05:11:06 UTC
Answers
Hello, @jpgc.
I didn't understand the part where you explained "assemble the cloud remotely from the RGBD data and the camera intrinsics." Could you give me more details?
Also, have you checked the range of the data that was transferred?
Generally, the FOVs of the camera and the point sensor are different. If the "assemble" you mention is a process that projects points from the sensor coordinate frame to the image coordinate frame, this is a normal issue.
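As an illustration of that projection step, a minimal sketch, assuming the point is already expressed in the camera's optical frame and the same pinhole intrinsics as above (the function name projectToImage is hypothetical):

    #include <opencv2/core.hpp>

    // Project a 3D point in the camera optical frame onto the image plane.
    // Returns false when the point is behind the camera (z <= 0).
    bool projectToImage(const cv::Point3f& p,
                        double fx, double fy, double cx, double cy,
                        cv::Point2f& pixel)
    {
      if (p.z <= 0.0f) return false;
      pixel.x = static_cast<float>(fx * p.x / p.z + cx);
      pixel.y = static_cast<float>(fy * p.y / p.z + cy);
      return true;
    }

Points that fall outside the image bounds after this projection are the ones lost to the FOV mismatch mentioned above.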
Answered by harderthan on 2018-09-12 02:52:55 UTC
Comments