RGB and Depth sensor_msgs::Image vs XYZRGB sensor_msgs::PointCloud2

asked 2016-10-17 05:11:06 -0600

jpgc

Hello,

So I have ROS running across multiple machines on a limited network, and I have two options: transfer an XYZRGB sensor_msgs::PointCloud2 directly, or transfer two sensor_msgs::Image messages (one RGB and one depth) and assemble the cloud remotely from the RGBD data and the camera intrinsics. The assembled cloud is identical to the original sensor_msgs::PointCloud2; I checked. From the data I collected, I noticed that transferring the two sensor_msgs::Image messages was 85% faster even in the worst case, and I don't know why.
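A likely factor is raw message size. Assuming typical layouts (a minimal sketch, not measured from your setup): an XYZRGB PointCloud2 usually carries 32 bytes per point because of alignment padding, while an rgb8 image costs 3 bytes per pixel and a 16UC1 depth image 2 bytes per pixel. For a 640x480 frame:

```python
# Back-of-envelope bandwidth comparison for one 640x480 frame.
# Assumed layouts: PointCloud2 with a 32-byte point_step (typical
# for XYZRGB clouds due to alignment padding), an rgb8 color image
# (3 bytes/pixel), and a 16UC1 depth image (2 bytes/pixel).
W, H = 640, 480
pixels = W * H

cloud_bytes = pixels * 32       # XYZRGB PointCloud2 data payload
image_bytes = pixels * (3 + 2)  # rgb8 + 16UC1 depth data payloads

print(cloud_bytes // 1024, "KiB for the cloud")        # 9600 KiB
print(image_bytes // 1024, "KiB for the two images")   # 1500 KiB
print(round(cloud_bytes / image_bytes, 1), "x ratio")  # 6.4 x
```

Under these assumptions the cloud is roughly 6x larger on the wire, which on a bandwidth-limited network would dominate the transfer time.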

Is there any reason why it would be faster?

Thanks in advance.


1 Answer


answered 2018-09-12 02:52:55 -0600

Hello, @jpgc .

I didn't understand the part where you said "assemble the cloud remotely from the RGBD data and the camera intrinsics." Could you give more details?

Also, have you checked the range of the data that was transferred?

Generally, the fields of view of the camera and the point sensor are different. If the "assembly" you mention is a process that projects points from the sensor coordinate frame into the image coordinate frame, then that is a normal issue.
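For reference, the assembly the question describes is usually the inverse of that projection: back-projecting each depth pixel through the pinhole model to get a 3D point. A minimal sketch, with assumed intrinsic values (a real node would read fx, fy, cx, cy from sensor_msgs/CameraInfo):

```python
# Sketch of "assembling" points from a depth image and camera
# intrinsics via the pinhole model. The intrinsics below are
# assumed placeholder values, not from any particular camera.
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5  # assumed intrinsics

def depth_to_points(depth_m):
    """Back-project an HxW depth image (metres) to an HxWx3 array."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx   # pixel column -> metric X
    y = (v - cy) * z / fy   # pixel row    -> metric Y
    return np.stack([x, y, z], axis=-1)

depth = np.full((480, 640), 1.0)  # e.g. a flat wall 1 m away
pts = depth_to_points(depth)
print(pts.shape)                  # (480, 640, 3)
```

The RGB image then just supplies the color for each (u, v), so no reprojection between differing fields of view is needed when depth and color are already registered.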

