Robotics StackExchange | Archived questions

How to get the XYZ and RGB values of each pixel from a sensor_msgs/Image?

Hi,

The sensor_msgs/Image message contains its data as a blob. I am wondering how to acquire the XYZ and RGB values of each pixel of the image data (from the depth image and color image, respectively).

I am using an Intel RealSense D435 with ROS Kinetic.

Can somebody please help?

Asked by akcast on 2018-12-18 09:09:27 UTC

Comments

Answers

You'll want to use the depth_image_proc package. It can subscribe to the registered color and depth images from the camera and combine them into a color point cloud message (XYZRGB).

You very rarely have to deal with the blob data in messages directly; there is a range of converters and methods built into ROS that let you access the formatted data directly. Hope this gets you moving in the right direction.
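For example, a color image can be converted to a numpy array with cv_bridge and then indexed per pixel. This is a minimal Python sketch; the topic name is an assumption for the usual RealSense driver setup:

```python
#!/usr/bin/env python
# Minimal sketch: read the RGB values of individual pixels from a
# sensor_msgs/Image by converting it to a numpy array with cv_bridge.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def color_callback(msg):
    # "bgr8" yields an HxWx3 uint8 array in OpenCV's channel order.
    img = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    b, g, r = img[240, 320]  # pixel at row 240, column 320
    rospy.loginfo("RGB at (320, 240): %d %d %d", r, g, b)

if __name__ == "__main__":
    rospy.init_node("pixel_reader")
    # Topic name is an assumption; the D435 driver usually publishes the
    # color stream on /camera/color/image_raw.
    rospy.Subscriber("/camera/color/image_raw", Image, color_callback)
    rospy.spin()
```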

Asked by PeteBlackerThe3rd on 2018-12-19 05:45:36 UTC

Comments

Hi, thanks for your answer. Using depth_image_proc/point_cloud_xyzrgb, I would get a PointCloud2 message which I can visualise in RViz as a point cloud. But what I am trying to do is get the XYZ and RGB values for each point in the PointCloud2 message. Would that also be possible using depth_image_proc?

Asked by akcast on 2018-12-19 05:58:04 UTC

Yes. You would use depth_image_proc to create a new PointCloud2 topic. Your node would then subscribe to this topic and be able to access the XYZ and RGB values of each point.
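For example, with sensor_msgs.point_cloud2.read_points in Python (a minimal sketch; the topic name is an assumption and depends on how depth_image_proc is launched):

```python
#!/usr/bin/env python
# Minimal sketch: read per-point XYZ and RGB from the PointCloud2
# published by depth_image_proc.
import struct
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def cloud_callback(msg):
    for x, y, z, rgb in pc2.read_points(msg, field_names=("x", "y", "z", "rgb"),
                                        skip_nans=True):
        # The "rgb" field is a float32 whose bytes pack the three color
        # channels (assumes a little-endian machine here).
        b, g, r, _ = struct.unpack("BBBB", struct.pack("f", rgb))
        rospy.loginfo("x=%.3f y=%.3f z=%.3f r=%d g=%d b=%d", x, y, z, r, g, b)
        break  # log only the first point to keep the output readable

if __name__ == "__main__":
    rospy.init_node("cloud_reader")
    # Topic name is an assumption; it depends on the launch remappings.
    rospy.Subscriber("/camera/depth_registered/points", PointCloud2, cloud_callback)
    rospy.spin()
```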

Asked by PeteBlackerThe3rd on 2018-12-19 06:26:27 UTC

Thanks a lot, I was able to get the field values from the PointCloud2.

For performance reasons (as I am using ROS# to communicate these messages over a websocket to Unity), is there a way to get XYZ for each pixel from the depth image and RGB for each pixel from the color image separately?

Asked by akcast on 2018-12-19 07:21:43 UTC
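One way to do that in Python is to back-project the registered depth image with the intrinsics from its CameraInfo message, without building a point cloud at all. This is a minimal sketch; the topic names and the 16UC1 millimetre depth encoding are assumptions based on a typical RealSense D435 setup:

```python
#!/usr/bin/env python
# Sketch: compute XYZ per pixel from a registered depth image using the
# camera intrinsics, without constructing a full PointCloud2 message.
import rospy
import numpy as np
import message_filters
from sensor_msgs.msg import Image, CameraInfo
from cv_bridge import CvBridge
from image_geometry import PinholeCameraModel

bridge = CvBridge()
model = PinholeCameraModel()

def callback(depth_msg, info_msg):
    model.fromCameraInfo(info_msg)
    # RealSense depth is typically 16UC1 in millimetres; convert to metres.
    depth = bridge.imgmsg_to_cv2(depth_msg, desired_encoding="passthrough")
    z = depth.astype(np.float32) / 1000.0
    h, w = z.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
    x = (u - model.cx()) * z / model.fx()
    y = (v - model.cy()) * z / model.fy()
    xyz = np.dstack((x, y, z))  # HxWx3 array, one XYZ triple per pixel
    rospy.loginfo("XYZ at centre pixel: %s", xyz[h // 2, w // 2])

if __name__ == "__main__":
    rospy.init_node("depth_to_xyz")
    # Topic names are assumptions for the registered D435 streams.
    depth_sub = message_filters.Subscriber("/camera/aligned_depth_to_color/image_raw", Image)
    info_sub = message_filters.Subscriber("/camera/color/camera_info", CameraInfo)
    ts = message_filters.ApproximateTimeSynchronizer([depth_sub, info_sub], 10, 0.1)
    ts.registerCallback(callback)
    rospy.spin()
```

The RGB values can then be taken straight from the color image (as in the cv_bridge sketch above), since the aligned depth image shares the color camera's pixel grid.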