How does rviz create a point cloud from a depth image?

asked 2021-09-07 02:21:16 -0500 by Bender_From_Futurama

updated 2022-03-25 17:58:34 -0500 by lucasw

I am using an Intel RealSense L515. In rviz I opened the topic /camera/depth/image_rect_raw. If you choose the DepthCloud display it is shown as a point cloud, and if you choose the Camera display it is shown as a depth image.

My question is: how does rviz generate a point cloud from a depth image? I inspected the topic and it is of type sensor_msgs/Image.

When using rostopic echo to view the contents of the message, the values appear to be 8 bits.

As far as I know, the RealSense returns a 16-bit depth image that is used to calculate the point cloud.

Furthermore, I calculated the point cloud from the 16-bit depth values myself and overlaid it on the one rviz created from the (apparently 8-bit) depth image... and they matched exactly.
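For reference, this is roughly the calculation I used for the 16-bit values (a minimal sketch in Python/numpy; I'm assuming the intrinsics fx, fy, cx, cy come from the matching camera_info topic and that the raw 16-bit values are in millimetres):

import numpy as np

def depth_to_points(depth_16u, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project a 16-bit depth image into an Nx3 point cloud in the camera frame.

    fx, fy, cx, cy are the pinhole intrinsics from the camera's CameraInfo (K matrix);
    depth_scale converts the raw units to metres (here: 1 unit = 1 mm, an assumption).
    """
    h, w = depth_16u.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))    # pixel coordinates
    z = depth_16u.astype(np.float32) * depth_scale    # depth in metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                   # drop pixels with no depth return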

Is there something I am missing? How can the point cloud be calculated this accurately from only 8-bit values?

I hope this makes sense. Thanks in advance


1 Answer


answered 2021-09-07 15:29:54 -0500 by Mike Scheutzow

updated 2021-09-08 07:43:24 -0500

The pixel depth of your image message is not necessarily 8 bits. You need to look at the metadata fields at the start of the Image message:

uint32 height
uint32 width
string encoding
uint8 is_bigendian
uint32 step
uint8[] data

To allow this message type to support many different formats, the data field is a binary blob that must be parsed using the metadata to turn it back into actual rectangular image data.
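For example, here is a rough sketch of that parsing in Python/numpy for a depth image (assuming one of the depth encodings the realsense driver typically publishes, '16UC1' or '32FC1'; this is only illustrative, not what rviz literally does internally):

import numpy as np

def decode_image_data(msg):
    """Turn the uint8[] data of a sensor_msgs/Image back into a 2D array using its metadata."""
    # assumed mapping for two common depth encodings
    dtype_map = {'16UC1': np.uint16, '32FC1': np.float32}
    dtype = np.dtype(dtype_map[msg.encoding])
    dtype = dtype.newbyteorder('>' if msg.is_bigendian else '<')
    # data holds height * step bytes; step is the row length in bytes and may include padding
    rows = np.frombuffer(bytes(msg.data), dtype=np.uint8).reshape(msg.height, msg.step)
    valid = np.ascontiguousarray(rows[:, :msg.width * dtype.itemsize])
    # regroup the bytes into the element type named by 'encoding'
    return valid.view(dtype).reshape(msg.height, msg.width)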

Update: the routines to do this parsing are provided by ROS. The first step for most applications is to convert the Image message into an OpenCV multi-dimensional array. See the cv2 and cv_bridge packages. In Python, the call looks like this:

cv_img = bridge.imgmsg_to_cv2(msg, 'bgr8')
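
For a depth image specifically, note that 'bgr8' is a color encoding; you would typically ask for 'passthrough' (or the image's own encoding, e.g. '16UC1') so the 16-bit values come back unchanged:

depth = bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')  # e.g. a uint16 array for 16UC1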

Comments

So does the encoding field determine what is in the data field, even though it is declared as uint8? Would you mind explaining this a bit more, or pointing me somewhere I can read up on how this works? The message is of type sensor_msgs/Image.

Bender_From_Futurama (2021-09-08 00:49:07 -0500)

Serialisation to a byte array does not mean deserialisation also results in (unsigned) bytes.

Multiple bytes can be taken together to form words (i.e. int16), or even wider integers or floats.

The uint8 array is just the final encoding of the buffer.

This is not ROS specific, btw.

The metadata referred to by @Mike Scheutzow helps the deserialiser figure out how many bytes to group together to get back the original values.
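
A toy illustration (not tied to any particular message):

import struct
# two consecutive bytes from the data field form one 16-bit value;
# is_bigendian selects the '>' (big) or '<' (little) format character
(value,) = struct.unpack('<H', bytes([0x2A, 0x01]))  # -> 0x012A = 298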

gvdhoorn (2021-09-08 02:46:29 -0500)

Thank you. I've done some digging into the data and I finally see what you meant by the metadata.

Bender_From_Futurama (2021-09-10 03:50:51 -0500)
