MATLAB 16UC1 or 32FC1 Conversion

asked 2018-05-23 10:48:05 -0500

updated 2018-05-23 12:43:23 -0500

I have some image processing code in MATLAB and am attempting to use it with ROS. I read in a PointCloud2 object and execute readRGB(ptcloud) and readXYZ(ptcloud), which give me two MATLAB arrays, each of size 480x640x3.

Q1: I assume that a depth image of size NxMx3 gives the X, Y, and Z distances as its three channels. Is this correct?

I then execute my processing and need to send a depth image back, encoded as either 16UC1 or 32FC1. I have not found a built-in MATLAB function for this. I can write a conversion script myself, but I am unfamiliar with these formats.

Q2: What is the general algorithm to convert my NxMx3 matrix into either format?

My depth camera node publishes a depth image in the 16UC1 format. When I subscribe to that in MATLAB and execute readImage(ros_image), I get an image of size 480x640x1, but when I execute rostopic echo /ros_depth_image_topic I see the data given as "data: array: type uint8, length: 614400".

Q3: I notice this is 480 × 640 × 2, so why does MATLAB only read in a 480x640x1 image? How do 614,400 uint8 data points translate to only 307,200 pixels in MATLAB?

Thank you for any assistance offered!

================================================================================

Update 1: I went back over REP-118 -- Depth Images and understand that the distance in a depth image is measured along the camera's z-axis. Given that, why would MATLAB's readXYZ(ptcloud) function give me a 3-channel image when a depth image has only 1 channel? Also: the encoding list here states that 16UC1 supports int16, but I believe ROS wants uint16. When I get 2x the data represented as UInt8, does that just mean two 8-bit words are basically concatenated into one 16-bit data point? In other words, is each pixel represented as two UInt8 values in 16UC1?


1 Answer


answered 2018-05-23 18:44:50 -0500

Hopefully I can shed some light on this for you.

Q1: Reading a PointCloud2 using readXYZ(ptCloud) does not produce a depth image; it produces a structured point cloud. They are different data structures. Each pixel is either null (usually zero) or a Cartesian point in the sensor's optical frame. By convention, Z points into the view, X points right, and Y points down.

Q2: You're trying to convert a structured point cloud into a depth image. This is easy: you just throw away the X and Y data. The Z values are the depth; if you know the lens model and calibration parameters of the depth camera, you can always re-create the X and Y values later. To convert to unsigned 16-bit integers, use the uint16() function; to convert to 32-bit floating point, use the single() function.
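For concreteness, here is a minimal sketch of that conversion, assuming xyz is the 480x640x3 array returned by readXYZ(ptcloud), that 16UC1 depth is in millimeters (the common convention for Kinect-style cameras) and 32FC1 depth is in meters. I believe writeImage honors the message's Encoding field for these formats, but check your toolbox version:

    % xyz: 480x640x3 array from readXYZ(ptcloud); channel 3 is Z (meters)
    z = xyz(:,:,3);

    % 32FC1: single-precision floats, depth in meters
    depth32 = single(z);

    % 16UC1: unsigned 16-bit integers, conventionally depth in millimeters
    z_mm = z * 1000;
    z_mm(isnan(z_mm)) = 0;        % invalid (NaN) points become 0 = "no reading"
    depth16 = uint16(z_mm);       % note: uint16() rounds and saturates

    % Pack into a sensor_msgs/Image message (Robotics System Toolbox)
    msg = rosmessage('sensor_msgs/Image');
    msg.Encoding = '16UC1';       % or '32FC1' with writeImage(msg, depth32)
    writeImage(msg, depth16);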

Q3: The size of the data array is simply the number of bytes of data stored, and each pixel of your depth image requires two bytes. So there are 480 × 640 × 2 = 614,400 bytes in the message.
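To make the byte-packing concrete (this also answers your update), here is a hypothetical sketch showing how two consecutive uint8 values in the data array form one 16-bit pixel. ROS image data is little-endian unless the message's is_bigendian field says otherwise, which matches the native byte order typecast uses on typical x86 machines:

    % Hypothetical pair of bytes from the data array, little-endian:
    bytes = uint8([44 1]);              % 44 + 1*256 = 300

    % Reinterpret the two bytes as one unsigned 16-bit pixel value
    pixel = typecast(bytes, 'uint16');  % 300 (e.g. 300 mm = 0.3 m)

    % The same idea applied to a whole message (assuming msg is the
    % subscribed sensor_msgs/Image) -- this is what readImage does for you:
    % raw   = typecast(msg.Data, 'uint16');  % 614400 bytes -> 307200 values
    % depth = reshape(raw, 640, 480)';       % row-major data -> 480x640 array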

Re Update: As said, readXYZ() does not give you a depth image. Your last point is correct: each pixel is two UInt8's, otherwise known as bytes.

Hope this helps.


Comments

Thank you! I could not find any information about readXYZ and could only guess as to the meaning of the channels.

cmfuhrman (2018-05-23 19:24:34 -0500)
