
ROS image messages to OpenCV images: question about data encodings

asked 2019-11-14 18:42:14 -0500

shuttlebug

Hi all,

I am converting depth images from ROS to OpenCV using this tutorial, and it is working well. However, I am using it with two different imagers, a Kinect v2 and a Stereolabs ZED. When I look at the cv_ptr->image from each of these, the scale is off by a factor of 1000 (i.e. the Kinect matrix contains values of ~1000-5000, while the ZED's are ~1-5). I'd like to write imager-independent code, but I'm new to working with OpenCV and ROS in general; might this scale factor have something to do with how the ROS image messages are encoded, or is it simply inherent to the camera?

From their respective ROS wrappers, the Kinect's sensor_msgs/Image has an encoding of 16UC1 and a step of 1920, while the ZED's has an encoding of 32FC1 and a step of 5120. The data array is of course of type uint8[]. My thought is that one of the sensor_msgs/Image parameters might be responsible for the scaling difference when using CvBridge, and that I could republish or convert my ROS messages so that the C++ code can be independent of the imager.
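A minimal sketch of that idea (this helper is hypothetical, my own illustration, not part of cv_bridge or OpenCV): if the driver publishes 16UC1, the values are assumed to be millimetres and get divided by 1000, so downstream code sees the same units as a 32FC1 (metres) image.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: normalise a 16UC1 depth buffer (assumed to be in
// millimetres) to metres, matching what a 32FC1 image already contains.
std::vector<float> depth_mm_to_metres(const std::vector<uint16_t>& mm_pixels) {
    std::vector<float> metres;
    metres.reserve(mm_pixels.size());
    for (uint16_t mm : mm_pixels) {
        metres.push_back(static_cast<float>(mm) / 1000.0f);  // 1000 mm = 1 m
    }
    return metres;
}
```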

I apologize for the general question, but I'm quite stuck on possible next steps, and I've exhausted Google searches and posts about using ROS and OpenCV. Any thoughts are greatly appreciated!

I am using Ubuntu 18.04 with ROS Melodic.


1 Answer


answered 2019-11-15 03:42:59 -0500

gvdhoorn

updated 2019-11-15 03:56:04 -0500

Edit: Ah, it looks like both drivers are actually doing things right; see REP 118: Depth Images. INT encodings are to be paired with millimetres, and FLOAT encodings with metres.

And depth_image_proc/convert_metric is a nodelet that can be used to convert between the different encodings.
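A minimal launch-file sketch of that nodelet (the manager name and the Kinect topic remappings here are assumptions; adjust them to your driver's actual topics). convert_metric subscribes to a uint16 depth image in millimetres on image_raw and republishes it as a float image in metres on image:

```xml
<launch>
  <!-- Nodelet manager to host the conversion nodelet -->
  <node pkg="nodelet" type="nodelet" name="depth_manager" args="manager"/>

  <!-- depth_image_proc/convert_metric: 16UC1 (mm) in, 32FC1 (m) out -->
  <node pkg="nodelet" type="nodelet" name="convert_metric"
        args="load depth_image_proc/convert_metric depth_manager">
    <remap from="image_raw" to="/kinect2/hd/image_depth_rect"/>
    <remap from="image"     to="/kinect2/hd/image_depth_rect_metric"/>
  </node>
</launch>
```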

Original answer: I haven't checked at all (I'm not into computer vision), so take this with a bag of salt, but: the ratio you mention (1000) seems like it could be the difference between millimetres and metres. ROS uses metres for distances (REP 103), so it would make sense for depth images to use metres for distance-per-pixel as well. I don't know whether that is actually the case (i.e. whether REP 103 is adhered to in that context), but if it is, the ZED's values would seem to be OK, while the Kinect's seem off.

Again: it could be the other way around (depth images are supposed to use mm, not metres), in which case the converse of what I wrote above would be true.
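As a quick sanity check either way (purely a heuristic of my own, not from either REP): depth cameras rarely see beyond ~10 m, so the magnitude of the values alone usually gives the units away.

```cpp
// Heuristic: a max depth value above ~100 would mean more than 100 metres
// if the units were metres, which is implausible for a depth camera, so
// such values are almost certainly millimetres.
bool looks_like_millimetres(double max_depth_value) {
    return max_depth_value > 100.0;
}
```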



Related Q&As: #q209450, #q295611, #q315122 and #q318519.

gvdhoorn (2019-11-15 03:46:30 -0500)

Thank you so much for the response! I'll give your referenced nodelet a shot in a couple days and report back with how it goes.

shuttlebug (2019-11-15 18:22:42 -0500)

