ROS image messages to OpenCV images: question about data encodings
Hi all,
I am converting depth images from ROS to OpenCV using this tutorial, and it is working well. However, I am using it with two different imagers, a Kinect v2 and a Stereolabs ZED. When I look at the cv_ptr->image from each of them, the scale is off by a factor of 1000 (i.e. the Kinect matrix contains values ~1000-5000, while the ZED's are ~1-5). I'd like to write imager-independent code, but I'm new to working with OpenCV and ROS in general; might this scale factor have something to do with how the ROS image messages are encoded, or is it simply inherent to the camera?
From their respective ROS wrappers, the Kinect sensor_msgs/Image has an encoding of 16UC1 and a step of 1920, while the ZED has an encoding of 32FC1 and a step of 5120. The data field is of course of type uint8[]. My thought is that one of the sensor_msgs/Image parameters might be responsible for the scaling difference when using CvBridge, and that I could republish or convert my ROS messages so the C++ code can be independent of the imager.
I apologize for the general question, but I'm quite stuck on possible next steps and I've exhausted Google searches and existing posts about using ROS with OpenCV. Any thoughts are greatly appreciated!
I am using Ubuntu 18.04 with ROS Melodic.