ROS Kinect OpenCV for depth information of a particular pixel in real-time. [Solved]
I followed this tutorial and now I'm able to do image processing with OpenCV in ROS, but I want some more guidance. I want to extract exact depth information using OpenCV, ROS, and a Kinect. I can now detect a ball, and I have the u,v pixel coordinates of that ball. How can I get the exact position of that ball with respect to the Kinect in 3 dimensions? I mean the x,y,z coordinates of the center of the ball, with my Kinect camera at the origin, for instance.
I hope you understood my problem.
Thank you. Waiting for your valuable reply.
Asked by PKumars on 2016-02-17 05:40:04 UTC
Answers
If you know the focal length of the camera and the depth (the distance from the camera to the object), you can use the pinhole camera model to convert pixel values to real-world coordinates.
The focal length of the camera can be found by exploring the camera_info topic that will be published (the value will be in pixels, and you may have to convert it to mm).
Assuming that you know the pixel coordinates of the region of interest, you can get the depth at those coordinates from the depth image published by the Kinect.
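The conversion described above can be sketched as follows. This is a minimal example, not the answerer's exact code: `fx`, `fy`, `cx`, `cy` are the intrinsics read from the camera_info topic (the K matrix), and the default values below are common Kinect placeholders, not values from this thread.

```python
# Sketch: convert a pixel (u, v) plus its depth reading into camera-frame
# X, Y, Z with the pinhole model. fx, fy, cx, cy come from camera_info;
# the defaults here are typical Kinect intrinsics used only as placeholders.
def pixel_to_point(u, v, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """depth is the Z distance in metres, as in a 32FC1 depth image."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return x, y, z
```

Note that the focal length is used here in pixels, so no conversion to mm is needed as long as `u`, `v`, `cx`, `cy` are also in pixels; the result comes out in the same units as `depth`.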
Asked by Willson Amalraj on 2016-02-17 21:33:08 UTC
Comments
Hello, I have the u,v coordinates of the pixel. I'm subscribing to two topics, "/camera/rgb/image_raw" and "/camera/depth_registered/points", and using two callback functions, but I don't know how to use u,v to extract the depth of that pixel. I mean, how do I transfer u,v from rgb_cb to depth_cb?
Asked by PKumars on 2016-02-18 05:41:59 UTC
I solved the previous problem. Now I want to get the exact coordinates of my specific object. I'm subscribing to /camera/rgb/image_color and /camera/depth/image_raw, and I have the u,v pixel coordinates from the RGB image. I followed this: http://nicolas.burrus.name/index.php/... , but couldn't get it working.
Asked by PKumars on 2016-02-24 06:59:49 UTC
If you subscribe to both /camera/rgb/image and /camera/depth_registered/image you should have two video streams (or maybe you want the image_rect topics). One is the standard RGB image; the other is the depth image that has been transformed into the same frame as the RGB image. A feature located at u,v in the RGB image should also exist at u,v in the depth image. From that information, you know u,v and the real-world Z distance. Then I'd recommend using the image_geometry package to calculate X and Y. In Python you could use something like image_geometry.PinholeCameraModel.projectPixelTo3dRay, or image_geometry.PinholeCameraModel.getDeltaX (and getDeltaY). If you are in C++ you could use the corresponding functions.
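A sketch of the projectPixelTo3dRay route: the ray math below mirrors what image_geometry's Python PinholeCameraModel computes from the camera_info intrinsics, written out in plain Python for illustration. In an actual node you would instead call `model.fromCameraInfo(msg)` and `model.projectPixelTo3dRay((u, v))`; the intrinsic values in the test are placeholders, not values from this thread.

```python
import math

def project_pixel_to_3d_ray(u, v, fx, fy, cx, cy):
    # Unit ray through pixel (u, v), matching what
    # PinholeCameraModel.projectPixelTo3dRay returns.
    x = (u - cx) / fx
    y = (v - cy) / fy
    norm = math.sqrt(x * x + y * y + 1.0)
    return (x / norm, y / norm, 1.0 / norm)

def ray_to_point(ray, depth_z):
    # Scale the unit ray so its Z component equals the depth reading
    # (the registered depth image stores Z distance, not range).
    scale = depth_z / ray[2]
    return tuple(c * scale for c in ray)
```

Reading the Z value at (u, v) from the registered depth image and scaling the ray with `ray_to_point` gives the X, Y, Z of the ball center in the camera frame.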
Asked by jarvisschultz on 2016-02-18 10:02:25 UTC
Comments
I'm trying to synchronize the two topics, but with no result. If I use two callbacks, how can I transfer u,v from one callback (rgb_callback) to the other (depth_callback)? I want to transfer it so that I can find the depth information. I hope you understand my point.
Asked by PKumars on 2016-02-18 14:29:08 UTC
If you need the callbacks synchronized, I'd recommend a Time Synchronizer from the message_filters package.
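A minimal sketch of that suggestion: one synchronized callback receives both images, so u,v never has to be handed between callbacks. The topic names follow the ones mentioned in this thread; the node name and the `depth_at` helper are illustrative, and `ApproximateTimeSynchronizer` is used rather than the exact `TimeSynchronizer` because the Kinect streams' stamps rarely match exactly.

```python
def depth_at(depth_image, u, v):
    """Depth at pixel (u, v); depth_image is a row-major 2D array,
    e.g. what cv_bridge's imgmsg_to_cv2 returns for a 32FC1 image."""
    return depth_image[int(v)][int(u)]

def synced_callback(rgb_msg, depth_msg):
    # Both messages arrive together with (nearly) matching stamps, so a
    # feature found at (u, v) in rgb_msg lines up with (u, v) in depth_msg.
    # Convert both with cv_bridge, detect the ball in the RGB image, then
    # read its depth with depth_at(depth_array, u, v).
    pass

def main():
    # ROS wiring kept inside main() so the helpers above are importable
    # without a ROS environment.
    import rospy
    import message_filters
    from sensor_msgs.msg import Image

    rospy.init_node('rgb_depth_sync')  # node name is illustrative
    rgb_sub = message_filters.Subscriber('/camera/rgb/image_color', Image)
    depth_sub = message_filters.Subscriber('/camera/depth_registered/image_raw',
                                           Image)
    # slop is the allowed stamp difference in seconds.
    ts = message_filters.ApproximateTimeSynchronizer(
        [rgb_sub, depth_sub], queue_size=10, slop=0.1)
    ts.registerCallback(synced_callback)
    rospy.spin()
```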
Asked by jarvisschultz on 2016-02-18 14:55:08 UTC
I tried to synchronize but I have some problem and I created a new Issue.
http://answers.ros.org/question/227059/why-rgb-and-depth-image-synchronization-not-working/
Asked by PKumars on 2016-02-18 15:32:10 UTC
I solved the previous problem. Now I want to get the exact coordinates of my specific object. I'm subscribing to /camera/rgb/image_color and /camera/depth/image_raw, and I have the u,v pixel coordinates from the RGB image. I followed this: http://nicolas.burrus.name/index.php/Research/KinectCalibration , but couldn't get it working.
Asked by PKumars on 2016-02-23 05:20:17 UTC