
Stereo Vision Distance from Disparity Accuracy Measurement Testing

asked 2013-05-07 11:06:55 -0500

Gaviria R

updated 2013-05-07 11:07:45 -0500

Hi,

So I have successfully managed to calibrate my two stereo cameras and they are running through stereo_image_proc. I am able to view the disparity image stream using the image_view node. Before moving on to any mapping, I would like to run an experiment on the disparity images to check the accuracy and range of the distances computed from the values generated by the stereo_image_proc node.

I would like to take a series of images of a textured box positioned at different known distances over a particular range. What would be the best approach for me to calculate the depth? Here are my ideas, but I don't know whether they are right or not:

  1. Write a C++ subscriber to the stereo_msgs::DisparityImage topic which takes the disparity image and the f (focal length) and T (baseline) parameters, converts the disparity image into OpenCV format so that I can calculate the depth for each pixel (disparity) value using the formula z = fT/d, and then maps the calculated distances onto an array or an OpenCV image which can then be indexed to extract a specific distance (see the first sketch after this list).

  2. Write a C++ subscriber that extracts the z coordinate value from either the sensor_msgs::PointCloud or sensor_msgs::PointCloud2 topic for a corresponding x and y pixel coordinate (see the second sketch after this list).
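
For idea 1, a minimal sketch of such a subscriber (ROS 1, C++) could look like the following. The topic name /stereo/disparity and the node name are assumptions for illustration; adjust them to the namespace your stereo_image_proc is running in. The f and T values come directly from the stereo_msgs::DisparityImage message.

```cpp
// Minimal sketch: subscribe to a DisparityImage, convert to OpenCV, and
// compute per-pixel depth with z = f * T / d. Topic name is an assumption.
#include <ros/ros.h>
#include <stereo_msgs/DisparityImage.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/core/core.hpp>

void disparityCallback(const stereo_msgs::DisparityImageConstPtr& msg)
{
  // The disparity image is 32-bit float, one disparity value per pixel.
  cv::Mat disparity = cv_bridge::toCvCopy(msg->image, "32FC1")->image;

  const float f = msg->f;  // focal length in pixels
  const float T = msg->T;  // baseline in metres

  // Depth for every valid pixel: z = f * T / d
  cv::Mat depth(disparity.size(), CV_32FC1, cv::Scalar(0));
  for (int v = 0; v < disparity.rows; ++v)
  {
    for (int u = 0; u < disparity.cols; ++u)
    {
      const float d = disparity.at<float>(v, u);
      if (d > 0.0f)  // treat non-positive disparities as invalid/unmatched
        depth.at<float>(v, u) = f * T / d;
    }
  }

  // Example: depth at the image centre, e.g. the middle of the textured box.
  ROS_INFO("depth at centre: %.3f m",
           depth.at<float>(depth.rows / 2, depth.cols / 2));
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "disparity_depth_node");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/stereo/disparity", 1, disparityCallback);
  ros::spin();
  return 0;
}
```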
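
For idea 2, a second sketch, assuming the organized cloud that stereo_image_proc publishes on points2 (here /stereo/points2, a placeholder name). Because the cloud is organized with the same width and height as the image, the point for pixel (u, v) sits at index v * width + u, so you can read its z directly.

```cpp
// Minimal sketch: read the z value of a single pixel from the organized
// PointCloud2 produced by stereo_image_proc. The pixel (u, v) chosen here
// (the image centre) is a placeholder for the point you want to measure.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud2_iterator.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& cloud)
{
  const int u = cloud->width / 2;   // column of the pixel of interest
  const int v = cloud->height / 2;  // row of the pixel of interest

  // Organized cloud: point for pixel (u, v) is at index v * width + u.
  sensor_msgs::PointCloud2ConstIterator<float> iter_z(*cloud, "z");
  iter_z += v * cloud->width + u;

  // May be NaN (or a very large value) where no disparity was found.
  ROS_INFO("z at (%d, %d): %.3f m", u, v, *iter_z);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pointcloud_depth_node");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/stereo/points2", 1, cloudCallback);
  ros::spin();
  return 0;
}
```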

Apologies if this has already been discussed, but I've looked around and haven't found a solution; or maybe I'm going about it completely the wrong way. I would really appreciate some help.

Thanks


1 Answer


answered 2014-10-07 15:38:02 -0500

benabruzzo

I have done something similar. Instead of using the auto-generated depth map, I wrote some of my own image processing to look for markers that I know will be in the image. Then, using the coordinates of the center pixel of each marker in both synchronized frames, I calculate the distance using triangulation.

The C++ node that I wrote subscribes to the images themselves.
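
For a rectified pair, that triangulation step reduces to the same z = fB/d relation as above. A minimal sketch, assuming the marker's center pixel has already been located in each rectified image and that f (pixels) and B (metres) come from your calibration:

```cpp
// Depth of a matched feature from its x-coordinates in the rectified
// left and right images (assumed already found by your marker detector).
double depthFromMatchedPixels(double x_left, double x_right,
                              double f, double B)
{
  const double d = x_left - x_right;  // horizontal disparity in pixels
  if (d <= 0.0)
    return -1.0;                      // no valid match
  return f * B / d;                   // z = f * B / d
}
```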



Stats

Asked: 2013-05-07 11:06:55 -0500

Seen: 2,153 times

Last updated: Oct 07 '14