Convert coordinates 2d to 3D point, theoretical question

asked 2012-09-01 07:19:56 -0500 by Alberto Martín

updated 2014-04-20 14:09:29 -0500 by ngrennan

Hi, this is my first post.

I am a student and I'm doing a project with an ARDrone to follow a red ball.

I have a question about robotics and ROS. The ARDrone locates the red ball in the image, which gives me 2D coordinates, and I want to convert those 2D coordinates into a 3D point expressed in the robot's reference frame. I've been looking around the internet and I am somewhat confused.

Can anyone tell me where I can find the relevant theory, and which ROS packages could be used to implement it?

Thanks.


2 Answers


answered 2012-09-01 12:13:08 -0500 by jbohren

A monocular camera gives you a 2D projection of 3D space. This means that if you know the location (x, y) of something in the image, there are infinitely many possible locations in 3D space. Specifically, that 2D point (x, y) defines a line (a ray) in 3D space shooting out of the camera towards the object.

Mathematically, this projection is a linear operation in homogeneous coordinates, and can be represented as a 3x4 linear operator (matrix) normally called the camera projection matrix. Because this matrix is not square, it is not invertible (you can't just reverse the transform).
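To make the depth loss concrete, here is a small numpy sketch (the intrinsics are made-up numbers, not ARDrone values): every 3D point along the same ray projects to the same pixel, and a pixel back-projects only to a direction, not a point.

```python
import numpy as np

# Illustrative intrinsics (focal length in pixels, principal point) -- assumptions.
fx = fy = 700.0
cx, cy = 320.0, 240.0

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# 3x4 camera projection matrix P = K [I | 0] for a camera at the origin.
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

# Projecting any 3D point along the same ray gives the same pixel:
for depth in (1.0, 2.0, 5.0):
    X = np.array([0.5 * depth, 0.2 * depth, depth, 1.0])  # homogeneous 3D point
    u, v, w = P @ X
    print(u / w, v / w)  # same pixel (670.0, 380.0) for every depth

# Back-projection: a pixel (u, v) only determines a ray direction, not a depth.
u, v = 400.0, 300.0
ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
print(ray / np.linalg.norm(ray))
```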

For this reason, in order to determine which location along that line corresponds to the original point, you need more information. In the general case, this extra data comes from a second camera with a view of the point in question. Then the 3D point can be reconstructed from two 2D points that are known to correspond to the same 3D point. Such reconstruction is done using epipolar geometry.
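For completeness, two-view reconstruction can be sketched with the standard linear (DLT) triangulation method. The intrinsics and the 0.2 m stereo baseline below are made-up values for illustration, not real camera parameters:

```python
import numpy as np

# Two illustrative camera matrices: identical intrinsics, second camera
# shifted 0.2 m along x (a stereo baseline) -- assumed values.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

def project(P, X):
    """Project a 3D point to a pixel with camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)          # null space of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

X_true = np.array([0.4, 0.1, 2.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)  # recovers [0.4, 0.1, 2.0] with noise-free pixels
```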

Unfortunately, your AR drone only has a monocular camera, so stereo vision is not an option. However, since your red ball has non-zero area, one option is to estimate the distance to the ball by the apparent size of the ball in the image. You can derive this from the pinhole camera model equations and normal trigonometry.
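A minimal sketch of this size-based estimate, using the pinhole relation w/f = W/Z and back-projecting the pixel to get X and Y as well. The focal length, principal point, and ball diameter below are assumptions you would replace with your calibrated and measured values:

```python
# Pinhole model: apparent width w (pixels) of an object of real width W (m)
# at depth Z satisfies  w / f = W / Z,  so  Z = f * W / w.
f = 700.0              # focal length in pixels (assumed; get it from calibration)
cx, cy = 320.0, 240.0  # principal point (assumed)
ball_diameter = 0.10   # real ball diameter in metres (must be measured)

def ball_position(u, v, width_px):
    """Estimate the ball centre in the camera frame (metres)."""
    z = f * ball_diameter / width_px   # depth from apparent size
    x = (u - cx) * z / f               # back-project pixel to metric X
    y = (v - cy) * z / f
    return x, y, z

print(ball_position(460.0, 240.0, 35.0))  # -> (0.4, 0.0, 2.0)
```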

All you need for this is the focal length of your ARDrone's camera and the actual width of the ball. If you don't know the focal length, you can obtain it by running the ROS camera_calibration package on the images from your ARDrone.
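For reference, a typical monocular calibration invocation looks like the following; the checkerboard parameters and the ARDrone topic names are assumptions you should adjust to your setup:

```shell
# Monocular calibration with an 8x6 checkerboard of 108 mm squares.
# Adjust --size/--square to your board, and image:= to your image topic.
rosrun camera_calibration cameracalibrator.py \
    --size 8x6 --square 0.108 \
    image:=/ardrone/image_raw camera:=/ardrone
```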


answered 2019-01-22 01:58:30 -0500

In the case of a calibrated monocular camera, you can implement an extended Kalman filter (EKF) to estimate the 3D coordinates of the red ball. Your EKF should fuse the data from the available sensors, and it will accurately estimate the ball's position provided the state transition is modeled correctly.
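A minimal EKF sketch of this idea, assuming a hovering (static) target, an identity state transition, and made-up intrinsics and noise values. Note that with pixel measurements alone the depth is unobservable, which is why fusing additional sensors (or the ball's apparent size) matters in practice:

```python
import numpy as np

# Assumed intrinsics (not ARDrone values).
fx = fy = 700.0
cx, cy = 320.0, 240.0

def h(s):
    """Measurement model: project state [x, y, z] to a pixel."""
    x, y, z = s
    return np.array([fx * x / z + cx, fy * y / z + cy])

def H_jac(s):
    """Jacobian of h with respect to the state (needed because h is nonlinear)."""
    x, y, z = s
    return np.array([[fx / z, 0.0, -fx * x / z**2],
                     [0.0, fy / z, -fy * y / z**2]])

# Initial guess and assumed noise covariances.
s = np.array([0.0, 0.0, 2.0])
P = np.eye(3)
Q = np.eye(3) * 1e-4   # process noise
R = np.eye(2) * 4.0    # pixel measurement noise (std ~2 px)

def ekf_update(s, P, z_meas):
    P_pred = P + Q                         # predict: identity motion model
    Hj = H_jac(s)
    y = z_meas - h(s)                      # innovation
    S = Hj @ P_pred @ Hj.T + R
    K = P_pred @ Hj.T @ np.linalg.inv(S)   # Kalman gain
    s_new = s + K @ y
    P_new = (np.eye(3) - K @ Hj) @ P_pred
    return s_new, P_new

# Feed repeated measurements of a ball actually at (0.4, 0.0, 2.0):
true_pixel = h(np.array([0.4, 0.0, 2.0]))
for _ in range(50):
    s, P = ekf_update(s, P, true_pixel)
print(s)  # x, y converge near (0.4, 0.0); depth stays near the prior
```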


Stats

Seen: 5,846 times

Last updated: Sep 03 '12