How to retrieve XYZ co-ordinates from a raw depth image?

asked 2014-08-13 12:45:25 -0600 by neb42

updated 2014-08-13 13:00:44 -0600

I am taking a depth image from /camera/depth_registered/image_raw and thresholding it to find objects within a certain range. I would like to find the XYZ co-ordinates of these objects in order to calculate their velocities. I'm trying to avoid using PCL, as it gave me a lot of noise when I tried to use it to detect the obstacles. I've been told that I could find the co-ordinates using tf, but I have no idea what to do with it.


Comments

I've merged your questions; please don't ask duplicate questions.

ahendrix (2014-08-13 12:59:13 -0600)

Sorry, I opened a new account and thought my first one didn't post. That wasn't clear.

neb42 (2014-08-13 13:00:32 -0600)

2 Answers


answered 2014-08-13 22:05:24 -0600 by sai

link 1: https://github.com/ccny-ros-pkg/ccny_...

link 2: https://github.com/ccny-ros-pkg/ccny_...

In link 1, the function takes a depth image and the camera intrinsic parameters of a depth camera (Kinect or Asus) and creates a point cloud from it. fx, fy, cx, cy are the camera parameters.

In the second link, the function additionally takes an RGB image and produces a colored point cloud.
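For reference, here is a minimal sketch of the back-projection those functions perform, not the exact code from the links: it assumes a 16UC1 depth image in millimetres (as typically published on /camera/depth_registered/image_raw) and known depth intrinsics fx, fy, cx, cy.

    #include <cstdint>
    #include <limits>
    #include <opencv2/core/core.hpp>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    // Back-project a raw depth image (CV_16UC1, millimetres) into a point
    // cloud using the pinhole model. fx, fy, cx, cy are the depth camera
    // intrinsics.
    pcl::PointCloud<pcl::PointXYZ> depthToCloud(const cv::Mat& depth,
                                                double fx, double fy,
                                                double cx, double cy)
    {
      pcl::PointCloud<pcl::PointXYZ> cloud;
      cloud.width    = depth.cols;
      cloud.height   = depth.rows;
      cloud.is_dense = false;                       // cloud may contain NaNs
      cloud.points.resize(depth.cols * depth.rows);

      for (int v = 0; v < depth.rows; ++v)
      {
        for (int u = 0; u < depth.cols; ++u)
        {
          uint16_t d = depth.at<uint16_t>(v, u);    // depth in mm; 0 = no reading
          pcl::PointXYZ& pt = cloud.points[v * depth.cols + u];
          if (d == 0)
          {
            pt.x = pt.y = pt.z = std::numeric_limits<float>::quiet_NaN();
            continue;
          }
          float z = d * 0.001f;                     // mm -> metres
          pt.x = (u - cx) * z / fx;                 // pinhole back-projection
          pt.y = (v - cy) * z / fy;
          pt.z = z;
        }
      }
      return cloud;
    }

Note the row-major indexing: for an organized cloud of width w, the point for pixel (u, v) is stored at cloud.points[v*w + u].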

Good luck


Comments

In the first link, what would I have to subscribe to in order to get the const cv::Mat& intr_rect_ir? Is that the rectified IR image?

neb42 (2014-08-20 13:11:25 -0600)

If you look closely, in the next few lines of code cx, cy, fx and fy are extracted from intr_rect_ir, which is basically the matrix representation of the camera parameters. If you know the camera parameters, you can either hard-code them directly or subscribe to the camera_info topic from the Kinect and copy the parameters into an OpenCV Mat. It would be best to use calibrated parameters for the Kinect's IR depth camera; failing that, all Kinects share essentially the same default IR camera matrix.

sai (2014-08-20 17:11:33 -0600)
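A minimal sketch of that camera_info approach; the topic name /camera/depth/camera_info is an assumption here, so check rostopic list for the actual name on your setup.

    #include <ros/ros.h>
    #include <sensor_msgs/CameraInfo.h>

    // Read fx, fy, cx, cy from the depth camera's camera_info topic.
    void infoCallback(const sensor_msgs::CameraInfoConstPtr& msg)
    {
      // K is the 3x3 intrinsic matrix in row-major order:
      //   [fx  0 cx]
      //   [ 0 fy cy]
      //   [ 0  0  1]
      const double fx = msg->K[0];
      const double fy = msg->K[4];
      const double cx = msg->K[2];
      const double cy = msg->K[5];
      ROS_INFO("fx=%f fy=%f cx=%f cy=%f", fx, fy, cx, cy);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "camera_info_reader");
      ros::NodeHandle nh;
      ros::Subscriber sub =
          nh.subscribe("/camera/depth/camera_info", 1, infoCallback);
      ros::spin();
      return 0;
    }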

In another part of my code I am using the RGB camera stream and therefore cannot subscribe to both the RGB and IR streams. Would it still give me the same point cloud if I used the RGB camera info?

neb42 (2014-08-21 08:49:43 -0600)

No, you cannot use it like that. The RGB camera parameters are different from the depth camera parameters, and both are constant. So instead of subscribing to the camera parameters, you can simply hard-code them into the program.

sai (2014-08-21 10:09:42 -0600)

When I checked the IR camera info, it was exactly the same as the RGB info. Is this possible?

neb42 (2014-08-22 08:44:42 -0600)

Maybe you are looking at something wrong. To see that the camera parameters are indeed different, have a look at this link: http://vision.in.tum.de/data/datasets...

In the sections "Calibration of the color camera" and "Calibration of the infrared camera", you can see that the camera parameters differ.

sai (2014-08-22 10:50:30 -0600)

Sir, what does this line from the link you've given mean: PointT& pt = cloud.points[v*w + u];? I was doing something similar, which is how I got here, but I didn't understand that line; the rest is clear.

dinesh (2016-06-24 15:44:26 -0600)

answered 2014-08-14 02:24:25 -0600 by NEngelhard

I think you are mixing up two things. PCL can be used to get the position of an object in your point cloud. tf gives you an easy way to create trees of coordinate systems and to compute relative positions between frames in your tree. Your problem could be solved by using PCL to find the objects, inserting their positions as tf frames into the tf tree, and then using the tf functions to compute their velocities. (tf only stores positions, so you'd have to compute the velocity yourself.)
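As a rough sketch of the broadcast step, here is one way to publish a detected object's position as a tf frame; the frame names and the object position below are placeholders, not values from this thread.

    #include <ros/ros.h>
    #include <tf/transform_broadcaster.h>

    // Publish a detected object's position as a tf frame so that relative
    // positions (and, by differencing over time, velocities) can be looked
    // up against any other frame in the tree.
    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "object_tf_broadcaster");
      ros::NodeHandle nh;
      tf::TransformBroadcaster br;
      ros::Rate rate(30.0);

      while (nh.ok())
      {
        tf::Transform transform;
        transform.setOrigin(tf::Vector3(0.5, 0.0, 1.2));   // object XYZ in metres
        transform.setRotation(tf::Quaternion(0, 0, 0, 1)); // identity rotation
        br.sendTransform(tf::StampedTransform(transform, ros::Time::now(),
                                              "camera_depth_optical_frame",
                                              "object_1"));
        rate.sleep();
      }
      return 0;
    }

A second node can then call tf::TransformListener::lookupTransform at two timestamps and divide the position difference by the time delta to estimate the velocity.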


Comments

Hello, would you explain how to read the x, y, z co-ordinates or position of an object in a point cloud?

Karz (2015-09-14 02:27:07 -0600)
