Frame of reference of depth sensors

asked 2016-07-04 05:55:23 -0500 by KamalDSOberoi

Hello everyone,

I am working on a project to do 3D reconstruction of an environment using an ASUS Xtion Pro, with the openni2_launch, openni2_camera, and rgbd_launch packages to get the data from the device drivers.
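
For reference, I bring up the driver roughly like this (the exact arguments may differ depending on your setup):

    roslaunch openni2_launch openni2.launch depth_registration:=true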

In rgbd_launch, there is a file kinect_frames.launch in which four frames with different orientations are defined: camera_rgb_frame, camera_depth_frame, camera_rgb_optical_frame, and camera_depth_optical_frame. There is also a camera_link frame to which all of these frames are attached, and the static transforms between them are defined in that launch file.
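
Since the link may not survive here, those static transforms look roughly like this (frame names simplified and values quoted from memory; the real file parameterizes the camera name):

    <node pkg="tf" type="static_transform_publisher" name="camera_base_link"
          args="0 0 0 0 0 0 camera_link camera_depth_frame 100" />
    <node pkg="tf" type="static_transform_publisher" name="camera_base_link1"
          args="0 -0.045 0 0 0 0 camera_link camera_rgb_frame 100" />
    <!-- the optical frames share their parent's origin but are rotated into
         z-forward / x-right / y-down (yaw -90 deg, roll -90 deg) -->
    <node pkg="tf" type="static_transform_publisher" name="camera_base_link2"
          args="0 0 0 -1.5707963 0 -1.5707963 camera_depth_frame camera_depth_optical_frame 100" />
    <node pkg="tf" type="static_transform_publisher" name="camera_base_link3"
          args="0 0 0 -1.5707963 0 -1.5707963 camera_rgb_frame camera_rgb_optical_frame 100" />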

My question is: when the device creates the point cloud, which of the above-mentioned frames is used as the reference? Likewise, when an image (RGB and/or depth) is acquired by the device, which frame is the reference frame?

As described in REP 103, the orientation of base_link / camera_link (x forward, y left, z up) is different from that of camera_rgb_optical_frame and camera_depth_optical_frame (z forward, x right, y down). Hence my confusion.

Any guidance is appreciated. Thanks in advance!


1 Answer

answered 2016-07-04 22:16:00 -0500 by ahendrix

The reference frame should be listed in the frame_id field in the header of each message.
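
For example, with the openni2_launch defaults (the topic name is an assumption; adjust it to whatever your driver publishes):

    $ rostopic echo -n 1 /camera/depth/points/header

This prints the stamp and frame_id of one PointCloud2 message.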


Comments

Hi, thanks for your answer. But speaking more generally than ROS: do you know if the manufacturer (Microsoft or ASUS) defines any convention for which reference frame the point cloud is expressed in?

KamalDSOberoi (2016-07-05 02:42:21 -0500)

Most depth sensors capture depth as the distance along the ray from the pixel on the image plane through the optical center of the lens. This is essentially a standard pinhole camera model, but reporting depth instead of light intensity or color.
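
As a sketch of that model (the intrinsic values here are made up; in ROS you would take them from the sensor's CameraInfo message):

    import numpy as np

    # Hypothetical intrinsics: focal lengths and principal point, in pixels.
    fx, fy = 570.3, 570.3
    cx, cy = 319.5, 239.5

    def back_project(u, v, z):
        """Back-project pixel (u, v) with depth z (meters) to a 3D point
        in the *_optical_frame convention (z forward, x right, y down)."""
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.array([x, y, z])

    # The image center at 1.5 m lies on the optical axis:
    print(back_project(319.5, 239.5, 1.5))  # -> [0.  0.  1.5]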

ahendrix (2016-07-05 03:06:41 -0500)

Some sensors (like the Kinect and the ASUS Xtion) appear to have low distortion, or they correct for lens distortion in hardware, but otherwise they don't usually transform the data into a different reference frame.

ahendrix (2016-07-05 03:08:07 -0500)

Image coordinates correspond to the *_optical_frame frames, and I believe they are used for the captured RGB and depth images. Sensor-relative coordinates correspond to camera_depth_frame and are used for the computed point cloud.

ahendrix (2016-07-05 03:10:30 -0500)

Note that the sensor sends a depth image over USB, and the ROS driver publishes that depth image in addition to converting it into a 3D point cloud and publishing that point cloud, too.
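
That conversion is handled by the depth_image_proc nodelets; wiring one up by hand looks roughly like this (the topic remappings assume the default openni2_launch namespaces):

    <node pkg="nodelet" type="nodelet" name="points_xyz"
          args="standalone depth_image_proc/point_cloud_xyz">
      <remap from="camera_info" to="/camera/depth/camera_info"/>
      <remap from="image_rect"  to="/camera/depth/image_rect_raw"/>
      <remap from="points"      to="/camera/depth/points"/>
    </node>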

ahendrix (2016-07-05 03:12:51 -0500)

IMO there should be a single reference frame in which the frames for the depth and RGB cameras are fixed (via static transforms) and in which the point cloud is displayed. What do you think?

KamalDSOberoi (2016-07-05 04:24:35 -0500)
