
Align point clouds from multiple Kinects

asked 2013-05-04 03:52:00 -0500

Faizan A.

Hi, I am using two Kinects to capture an object. The cameras are rigidly attached to a surface and their poses are fixed. I have obtained the relative camera poses using the "camera_pose_calibration" package, calibrated on the RGB images. I want to visualize the point clouds from both Kinects in RViz, but the point clouds I obtain are not aligned. In some cases one of the point clouds is completely inverted with respect to the other when viewed with /world as the fixed frame in RViz; in other cases the alignment is simply not satisfactory.

How can I achieve reasonable alignment between the point clouds from multiple Kinects and visualize the result in RViz?

Additional info: I am using ROS Fuerte on Ubuntu 12.04. In RViz I am displaying PointCloud2, and depth registration is turned on, so the topics are /kinect1/depth_registered/points and /kinect2/depth_registered/points respectively.


Comments


Are you able to see both the /kinect1 and /kinect2 tf frames simultaneously?

georgebrindeiro ( 2013-05-05 12:07:00 -0500 )

Yes. I would upload the result of tf view_frames here, but I don't have enough karma, so I uploaded it on Dropbox: https://www.dropbox.com/s/cvn1nilsn089xyo/calibration_2Kinects.pdf

Faizan A. ( 2013-05-06 00:16:22 -0500 )

I think your problem is related to the fact that you have three separate tf trees. The point clouds are defined in terms of the depth_frame, but you only have the rgb_frames' position relative to world. It is reasonable to expect that this could be easily resolved, but unfortunately it isn't...

georgebrindeiro ( 2013-05-09 15:20:17 -0500 )

Ideally you would get the tf between /world and /kinectX_rgb_frame and things would connect in a graph-like form, but that's not possible currently: each child frame can have only one parent, as seen here: http://answers.ros.org/question/56646/relative-tf-between-two-cameras-looking-at-the-same-ar-m

georgebrindeiro ( 2013-05-09 15:22:31 -0500 )

That's true, and it can be verified: /world is chosen arbitrarily, and at times during calibration it happened to coincide perfectly with one of the /kinect frames, i.e. /world and /kinect1 had the same position and orientation. At that point the point clouds were perfectly aligned.

Faizan A. ( 2013-05-12 21:33:33 -0500 )

So do you have a workaround? How can I align multiple point clouds using camera_pose_calibration? As I understand it, you are trying to achieve similar results. Were you able to do it using multiple AR markers to estimate the camera poses relative to each other? Any help would be appreciated.

Faizan A. ( 2013-05-12 21:37:51 -0500 )

I haven't continued in that direction yet because I have been involved in some other developments in my lab, but you should be able to write a node that reads in all the clouds you are interested in merging, reads the relevant transforms, and publishes a single merged cloud on a different topic (a minimal sketch of such a node is given below).

georgebrindeiro ( 2013-05-13 08:22:01 -0500 )
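A minimal sketch of the merging node suggested in the comment above, assuming C++ with the Fuerte-era pcl_ros/PCL API. The node name merge_clouds and the output topic /merged_points are made up for illustration; the input topics are the ones from the question.

// merge_clouds.cpp (hypothetical name): subscribe to both Kinect clouds,
// transform them into /world via tf, concatenate, and republish as one cloud.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <tf/transform_listener.h>
#include <pcl_ros/transforms.h>        // pcl_ros::transformPointCloud
#include <pcl/common/io.h>             // pcl::concatenatePointCloud
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <boost/bind.hpp>

typedef message_filters::sync_policies::ApproximateTime<
    sensor_msgs::PointCloud2, sensor_msgs::PointCloud2> SyncPolicy;

tf::TransformListener* g_listener;
ros::Publisher g_pub;

void callback(const sensor_msgs::PointCloud2ConstPtr& c1,
              const sensor_msgs::PointCloud2ConstPtr& c2)
{
  sensor_msgs::PointCloud2 c1_world, c2_world, merged;

  // Transform both clouds into the common /world frame.
  if (!pcl_ros::transformPointCloud("/world", *c1, c1_world, *g_listener)) return;
  if (!pcl_ros::transformPointCloud("/world", *c2, c2_world, *g_listener)) return;

  // Concatenate and publish a single cloud stamped in /world.
  pcl::concatenatePointCloud(c1_world, c2_world, merged);
  merged.header.frame_id = "/world";
  g_pub.publish(merged);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "merge_clouds");
  ros::NodeHandle nh;

  tf::TransformListener listener;
  g_listener = &listener;
  g_pub = nh.advertise<sensor_msgs::PointCloud2>("/merged_points", 1);

  // Roughly synchronize the two input clouds before merging.
  message_filters::Subscriber<sensor_msgs::PointCloud2>
      sub1(nh, "/kinect1/depth_registered/points", 1),
      sub2(nh, "/kinect2/depth_registered/points", 1);
  message_filters::Synchronizer<SyncPolicy> sync(SyncPolicy(10), sub1, sub2);
  sync.registerCallback(boost::bind(&callback, _1, _2));

  ros::spin();
  return 0;
}

Displaying /merged_points in RViz with /world as the fixed frame then shows both clouds in a single display; this of course only works once both cameras' tf trees are actually connected to /world (see the answer below).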

1 Answer


answered 2013-05-21 23:54:38 -0500

Faizan A.

updated 2013-05-22 03:09:34 -0500

So I found a workaround and am posting it here in case it is useful to others.

First, calculate the camera poses using a temporary namespace for the cameras; e.g. I used kinect1_temp and kinect2_temp. Once the calibration file is saved, you will have transforms from /world to /kinect1_temp_rgb_optical_frame and /kinect2_temp_rgb_optical_frame. Now launch the tf publisher, and launch the cameras using the namespaces you want to use later, e.g. kinect1 and kinect2. Calculate the transform from kinect1_rgb_optical_frame to kinect1_link, and use this transform to link kinect1_temp_rgb_optical_frame and kinect1_link (and do the same for kinect2). The corresponding tf tree can be seen here.

Since a tf tree cannot contain cycles (each frame has exactly one parent), this is a workaround using dummy namespaces, and it assumes the camera positions remain constant.
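If you only need the transform once, rosrun tf tf_echo can print it and a static_transform_publisher in a launch file can re-publish it. The sketch below does the same thing in code, assuming C++ with tf and the frame names used above (the node name link_tf_trees is made up): it reads the driver's internal kinect1_rgb_optical_frame -> kinect1_link transform once and keeps re-broadcasting it with kinect1_temp_rgb_optical_frame as the parent, so the calibration tree and the driver tree become one connected tree.

// link_tf_trees.cpp (hypothetical name): read the driver's static
// kinect1_rgb_optical_frame -> kinect1_link transform once, then keep
// re-broadcasting it with kinect1_temp_rgb_optical_frame as the parent.
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "link_tf_trees");
  ros::NodeHandle nh;

  tf::TransformListener listener;
  tf::TransformBroadcaster broadcaster;
  tf::StampedTransform optical_to_link;

  // The driver's internal transform is static, so one lookup is enough.
  listener.waitForTransform("kinect1_rgb_optical_frame", "kinect1_link",
                            ros::Time(0), ros::Duration(10.0));
  listener.lookupTransform("kinect1_rgb_optical_frame", "kinect1_link",
                           ros::Time(0), optical_to_link);

  // Re-broadcast the same transform, parented to the calibrated temp frame.
  ros::Rate rate(20.0);
  while (ros::ok())
  {
    broadcaster.sendTransform(
        tf::StampedTransform(optical_to_link, ros::Time::now(),
                             "kinect1_temp_rgb_optical_frame", "kinect1_link"));
    rate.sleep();
  }
  return 0;
}

Run a second copy with the kinect2 frame names to connect the other camera's tree in the same way.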

Hope it helps someone.

