Generating and merging point clouds

asked 2016-10-26 05:45:25 -0500

maxerl

Hi there,

First of all, I'm using two Kinect2 cameras with iai_kinect2. They are calibrated and working, and I have a working TF tree, so I can display both cameras in RViz. I know about the time-of-flight interference problem, so that isn't the issue.

When I view the PointCloud2 topics, performance is only around 3 fps. That is why I switched to the DepthCloud plugin in RViz, which gives me up to 30 fps.

What I want to do now: merge both datasets into one. I have the CameraInfo, the TF tree, and the data. I wrote a new node that merges the two PointCloud2 topics, but its performance is very poor.

My idea is to build a new point cloud directly from the depth and color image topics, but I haven't been able to do it. I've googled, searched, and tested for four weeks now with no result, and I really don't know how to approach it. Would it be better to merge the two depth images first and then create the point cloud, or to merge the two point clouds?

I'm really confused right now. This is what I wanted to write (C++):

  1. A callback function receiving the depth image, color image, and CameraInfo.
  2. Use a TransformListener to transform into the world frame.
  3. Get OpenCV images via cv_bridge.
  4. Generate a Mat from the transformed depth image, colorized with the color image.
  5. Build a PointCloud2 and publish it.

I don't know where the CameraInfo has to be used, or where and how to combine the second camera's dataset.

Maybe I'm stupid, but I can't even get past step 3...

Thanks for reading, and I'm looking forward to a helpful answer.
