Robotics StackExchange | Archived questions

Point Cloud Stitching and Processing

I have point clouds from two RealSense D415 cameras, which are mounted so that their fields of view overlap. I am able to stitch the output point clouds in real time into a single, larger-FOV point cloud by using PCL's ICP to find the transform and then transforming one point cloud to match the other. Now I would like to detect planes in this output point cloud and then detect people with the help of the detected planes and the ground plane. I have implemented this with the PCL libraries as well.
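For reference, the core of the merge step described above (apply the rigid transform ICP estimated, then concatenate the clouds) can be sketched outside PCL in plain NumPy. The transform `T` below is a made-up stand-in for whatever matrix ICP converges to, and the two tiny clouds are illustrative only:

```python
import numpy as np

def apply_rigid_transform(points, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) array of points."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (homo @ T.T)[:, :3]

# Hypothetical transform (identity rotation, 0.5 m shift along x),
# standing in for the matrix ICP would estimate between the cameras.
T = np.eye(4)
T[0, 3] = 0.5

cloud_a = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
cloud_b = np.array([[-0.5, 0.0, 1.0]])  # same scene, second camera's frame

# Stitch: bring cloud_b into cloud_a's frame, then concatenate.
stitched = np.vstack([cloud_a, apply_rigid_transform(cloud_b, T)])
```

Note that plain concatenation is exactly why the result is unorganized: the output is just a flat list of points with no camera grid behind it.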

Now issues arise in two cases: (1) The final stitched output is unorganized, so many PCL functions cannot use the point cloud. To overcome this, say I resize my point cloud so that its final dimensions have a height other than 1; then I run into (2).

(2) When I pass in the resized point cloud, the normal-estimation algorithm I am using reports the error "Input not from a projective device, using only XXXX points" (XXXX being a very small fraction of the total number of points in the cloud). Every other available normal-estimation algorithm is far too slow to be usable in a real-time application.
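As far as I can tell, that "projective device" check is specific to PCL's integral-image normal estimation, which assumes an organized cloud produced by a single camera. The usual fallback for unorganized clouds is PCA over each point's neighborhood: the normal is the eigenvector of the neighborhood covariance with the smallest eigenvalue (the idea behind `pcl::NormalEstimation`, minus the k-d tree neighbor search). A minimal NumPy sketch, with a hypothetical planar neighborhood as input:

```python
import numpy as np

def pca_normal(neighborhood):
    """Estimate a surface normal for one point as the eigenvector of its
    neighborhood's covariance matrix with the smallest eigenvalue."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # direction of least variance

# Points scattered on the z = 0 plane: the normal should be +/- z.
plane = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
n = pca_normal(plane)
```

The real-time cost lies almost entirely in the neighbor search, not in this eigendecomposition, which is why the organized (integral-image) variant is so much faster.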

Any ideas how to proceed with this? Happy to provide more information.

Asked by Pran-Seven on 2022-10-28 04:41:31 UTC

Comments

I've closed this as it's not really a ROS question. We use PCL with/in ROS, but PCL is not part of ROS.

Questions like these should be posted to a more appropriate venue. The main PCL site these days is pointclouds.org.

Asked by gvdhoorn on 2022-10-29 01:14:06 UTC

Hi, sorry about that. I would also like to know whether there are methods other than PCL for robustly stitching point clouds while keeping them organized; I would love some help with that as well.

Asked by Pran-Seven on 2022-10-29 01:16:07 UTC

@Pran-Seven if you know the mounted position of the cameras, why can't you transform all the points into the same frame? Why do you need a more complicated stitching algorithm?

Asked by Mike Scheutzow on 2022-10-29 11:04:11 UTC

Just transforming all the points would not account for the overlap, right? Currently, before applying the stitching, I am indeed transforming them to the base link, which gives me a bigger point cloud, yes, but it does not merge the repeated objects.

Asked by Pran-Seven on 2022-10-31 02:52:53 UTC

The meaning of "repeated objects" is not clear to me. A pointcloud consists of a set of infinitely-small points. Why should your processing care if two points happen to be rather close to each other in 3d-space?

Asked by Mike Scheutzow on 2022-11-01 12:12:13 UTC

Hey, thank you for your reply. By repeated points, I mean the objects/scenes seen by both cameras because of the overlap. This seems to cause an issue: there appear to be two layers of points in the same place. I am attaching an image for reference. What seems to be happening is that the nearest-neighbour algorithm fails with the error "Input not from a projective device" because there are two layers in the same region. This double layer is certainly caused by an error in my align-and-merge step, but this is the best transformation matrix I have been able to achieve, and since the scene will not stay the same, I cannot hard-code a fix to avoid the error. Now I am wondering if I can somehow delete these common points from the second point cloud before merging them.
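Deleting the common points before merging can be done with a radius test: keep a point from the second cloud only if it is farther than some tolerance from every point in the first. A brute-force NumPy sketch (the `radius` value is a made-up tolerance; in PCL a k-d tree such as `pcl::KdTreeFLANN` would replace the O(N·M) distance matrix for real-time use):

```python
import numpy as np

def remove_overlap(base, incoming, radius=0.01):
    """Drop points of `incoming` that lie within `radius` (metres) of any
    point in `base`. Brute-force for clarity; use a k-d tree at scale."""
    d2 = ((incoming[:, None, :] - base[None, :, :]) ** 2).sum(axis=2)
    keep = d2.min(axis=1) > radius ** 2
    return incoming[keep]

base = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
incoming = np.array([[0.001, 0.0, 1.0],   # duplicate of a base point: dropped
                     [2.0, 0.0, 1.0]])    # genuinely new point: kept
merged = np.vstack([base, remove_overlap(base, incoming)])
```

The tolerance has to exceed the residual misalignment left by ICP, or the double layer survives the filter.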

Asked by Pran-Seven on 2022-11-01 12:24:35 UTC

@Pran-Seven: There are various filters in PCL. You can apply the VoxelGrid filter to eliminate such points. Thanks to all the great developers in ROS, you don't need to write any code to do it: you can just write a launch file. Please see the following tutorial: http://wiki.ros.org/pcl_ros/Tutorials/VoxelGrid%20filtering

Asked by ravijoshi on 2022-11-02 08:01:14 UTC

@ravijoshi thanks for your response! I did try voxel-grid downsampling, but owing to the large dataset size I could only get it working with a leaf size of 0.1f, and at that leaf size the output point cloud is too coarse for my further processing.
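For anyone unfamiliar with the trade-off being discussed: a voxel grid replaces all points falling in the same cube of edge length `leaf` with their centroid, so a large leaf (0.1 m here) merges the double layer but also destroys fine detail. A NumPy sketch of the same idea as `pcl::VoxelGrid`, with illustrative points:

```python
import numpy as np

def voxel_downsample(points, leaf):
    """Replace all points in each voxel of edge length `leaf` with their
    centroid -- the idea behind pcl::VoxelGrid, sketched in NumPy."""
    idx = np.floor(points / leaf).astype(np.int64)         # voxel index per point
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)                       # sum points per voxel
    return sums / counts[:, None]                          # centroid per voxel

pts = np.array([[0.01, 0.0, 0.0],   # these two share a voxel at leaf=0.1
                [0.02, 0.0, 0.0],
                [0.50, 0.0, 0.0]])  # this one is alone in its voxel
down = voxel_downsample(pts, leaf=0.1)
```

With a leaf smaller than the layer separation (e.g. 0.01 at a 0.05 m misalignment), duplicate points land in different voxels and nothing is merged, which matches the behaviour reported below.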

Asked by Pran-Seven on 2022-11-04 03:07:57 UTC

"owing to the large dataset size ..."

What does that mean?

It seems you have two RealSense D415 cameras. A while back, I combined point clouds from 3 Kinect v2 cameras without any issues, using C++ on an Intel i9 processor with 32 GB RAM. You may check the following: https://github.com/ravijo/multiple_kinect_baxter_calibration/blob/c7f4fb18ef9d7a88b15a9a7398b1ce524a7c7565/src/merge_pc.cpp

Asked by ravijoshi on 2022-11-04 03:39:39 UTC

Thank you for the repo! By that I mean that when I set the leaf size to 0.01 or 0.001 to retain most points, I get the same number of points in the output cloud as in the input cloud. I went through your code; my issue is that the final point cloud after combining is unorganized, and I need an organized point cloud for my further processing. Any idea how to maintain the organization, or how to convert an unorganized cloud into an organized one? Thank you!
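One common way to re-organize a merged cloud is to reproject it onto a virtual camera grid with pinhole intrinsics, leaving empty cells as NaN (which is how PCL marks invalid points in an organized cloud). A NumPy sketch; the intrinsics `fx, fy, cx, cy` and the grid size here are hypothetical, and real code would take them from one camera's calibration:

```python
import numpy as np

def organize(points, fx, fy, cx, cy, width, height):
    """Project an unordered (N, 3) cloud onto a width x height grid using
    pinhole intrinsics. Empty cells stay NaN; when two points hit the same
    cell, the nearer one wins (far points are written first, then overwritten)."""
    grid = np.full((height, width, 3), np.nan)
    order = np.argsort(-points[:, 2])          # farthest first
    for x, y, z in points[order]:
        if z <= 0:
            continue                            # behind the virtual camera
        u = int(round(fx * x / z + cx))
        v = int(round(fy * y / z + cy))
        if 0 <= u < width and 0 <= v < height:
            grid[v, u] = (x, y, z)
    return grid

pts = np.array([[0.0, 0.0, 1.0]])              # one point on the optical axis
grid = organize(pts, fx=100.0, fy=100.0, cx=2.0, cy=2.0, width=4, height=4)
```

The price is that points outside the virtual frustum, or occluded behind nearer points, are lost, so the organized result covers less than the full stitched FOV.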

Asked by Pran-Seven on 2022-11-04 03:49:28 UTC

Answers