Point cloud / Mesh reconstruction of an object
Hi!
I am looking for a way to create a "complete" point cloud / mesh of an object using a consumer depth camera (Kinect v2, Azure Kinect). I have now come across the rtabmap package (other ideas are also welcome), which is intended for localization and mapping of robots. The two use cases seem similar, so I have a couple of questions about rtabmap and my use case:
- In my scenario it would be easier to move the object rather than the sensor. rtabmap aims to ignore this kind of movement, but is there a way to make it work? Perhaps it would work by masking out everything except the object, so that rtabmap is "fooled" into thinking the camera moved?
- Since I only want the point cloud of the object, I would prefer to remove the rest of the scene anyway. I imagine I could do that by masking the depth images (i.e. setting pixels outside the region of interest to 0) or by publishing only the part of the point cloud of interest on the subscribed "scan_cloud" topic. Does rtabmap work with only a point cloud and no images at all?
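To make the depth-masking idea concrete, here is a minimal sketch of what I have in mind, assuming numpy arrays for the depth frame and a boolean object mask (the function name and shapes are just illustrative, not from any rtabmap API):

```python
import numpy as np

def mask_depth(depth, mask):
    """Zero out depth pixels outside the region of interest.

    depth: HxW uint16 depth image (e.g. millimeters from a Kinect)
    mask:  HxW boolean array, True where the object is
    """
    masked = depth.copy()
    masked[~mask] = 0  # a depth of 0 is commonly treated as "no measurement"
    return masked

# Toy example: keep only a central 2x2 region of a 4x4 frame.
depth = np.full((4, 4), 1000, dtype=np.uint16)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
out = mask_depth(depth, mask)
```

The masked frame would then be republished on the depth topic that rtabmap subscribes to, so that only the object contributes to registration and to the resulting cloud.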
Thank you!
P.S. Is there a tutorial that explains rtabmapviz in more detail?