Need some help with Mapping and Azure Kinect
Hello,
I am a newbie, a first-time user of RTAB-Map and ROS, but after days of troubleshooting and reading forum messages I think it's time to just ask.
First of all, I think it is good to explain my goal. I want to make point clouds that are as good as possible (as good as it gets with an Azure Kinect) of rooms/houses/buildings, and maybe some scans of accutools.
What I want from that is:
- 2D plans (with angles between walls) and heights.
- Measuring some random things at home, like window frames.
- And if that works all right, using software (with recognition) to make a 3D file/project from the point clouds (for VR and/or for designing and building cabinets).
I know that ROS and RTAB-Map are designed more for robot navigation, but if all goes well I want to build a robot for scanning/mapping. The number one priority for now is the quality of the point cloud.
So a couple of questions and a problem:
- The most important one: can RTAB-Map (in combination with the Azure Kinect) deliver good enough point clouds for my goals?
- What are the device-specific settings I can alter to get the best points? Maybe, for the Kinect, the field of view and the minimum and maximum distance (because of distortion), or is all of that handled (better) by ROS/RTAB-Map?
- If the answer to question 1 is yes: what is the best way to do this? Set the decimation of the points as low as possible while scanning, or do that afterwards (see the sketch after this list for the kind of post-filtering and decimation I mean)? I also could not find a clear answer to whether RTAB-Map saves all the information needed to build the best possible point cloud, or whether a lot of information is gone after decimation and filtering.
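To make the "afterwards" part concrete, this is roughly the post-processing I have in mind for an exported cloud. It is only a sketch using Open3D; the file names, the 0.25-4 m depth range and the 5 mm voxel size are my own assumptions, not values from the RTAB-Map documentation:

```python
# Sketch: keep full-resolution data while scanning, filter/decimate afterwards.
# Assumptions: the cloud was exported as cloud.ply, Open3D is installed, and the
# useful Azure Kinect depth range is roughly 0.25 m to 4 m.
import numpy as np
import open3d as o3d

cloud = o3d.io.read_point_cloud("cloud.ply")  # hypothetical export file name

# Drop points outside an assumed reliable range. (Here the distance is measured
# from the map origin, which is only a rough stand-in for a per-frame depth cut-off.)
points = np.asarray(cloud.points)
dist = np.linalg.norm(points, axis=1)
cloud = cloud.select_by_index(np.where((dist > 0.25) & (dist < 4.0))[0])

# Remove sparse outliers, then decimate with a small voxel grid only at the very end,
# so the full-resolution data stays available until this step.
cloud, _ = cloud.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
cloud = cloud.voxel_down_sample(voxel_size=0.005)

o3d.io.write_point_cloud("cloud_filtered.ply", cloud)
```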
I believe I saw all the photos from a scan, but the depth images were in a different format than the ones extracted from Azure Recorder's mkv. So does it contain all the information, and if ROS or RTAB-Map isn't built to / cannot produce the best possible point cloud, can other software handle depth images in this format?
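For what it's worth, this is roughly how I would check whether those depth images still contain the full depth information. I am assuming they are 16-bit single-channel PNGs with the depth in millimetres, which may well be wrong, and the file name is made up:

```python
# Sketch for inspecting one exported depth image.
# Assumption: 16-bit single-channel PNG with depth in millimetres; file name is hypothetical.
import cv2

depth = cv2.imread("depth_0001.png", cv2.IMREAD_UNCHANGED)
print(depth.dtype, depth.shape)  # hoping for uint16 at the full sensor resolution
valid = depth[depth > 0]
print("depth range (m):", valid.min() / 1000.0, "to", valid.max() / 1000.0)
```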
Until now I can only produce a point cloud of what I believe is always the last viewpoint. First I thought it had to do with not loading all the point clouds, but now I think it has to do with not getting any loop closures. I filmed by hand, so I think it is caused by moving too fast, and I also sometimes pointed the camera upwards to film the ceiling. But even then, I think there are parts long enough that should add more to the point cloud than just the end scene (the last picture). I don't mind if my PC has to calculate for days to get it; I am just curious whether that is possible, or whether I have to film more slowly and carefully.
In another forum question I saw that RTAB-Map saves/uses only 1 fps for its map, and also that 30 fps is way too fast for it. Is that true with all graphics cards? (Maybe just an answer to that particular question.) Because I ordered a Jetson AGX to ...
You appear to have posted a duplicate of your own question. Could you please check #q366175 and delete either that one or this one? Thanks.