On the problem of sensor fusion for mapping
I have a robot with an RGB-D camera and a 2D lidar. I want to use the RGB-D camera to detect obstacles that the lidar misses, convert that data into LaserScan messages, and then feed the fused scan from the two sensors into gmapping to build a complete 2D occupancy grid map. Is this idea feasible, or are there open-source projects that can be used directly?
Asked by wugeshenhuaking on 2022-10-02 22:05:28 UTC
Answers
After several days of investigation, my current idea is:
1. Use the depthimage_to_laserscan package to extract obstacle ranges from the depth image produced by the RGB-D camera
2. Calibrate the coordinate transformation (extrinsics) between the depth camera and the lidar, and publish it on TF (see the first sketch after this list)
3. Fuse the scan extracted from the depth camera into the lidar scan
4. Publish the fused scan for a standard SLAM mapping algorithm such as gmapping (see the second sketch after this list)
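For step 2, here is a minimal sketch of publishing the calibrated extrinsic as a static transform with tf2_ros. The frame names (`laser`, `camera_depth_frame`) and the numeric values are placeholders, not part of the original question; they would come from your own calibration:

```python
#!/usr/bin/env python
# Sketch for step 2: publish the camera-to-lidar extrinsic as a static TF.
# Frame names and numbers are placeholders for your calibration result.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

if __name__ == '__main__':
    rospy.init_node('camera_lidar_extrinsic')
    broadcaster = tf2_ros.StaticTransformBroadcaster()
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'laser'               # lidar frame (assumed name)
    t.child_frame_id = 'camera_depth_frame'   # camera frame (assumed name)
    t.transform.translation.x = 0.10          # example values: replace with
    t.transform.translation.y = 0.0           # your calibrated extrinsics
    t.transform.translation.z = 0.20
    t.transform.rotation.x = 0.0              # identity rotation as a placeholder
    t.transform.rotation.y = 0.0
    t.transform.rotation.z = 0.0
    t.transform.rotation.w = 1.0
    broadcaster.sendTransform(t)
    rospy.spin()
```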
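For steps 3 and 4, a minimal merger sketch. The topic names (`/scan`, `/camera_scan`, `/scan_merged`) are assumptions; it also assumes the camera and lidar are mounted close together and roughly coplanar, so both scans can be treated as lying in the same frame. A more careful node would transform each camera beam into the lidar frame using the TF above and synchronize the two scans by time stamp (e.g. with message_filters):

```python
#!/usr/bin/env python
# Sketch for steps 3-4: fold the camera-derived scan into the lidar scan
# and republish the result for gmapping. Topic names are assumptions.
import copy
import math

import rospy
from sensor_msgs.msg import LaserScan


class ScanMerger(object):
    def __init__(self):
        self.camera_scan = None
        self.pub = rospy.Publisher('/scan_merged', LaserScan, queue_size=1)
        rospy.Subscriber('/camera_scan', LaserScan, self.camera_cb)
        rospy.Subscriber('/scan', LaserScan, self.lidar_cb)

    def camera_cb(self, msg):
        # Cache the latest camera scan; a real node should check its stamp.
        self.camera_scan = msg

    def lidar_cb(self, lidar):
        merged = copy.deepcopy(lidar)  # keep the lidar's angular grid and metadata
        ranges = list(merged.ranges)
        cam = self.camera_scan
        if cam is not None:
            for i, r in enumerate(cam.ranges):
                if math.isnan(r) or math.isinf(r):
                    continue
                # Map each camera beam onto the nearest lidar beam index.
                angle = cam.angle_min + i * cam.angle_increment
                j = int(round((angle - lidar.angle_min) / lidar.angle_increment))
                if 0 <= j < len(ranges):
                    # Keep the closer return: the camera sees obstacles
                    # (e.g. below the scan plane) that the lidar misses.
                    if math.isnan(ranges[j]) or r < ranges[j]:
                        ranges[j] = r
        merged.ranges = ranges
        self.pub.publish(merged)


if __name__ == '__main__':
    rospy.init_node('scan_merger')
    ScanMerger()
    rospy.spin()
```

As for existing open-source options, the ira_laser_tools package provides a laserscan_multi_merger node that merges multiple LaserScan topics in a similar way.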
I don't know if there is a better way.
Answered by wugeshenhuaking on 2022-10-06 05:43:21 UTC
Comments