Robotics StackExchange | Archived questions

On the problem of sensor fusion mapping

I have a robot with an RGB-D camera and a 2D lidar. I want to use the RGB-D camera to detect obstacles that the lidar cannot see, convert that data into laser scan messages, and then feed the fused scan from the two sensors into gmapping to build a complete 2D occupancy grid map. Is this idea feasible, and are there any open-source projects that can be used directly?

Asked by wugeshenhuaking on 2022-10-02 22:05:28 UTC

Comments

Answers

After several days of investigation, my current plan is:

  1. Use the depthimage_to_laserscan package to extract obstacles from the depth image produced by the depth camera

  2. Calibrate the coordinate transformation between the depth camera and the lidar

  3. Merge the scan data extracted from the depth camera into the lidar data

  4. Publish the fused scan for standard SLAM mapping algorithms

I don't know if there is a better way.
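The merge step (3) can be sketched as taking, per beam, the closest valid return from either sensor. This is a minimal illustration in plain Python, assuming the camera-derived scan has already been transformed into the lidar frame and resampled onto the same angular grid; the function name `merge_scans` and the fixed-length range lists are my own for illustration, not from any package.

```python
import math

def merge_scans(lidar_ranges, camera_ranges, range_max):
    """Merge two aligned range arrays, keeping the closest return per beam.

    Assumes both scans share the same frame, angle grid, and length.
    Invalid beams are encoded as math.inf, mirroring the convention of
    out-of-range values in sensor_msgs/LaserScan.
    """
    merged = []
    for r_lidar, r_cam in zip(lidar_ranges, camera_ranges):
        r = min(r_lidar, r_cam)              # closest obstacle wins
        merged.append(r if r <= range_max else math.inf)
    return merged

# Example: the lidar misses an obstacle (e.g. below its scan plane)
# that the camera sees at 0.8 m on the middle beam.
lidar = [1.5, math.inf, 2.0]
camera = [math.inf, 0.8, 2.4]
print(merge_scans(lidar, camera, range_max=10.0))  # [1.5, 0.8, 2.0]
```

In a real node you would fill an outgoing LaserScan message with the merged ranges and republish it on the topic gmapping subscribes to; the min-per-beam rule is conservative, since the closest obstacle is the one that matters for occupancy mapping.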

Answered by wugeshenhuaking on 2022-10-06 05:43:21 UTC

Comments