Sensor data fusion in Autoware
I have learned that Autoware.AI fuses data from sensors such as cameras, LiDARs, IMUs, and GPS, and uses it with the ROS navigation stack, but I couldn't clearly understand how all of this sensor data is fused when ROS is the middleware. Can anyone describe or explain how data from different sensors is fused and then used for a car's perception?
I know there is the robot_localization package, which is used to fuse sensor data, but I still don't clearly understand how the fusion itself works.
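To show where my understanding currently is: I believe robot_localization runs an extended Kalman filter that alternates between predicting the state from motion sources (e.g. IMU/odometry) and correcting it with absolute measurements (e.g. GPS). Here is a toy 1-D sketch of that predict/correct cycle as I understand it; all numbers are made up for illustration, and this is of course far simpler than the real multi-dimensional EKF:

```python
def predict(x, p, u, q):
    """Predict the state from a motion input u (e.g. integrated IMU);
    q is the process noise, so uncertainty p grows."""
    return x + u, p + q

def correct(x, p, z, r):
    """Correct the prediction with a measurement z (e.g. GPS position);
    r is the measurement noise."""
    k = p / (p + r)                  # Kalman gain: how much to trust z
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 1.0                      # initial position estimate and variance
x, p = predict(x, p, u=1.0, q=0.1)   # IMU-style input says we moved ~1 m
x, p = correct(x, p, z=1.2, r=0.5)   # GPS-style fix reports 1.2 m
print(x, p)                          # fused estimate lies between 1.0 and 1.2
```

Is this roughly the idea behind how Autoware/robot_localization combines the different sensor streams, or is something fundamentally different going on?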