Robotics StackExchange | Archived questions

Sensor data fusion in Autoware

I have learned that Autoware.AI fuses data from sensors such as cameras, LiDARs, IMUs, and GPS, and uses it with the ROS navigation stack, but I don't clearly understand how the data from all the sensors is fused using ROS as middleware. Can anyone describe or explain how we fuse data from different sensors and use it for the perception of cars?

Asked by AM97 on 2019-06-17 04:25:13 UTC

Comments

I know that there is the robot_localization package, which is used to fuse data from sensors, but I still don't clearly understand how the data from all the sensors is actually fused.

Commented by AM97 on 2019-06-17 11:42:03 UTC
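For intuition, here is a deliberately simplified sketch of the predict/correct loop that a fusion filter like robot_localization's EKF runs: one sensor (IMU/wheel odometry) drives the prediction step, another (GPS) drives the correction step. This is not Autoware's or robot_localization's actual code; the 2D state layout, the noise values, and all variable names are illustrative assumptions.

```python
import numpy as np

# Toy 2D fusion sketch (illustrative only): state = [x, y, yaw].
x = np.zeros(3)                    # state estimate [x, y, yaw]
P = np.eye(3)                      # state covariance
Q = np.diag([0.05, 0.05, 0.01])    # process noise (assumed values)
R = np.diag([2.0, 2.0])            # GPS measurement noise (assumed values)
H = np.array([[1.0, 0.0, 0.0],     # GPS observes x and y only
              [0.0, 1.0, 0.0]])

def predict(v, yaw_rate, dt):
    """Propagate the state using wheel-odometry speed v and IMU yaw rate."""
    global x, P
    yaw = x[2]
    F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],   # motion-model Jacobian
                  [0.0, 1.0,  v * np.cos(yaw) * dt],
                  [0.0, 0.0,  1.0]])
    x[0] += v * np.cos(yaw) * dt
    x[1] += v * np.sin(yaw) * dt
    x[2] += yaw_rate * dt
    P = F @ P @ F.T + Q

def correct_gps(z):
    """Fuse a GPS fix z = [x, y], already projected into the local frame."""
    global x, P
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P

# Each sensor callback simply calls the matching step:
predict(v=1.0, yaw_rate=0.1, dt=0.02)      # on every IMU/odometry message
correct_gps(np.array([0.1, 0.05]))         # on every GPS message
```

robot_localization's ekf_localization_node follows the same pattern, but with a 15-dimensional state (pose, velocities, and linear acceleration) and with each input topic's contribution selected through its odom0_config/imu0_config parameters.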

Answers

You can find more about how that's done in Autoware's repository and documentation (including publications). One easy example is LiDAR and camera data fusion: essentially, the LiDAR detects objects and knows where they are, while the camera detects what the objects are; the combination of the two gives you "what's where". LiDAR, IMU, and GPS data can likewise be fused to get better localization and/or to reset ndt_matching if the localizer gets lost.

https://github.com/CPFL/Autoware-Manuals/blob/master/en/Autoware_QuickStart_v1.1.pdf

Answered by cassini.huygens on 2019-06-17 19:50:24 UTC
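To make the "what's where" combination concrete, the usual mechanics are: transform each LiDAR point into the camera frame using the extrinsic calibration, project it through the camera intrinsics, and read off the camera detection at that pixel. The sketch below uses made-up calibration values (K, R, t and the axis convention are all assumptions); in Autoware the real extrinsics come from the calibration publisher and the intrinsics from the CameraInfo topic.

```python
import numpy as np

# Placeholder calibration (assumed values, not real calibration output).
K = np.array([[700.0,   0.0, 640.0],      # camera intrinsics
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.array([[0.0, -1.0,  0.0],          # LiDAR (x forward) -> camera
              [0.0,  0.0, -1.0],          # (z forward) axis remapping
              [1.0,  0.0,  0.0]])
t = np.array([0.0, -0.2, -0.1])           # LiDAR -> camera translation (m)

def project_lidar_to_image(points_lidar):
    """Project Nx3 LiDAR points into pixel coordinates (u, v)."""
    pts_cam = points_lidar @ R.T + t       # into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1] # keep points in front of the lens
    uvw = pts_cam @ K.T                    # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide
    return uv, pts_cam[:, 2]               # pixels and their depths

points = np.array([[5.0,  0.5, 0.0],
                   [8.0, -1.0, 0.2]])
pixels, depths = project_lidar_to_image(points)
# A projected point that lands inside a camera detection's bounding box
# gives that detection a range: the camera says "what", the LiDAR "where".
print(pixels, depths)
```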

Comments

I understand that, but I am not getting how we do it in practice. For example, how do we fuse the data from the LiDAR and the camera?

Commented by AM97 on 2019-06-18 01:12:28 UTC

@AM97 What do you mean by physically? Do you mean how the data from the two sensors are related to each other? If so, there is a LiDAR-camera calibration publisher node in Autoware which publishes the relative pose between the sensors.

Commented by i_robot_flight on 2019-07-01 16:45:56 UTC
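In ROS terms, that relative pose is just a transform between the two sensor frames, so any node can consume it through tf2. A minimal sketch, assuming the frames are named velodyne and camera (the actual frame IDs depend on your launch files):

```python
#!/usr/bin/env python
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers do_transform_point for PointStamped
from geometry_msgs.msg import PointStamped

# Minimal sketch: look up the LiDAR->camera transform published by the
# calibration node and move one LiDAR point into the camera frame.
rospy.init_node('lidar_camera_tf_demo')
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)

pt = PointStamped()
pt.header.frame_id = 'velodyne'     # assumed LiDAR frame name
pt.point.x, pt.point.y, pt.point.z = 5.0, 0.5, 0.0

rospy.sleep(1.0)  # give the listener time to fill its buffer
transform = buf.lookup_transform('camera', 'velodyne',
                                 rospy.Time(0),      # latest available
                                 rospy.Duration(1.0))
pt_cam = tf2_geometry_msgs.do_transform_point(pt, transform)
rospy.loginfo('point in camera frame: %s', pt_cam.point)
```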