The robot_localization package was not named well, so I can understand the confusion about what it actually does. Its primary purpose is to perform multi-sensor fusion. See the wiki for more details:
http://docs.ros.org/melodic/api/robot...
The IMU and odometry data can be readily fused in one of the state estimation nodes in r_l. For the laser data, you're going to need a node that performs scan-to-map matching (e.g., amcl). You would then fuse the output of that node into, e.g., the r_l EKF. Careful, though: amcl uses your robot's odometry to project the current hypothesis set forward in time, so by fusing its output with your odometry and IMU data, you'd effectively be including your wheel odometry data twice.
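As a concrete sketch, fusing wheel odometry and an IMU in one of the r_l nodes looks roughly like this. The topic names and the exact set of fused variables are placeholders for illustration; see the wiki above for the full parameter list:

```yaml
# Hypothetical ekf_localization_node config; topic names are placeholders
frequency: 30
two_d_mode: true           # assume a planar robot

odom0: /wheel/odometry     # your wheel encoder odometry topic
odom0_config: [false, false, false,   # x, y, z
               false, false, false,   # roll, pitch, yaw
               true,  true,  false,   # x', y', z' (fuse planar velocities)
               false, false, true,    # roll', pitch', yaw'
               false, false, false]   # x'', y'', z''

imu0: /imu/data            # your IMU topic
imu0_config: [false, false, false,
              false, false, true,     # fuse yaw
              false, false, false,
              false, false, true,     # fuse yaw velocity
              true,  false, false]    # fuse x acceleration
```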
A better model for using amcl is probably to fuse the wheel encoder and IMU data alone in an EKF that generates the odom->base_link transform (set the world_frame parameter to odom), and then let amcl use that as its odometry information. amcl would then generate the map->odom transform, completing the map->odom->base_link chain.
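In that setup, the relevant frame parameters might look like this (again a sketch; the values are the conventional REP 105 frame names, and the two blocks would normally live in separate param files for each node):

```yaml
# EKF (robot_localization): publish odom->base_link
map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: odom          # keep the EKF in the odom frame

# amcl (separate param file): consume that odometry, publish map->odom
odom_frame_id: odom
base_frame_id: base_link
global_frame_id: map
```

The benefit of this split is that the odom frame stays smooth and continuous for local control, while amcl applies the discrete global corrections through the map frame.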