How to fuse data from multiple sensors to obtain accurate robot localization?
I have a mobile robot equipped with a 2D lidar, a stereo RGB-D camera, an IMU, and wheel encoders. I ran ORB_SLAM2 and got a pose, but tracking is sometimes lost, so it is not robust enough, especially when the robot turns. I would therefore like to use the other on-board sensors to get a better pose estimate (localization) of the robot.
I have the following topics that could help with localization:
/robot/data/vehicle_state : current speed and yaw rate calculated from the wheel speeds
/robot/data/twist : twist data created from the vehicle speed and IMU data
/robot/data/vslam_localization/pose : output pose from ORB_SLAM2
/camera/left/image_raw and /camera/right/image_raw : images from the stereo camera
/scan : YDLidar scan
/imu/sensor_msgs/Imu : IMU data
And when using an EKF, there are also:
/robot/localization/ekf_base/set_pose : external command to reset the EKF pose for base_link
/robot/localization/ekf_odom/set_pose : external command to reset the EKF pose for odom
And the following tf frames:
map : map frame
odom : odom frame
base_link : vehicle pose (center of the driving wheel axle)
slam_base : the frame in which ORB_SLAM2 operates
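To show what I am imagining, here is a rough sketch of a robot_localization EKF parameter file that fuses the topics above (the state-config matrices and topic message types are my guesses, not something I have tested; robot_localization expects the twist on a geometry_msgs/TwistWithCovarianceStamped and the pose on a geometry_msgs/PoseWithCovarianceStamped):

```yaml
# ekf_odom: fuses the continuous sensors (wheel-odometry twist + IMU)
# and would publish the odom -> base_link transform.
frequency: 30
two_d_mode: true            # planar robot, 2D lidar
odom_frame: odom
base_link_frame: base_link
world_frame: odom           # continuous, drifting estimate

# Each *_config is the 15-element boolean state vector:
# [x, y, z, roll, pitch, yaw,
#  vx, vy, vz, vroll, vpitch, vyaw,
#  ax, ay, az]
twist0: /robot/data/twist
twist0_config: [false, false, false,
                false, false, false,
                true,  false, false,   # vx from wheel odometry
                false, false, true,    # yaw rate
                false, false, false]

imu0: /imu/sensor_msgs/Imu
imu0_config: [false, false, false,
              false, false, true,     # absolute yaw (if the IMU provides it)
              false, false, false,
              false, false, true,     # yaw rate
              true,  false, false]    # x acceleration
```

A second EKF instance (ekf_base) could then fuse /robot/data/vslam_localization/pose with world_frame set to map, so that the globally referenced but occasionally lost ORB_SLAM2 pose corrects the drifting odometry and produces the map -> odom transform. Whether that split makes sense here is part of my question.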
So my question is: can the robot_localization package or cartographer_ros be used for robot localization here, and if so, how?
Thanks