How can I fuse data from multiple sensors to obtain accurate robot localization?

asked 2023-03-29 23:24:58 -0500

Astronaut

updated 2023-03-29 23:27:59 -0500

I have a mobile robot equipped with a 2D lidar, a stereo RGBD camera, an IMU, and wheel encoders. I ran ORB_SLAM2 and got a pose estimate, but tracking is sometimes lost, so it is not robust enough, especially when the robot turns. I would therefore like to use the other onboard sensors to get a better pose estimate (localization) for the robot.

So I have the following topics that can help with the localization (a minimal odom-frame EKF sketch follows the list):

  1. /robot/data/vehicle_state : current speed and yaw rate calculated from the wheel speeds.

  2. /robot/data/twist : twist data created from the vehicle speed and IMU data.

  3. /robot/data/vslam_localization/pose : output pose from ORB_SLAM.

  4. /camera/left/image_raw and /camera/right/image_raw : images from the stereo camera.

  5. /scan : YDLidar scan.

  6. /imu/sensor_msgs/Imu : IMU data.
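A natural fit for this sensor set is a robot_localization ekf_localization_node. Below is a minimal sketch of a config that fuses the twist and IMU topics listed above into a smooth odom-frame estimate; the file name, the fused fields, and all tuning values are assumptions to adapt, and twist0 must actually be a geometry_msgs/TwistWithCovarianceStamped, so check the type of /robot/data/twist first:

```yaml
# ekf_odom.yaml -- hypothetical sketch, not a tested config.
frequency: 30
two_d_mode: true            # planar robot with a 2D lidar
map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: odom           # this EKF publishes odom -> base_link

# Each *_config vector is 15 booleans:
# [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az]

# Twist from wheel speed + IMU: fuse forward velocity and yaw rate.
twist0: /robot/data/twist
twist0_config: [false, false, false,
                false, false, false,
                true,  false, false,
                false, false, true,
                false, false, false]

# IMU: fuse yaw rate; add yaw itself only if the IMU gives a
# reliable absolute heading.
imu0: /imu/sensor_msgs/Imu
imu0_config: [false, false, false,
              false, false, false,
              false, false, false,
              false, false, true,
              false, false, false]
```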

And when using an EKF, the following reset topics are available (see the sketch after this list):

  1. /robot/localization/ekf_base/set_pose : external command to reset the EKF pose for base_link.

  2. /robot/localization/ekf_odom/set_pose : external command to reset the EKF pose for odom.
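robot_localization's set_pose topics take a geometry_msgs/PoseWithCovarianceStamped. A minimal rospy sketch for resetting the odom-frame EKF to the origin (node name and sleep durations are placeholders):

```python
#!/usr/bin/env python
# Hypothetical sketch: reset the odom-frame EKF to the origin.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

rospy.init_node('ekf_pose_reset')
pub = rospy.Publisher('/robot/localization/ekf_odom/set_pose',
                      PoseWithCovarianceStamped, queue_size=1, latch=True)
rospy.sleep(1.0)  # give the subscriber time to connect

msg = PoseWithCovarianceStamped()
msg.header.stamp = rospy.Time.now()
msg.header.frame_id = 'odom'       # must match the EKF's world_frame
msg.pose.pose.orientation.w = 1.0  # identity orientation at the origin
pub.publish(msg)
rospy.sleep(0.5)  # let the latched message go out before exiting
```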

And the following tf frames (a map-frame EKF sketch follows the frame list):

map : map frame

odom : odom frame

base_link : vehicle pose (center of the driving wheel axis)

slam_base : the frame in which ORB_SLAM operates
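Given these frames, a common robot_localization pattern is a second EKF instance that fuses the ORB_SLAM pose and publishes map -> odom, while the first EKF keeps publishing odom -> base_link. A hypothetical sketch follows; it assumes the SLAM pose has already been transformed from slam_base into the map frame (e.g. via a static transform), and the fused fields are assumptions:

```yaml
# ekf_map.yaml -- hypothetical sketch for the map-frame EKF.
frequency: 30
two_d_mode: true
map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: map            # this EKF publishes map -> odom

# Absolute pose from ORB_SLAM: fuse x, y, and yaw. The message must be
# a PoseWithCovarianceStamped expressed in the map frame.
pose0: /robot/data/vslam_localization/pose
pose0_config: [true,  true,  false,
               false, false, true,
               false, false, false,
               false, false, false,
               false, false, false]

# Reuse the twist source so the map estimate stays smooth when
# ORB_SLAM tracking drops out (the failure mode described above).
twist0: /robot/data/twist
twist0_config: [false, false, false,
                false, false, false,
                true,  false, false,
                false, false, true,
                false, false, false]
```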

So my question is: can the robot_localization package or Google's cartographer_ros be used for robot localization in this setup, and if so, how?

Thanks
