How to correctly incorporate sensor fusion for odometry in RTAB-Map SLAM Algorithm?
Hello,
I have built a four-wheeled differential-drive robot equipped with an Intel RealSense D435 RGB-D camera, an LD14 lidar, a BNO055 9-DOF IMU, and a set of LM393 wheel encoders, and I have been getting mixed results while performing SLAM with RTAB-Map.

Originally, I used only the RGB-D visual odometry from the camera as the odometry input to RTAB-Map, with the lidar providing odometry correction and building the 2D occupancy grid. The results were fairly good, as can be seen in the screenshot of the generated point cloud and occupancy grid.

However, visual odometry alone is easily lost (e.g. in low-texture scenes or during fast rotations), so I wanted to fuse the readings from my other sensors with the RGB-D visual odometry in an extended Kalman filter node (from robot_localization) for more robust odometry. The resulting occupancy grid and point cloud, however, turned out worse than with visual odometry alone, as can be seen in this screenshot.

The launch file that initializes the sensor messages, rtabmap, and the EKF is below:
<launch>
  <arg name="rviz" default="true"/>
  <arg name="frame_id" default="base_link"/>
  <arg name="vis_odom_topic" default="/vo"/>
  <arg name="twist_topic" default="/wheel_twist"/>
  <arg name="imu_topic" default="/imu/data"/>
  <arg name="imu_remove_gravitational_acceleration" default="true"/>
  <arg name="localization" default="false"/>

  <include file="$(find rtabmap_ros)/launch/rtabmap_odom.launch">
    <arg unless="$(arg localization)" name="rtabmap_args" value="--delete_db_on_start"/>
    <arg name="frame_id" value="$(arg frame_id)"/>
    <arg name="subscribe_rgbd" value="true"/>
    <arg name="subscribe_scan" value="true"/>
    <arg name="rgbd_topic" value="/rgbd_image"/>
    <arg name="compressed" value="true"/>
    <arg name="visual_odometry" value="true"/>
    <arg name="queue_size" value="30"/>
    <arg name="rviz" value="$(arg rviz)"/>
    <arg if="$(arg rviz)" name="rtabmapviz" value="false"/>
  </include>

  <!-- Launch static transform between base_link and camera_link -->
  <node name="base_to_camera_static_publisher" pkg="tf2_ros" type="static_transform_publisher" args="0.2531 0 0.0859 0 0 0 base_link camera_link"/>

  <!-- Initialize teleop node -->
  <remap from="vel_cmd" to="mega_one/vel_cmd"/>
  <node pkg="devbot_teleop" type="teleop_control.py" name="teleop_node"/>

  <!-- Initialize nodes that provide other sources of odometry -->
  <!-- Launch wheel twist node -->
  <remap from="twist" to="mega_one/twist"/>
  <include file="$(find base_controller_twist_converter)/launch/base_controller_twist.launch">
    <!-- Set custom parameters if needed
    <param name="linear_velocity_stdev" type="double" value=""/>
    <param name="angular_velocity_stdev" type="double" value=""/>
    -->
  </include>

  <!-- Launch imu node -->
  <remap from="bno055" to="mega_two/bno055"/>
  <include file="$(find bno055_imu_ros)/launch/bno055_imu.launch">
    <!-- Set custom parameters if needed
    <param name="linear_velocity_stdev" type="double" value=""/>
    <param name="angular_velocity_stdev" type="double" value=""/>
    -->
  </include>

  <!-- Odometry fusion (EKF); refer to the demo launch file in robot_localization for more info -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization" clear_params="true" output="screen">
    <param name="frequency" value="50"/>
    <param name="sensor_timeout" value="0.1"/>
    <param name="two_d_mode ...
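The launch file is cut off at the EKF parameters. For reference, a typical ekf_localization_node continuation for fusing visual odometry, wheel twist, and IMU in 2D might look roughly like the sketch below. This is not the original configuration: the topic names reuse the args defined above, and the 15-element boolean lists (which select x, y, z, roll, pitch, yaw, their velocities, and linear accelerations, in that order) are illustrative values that would need tuning for this robot.

```xml
    <param name="two_d_mode" value="true"/>
    <param name="odom_frame" value="odom"/>
    <param name="base_link_frame" value="base_link"/>
    <param name="world_frame" value="odom"/>

    <!-- Visual odometry: fuse x, y, yaw (differentially, to avoid jumps on VO resets) -->
    <param name="odom0" value="/vo"/>
    <rosparam param="odom0_config">[true,  true,  false,
                                    false, false, true,
                                    false, false, false,
                                    false, false, false,
                                    false, false, false]</rosparam>
    <param name="odom0_differential" value="true"/>

    <!-- Wheel odometry twist: fuse forward velocity and yaw rate -->
    <param name="twist0" value="/wheel_twist"/>
    <rosparam param="twist0_config">[false, false, false,
                                     false, false, false,
                                     true,  false, false,
                                     false, false, true,
                                     false, false, false]</rosparam>

    <!-- IMU: fuse yaw, yaw rate, and forward acceleration -->
    <param name="imu0" value="/imu/data"/>
    <rosparam param="imu0_config">[false, false, false,
                                   false, false, true,
                                   false, false, false,
                                   false, false, true,
                                   true,  false, false]</rosparam>
    <param name="imu0_remove_gravitational_acceleration" value="$(arg imu_remove_gravitational_acceleration)"/>
  </node>
</launch>
```

With this kind of setup, RTAB-Map would also need to be pointed at the filtered odometry (the EKF publishes on /odometry/filtered by default) rather than computing its own, otherwise two competing odometry sources end up in the TF tree.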
This may not help, but try using the robot_pose_ekf package; it publishes the odom -> base_footprint transform. You could also try setting visual_odometry to false, so that rtabmap subscribes to your fused odometry topic instead of computing its own.
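For anyone trying the robot_pose_ekf suggestion, a minimal launch sketch follows, assuming the question's topic names. Note that robot_pose_ekf expects nav_msgs/Odometry messages on its odom and vo inputs and sensor_msgs/Imu on imu_data, so the raw /wheel_twist Twist message would first have to be converted into an Odometry message; the remaps below are illustrative, not taken from the original setup.

```xml
<node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf">
  <param name="output_frame" value="odom"/>
  <param name="base_footprint_frame" value="base_footprint"/>
  <param name="freq" value="30.0"/>
  <param name="sensor_timeout" value="1.0"/>
  <!-- Enable the sensor inputs available on this robot -->
  <param name="odom_used" value="true"/>
  <param name="imu_used" value="true"/>
  <param name="vo_used" value="true"/>
  <!-- Map the question's topics onto robot_pose_ekf's expected inputs -->
  <remap from="imu_data" to="/imu/data"/>
  <remap from="vo" to="/vo"/>
  <remap from="odom" to="/wheel_odom"/> <!-- hypothetical Odometry topic derived from /wheel_twist -->
</node>
```

Keep in mind robot_pose_ekf only estimates the 6-DOF pose (no velocity output) and is largely superseded by robot_localization, so it is mainly worth testing as a cross-check.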