Sensor Fusion: Frame Offset for robot_localization (ZED and IMU)
UPDATE:
Previous Question: I am a little new to the EKF world. I have been trying to follow the various solutions posted in the ROS community to get the sensor fusion that is ideal for my robot. I am trying to understand how to implement sensor fusion using a ZED camera and an MPU6050 IMU.
My ZED is mounted 38" above the base (+Z direction) and 18" behind the center of the base (-X direction), angled 25 degrees down. My IMU is located 9" from the ZED (+X direction) and 9" behind the center of the base (-X direction).
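Offsets like these are normally expressed as static transforms from the robot's base frame, in metres and radians. The snippet below is a sketch only, not the poster's actual tree: the `base_link` and child frame names are assumptions, and it is one reading of the (somewhat ambiguous) mount description above (38 in ≈ 0.965 m, 18 in ≈ 0.457 m, 9 in ≈ 0.229 m, 25° ≈ 0.4363 rad):

```xml
<!-- Sketch only: frame names and signs are assumptions, not the poster's setup. -->
<!-- tf2_ros static_transform_publisher (6-value form): x y z yaw pitch roll parent child -->
<!-- ZED: 0.457 m behind and 0.965 m above base_link, pitched 25 deg (0.4363 rad) down -->
<node pkg="tf2_ros" type="static_transform_publisher" name="base_to_zed"
      args="-0.457 0 0.965 0 0.4363 0 base_link zed_camera_link" />
<!-- IMU: 0.229 m behind base_link, assumed level with it in this sketch -->
<node pkg="tf2_ros" type="static_transform_publisher" name="base_to_imu"
      args="-0.229 0 0 0 0 0 base_link imu_link" />
```

With transforms like these on the tf tree, robot_localization can rotate each sensor's measurements into the base frame automatically.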
I have looked at:
https://answers.ros.org/question/239326/sensor-frame-offset-for-robot_localization-absolute-odometer-offset/
https://answers.ros.org/question/221586/how-to-fuse-imu-visual-odometry-using-robot_localization/
http://docs.ros.org/kinetic/api/robot_localization/html/preparing_sensor_data.html
https://answers.ros.org/question/216750/robot_localization-how-to-set-up-tf/
https://github.com/introlab/rtabmap_ros/issues/35
https://github.com/introlab/rtabmap_ros/blob/master/launch/tests/sensor_fusion.launch
I am unable to produce the results I want. What I am trying to do: use the ZED camera's visual odometry, fused with the MPU6050 IMU data via an EKF, on a powered wheelchair so that I can generate an elevation map of the room. For now I am bypassing the odometry generated by the wheelchair encoders.
Update: This is the launch file I put together after trying to follow the other solutions, but I still can't get the fusion working. The code runs, but I still have issues with my tf. I am not sure how to fix it; can you tell me what needs to be done for the tf? I did try to follow the tf wiki link you suggested, but I am still confused.
Updated Code:
<launch>
<arg name="rgb_frame_id" default="camera_link" />
<arg name="rgb_topic" default="/camera/rgb/image_rect_color" />
<arg name="depth_topic" default="/camera/depth/depth_registered"/>
<arg name="rgb_camera_info_topic" default="/camera/rgb/camera_info" />
<arg name="imu_topic" default="/imu/data" />
<arg name="imu_ignore_acc" default="true" />
<arg name="imu_remove_gravitational_acceleration" default="false" />
<!-- Localization-only mode -->
<arg name="localization" default="false"/>
<arg if="$(arg localization)" name="rtabmap_args" default=""/>
<arg unless="$(arg localization)" name="rtabmap_args" default="--delete_db_on_start"/>
<group ns="rtabmap">
<!-- Visual Odometry -->
<node pkg="rtabmap_ros" type="rgbd_odometry" name="visual_odometry" output="screen" args="$(arg rtabmap_args)">
<remap from="rgb/image" to="$(arg rgb_topic)"/>
<remap from="depth/image" to="$(arg depth_topic)"/>
<remap from="rgb/camera_info" to="$(arg rgb_camera_info_topic)"/>
<remap from="odom" to="/vo"/>
<param name="frame_id" type="string" value="$(arg rgb_frame_id)"/>
<param name="publish_tf" type="bool" value="false"/>
<param name="publish_null_when_lost" type="bool" value="true"/>
<param name="guess_from_tf" type="bool" value="true"/>
<param name="Odom/FillInfoData" type="string" value="true"/>
<param name="Odom/ResetCountdown" type="string" value="1"/>
<param name="Vis/FeatureType" type="string" value="6"/>
<param name="OdomF2M/MaxSize" type="string" value="1000"/>
</node>
<!-- SLAM -->
<node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="$(arg rtabmap_args)">
<param name="frame_id" type="string" value="$(arg rgb_frame_id)"/>
<remap from="rgb/image" to="$(arg rgb_topic)"/>
<remap from="depth/image" to="$(arg depth_topic)"/>
<remap from="rgb/camera_info" to="$(arg rgb_camera_info_topic)"/>
<remap from="odom" to="/odometry/filtered"/>
<param name="Kp/DetectorStrategy" type="string" value="6"/> <!-- use same features as odom -->
<!-- localization mode -->
<param name="Mem/InitWMWithAllNodes" type="string" value="$(arg localization)"/>
<param if="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="false"/>
<param unless="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="true"/>
</node>
</group>
<!--Zed Camera Launch-->
<group ns="zed">
<node name="zed_wrapper_node" pkg="zed_wrapper" type="zed_wrapper_node" >
<param name="resolution" value="3" />
<param name="quality" value="3" />
<param name="sensing_mode" value="1" />
<param name="frame_rate" value="60" />
<param name="odometry_DB" value="" />
<param name="openni_depth_mode" value="0" />
<param name="rgb_topic" value="rgb/image_rect_color" />
<param name="rgb_cam_info_topic" value="rgb/camera_info" />
<param name="rgb_frame_id" value="zed_tracked_frame" />
<param name="left_topic" value="left/image_rect_color" />
<param name="left_cam_info_topic" value="left/camera_info" />
<param name="left_frame_id" value="zed_tracked_frame" />
<param name="right_topic" value="right/image_rect_color" />
<param name="right_cam_info_topic" value="right/camera_info" />
<param name="right_frame_id" value="zed_tracked_frame" />
<param name="depth_topic" value="depth/image_rect_color" />
<param name="depth_cam_info_topic" value="depth/camera_info" />
<param name="depth_frame_id" value="zed_tracked_frame" />
<param name="point_cloud_topic" value="point_cloud/cloud" />
<param name="cloud_frame_id" value="zed_tracked_frame" />
<param name="odometry_topic" value="odom" />
<param name="odometry_frame_id" value="odom" />
<param name="odometry_transform_frame_id" value="zed_tracked_frame" />
</node>
</group>
<!-- IMU: MPU6050 Launch -->
<node pkg="mpu6050_serial_to_imu" type="mpu6050_serial_to_imu_node" name="mpu6050_serial_to_imu_node">
<param name="frame_id" value="imu_link"/>
</node>
<!-- TF frame relationship -->
<arg name="camera" default="zed"/>
<arg name="tf_prefix" default="" />
<arg name="pi/2" value="1.5707963267948966" />
<arg name="optical_rotate" value="0 0 0 -$(arg pi/2) 0 -$(arg pi/2)" />
<arg name="5*pi/36" value="0.436332" />
<node pkg="tf2_ros" type="static_transform_publisher" name="$(arg camera)_odom_link1"
args="0 0 0 0 $(arg 5*pi/36) 0 odom $(arg tf_prefix)zed_initial_frame" />
<node pkg="tf" type="static_transform_publisher" name="zed_to_imu_tf"
args="0 0 0 0 0 0 1 /zed_initial_frame /imu_link 100" />
<!-- Odometry fusion (EKF)-->
<node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization" clear_params="true">
<param name="frequency" value="50" />
<param name="sensor_timeout" value="0.1" />
<param name="two_d_mode" value="false" />
<param name="transform_time_offset" value="0.0" />
<param name="odom_frame" value="odom" />
<param name="base_link_frame" value="$(arg rgb_frame_id)" />
<param name="world_frame" value="odom" />
<param name="odom0" value="/vo" />
<param name="imu0" value="$(arg imu_topic)"/>
<!-- The order of the values is x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az. -->
<rosparam param="odom0_config">[true, true, true,
false, false, false,
true, true, true,
false, false, false,
false, false, false]</rosparam>
<rosparam param="imu0_config">[false, false, false,
true, true, true,
false, false, false,
true, true, true,
false, false, false] </rosparam>
<param name="odom0_differential" value="true" />
<param name="imu0_differential" value="false" />
<param name="odom0_relative" value="false" />
<param name="imu0_relative" value="true" />
<param name="imu0_remove_gravitational_acceleration" value="true"/>
<param name="print_diagnostics" value="true" />
<!-- ======== ADVANCED PARAMETERS ======== -->
<param name="odom0_queue_size" value="50" />
<param name="imu0_queue_size" value="50" />
<!-- The values are ordered as x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az. -->
<rosparam param="process_noise_covariance">[0.005, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.005, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0.006, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.003, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.003, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0.006, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.0025, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.0025, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.004, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.001, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.001, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.002, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.001, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.001, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0015]</rosparam>
<!-- The values are ordered as x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az. -->
<rosparam param="initial_estimate_covariance">[1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9]</rosparam>
</node>
<!--Elevation Mapping -->
<node pkg="grid_map_visualization" type="grid_map_visualization" name="elevation_map_fused_visualization" output="screen">
<param name="grid_map_topic" value="/elevation_mapping/elevation_map" />
<rosparam command="load" file="$(find elevation_mapping_demos)/config/visualization/fused.yaml" />
</node>
<node pkg="grid_map_visualization" type="grid_map_visualization" name="elevation_map_raw_visualization" output="screen">
<param name="grid_map_topic" value="/elevation_mapping/elevation_map_raw" />
<rosparam command="load" file="$(find elevation_mapping_demos)/config/visualization/raw.yaml" />
</node>
</launch>
Thanks in advance.
Asked by hdbot on 2018-03-02 10:04:12 UTC
Answers
Wow, there's a lot going on here. If I were you, I would start small, and work my way up. I'd get those wheel encoders involved again, and start with an extremely simple EKF setup that just fuses the wheel encoder data with the IMU data. Make sure that is behaving and doing what you expect, then add a new sensor. Make sure you know the coordinate frames of all your sensor data and have your transforms set up accordingly.
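A minimal first-stage setup along these lines could look like the sketch below. The topic and frame names (`/wheel/odom`, `/imu/data`, `base_link`) are assumptions, not the poster's actual topics: fuse only wheel-encoder velocities and IMU yaw/yaw-rate in `two_d_mode`, verify the output, and only then add the visual odometry.

```xml
<!-- Sketch only: a deliberately small EKF fusing wheel encoders + IMU. -->
<node pkg="robot_localization" type="ekf_localization_node" name="ekf_se" clear_params="true">
  <param name="frequency" value="30"/>
  <param name="two_d_mode" value="true"/>
  <param name="odom_frame" value="odom"/>
  <param name="base_link_frame" value="base_link"/>
  <param name="world_frame" value="odom"/>
  <param name="odom0" value="/wheel/odom"/>
  <!-- x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az -->
  <rosparam param="odom0_config">[false, false, false,
                                  false, false, false,
                                  true,  true,  false,
                                  false, false, true,
                                  false, false, false]</rosparam>
  <param name="imu0" value="/imu/data"/>
  <rosparam param="imu0_config">[false, false, false,
                                 false, false, true,
                                 false, false, false,
                                 false, false, true,
                                 false, false, false]</rosparam>
</node>
```

Once this behaves as expected, each additional sensor can be introduced one at a time, checking the filter output after each change.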
Answered by Tom Moore on 2018-03-08 10:45:24 UTC
Comments
Hey, Tom. Can you help me see where the problem is? Thanks: https://answers.ros.org/question/284325/imu_gps-robot_localization-and-navsat_transform_node/
Commented by omoyouo on 2018-03-09 04:37:47 UTC
Comments
I see several problems with your setup: 1) ROS follows the REP-103 and REP-105 conventions, so you need to conform to those first, e.g. units are SI and coordinate frames are right-handed. Follow the tf wiki...
Commented by tuandl on 2018-03-03 10:11:39 UTC
...to make sure your tf tree is correct.
2) Configuration of robot_localization: for odom0, fusing absolute x, y, z position from VO is IMO not a good idea. VO without correction from an IMU typically loses track of the scene due to blurring, sudden movements, etc. The ZED uses a rolling shutter, which...
Commented by tuandl on 2018-03-03 10:16:21 UTC
...in my experience is not good at finding features for tracking.
For imu0: I'm not sure why you have two setups for the IMU, where the only difference is whether or not accelerations are fused. Is there any reason to do this?
Commented by tuandl on 2018-03-03 10:22:05 UTC
For imu0 you also fuse velocities, which may not be a good idea because the IMU simply integrates accelerations to generate those values; you end up fusing duplicate measurements. Finally, you set both the differential and relative options to true. Is there any reason to do that?
Commented by tuandl on 2018-03-03 10:29:38 UTC
I would suggest you follow #q247116 and the r_l wiki, since they provide a detailed setup for r_l.
Commented by tuandl on 2018-03-03 10:32:11 UTC
Do not try to fuse everything at once. Fuse one sensor at a time and make sure r_l outputs what you expect. Most importantly, you have to tune the process noise and state estimation error covariances to fit your sensors. It is tedious but unavoidable.
Commented by tuandl on 2018-03-03 10:36:42 UTC
Thanks for your help. I am noting the things you have pointed out, and you are right, I should use tf to help me identify the correct tf frames. As for your second point, how should I fuse odom0? I thought that since the camera has VO it might be best to use that. As for the two IMU setups, it is...
Commented by hdbot on 2018-03-03 15:53:05 UTC
...because the wheelchair is going to be on a slope, and in that case the acceleration might be non-linear due to the gravitational force. However, I am not sure whether what I had in mind is the ideal setup. As for fusing velocities, do you suggest fusing accelerations instead?
Commented by hdbot on 2018-03-03 15:57:16 UTC
IMO, if you don't have an accurate GPS/RTK-GPS to give absolute pose, then just fuse velocity from your odometry.
Again, IMO, this is unnecessary unless you always move on elevated terrain.
Commented by tuandl on 2018-03-05 05:46:31 UTC
Odometry likely reports reliable velocity measurements; an IMU likely reports reliable acceleration and gyro measurements. So I think it's best to stick with the reliable measurements only.
Commented by tuandl on 2018-03-05 05:48:42 UTC
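One reading of this advice as robot_localization config fragments might look like the sketch below (an illustration, not a verified setup for the poster's robot): take only linear velocities from the visual odometry, and only the gyro rates and linear accelerations from the IMU.

```xml
<!-- Sketch only: fuse VO velocities plus IMU gyro/accelerometer data. -->
<!-- x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az -->
<rosparam param="odom0_config">[false, false, false,
                                false, false, false,
                                true,  true,  true,
                                false, false, false,
                                false, false, false]</rosparam>
<rosparam param="imu0_config">[false, false, false,
                               false, false, false,
                               false, false, false,
                               true,  true,  true,
                               true,  true,  true]</rosparam>
```

When fusing accelerations this way, `imu0_remove_gravitational_acceleration` should stay enabled so gravity is not integrated into the velocity estimate.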