
Sensor Fusion: Frame Offset for robot_localization (ZED and IMU) [closed]

asked 2018-03-02 09:04:12 -0500 by hdbot, updated 2018-03-04 18:18:23 -0500

UPDATE:

Previous Question: I am a little new to the EKF world. I have been trying to follow various solutions posted by people in the ROS community to achieve the sensor fusion that is ideal for my robot. I am trying to understand how to implement sensor fusion using a ZED camera and an MPU6050 IMU.

My ZED is mounted 38" above the base (+Z direction) and 18" behind the center of the base (-X direction), angled 25 degrees down. My IMU is located 9" from the ZED (+X direction), which puts it 9" behind the center of the base (-X direction).
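For reference, these offsets can be expressed as static transforms in SI units per REP-103 (38 in = 0.9652 m, 18 in = 0.4572 m, 9 in = 0.2286 m, 25 deg = 0.4363 rad). A minimal sketch follows; the frame names base_link, camera_link, and imu_link, and the IMU's height, are assumptions, since they are not given above:

<launch>
  <!-- ZED: 0.9652 m above and 0.4572 m behind the base center, pitched 25 deg down.
       tf2_ros static_transform_publisher args: x y z yaw pitch roll parent child -->
  <node pkg="tf2_ros" type="static_transform_publisher" name="base_to_camera"
        args="-0.4572 0 0.9652 0 0.4363 0 base_link camera_link"/>

  <!-- IMU: 0.2286 m in front of the ZED, i.e. 0.2286 m behind the base center.
       Its height is not stated in the question, so z is left at 0 here. -->
  <node pkg="tf2_ros" type="static_transform_publisher" name="base_to_imu"
        args="-0.2286 0 0 0 0 0 base_link imu_link"/>
</launch>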

I have looked at:

https://answers.ros.org/question/2393...
https://answers.ros.org/question/2215...
http://docs.ros.org/kinetic/api/robot...
https://answers.ros.org/question/2167...
https://github.com/introlab/rtabmap_r...
https://github.com/introlab/rtabmap_r...

I am unable to produce the results I want. What I am trying to do: fuse the ZED camera's visual odometry (VO) with the MPU6050 IMU odometry using an EKF on a powered wheelchair, so that I can generate an elevation map of the room. For now I am bypassing the odometry generated by the wheelchair encoders.

Update: This is the code I constructed after trying to follow all the other solutions; however, I still can't get the fusion to work. I was able to run the code, but I still have issues with my tf. I am not sure how to fix it; can you tell me what needs to be done for tf? I did try to follow the tf wiki link you suggested, but I am still confused.

(attached image: tf_zed_imu.png, a view of the tf tree)
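(As a general note: a tf tree image like this can be regenerated with "rosrun tf view_frames", which listens for a few seconds and writes a frames.pdf of the current tree, or checked pair-by-pair with "rosrun tf tf_echo <source_frame> <target_frame>"; both ship with the standard ROS tf package.)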

Updated Code:

<launch>

  <arg name="rgb_frame_id"                          default="camera_link"                   />
  <arg name="rgb_topic"                             default="/camera/rgb/image_rect_color"  />
  <arg name="depth_topic"                           default="/camera/depth/depth_registered"/>
  <arg name="rgb_camera_info_topic"                 default="/camera/rgb/camera_info"       />
  <arg name="imu_topic"                             default="/imu/data"                     />
  <arg name="imu_ignore_acc"                        default="true"                          />
  <arg name="imu_remove_gravitational_acceleration" default="false"                         />

 <!-- Localization-only mode -->
  <arg name="localization"      default="false"/>
  <arg     if="$(arg localization)" name="rtabmap_args"  default=""/>
  <arg unless="$(arg localization)" name="rtabmap_args"  default="--delete_db_on_start"/>  

  <group ns="rtabmap">
    <!-- Visual Odometry -->
    <node pkg="rtabmap_ros" type="rgbd_odometry" name="visual_odometry" output="screen" args="$(arg rtabmap_args)">
      <remap from="rgb/image"       to="$(arg rgb_topic)"/>
      <remap from="depth/image"     to="$(arg depth_topic)"/>
      <remap from="rgb/camera_info" to="$(arg rgb_camera_info_topic)"/>
      <remap from="odom"            to="/vo"/>

      <param name="frame_id"               type="string" value="$(arg rgb_frame_id)"/>
      <param name="publish_tf"             type="bool"   value="false"/>
      <param name="publish_null_when_lost" type="bool"   value="true"/>
      <param name="guess_from_tf"          type="bool"   value="true"/>

      <param name="Odom/FillInfoData"      type="string" value="true"/>
      <param name="Odom/ResetCountdown"    type="string" value="1"/>
      <param name="Vis/FeatureType"        type="string" value="6"/> 
      <param name="OdomF2M/MaxSize"        type="string" value="1000"/>
    </node>

    <!-- SLAM -->
    <node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="$(arg rtabmap_args)">
      <param name="frame_id"        type="string" value="$(arg rgb_frame_id)"/>

      <remap from="rgb/image"       to="$(arg rgb_topic)"/>
      <remap from="depth/image"     to="$(arg depth_topic)"/>
      <remap from="rgb/camera_info" to="$(arg rgb_camera_info_topic)"/>
      <remap from="odom"            to="/odometry/filtered"/>

      <param name="Kp/DetectorStrategy"    type="string" value="6"/> <!-- use ...
(more)

Closed for the following reason: "question is not relevant or outdated" by hdbot. Close date: 2018-10-28 16:26:17.

Comments

I see several problems with your setup: 1. ROS employs the REP-103 and REP-105 conventions, so you need to fix those first, e.g. units are SI and coordinate frames are right-handed. Follow the tf wiki...

tuandl (2018-03-03 09:11:39 -0500)

...to make sure your tf tree is correct.

2. Configuration of robot_localization: for odom0, fusing absolute x, y, z position from VO is IMO not a good idea. VO without correction from an IMU typically loses track of the scene due to blurring, sudden movements, etc. The ZED uses a rolling shutter, which...

tuandl (2018-03-03 09:16:21 -0500)

...in my experience is not good at finding features for tracking.

For imu0: I'm not sure why you have two setups for the IMU, where the only difference is whether or not accelerations are fused. Is there any reason to do this?

tuandl (2018-03-03 09:22:05 -0500)

For imu0 you also fuse velocities, which may not be a good idea because the IMU simply integrates accelerations to generate those values; you end up fusing duplicate measurements. Finally, you set both the differential and relative options to true. Is there any reason to do that?

tuandl (2018-03-03 09:29:38 -0500)
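For illustration, a minimal sketch of the kind of robot_localization configuration this advice points toward is below. The /vo and /imu/data topic names match the launch file above; the frequency, frame names, and exact choice of fused fields are assumptions to be tuned against the actual sensors, not a definitive setup:

<node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization">
  <param name="frequency"       value="30"/>
  <param name="odom_frame"      value="odom"/>
  <param name="base_link_frame" value="base_link"/>
  <param name="world_frame"     value="odom"/>

  <!-- odom0: visual odometry. Fuse the 6-DOF pose differentially (not as
       absolute x, y, z) so VO tracking losses do not yank the filter. -->
  <param name="odom0" value="/vo"/>
  <rosparam param="odom0_config">[true,  true,  true,
                                  true,  true,  true,
                                  false, false, false,
                                  false, false, false,
                                  false, false, false]</rosparam>
  <param name="odom0_differential" value="true"/>

  <!-- imu0: fuse orientation and angular velocity only; IMU linear velocities
       are just integrated accelerations, so fusing them duplicates measurements.
       If accelerations were fused on a slope, imu0_remove_gravitational_acceleration
       should also be set to true. -->
  <param name="imu0" value="/imu/data"/>
  <rosparam param="imu0_config">[false, false, false,
                                 true,  true,  true,
                                 false, false, false,
                                 true,  true,  true,
                                 false, false, false]</rosparam>
  <param name="imu0_differential" value="false"/>
</node>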

I would suggest you follow #q247116 and the r_l wiki, since they provide a detailed setup for r_l (robot_localization).

tuandl (2018-03-03 09:32:11 -0500)

Do not try to fuse everything at once. Fuse one sensor at a time and make sure r_l outputs what you expect. Most importantly, you have to tune the sensor noise and state-estimate error covariances to make sure they fit your sensors. It is tedious but unavoidable.

tuandl (2018-03-03 09:36:42 -0500)

Thanks for your help. I am noting the things you have pointed out, and you are right, I should use tf to help me identify the correct tf frames. As for your second point, how should I fuse odom0? I thought that since the camera has VO it might be best to use that. As for the two IMU setups, it is...

hdbot (2018-03-03 14:53:05 -0500)

...because the wheelchair is going to be on a slope, and in that case the measured acceleration might be nonlinear due to gravitational force. However, I am not sure whether the setup I thought of is ideal. As for fusing velocities, do you suggest fusing accelerations instead?

hdbot (2018-03-03 14:57:16 -0500)

1 Answer


answered 2018-03-08 09:45:24 -0500 by Tom Moore

Wow, there's a lot going on here. If I were you, I would start small, and work my way up. I'd get those wheel encoders involved again, and start with an extremely simple EKF setup that just fuses the wheel encoder data with the IMU data. Make sure that is behaving and doing what you expect, then add a new sensor. Make sure you know the coordinate frames of all your sensor data and have your transforms set up accordingly.
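A minimal two-sensor starting point along those lines might look like the sketch below. The /wheel_odom topic name is hypothetical (the wheelchair encoder topic is not given in this thread), /imu/data matches the launch file above, and two_d_mode is an assumption suitable for a first flat-ground test:

<node pkg="robot_localization" type="ekf_localization_node" name="ekf_se">
  <param name="frequency"  value="30"/>
  <param name="two_d_mode" value="true"/>

  <!-- Wheel encoders: fuse forward (vx) and yaw velocity only. -->
  <param name="odom0" value="/wheel_odom"/>
  <rosparam param="odom0_config">[false, false, false,
                                  false, false, false,
                                  true,  false, false,
                                  false, false, true,
                                  false, false, false]</rosparam>

  <!-- IMU: fuse yaw and yaw velocity. -->
  <param name="imu0" value="/imu/data"/>
  <rosparam param="imu0_config">[false, false, false,
                                 false, false, true,
                                 false, false, false,
                                 false, false, true,
                                 false, false, false]</rosparam>
</node>

Once this behaves as expected, the VO and the additional transforms can be layered back in one sensor at a time, as the comments above also suggest.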


Comments

Hey, Tom. Can you help me see where the problem is? Thanks: https://answers.ros.org/question/2843...

omoyouo (2018-03-09 03:37:47 -0500)


Stats

Asked: 2018-03-02 09:04:12 -0500

Seen: 1,060 times

Last updated: Mar 08 '18