
robot_localization doesn't work with a rosbag file

asked 2020-07-14 01:49:41 -0600

pfedom

updated 2020-07-16 06:08:20 -0600

I want to fuse the visual odometry of my Intel RealSense T265 camera with the wheel odometry of my AGV (Mecanum wheels). I am using the ekf_node with default covariances and the default launch file on ROS 2 Eloquent (files at the bottom). If I run it on the robot itself, everything works fine: the output of /odometry/filtered is calculated as it should be. Now the problem: if I record the exact same data from the robot with rosbag2 and then try to run robot_localization on my VM, the estimated position goes crazy (values over 40000 in x, for example). I record everything with the rosbag, so there is no difference except for the EKF node not running, of course.

As far as I can tell, the error occurs every time I try to use the visual odometry signal, or if I run the EKF a second time after stopping it. After a reboot of the VM, the first run works if only the wheel odometry signal is fed into the EKF. I am not sure whether that is related or random.

I already tried swapping odom0 with odom1 and trying out different combinations in the config matrices, such as fusing position from both, or x from one and y from the other. Just experiments, but nothing helped.

Is this a known issue? Did I miss a setting I need to get this going? A colleague mentioned it could be an issue with the recorded time.
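If the recorded time really is the culprit, one quick sanity check is to scan the message stamps coming out of the bag for gaps or backward jumps: the EKF integrates between stamps, so a looping bag or a clock reset produces a non-positive or huge dt and wildly wrong estimates. A minimal standalone sketch of such a check (plain Python, not robot_localization code; the stamp values and the helper name are made up for illustration):

```python
def find_time_jumps(stamps, max_gap=0.5):
    """Return (index, dt) for every non-positive or oversized gap
    between consecutive message timestamps (seconds)."""
    jumps = []
    for i in range(1, len(stamps)):
        dt = stamps[i] - stamps[i - 1]
        if dt <= 0.0 or dt > max_gap:
            jumps.append((i, dt))
    return jumps

# A 30 Hz stream, then the bag loops and time jumps backwards:
stamps = [100.000, 100.033, 100.066, 100.100, 2.000, 2.033]
print(find_time_jumps(stamps))  # detects the backward jump at index 4
```

Feeding the stamps of each input topic through a check like this would show whether the bag replays time the way the live system produced it.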

Thanks in advance!

The launch file used:

from launch import LaunchDescription
from ament_index_python.packages import get_package_share_directory
import launch_ros.actions
import os
import yaml
from launch.substitutions import EnvironmentVariable
import pathlib
import launch.actions
from launch.actions import DeclareLaunchArgument

def generate_launch_description():
    return LaunchDescription([
        launch_ros.actions.Node(
            package='robot_localization',
            node_executable='ekf_node',
            node_name='ekf_filter_node',
            output='screen',
            parameters=[os.path.join(get_package_share_directory("robot_localization"), 'params', 'ekf_t265.yaml')],
        ),
    ])

The YAML file:

### ekf config file ###
ekf_filter_node:
  ros__parameters:
    frequency: 30.0

    two_d_mode: true

    odom0: /odom     #wheel_odometry
    odom0_config: [false, false, false,
                   false, false, true,
                   false, false, false,
                   false, false, false,
                   false, false, false]

    odom0_differential: true

    odom1: /camera/odom/sample  #VIO
    odom1_config: [true, true, false,
                   false, false, false,
                   false, false, false,
                   false, false, false,
                   false, false, false]

    odom1_differential: true

    odom_frame: odom
    base_link_frame: odom_frame
    world_frame: odom
    publish_tf: true

Update according to comment: I changed the frames. My TF tree now looks like: odom -> base_link -> camera_pose_optical_frame. My base_link frame is now base_link, which is also the child frame of the wheel odometry. Furthermore, the transform base_link -> camera_pose_optical_frame (the child frame of the camera) should now be as you mentioned. I still get this warning: Warning: Invalid frame ID "odom" passed to canTransform argument target_frame - frame does not exist

I hope I did not misunderstand or miss something from your comment.

Here is my yaml-file:

ekf_filter_node:
    ros__parameters:
        frequency: 30.0
        use_sim_time: true

        two_d_mode: true

        odom0: /odom
        odom0_config: [false, false, false,
                       false, false, false,
                       true, true, false,
                       false, false, false,
                       false, false, false]

        odom0_differential: false

        odom1: /camera/odom/sample
        odom1_config: [true, true, false,
                       false, false, true,
                       false, false, false,
                       false ...

1 Answer


answered 2020-07-14 02:24:09 -0600

Tom Moore

updated 2020-07-16 06:28:55 -0600

I am still not really using ROS 2, and I haven't played with bag data therein, but can you augment your question by showing the first, say, 3 messages in the bag file from each of your input sources?

Also, a side note: do your measurement sources produce velocities as well as pose? You have differential mode turned on everywhere, which just differentiates the pose data and passes it to the core filters as velocity data. In this case, I'd fuse absolute pose data from my most accurate source (the camera) and velocity from the wheel encoders. But I am not familiar with the sensors in question.
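To make the differential-mode remark concrete: conceptually, differential mode drops the absolute poses and feeds the filter a finite-difference velocity instead. A standalone sketch of that idea in the planar case (plain Python, not the package's actual implementation; the numbers are invented):

```python
import math

def differentiate_pose(prev, curr):
    """prev/curr are (t, x, y, yaw) tuples. Return (vx, vy, vyaw),
    a finite-difference velocity between two absolute poses."""
    t0, x0, y0, yaw0 = prev
    t1, x1, y1, yaw1 = curr
    dt = t1 - t0
    # Wrap the yaw difference into [-pi, pi] before dividing by dt.
    dyaw = math.atan2(math.sin(yaw1 - yaw0), math.cos(yaw1 - yaw0))
    return ((x1 - x0) / dt, (y1 - y0) / dt, dyaw / dt)

# 0.05 m forward and 0.01 rad of yaw over 0.1 s:
vx, vy, vyaw = differentiate_pose((0.0, 0.0, 0.0, 0.0),
                                  (0.1, 0.05, 0.0, 0.01))
```

This also shows why bad timestamps are so destructive in differential mode: dt sits in the denominator, so a tiny or negative dt turns a small pose change into an enormous velocity.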

EDIT in response to question update:

I just noticed this in your config:

base_link_frame: odom_frame

That's not going to work. You should use

base_link_frame: agv_base_link

You might want to read a bit more about ROS coordinate frames in REP-105.

Anyway if you make that change, then you can fuse the velocities from the wheel encoders directly. Before you do, though, you should fill out the X, Y, and yaw covariance for your wheel encoder velocity data.
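On filling out the covariance: nav_msgs/Odometry stores the twist covariance as a row-major 6x6 matrix over (vx, vy, vz, roll rate, pitch rate, yaw rate). A sketch of populating the X, Y, and yaw-velocity variances (the variance values and the inflation constant for unmeasured axes are illustrative only, not recommendations):

```python
def twist_covariance(var_vx, var_vy, var_vyaw, unmeasured=1e6):
    """Row-major 6x6 covariance over (vx, vy, vz, vroll, vpitch, vyaw)."""
    cov = [0.0] * 36
    cov[0 * 6 + 0] = var_vx     # vx variance
    cov[1 * 6 + 1] = var_vy     # vy variance
    cov[5 * 6 + 5] = var_vyaw   # yaw-rate variance
    for i in (2, 3, 4):         # vz, roll rate, pitch rate: not measured,
        cov[i * 6 + i] = unmeasured  # so give them a huge variance
    return cov

cov = twist_covariance(0.01, 0.01, 0.02)
# msg.twist.covariance = cov  (on the nav_msgs/Odometry message)
```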

EDIT in response to comments:

No, sorry, I don't agree. Try this setup:

ekf_filter_node:
  ros__parameters:
    frequency: 30.0

    two_d_mode: true

    odom0: /odom     #wheel_odometry
    odom0_config: [false, false, false,  # Note the change to this config
                   false, false, false,
                   true, true, false,
                   false, false, true,
                   false, false, false]

    odom0_differential: false

    odom1: /camera/odom/sample  #VIO
    odom1_config: [true, true, false,  # Note the change to this config
                   false, false, true,
                   false, false, false,
                   false, false, false,
                   false, false, false]

    odom1_differential: false

    odom_frame: odom_frame          # Note this change
    base_link_frame: base_link_agv  # Note this change
    world_frame: odom
    publish_tf: true

You have two coordinate frames that matter in your setup:

  1. odom_frame - This is your world coordinate frame. If your robot moves from its starting pose and drives 10 meters forward, it will be at position (10, 0) in this frame. All of your sensor messages, both wheel odometry and your camera, list this as their world frame (this is slightly inaccurate, FYI: your wheel odometry and camera data won't agree over time, so they really shouldn't have the same frame_id in their messages).
  2. base_link_agv - This is the coordinate frame that is rigidly affixed to your robot, e.g., the coordinate (1, 0) is always 1 meter in front of your robot. All velocities should be specified in this coordinate frame for your EKF.

All pose data for the EKF will be transformed from the frame_id in the sensor message to the world_frame in your EKF. If those frames aren't the same, you need to provide a transform.

All velocity data for the EKF will be transformed from the child_frame_id in the sensor message to the base_link_frame in your EKF. If those frames aren't the same you need to provide a transform.
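To make that child_frame_id point concrete: in the planar case, transforming a velocity between frames is just a rotation by the relative yaw between the sensor frame and base_link. A standalone sketch (plain Python; the backwards-mounted camera is a made-up example, not a claim about the T265):

```python
import math

def rotate_velocity_2d(vx, vy, yaw):
    """Rotate a planar velocity by the yaw of the sensor frame
    relative to base_link."""
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * vx - s * vy, s * vx + c * vy)

# Suppose the camera were mounted facing backwards (yaw = pi): its
# +x velocity would be the robot's -x velocity in base_link.
vx_b, vy_b = rotate_velocity_2d(1.0, 0.0, math.pi)
```

If the transform is missing or the child_frame_id is wrong, the filter either drops the measurement (the canTransform warning) or fuses a velocity pointing the wrong way.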

Note that I set the frames according to what you are reporting in your sensor data. Normally, I'd tell you to change your sensor messages so they ...


Comments

Yes, my sensors produce velocities and poses. Thanks for the note; I didn't know that, I just used the defaults. So you would recommend differential mode = false for the camera and fusing the position there. Did I get that right?

I will edit my Question with messages from the Sensors.

pfedom (2020-07-14 02:32:06 -0600)

I'd use differential_mode: false with the camera, and also set differential_mode: false on the wheel odometry, and change the config so it fuses X, Y, and yaw velocity.

Tom Moore (2020-07-14 02:33:53 -0600)

Okay, thanks, I will change that according to your note. I added the sensor messages to the question.

pfedom (2020-07-14 02:42:10 -0600)

Correcting the names according to REP-105 is on my to-do list. Are you sure about the base_link_frame? Both messages publish with the frame_id "odom_frame". If I change base_link to agv_base_link, the odometry/filtered topic no longer outputs data.

The robot_localization settings described above work as expected on my robot; the miscalculation only appears when I use rosbag. I don't think the settings are the problem here. Don't you agree?

pfedom (2020-07-15 07:51:57 -0600)

First of all, thanks for your good explanation of the TF topic and the effort you put into my question. With your YAML I get this error when starting the EKF: Warning: Invalid frame ID "odom" passed to canTransform argument target_frame - frame does not exist. The topic odometry/filtered doesn't output data.

Two things:

  1. It has to be base_link_frame: agv_base_link instead of base_link_frame: base_link_agv, right? (I also tested it with that frame_id; same error.)

  2. Did I get this right? The camera odometry and the wheel odometry should have their own base_link frames, so they do not share a frame_id in their messages. A proper TF tree would look like: odom -> base_link -> wheel_base, camera_base. Furthermore, wheel_base -> base_link and camera_base -> base_link should be static transforms, because they do not move relative to each other. The transform odom -> base_link is then published by robot_localization. Corrections are welcome!

pfedom (2020-07-15 09:32:40 -0600)

Can you post your updated config? Did you also change your messages? If so, can you update those? It sounds like either your messages now use odom as their frame_id, or your EKF is now using odom_frame: odom, but that differs from what I posted in my config.

The wheel_base link may or may not be necessary, depending on how you configure your robot. I tend to just use odom->base_link. In any case, your EKF base_link_frame parameter value should be the same as your wheel encoder child_frame_id.

As for the camera, it's probably reporting velocity in the coordinate frame of the camera, so you'll need to define a transform from base_link->camera (or whatever you want to call the camera frame) and change the child_frame_id in your camera messages to be camera.

Tom Moore (2020-07-16 02:55:31 -0600)

I did not change the frames yet; I just wanted to summarize how it should be done. The messages are still the same. I only applied the changes in the YAML from your answer. I will make some changes now and then update my question with the new messages, the YAML, and a TF tree.

pfedom (2020-07-16 03:08:51 -0600)

Updated the question with the things you mentioned.

pfedom (2020-07-16 06:08:36 -0600)
