I am still not really using ROS 2, and I haven't played with bag data therein, but can you augment your question by showing the first, say, 3 messages in the bag file from each of your input sources?

Also, a side note: do your measurement sources produce velocities as well as pose? You have differential mode turned on everywhere, which just differentiates the pose data and passes it to the core filters as velocity data. In this case, I'd fuse absolute pose data from my most accurate source (the camera) and velocity from the wheel encoders. But I am not familiar with the sensors in question.
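
In essence, differential mode does something like the following with consecutive pose measurements. This is a simplified planar sketch (the real robot_localization implementation also handles 3D and covariances), with made-up numbers:

```python
import math

def differentiate_pose(x1, y1, yaw1, x2, y2, yaw2, dt):
    """Approximate body-frame velocity from two consecutive planar poses.

    Simplified sketch of what differential mode does: differentiate
    pose data and hand the result to the filter as velocity data.
    """
    # World-frame displacement between the two poses
    dx, dy = x2 - x1, y2 - y1
    # Rotate the displacement into the robot's (previous) body frame
    c, s = math.cos(-yaw1), math.sin(-yaw1)
    vx = (c * dx - s * dy) / dt
    vy = (s * dx + c * dy) / dt
    # Yaw difference wrapped to [-pi, pi)
    dyaw = (yaw2 - yaw1 + math.pi) % (2 * math.pi) - math.pi
    return vx, vy, dyaw / dt

# Robot drives 0.5 m forward along world x over 1 s, starting at yaw = 0
print(differentiate_pose(0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 1.0))  # (0.5, 0.0, 0.0)
```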

EDIT in response to question update:

I just noticed this in your config:

base_link_frame: odom_frame

That's not going to work. You should use

base_link_frame: agv_base_link

You might want to read a bit more about ROS coordinate frames in REP-105.

Anyway, if you make that change, you can fuse the velocities from the wheel encoders directly. Before you do, though, you should fill in the X, Y, and yaw covariances for your wheel encoder velocity data.
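
For reference, the twist covariance in a nav_msgs/Odometry message is a row-major 6x6 matrix over (x, y, z, roll, pitch, yaw), so the X, Y, and yaw variances sit at diagonal indices 0, 7, and 35. A sketch with hypothetical variance values (tune them for your encoders):

```python
# nav_msgs/Odometry twist covariance: row-major 6x6 over
# (x, y, z, roll, pitch, yaw). Diagonal index for variable i is i * 6 + i.
twist_covariance = [0.0] * 36
twist_covariance[0] = 0.05   # var(vx)       -- hypothetical value
twist_covariance[7] = 0.05   # var(vy)       -- hypothetical value
twist_covariance[35] = 0.02  # var(yaw rate) -- hypothetical value

# In a real node, you would assign this list to
# odom_msg.twist.covariance before publishing.
```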

EDIT in response to comments:

No, sorry, I don't agree. Try this setup:

ekf_filter_node:
  ros__parameters:
    frequency: 30.0

    two_d_mode: true

    odom0: /odom     #wheel_odometry
    odom0_config: [false, false, false,  # Note the change to this config
                   false, false, false,
                   true, true, false,
                   false, false, true,
                   false, false, false]

    odom0_differential: false

    odom1: /camera/odom/sample  #VIO
    odom1_config: [true, true, false,  # Note the change to this config
                   false, false, true,
                   false, false, false,
                   false, false, false,
                   false, false, false]

    odom1_differential: false

    odom_frame: odom_frame          # Note this change
    base_link_frame: base_link_agv  # Note this change
    world_frame: odom
    publish_tf: true
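
For anyone decoding the boolean vectors above: each odomN_config is a 15-element list over the state variables in a fixed order. A small sketch to map names to entries:

```python
# robot_localization sensor config order: 15 booleans over these variables
STATE_ORDER = ["x", "y", "z",
               "roll", "pitch", "yaw",
               "vx", "vy", "vz",
               "vroll", "vpitch", "vyaw",
               "ax", "ay", "az"]

def fused_variables(config):
    """Return the names of the state variables a sensor config fuses."""
    return [name for name, on in zip(STATE_ORDER, config) if on]

# odom0 above: fuse vx, vy, and vyaw from the wheel encoders
odom0_config = [False, False, False,
                False, False, False,
                True,  True,  False,
                False, False, True,
                False, False, False]
print(fused_variables(odom0_config))  # ['vx', 'vy', 'vyaw']
```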

You have two coordinate frames that matter in your setup:

  1. odom_frame - This is your world coordinate frame. If your robot moves from its starting pose and drives 10 meters forward, it will be at position (10, 0) in this frame. All of your sensor messages, both wheel odometry and your camera, list this as their world frame (this is slightly inaccurate, FYI: your wheel odometry and camera data won't agree over time, so they really shouldn't have the same frame_id in their messages).
  2. base_link_agv - This is the coordinate frame that is rigidly affixed to your robot, e.g., the coordinate (1, 0) is always 1 meter in front of your robot. All velocities should be specified in this coordinate frame for your EKF.
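
To make that distinction concrete, here is a planar sketch of how a point expressed in the body frame maps into the world frame (hypothetical numbers):

```python
import math

def body_to_world(px_body, py_body, robot_x, robot_y, robot_yaw):
    """Express a point given in the robot's body frame in the world frame."""
    c, s = math.cos(robot_yaw), math.sin(robot_yaw)
    return (robot_x + c * px_body - s * py_body,
            robot_y + s * px_body + c * py_body)

# Robot at (10, 0) in odom_frame, facing +x. The point 1 m in front of
# the robot -- (1, 0) in base_link_agv -- is (11, 0) in odom_frame.
print(body_to_world(1.0, 0.0, 10.0, 0.0, 0.0))  # (11.0, 0.0)
```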

All pose data for the EKF will be transformed from the frame_id in the sensor message to the world_frame in your EKF. If those frames aren't the same, you need to provide a transform.

All velocity data for the EKF will be transformed from the child_frame_id in the sensor message to the base_link_frame in your EKF. If those frames aren't the same, you need to provide a transform.
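
As a planar sketch of that velocity transform (ignoring any lever-arm contribution from angular velocity, and assuming a hypothetical 90-degree sensor mounting):

```python
import math

def sensor_twist_to_base(vx_s, vy_s, mount_yaw):
    """Rotate a planar linear velocity from the sensor's child frame
    into base_link, given the sensor's mounting yaw on the robot.
    Angular velocity about z is unchanged by a pure yaw rotation."""
    c, s = math.cos(mount_yaw), math.sin(mount_yaw)
    return (c * vx_s - s * vy_s, s * vx_s + c * vy_s)

# Sensor mounted rotated 90 degrees left: its +x axis points along the
# robot's +y axis, so 1 m/s along sensor x is 1 m/s along base_link y.
vx_b, vy_b = sensor_twist_to_base(1.0, 0.0, math.pi / 2)
print(round(vx_b, 9), round(vy_b, 9))  # 0.0 1.0
```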

Note that I set the frames according to what you are reporting in your sensor data. Normally, I'd tell you to change your sensor messages so they are reporting frame_id: odom and child_frame_id: base_link, and then I'd set odom_frame: odom and base_link_frame: base_link in your EKF config.

EDIT 2 in response to updates:

You didn't quite make the changes I suggested. Please do the following:

  1. Copy-and-paste this config (note the comments where I point out the differences):

    ekf_filter_node:
      ros__parameters:
        frequency: 30.0
    
        two_d_mode: true
    
        odom0: /odom     #wheel_odometry
        odom0_config: [false, false, false,
                       false, false, false,
                       true, true, false,
                       false, false, true,    # Changed from yours
                       false, false, false]
    
        odom0_differential: false
    
        odom1: /camera/odom/sample  #VIO
        odom1_config: [true, true, false,
                       false, false, true,
                       false, false, false,
                       false, false, false,
                       false, false, false]
    
        odom1_differential: false
    
        odom_frame: odom               # Changed from yours
        base_link_frame: base_link 
        world_frame: odom
        publish_tf: true
    
  2. Change your camera messages so the header looks like this (ignore the time stamp, obviously):

    header:
      stamp:
        sec: 1594901680
        nanosec: 839942118
      frame_id: odom   # This should NOT be base_link
    child_frame_id: camera_pose_optical_frame
    

The only question I have is regarding your camera's frame_id. Try that value and see how it behaves, but it should not be base_link. The frame_id of the camera data will be world-fixed, whereas base_link is rigidly affixed to your robot.