Using only velocity in robot_localization

asked 2021-01-20 16:26:42 -0500 by pfedom, updated 2021-01-25 12:28:59 -0500

Hey guys,

I'm using the robot_localization EKF on ROS 2 Foxy to fuse two odometry sources. I am working in 2D, so only x, y, and yaw are used and two_d_mode is set to true. I was trying to fuse only the velocities, because the best practice (according to the docs) is to fuse the velocity data if your odometry gives you both pose and velocity. If I do so, the filter doesn't publish any odometry output (even with the sensor's _relative parameter set to true).

I tried using x_vel, y_vel, and yaw_vel on both sources, and also combinations such as only x_vel on the first and the other two on the second odometry source.

When I use the position of one odometry source and the velocity of the other, it works fine, as expected, so the setup itself should be alright.

Do I always have to fuse position data to get it working? Shouldn't the EKF work without position data when the relative parameter is true and the state starts at 0?

Thanks for your help in advance!

Update with my settings and sample sensor messages: The only message I get when trying to fuse two velocities with the _relative parameter set to true is the following (the same happens with the parameter off, but I don't see how it could work without a relative state, so turning it on seemed like the right move). This message does not appear when I use pose data from one of the sensors.

[ekf_node-1] Warning: Invalid frame ID "base_link" passed to canTransform argument target_frame - frame does not exist
[ekf_node-1]          at line 133 in /home/dominik/ros2_ws/src/geometry2/tf2/src/buffer_core.cpp

My settings: I cut off everything below, starting with the use_control parameter (which is false), so the thread doesn't get longer than needed. Everything down there is untouched, i.e. left at its defaults.

ekf_filter_node:
    ros__parameters:
        frequency: 30.0

        sensor_timeout: 0.1

        two_d_mode: true

        transform_time_offset: 0.0

        transform_timeout: 0.0

        print_diagnostics: true

        debug: false

        debug_out_file: /home/dominik/debug/robot_localization_debug.txt

        publish_tf: true

        publish_acceleration: true

        #map_frame: map              # Defaults to "map" if unspecified
        odom_frame: odom            # Defaults to "odom" if unspecified
        base_link_frame: base_link  # Defaults to "base_link" if unspecified
        world_frame: odom           # Defaults to the value of odom_frame if unspecified

        odom0: camera/odom

        # Fusion order: [x, y, z, roll, pitch, yaw,
        #                vx, vy, vz, vroll, vpitch, vyaw,
        #                ax, ay, az] -- only vx, vy, and vyaw are fused here
        odom0_config: [false, false, false,
                       false, false, false,
                       true, true, false,
                       false, false, true,
                       false, false, false]
        odom0_queue_size: 2
        odom0_nodelay: false
        odom0_differential: false
        odom0_relative: true
        odom0_pose_rejection_threshold: 5.0
        odom0_twist_rejection_threshold: 1.0

        odom1: wheel/odom
        odom1_config: [false, false, false,
                       false, false, false,
                       true, true, false,
                       false, false, true,
                       false, false, false]
        odom1_differential: false
        odom1_relative: true
        odom1_queue_size: 2
        odom1_pose_rejection_threshold: 2.0
        odom1_twist_rejection_threshold: 0.2
        odom1_nodelay: false

Sample from /camera/odom

header:
  stamp:
    sec: 1611602305
    nanosec: 640852033
  frame_id: odom
child_frame_id: camera_base_link
pose:
  pose:
    position:
      x: -0.38118633360398285
      y: -3.465815506686516e-05
      z: -0.05412765641582427
    orientation:
      x: 0.00031491834226558634
      y: 0.006867443114608576
      z: 0.0003045650621801162
      w: 0.9999763228655107
  covariance:
  - 1.0
  - 0.0
  - 0.0
  - 0.0
  - 0.0
  - 0.0
  - 0.0
  - 1.0
  - 0.0
  - 0.0
  - 0.0
  - 0.0
  - 0.0
  - 0.0
  - 1.0
  - 0.0
  - 0.0
  - 0 ...

Comments

@pfedom the "relative" parameter specifies:

  "If this parameter is set to true, then any measurements from this sensor will be fused relative to the first measurement received from that sensor."

Why do you think that is related to your problem?

JackB (2021-01-22 08:09:01 -0500)

My thought was that the Kalman filter needs an initial pose, normally given by the pose of an odometry source. But if I only fuse velocities, it doesn't know where to start. Anyway, it didn't work. Do you know how to get it going without pose information from a sensor?

pfedom (2021-01-22 08:16:06 -0500)

Unless you specify the ~initial_state parameter, it just starts everything at 0, so it always knows where to start. I really don't think you need pose information. I can't say exactly what the problem is from the information provided here, and I don't have time to run it myself right now to check.

The state equation for the position state is essentially Newton's law of motion:

  x_{i+1} = x_i + v_x * Δt + 0.5 * a_x * Δt^2

As you can see, the position x_{i+1} is clearly linearly related to v_x, so it seems like the mistake is somewhere else in the pipeline.
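
(For illustration only, not robot_localization's actual code: a toy Euler integration of that equation, assuming a zero initial state and a made-up constant 0.5 m/s velocity reading at the 30 Hz rate from your config. It shows that velocity-only measurements are enough to drive the position estimate.)

    # Toy integration of the motion model above (not robot_localization code).
    dt = 1.0 / 30.0      # filter frequency from the config
    x, a_x = 0.0, 0.0    # position starts at zero, no acceleration input

    for _ in range(30):  # one second of velocity-only measurements
        v_x = 0.5        # pretend the odometry reports 0.5 m/s forward
        x += v_x * dt + 0.5 * a_x * dt**2

    print(x)             # ~0.5 m after one second, without any pose input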

JackB (2021-01-22 08:31:24 -0500)

You are completely correct with your points. That's why I thought it should work.

pfedom (2021-01-22 08:45:29 -0500)

Please post your full config, as well as a sample message from every sensor input.

Tom Moore (2021-01-25 11:27:08 -0500)

Updated my question. If you need more information, just ping me here again.

pfedom (2021-01-25 12:30:16 -0500)

As stated in the docs, unless you are a power user, do not use the rejection thresholds, at least not to start.

JackB (2021-01-25 13:39:08 -0500)

I have not changed this. That's the default setting.

pfedom (2021-01-25 13:54:15 -0500)

1 Answer


answered 2021-01-29 08:02:35 -0500 by Tom Moore

Here is your EKF config:

    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom

So you are telling it that your world frame is odom and your body frame is base_link.

Then you give it measurements. In a nav_msgs/Odometry message, the pose data is reported in the header.frame_id frame, and the velocity data is reported in the child_frame_id frame. The pose data will always need to be transformed to your world frame (odom) before the filter can use it, and the velocity data will always need to be transformed to your body frame (base_link).

Your camera pose data is also in the odom frame, so the filter is happy to use it (no transform is needed, because that frame matches your EKF config). However, your camera velocity data is in the camera_base_link frame, so you need to provide a transform from base_link->camera_base_link, or the EKF won't know how to transform that data. Use a static_transform_publisher or something similar.

header:
  stamp:
    sec: 1611602305
    nanosec: 640852033
  frame_id: odom  # This matches your EKF config, so no transform is needed
child_frame_id: camera_base_link  # This requires a transform to your base_link frame
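
For example, something like this untested Python launch sketch (the file name is arbitrary and the zero offsets are placeholders for the camera's real mounting position and orientation on your robot); a plain `ros2 run tf2_ros static_transform_publisher` call with the same arguments works just as well:

    # static_tf.launch.py -- sketch only; replace the zeros with the camera's
    # actual mounting offset (x y z in metres, yaw pitch roll in radians).
    from launch import LaunchDescription
    from launch_ros.actions import Node


    def generate_launch_description():
        return LaunchDescription([
            Node(
                package='tf2_ros',
                executable='static_transform_publisher',
                name='base_link_to_camera_base_link',
                # Foxy-style positional arguments: x y z yaw pitch roll parent child
                arguments=['0', '0', '0', '0', '0', '0',
                           'base_link', 'camera_base_link'],
            ),
        ])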

Similarly, your wheel encoder velocity data is in a frame called wheel_base_link, so you need to provide the transform from base_link->wheel_base_link (or change the wheel odom child_frame_id to base_link).

header:
  stamp:
    sec: 1611604450
    nanosec: 440700380
  frame_id: odom
child_frame_id: wheel_base_link  # This requires a transform to your base_link frame
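
If you take the second option but can't change the child_frame_id in the wheel odometry driver itself, one possible workaround (a hypothetical sketch, not something shipped with robot_localization) is a tiny relay node that republishes the wheel odometry with child_frame_id set to base_link. This is only legitimate if the wheel odometry twist really is expressed in the robot's body frame, which is usually the case for wheel odometry. The output topic name below is made up, and odom1 in your EKF config would then have to point at it instead of wheel/odom:

    # wheel_odom_relay.py -- hypothetical workaround; only valid if the wheel
    # odometry twist is already expressed in the robot's body frame.
    import rclpy
    from rclpy.node import Node
    from nav_msgs.msg import Odometry


    class WheelOdomRelay(Node):
        def __init__(self):
            super().__init__('wheel_odom_relay')
            self.pub = self.create_publisher(Odometry, 'wheel/odom_base_link', 10)
            self.sub = self.create_subscription(Odometry, 'wheel/odom', self.relay, 10)

        def relay(self, msg):
            msg.child_frame_id = 'base_link'  # relabel the twist frame
            self.pub.publish(msg)


    def main():
        rclpy.init()
        rclpy.spin(WheelOdomRelay())
        rclpy.shutdown()


    if __name__ == '__main__':
        main()

Otherwise, a second static_transform_publisher node for base_link->wheel_base_link, analogous to the camera one above, is the simpler route.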

Comments

That should do the job. Thanks.

Just to make sure I understand you correctly: my sensors should publish with frame_id: odom and child_frame_id set to their own frame. Then I need a static transform from base_link to each of my sensors' child frames, so in my case base_link -> camera_base_link and base_link -> wheel_base_link? And robot_localization will then take care of the transform from odom to base_link.

Please correct me if I am wrong! Wouldn't changing the child_frame_id of both sensors to base_link, or setting up a transform from both to base_link, lead to both sensors providing an odom -> base_link transform with different values, because they diverge over time? Doesn't that also crash?

pfedom (2021-01-29 09:12:43 -0500)

Your understanding (the first question) is correct.

Publishing an odom message is not the same as publishing a transform. Your sensors should not publish the odom->base_link transform. If they are configured to do so, you should disable that.

Tom Moore (2021-02-25 00:32:25 -0500)
