Robotics StackExchange | Archived questions

In robot_localization, why does differential mode not use static transforms?

Hello! My setup is ROS Kinetic with the Kinetic release of robot_localization on Ubuntu 16.04. I am running into a problem using the differential parameter in the robot_localization package. Below are my launch file and a YAML file with the parameters. I am using the differential parameter for my visual odometry, which outputs a PoseWithCovarianceStamped (I know this is an unusual output for visual odometry, but it is what I have) that I pass as pose0 to the EKF filter I am using (ekf_se_odom). I want the differential data so that, if the visual odometry resets, it will not start providing data based on an origin centered at its location when it re-initializes.

However, when I turn the differential parameter on, the filter ignores the 180-degree roll I am applying with a static_transform_publisher (in the launch file below) from my visual odometry sensor. As a result, my y-axis values get flipped and yaw changes in the wrong direction. I traced the code that handles this in the robot_localization package, specifically ros_filter.cpp starting at line 2542:

// Handle issues where frame_id data is not filled out properly
// @todo: verify that this is necessary still. New IMU handling may
// have rendered this obsolete.
std::string finalTargetFrame;
if (targetFrame == "" && msg->header.frame_id == "")
{
  // Blank target and message frames mean we can just
  // use our world_frame
  finalTargetFrame = worldFrameId_;
  poseTmp.frame_id_ = finalTargetFrame;
}
else if (targetFrame == "")
{
  // A blank target frame means we shouldn't bother
  // transforming the data
  finalTargetFrame = msg->header.frame_id;
  poseTmp.frame_id_ = finalTargetFrame;
}
else
{
  // Otherwise, we should use our target frame
  finalTargetFrame = targetFrame;
  poseTmp.frame_id_ = (differential ? finalTargetFrame : msg->header.frame_id);
}

The ternary expression on the last line suggests that when differential mode is on, the source and target frames are both set to targetFrame (which I traced to be the filter's world frame). In my case this results in an odom-to-odom transformation, so my static transform is never applied.

So my question is why is this the case? Does anybody know the reasoning behind this? Please let me know if any more information is needed from me.

Launch File:

<launch>

  <!-- starting middleman covariance nodes (allows setting of covariances after data collection) -->
  <rosparam command="load" file="$(find tractor_localization)/params/dual_ekf_navsat_IMU.yaml" />
  <node pkg="pose_covariances" type="pose_covariances.py" name="pose_covariances" clear_params='true' output="screen" />
  <node pkg="imu_covariances" type="imu_covariances.py" name="imu_covariances1" clear_params='true' output="screen" />
  <node pkg="pose_covariances" type="pose_to_odom.py" name="pose_to_odom" clear_params='true' output="screen" />

   <!-- starting static transform publishers -->
  <node pkg="tf" type="static_transform_publisher"
    name="gps_transform"
    args="0.055 -0.095 0 0 0 0 base_link gps 30" /> <!-- should this be to the map frame?? -->

  <node pkg="tf" type="static_transform_publisher"
    name="imu_transform"
    args="-0.068 -0.062 0 -1.763 0 0 base_link base_link_imu1 30" />


  <node pkg="tf" type="static_transform_publisher"
    name="svo_transform_odom"
    args="0.023 0.1 0 0 0 3.1415 odom cam0 30" />

    <!-- x y z yaw pitch roll frame_id child_frame_id  period_in_ms -->

  <!-- starting Kalman filter nodes -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_se_odom" clear_params="true"/>

  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_se_map" clear_params="true">
    <remap from="odometry/filtered" to="odometry/filtered_map"/>
  </node>

  <node pkg="robot_localization" type="navsat_transform_node" name="navsat_transform_node" clear_params="true">
    <remap from="odometry/filtered" to="odometry/filtered_map"/>
    <remap from="imu/data" to="imu_covar1" />
    <remap from="gps/fix" to="fix" />
  </node>

</launch>

My parameters:

ekf_se_odom:
  frequency: 30
  sensor_timeout: 0.1
  two_d_mode: true
  transform_time_offset: 0.0
  transform_timeout: 0.0
  print_diagnostics: true
  debug: false

  map_frame: map
  odom_frame: odom
  base_link_frame: base_link
  world_frame: odom

  # odom0: /odometry/svo
  # odom0_config: [false, false, false,
  #                false, false, false,
  #                true,  true,  true,
  #                false, false, false,
  #                false, false, false]
  # odom0_queue_size: 10
  # odom0_nodelay: true
  # odom0_differential: false
  # odom0_relative: false

  pose0: /pose_covar
  pose0_config: [true,  true,  true,
                 true,  true,  true,
                 false, false, false,
                 false, false, false,
                 false, false, false]
  pose0_nodelay: false
  pose0_differential: true 
  pose0_relative: false
  pose0_queue_size: 10

  imu0: /imu_covar1
  imu0_config: [false, false, false,
                true,  true,  false,
                false, false, false,
                true,  true,  true,
                true,  true,  true]
  imu0_nodelay: false
  imu0_differential: false
  imu0_relative: false
  imu0_queue_size: 10
  imu0_remove_gravitational_acceleration: true
  reset_on_time_jump: true
  use_control: false

  process_noise_covariance: [1e-3, 0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                             0,    1e-3, 0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                             0,    0,    1e-3, 0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                             0,    0,    0,    0.1,  0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                             0,    0,    0,    0,    0.1,  0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                             0,    0,    0,    0,    0,    0.01, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                             0,    0,    0,    0,    0,    0,    0.5,   0,     0,    0,    0,    0,    0,    0,    0,
                             0,    0,    0,    0,    0,    0,    0,     0.5,   0,    0,    0,    0,    0,    0,    0,
                             0,    0,    0,    0,    0,    0,    0,     0,     0.5,  0,    0,    0,    0,    0,    0,
                             0,    0,    0,    0,    0,    0,    0,     0,     0,    0.3,  0,    0,    0,    0,    0,
                             0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0.3,  0,    0,    0,    0,
                             0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0.3,  0,    0,    0,
                             0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0.3,  0,    0,
                             0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0.3,  0,
                             0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0.3]

  initial_estimate_covariance: [1e-9, 0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    1e-9, 0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    1e-9, 0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    1.0,  0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    1.0,  0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    1e-9, 0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    1.0,  0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    1.0,  0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    1.0,  0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    1.0,   0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     1.0,   0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     1.0,   0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     1.0,  0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    1.0,  0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    1.0]

Asked by Louis Stromeyer on 2019-07-23 11:50:17 UTC

Comments

Answers

Let me start by defining a few things.

First, we'll call the pose at time 0 p0, and the pose at time 1 p1. We could just as easily use N and N+1; I'm just trying to refer to consecutive poses from your sensor. The measurements arrive at times t0 and t1, respectively.

Now, let's say you have a system like this:

(figure: the robot's consecutive poses p0 and p1, drawn in two world-fixed frames: odom, with z pointing out of the page, and pose_frame, with z pointing into the page)

Here, your odom frame is your EKF world frame, and pose_frame is some other world-fixed frame in which your robot's pose is reported. This could come from, e.g., a motion capture system or an indoor GPS system. For this example, let's assume it comes from some sensor that is not affixed to the robot, and that measures the robot's full 3D pose. Note that these are both right-handed frames (and therefore compliant with ROS standards); it's just that the odom frame has z pointing "out" of the page, and pose_frame has z pointing "into" the page.

In differential mode, the EKF computes a delta from two consecutive poses as:

delta1 = p0^-1 * p1

We then convert that to a velocity:

v1 = delta1 / (t1 - t0)

We then feed that data to the filter as a velocity, not as a pose.

Note that as soon as we compute that delta, both delta1 and v1 are no longer in the original coordinate frame: they are actually in the base_link frame of the robot itself. This is correct for velocity data.

EDIT: It's really the case that differentiating two poses will produce a velocity in the sensor frame. It's just that in my example, the sensor in question is measuring the pose of the robot.

Referring back to the figure, we can see that:

  1. In the odom frame, p0 is at position (2, 2, 0) with Euler roll, pitch, yaw values of (0, 0, 0)
  2. In the odom frame, p1 is at position (5, 4, 0) with Euler roll, pitch, yaw values of (0, 0, pi / 2)
  3. In the pose_frame frame, p0 is at position (14, 9, 0) with Euler roll, pitch, yaw values of (pi, 0, -pi/2) * We'll revisit these - this is where the trouble lies
  4. In the pose_frame frame, p1 is at position (12, 6, 0) with Euler roll, pitch, yaw values of (pi, 0, pi)

Here's a code snippet that uses tf2 to compute the deltas:

{
  tf2::Transform p0;
  p0.setOrigin(tf2::Vector3(2.0, 2.0, 0.0));
  p0.setRotation(tf2::Quaternion::getIdentity());

  tf2::Transform p1;
  tf2::Quaternion quat1;
  quat1.setRPY(0.0, 0.0, M_PI / 2.0);
  p1.setOrigin(tf2::Vector3(5.0, 4.0, 0.0));
  p1.setRotation(quat1);

  auto delta1 = p0.inverseTimes(p1);

  double r, p, y;
  tf2::getEulerYPR(delta1.getRotation(), y, p, r);
  std::cout << "Delta is (" << delta1.getOrigin().getX() << ", " << delta1.getOrigin().getY() << ", " << delta1.getOrigin().getZ() << "), (" << r << ", " << p << ", " << y << ")\n";
}

{
  tf2::Transform p0;
  tf2::Quaternion quat0;
  quat0.setRPY(M_PI, 0.0, -M_PI / 2.0);  // Note the roll angle of pi
  p0.setOrigin(tf2::Vector3(14.0, 9.0, 0.0));
  p0.setRotation(quat0);

  tf2::Transform p1;
  tf2::Quaternion quat1;
  quat1.setRPY(M_PI, 0.0, M_PI);  // Note the roll angle of pi
  p1.setOrigin(tf2::Vector3(12.0, 6.0, 0.0));
  p1.setRotation(quat1);

  auto delta1 = p0.inverseTimes(p1);

  double r, p, y;
  tf2::getEulerYPR(delta1.getRotation(), y, p, r);
  std::cout << "Delta is (" << delta1.getOrigin().getX() << ", " << delta1.getOrigin().getY() << ", " << delta1.getOrigin().getZ() << "), (" << r << ", " << p << ", " << y << ")\n";
}

The output is:

Delta is (3, 2, 0), (0, -0, 1.5708)
Delta is (3, 2, 2.44929e-16), (-1.22465e-16, -1.22465e-16, 1.5708)

Those deltas are the same. So as long as the robot's pose is reported consistently in both frames, the differentiation logic will work the same. The actual transform from pose_frame to odom is irrelevant, because the differentiation puts everything in the robot's base_link frame.

The issue in your case is that your sensor's coordinate frame is upside-down, but the poses it's generating for your robot assume the robot is right-side-up within that frame. In other words, if we revise (3) and (4) above, it would be this:

  1. In the pose_frame frame, p0 is at position (14, 9, 0) with Euler roll, pitch, yaw values of (0, 0, -pi/2)
  2. In the pose_frame frame, p1 is at position (12, 6, 0) with Euler roll, pitch, yaw values of (0, 0, pi)

However, your robot is really upside-down within that frame, so its pose in that frame should have the non-zero roll angles. Contrast this with a 3D motion capture system, which, if your robot was flipped upside-down within it, would detect the roll angle.

In any case, when we compute the new delta as

{
  tf2::Transform p0;
  tf2::Quaternion quat0;
  quat0.setRPY(0.0, 0.0, -M_PI / 2.0);  // Note the roll angle of 0
  p0.setOrigin(tf2::Vector3(14.0, 9.0, 0.0));
  p0.setRotation(quat0);

  tf2::Transform p1;
  tf2::Quaternion quat1;
  quat1.setRPY(0.0, 0.0, M_PI);  // Note the roll angle of 0
  p1.setOrigin(tf2::Vector3(12.0, 6.0, 0.0));
  p1.setRotation(quat1);

  auto delta1 = p0.inverseTimes(p1);

  double r, p, y;
  tf2::getEulerYPR(delta1.getRotation(), y, p, r);
  std::cout << "Delta is (" << delta1.getOrigin().getX() << ", " << delta1.getOrigin().getY() << ", " << delta1.getOrigin().getZ() << "), (" << r << ", " << p << ", " << y << ")\n";
}

we get

Delta is (3, -2, 0), (0, 0, -1.5708)

which is the issue you're seeing.

While transforming the poses before we differentiate them will still work for the use case I want, I am not yet convinced it would work in the general case for what you want. For example:

{
  // Transforms data to the odom frame from the pose frame
  tf2::Transform pose_frame_to_odom;
  tf2::Quaternion quat_pose_frame_to_odom;
  quat_pose_frame_to_odom.setRPY(M_PI, 0.0, -M_PI / 2.0);
  pose_frame_to_odom.setOrigin(tf2::Vector3(11.0, 16.0, 0.0));
  pose_frame_to_odom.setRotation(quat_pose_frame_to_odom);

  tf2::Transform p0;
  tf2::Quaternion quat0;
  quat0.setRPY(M_PI, 0.0, -M_PI / 2.0);  // Note that we have the roll angle of pi again
  p0.setOrigin(tf2::Vector3(14.0, 9.0, 0.0));
  p0.setRotation(quat0);

  p0 = pose_frame_to_odom * p0;

  double r, p, y;
  tf2::getEulerYPR(p0.getRotation(), y, p, r);
  std::cout << "p0 transformed to odom is " << p0.getOrigin().getX() << ", " << p0.getOrigin().getY() << ", " << r << ", " << p << ", " << y << "\n";

  tf2::Transform p1;
  tf2::Quaternion quat1;
  quat1.setRPY(M_PI, 0.0, M_PI);  // Note that we have the roll angle of pi again
  p1.setOrigin(tf2::Vector3(12.0, 6.0, 0.0));
  p1.setRotation(quat1);

  p1 = pose_frame_to_odom * p1;

  tf2::getEulerYPR(p1.getRotation(), y, p, r);
  std::cout << "p1 transformed to odom is " << p1.getOrigin().getX() << ", " << p1.getOrigin().getY() << ", " << r << ", " << p << ", " << y << "\n";

  auto delta1 = p0.inverseTimes(p1);

  tf2::getEulerYPR(delta1.getRotation(), y, p, r);
  std::cout << "Delta is " << delta1.getOrigin().getX() << ", " << delta1.getOrigin().getY() << ", " << r << ", " << p << ", " << y << "\n";
}

This yields

p0 transformed to odom is 2, 2, -1.22465e-16, 1.22465e-16, -1.36846e-48
p1 transformed to odom is 5, 4, 1.27036e-33, -1.6268e-32, 1.5708
Delta is 3, 2, -1.22465e-16, -1.22465e-16, 1.5708

...all of which is right. However, if we zero out the roll angles again:

{
  // Transforms data to the odom frame from the pose frame
  tf2::Transform pose_frame_to_odom;
  tf2::Quaternion quat_pose_frame_to_odom;
  quat_pose_frame_to_odom.setRPY(M_PI, 0.0, -M_PI / 2.0);
  pose_frame_to_odom.setOrigin(tf2::Vector3(11.0, 16.0, 0.0));
  pose_frame_to_odom.setRotation(quat_pose_frame_to_odom);

  tf2::Transform p0;
  tf2::Quaternion quat0;
  quat0.setRPY(0.0, 0.0, -M_PI / 2.0);
  p0.setOrigin(tf2::Vector3(14.0, 9.0, 0.0));
  p0.setRotation(quat0);

  p0 = pose_frame_to_odom * p0;

  double r, p, y;
  tf2::getEulerYPR(p0.getRotation(), y, p, r);
  std::cout << "p0 transformed to odom is " << p0.getOrigin().getX() << ", " << p0.getOrigin().getY() << ", " << r << ", " << p << ", " << y << "\n";

  tf2::Transform p1;
  tf2::Quaternion quat1;
  quat1.setRPY(0.0, 0.0, M_PI);
  p1.setOrigin(tf2::Vector3(12.0, 6.0, 0.0));
  p1.setRotation(quat1);

  p1 = pose_frame_to_odom * p1;

  tf2::getEulerYPR(p1.getRotation(), y, p, r);
  std::cout << "p1 transformed to odom is " << p1.getOrigin().getX() << ", " << p1.getOrigin().getY() << ", " << r << ", " << p << ", " << y << "\n";

  auto delta1 = p0.inverseTimes(p1);

  tf2::getEulerYPR(delta1.getRotation(), y, p, r);
  std::cout << "Delta is " << delta1.getOrigin().getX() << ", " << delta1.getOrigin().getY() << ", " << r << ", " << p << ", " << y << "\n";
}

we get this

p0 transformed to odom is 2, 2, 3.14159, 1.22465e-16, -1.58728e-48
p1 transformed to odom is 5, 4, -3.14159, -2.46519e-32, 1.5708
Delta is 3, -2, 2.54072e-33, -2.56585e-48, -1.5708

I may have made a math error in here somewhere, but I'm not yet convinced that (a) there's an issue with the code, or that (b) changing the line in question will work in the general case. But I can be convinced otherwise. :)

EDIT: I think that, given another recent issue on the GH repo, it makes sense to just transform all pose data into the target frame, and then differentiate it. If we find that fixes your issue, great. If not, we can talk about it more.

Answered by Tom Moore on 2019-08-23 12:09:10 UTC

Comments

Regardless of whether this answers the OP's question: @Tom Moore: this is a very nicely formatted, articulate and very extensive post. I take off my hat to you sir, and give you an imaginary +100 (which unfortunately still only counts as 10 points).

Commented by gvdhoorn on 2019-08-24 06:24:44 UTC

Related: https://github.com/cra-ros-pkg/robot_localization/issues/482

Commented by rmensing on 2019-08-26 04:30:26 UTC

@gvdhoorn thanks. It was as much an exercise for me to make sure I wasn't doing something stupid. And now that I've had more time to think about it, I'm not convinced that I wasn't. :)

Commented by Tom Moore on 2019-08-26 05:06:06 UTC