
It appears your stereo camera doesn't report velocities; if it did, that would be a solution, as you could then use a static transform from base_link.

The short answer is that the EKF and UKF in r_l don't do a great job of handling sensors that are measuring world-frame data from an offset position. I've run into this same issue with UUVs and depth sensors. The depth sensor is mounted at some offset from the origin, but the value that I need to subtract from that reading depends on the vehicle's pitch angle. If I just define a transform from base_link, all that will do is add the linear offset to my measurement, which is clearly not what I want. And if I define a static transform from odom, then I won't be accounting for that pitch angle.
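
To make the pitch dependence concrete, here is a minimal sketch (Python, purely illustrative numbers; the 0.5 m offset and 10 degree pitch are made up):

    import math

    x_offset = 0.5              # depth sensor mounted 0.5 m forward of base_link (assumed)
    pitch = math.radians(10.0)  # vehicle pitched nose-down by 10 degrees (assumed)

    # The sensor's vertical displacement from the origin depends on the pitch angle,
    # so the value to subtract from the depth reading is not a constant offset:
    correction = x_offset * math.sin(pitch)
    print(correction)           # ~0.087 m; a static transform can't capture this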

In any case, if you define a transform directly from odom, it can't be static, as it will be dependent on your pose in the odom frame. Relative mode won't help you either, as that's simply meant to remove the first pose measurement from a given sensor so that it effectively starts at 0. Differential mode converts subsequent poses to velocities, but then doesn't (and I'm not convinced it should) apply the transform to base_link afterwards.

So there are three options:

  1. Write a quick node that computes the odom->zed transform dynamically, and run everything else as-is
  2. Modify the camera driver to output velocities in its odometry message
  3. Add some kind of parameter to r_l for pose sensors that tells it to dynamically adjust the "static" transform from odom based on the vehicle's current pose.

EDIT in response to comments:

I think there are a couple of unknowns for me. First, you said your camera is mounted at -45 degrees pitch, which would point it at the ceiling (positive pitch is down in an ENU frame). Second, even if the camera is pointed down at 45 degrees, does that make a difference in the reported odometry? For example, if you aim the camera down 45 degrees toward the floor and drive straight forward for 10 meters, what do you get for reported odometry? Does it say you are at (10, 0, 0) or (7.07, 0, 7.07)? I think I will know how to address this once I have that information.
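
For reference, the second case would just be the 10 meters of travel resolved along a frame pitched by 45 degrees; a quick check:

    import math
    # 10 m of forward travel seen in a frame pitched by 45 degrees
    print(10 * math.cos(math.radians(45)), 10 * math.sin(math.radians(45)))  # ~7.07 ~7.07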

EDIT in response to updates:

OK, so the raw output from the camera (not passed through any transform), when you drive forward 10 meters, reports that you are at ~(7.07, 0, 7.07). There are two things to consider:

  1. Imagine your camera weren't inclined, so it was facing straight forward. Then, when driving straight forward or backward, the pose data from the camera would be correct (though instead of going from X = 0 to N, the robot's X position would be 0 - linear_offset to N - linear_offset). However, when turning, the camera would register linear motion as a result of its offset and the rotational motion. In the most extreme case, the robot is turning in place, and the camera registers its pose change along a circle of radius r, where r is the camera's X/Y distance from the center of rotation.

    Assuming you have defined a static transform from base_link->zed_frame, since the camera is measuring pose data, the standard method of applying the transform from the zed_frame frame to the odom frame won't work. This is fundamentally because the sensor isn't measuring data that is relative to the sensor (like, e.g., a laser), but rather it is measuring global pose data. So if you take the pose as estimated by your camera and try to transform it through base_link to odom, the output will be incorrect. For example, let's say your robot is at (4.8, 0) with a yaw of 0, and that data was obtained through means other than the camera. Then the first camera measurement comes in, and it says we're at (5, 0). That gets passed through the zed_frame->base_link transform, which says that we have a linear offset of 0.2 meters, so we get (4.8, 0). Then it goes through the base_link->odom transform and becomes (9.8, 0), which is wrong.

    Instead, we need to take the offset from the centroid, which is currently defined in the base_link->zed_frame transform, apply only the rotation of the odom->base_link transform to it, and then use the resulting transform on the measured pose data. That way, if the robot has a yaw of 0, we remove the linear offset of (0.2, 0) meters to get the pose of the robot. If it has a yaw of 45 degrees, we correctly remove (0.14, 0.14).

    So what you need to do is exactly that yourself: take the camera offset from the robot centroid, rotate it by the rotation in the odom->base_link transform, then apply the "corrected" linear offset/transform to the camera's pose, and that becomes your measurement. To facilitate this, you need a node that generates the corrected transform (rotating the linear offset, then broadcasting the transform as a child of odom, not base_link); a rough sketch of such a node follows this list. Then r_l will apply that transform and all (should) be well, assuming you also handle (2).

  2. I think the rotational element of your camera's offset should just be a matter of rotating the camera's pose by 45 degrees, as you discovered. Then (I think) you can apply (1). You can probably do all of that in the same node that generates the transform in (1). I haven't given it a huge amount of thought, so I may be missing something.
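
In case it helps, here is a rough, untested sketch (Python/rospy) of the kind of node described above. The frame names (odom, base_link, zed_frame), the 0.2 m offset, and the -45 degree mounting pitch are placeholders for illustration, not values taken from your setup:

    #!/usr/bin/env python
    import rospy
    import tf2_ros
    import numpy as np
    from geometry_msgs.msg import TransformStamped
    from tf.transformations import quaternion_matrix, quaternion_from_euler

    CAMERA_OFFSET = np.array([0.2, 0.0, 0.0])  # camera position in base_link (placeholder)
    CAMERA_MOUNT_RPY = (0.0, -0.785398, 0.0)   # assumed -45 degree pitch mount (placeholder)

    def main():
        rospy.init_node('odom_to_zed_broadcaster')
        tf_buffer = tf2_ros.Buffer()
        tf2_ros.TransformListener(tf_buffer)
        broadcaster = tf2_ros.TransformBroadcaster()
        rate = rospy.Rate(30)

        while not rospy.is_shutdown():
            try:
                # Robot's current pose in odom (e.g., the EKF's odom->base_link output)
                odom_to_base = tf_buffer.lookup_transform('odom', 'base_link', rospy.Time(0))
            except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                    tf2_ros.ExtrapolationException):
                rate.sleep()
                continue

            # Rotate the static linear offset by the robot's current orientation in odom,
            # so the offset removed from the camera pose accounts for the current yaw/pitch
            q = odom_to_base.transform.rotation
            rot = quaternion_matrix([q.x, q.y, q.z, q.w])[:3, :3]
            offset_in_odom = rot.dot(CAMERA_OFFSET)

            t = TransformStamped()
            t.header.stamp = rospy.Time.now()
            t.header.frame_id = 'odom'        # broadcast as a child of odom, not base_link
            t.child_frame_id = 'zed_frame'
            t.transform.translation.x = offset_in_odom[0]
            t.transform.translation.y = offset_in_odom[1]
            t.transform.translation.z = offset_in_odom[2]

            # The rotation handles point (2): the camera's fixed mounting rotation
            mq = quaternion_from_euler(*CAMERA_MOUNT_RPY)
            t.transform.rotation.x, t.transform.rotation.y = mq[0], mq[1]
            t.transform.rotation.z, t.transform.rotation.w = mq[2], mq[3]

            broadcaster.sendTransform(t)
            rate.sleep()

    if __name__ == '__main__':
        main()

One thing to watch out for: a tf frame can only have one parent, so if something else (e.g. robot_state_publisher or a static_transform_publisher) is already broadcasting base_link->zed_frame, you'd want this node to broadcast the corrected transform under a different frame name and set your camera's frame_id to match.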