robot_localization does not transform from the input's frame_id

asked 2018-08-22 09:18:23 -0500

orsalmon

Hi,

I am trying to fuse the output of ORB-SLAM2 (published as a PoseWithCovarianceStamped msg) into an ekf_localization_node instance. The frame_id of the pose msg is an optical frame called "zed_optical_frame".

I want to get the output in the standard X-forward Y-left Z-up frame.

In the TF tree I have "base_link" -> "zed_center" -> "zed_optical_frame". The "zed_optical_frame" is just a rotation from "zed_center" with RPY=(-PI/2,0.0,-PI/2).
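
(For reference, that edge is a pure rotation; it could be published with something like the minimal sketch below. The frame names match my tree, but the node itself is only illustrative.)

```python
#!/usr/bin/env python
# Illustrative sketch only: publish the static zed_center -> zed_optical_frame
# rotation, RPY = (-pi/2, 0, -pi/2). In practice this may come from a URDF/launch file.
import math
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped
from tf.transformations import quaternion_from_euler

rospy.init_node('zed_optical_frame_broadcaster')

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'zed_center'
t.child_frame_id = 'zed_optical_frame'
# Pure rotation, no translation.
qx, qy, qz, qw = quaternion_from_euler(-math.pi / 2.0, 0.0, -math.pi / 2.0)
t.transform.rotation.x = qx
t.transform.rotation.y = qy
t.transform.rotation.z = qz
t.transform.rotation.w = qw

broadcaster = tf2_ros.StaticTransformBroadcaster()
broadcaster.sendTransform(t)
rospy.spin()
```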

Still, I get the output in the same optical frame as the input.

As a test, I fed only this input to the ekf_localization_node, with the differential flag turned on, and the result was the same.

I'm running on Jetson TX2 with Ubuntu 16.04 and ROS Kinetic.

Thanks

1 Answer

answered 2018-09-05 08:19:12 -0500

Tom Moore

I would request that you update the question with your config and sample messages from every sensor input, but in this case, I can probably answer it anyway.

Just so we're clear: the package outputs the pose of the camera, and the camera's transform is relative to base_link? This is, sadly, not a use case that is well-supported by robot_localization.

The state estimation nodes in r_l always transform velocity and acceleration data into the base_link_frame frame, and always transform pose data into the world_frame (typically _map_ or _odom_) frame.

Let's say you get a pose message in the _zed_optical_frame_, and that pose says we are at (10, 0). That will be interpreted as a pose that is 10 meters _in front of_ your robot's camera, so whatever way your robot is facing, we'll try to fuse a pose that is 10 meters in front of it.
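
To put that in rough code terms, here is a much-simplified picture of the pose path (this is not r_l's actual implementation; the _odom_ world frame and the function name are just examples, and I'm using PoseStamped rather than PoseWithCovarianceStamped for brevity):

```python
# Much-simplified illustration of how a pose measurement ends up in the world
# frame (not robot_localization's actual code).
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers do_transform_pose for PoseStamped
from geometry_msgs.msg import PoseStamped

tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

def pose_into_world_frame(pose_msg, world_frame='odom'):
    # Transform from the measurement's frame_id into the world frame. For a pose
    # stamped zed_optical_frame, this chain passes through base_link, so a (10, 0)
    # measurement lands 10 m in front of wherever the camera currently is.
    tf = tf_buffer.lookup_transform(world_frame, pose_msg.header.frame_id,
                                    pose_msg.header.stamp, rospy.Duration(0.1))
    return tf2_geometry_msgs.do_transform_pose(pose_msg, tf)
```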

What we really need to be doing is transforming the _base_link_->_zed_optical_frame_ transform into the world frame, then applying that transformed offset to the camera's pose data. I suppose any time we're fusing pose data and the required transform "crosses" the world_frame->base_link_frame transform, we should treat the data differently. But I don't know if that will get implemented before fuse, the spiritual successor to r_l, is done.

In the meantime, you have a few options:

  1. In the camera driver, obtain the _base_link_ -> _zed_optical_frame_ transform (which is probably static), and store that. Call it _T_. Every time you generate a measurement from the ORB-SLAM2 package, rotate T by the robot's current orientation, and then apply the inverse of that transform to the camera's pose data. Then change the frame_id to whatever your EKF world_frame is, and publish. (See the sketch after this list.)
  2. Use differential mode, but then the filter will assume that data is in the base_link_frame frame. Not great, I know.
  3. If ORB-SLAM2 produces velocity data, fuse that instead. The covariance in your state estimate will grow without bound, though.
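
For option 1, an untested sketch might look like the node below. The topic names, the node name, and the _odom_ world frame are placeholders for whatever your setup uses, and the covariance is passed through unrotated for simplicity. The matrix product world_T_base = world_T_cam * (base_T_cam)^-1 is just the "rotate T by the current orientation, then apply the inverse" step expressed as homogeneous transforms.

```python
#!/usr/bin/env python
# Sketch: republish ORB-SLAM2's camera pose as a base_link pose in the EKF world
# frame. Topic names and the world frame name are placeholders.
import numpy as np
import rospy
import tf2_ros
import tf.transformations as tft
from geometry_msgs.msg import PoseWithCovarianceStamped

WORLD_FRAME = 'odom'             # whatever your EKF world_frame is
BASE_FRAME = 'base_link'
CAMERA_FRAME = 'zed_optical_frame'

def pose_to_matrix(pose):
    # 4x4 homogeneous matrix from a geometry_msgs/Pose
    q = pose.orientation
    m = tft.quaternion_matrix([q.x, q.y, q.z, q.w])
    m[0:3, 3] = [pose.position.x, pose.position.y, pose.position.z]
    return m

def transform_to_matrix(transform):
    # 4x4 homogeneous matrix from a geometry_msgs/Transform
    q = transform.rotation
    m = tft.quaternion_matrix([q.x, q.y, q.z, q.w])
    m[0:3, 3] = [transform.translation.x, transform.translation.y, transform.translation.z]
    return m

class OrbPoseRepublisher(object):
    def __init__(self):
        self.tf_buffer = tf2_ros.Buffer()
        self.tf_listener = tf2_ros.TransformListener(self.tf_buffer)
        # T: base_link -> zed_optical_frame (static), looked up once and inverted
        tf_msg = self.tf_buffer.lookup_transform(BASE_FRAME, CAMERA_FRAME,
                                                 rospy.Time(0), rospy.Duration(5.0))
        self.cam_T_base = np.linalg.inv(transform_to_matrix(tf_msg.transform))
        self.pub = rospy.Publisher('orb_pose_base_link',
                                   PoseWithCovarianceStamped, queue_size=10)
        rospy.Subscriber('orb_slam2/pose', PoseWithCovarianceStamped, self.callback)

    def callback(self, msg):
        # world_T_base = world_T_cam * (base_T_cam)^-1
        world_T_cam = pose_to_matrix(msg.pose.pose)
        world_T_base = np.dot(world_T_cam, self.cam_T_base)

        out = PoseWithCovarianceStamped()
        out.header.stamp = msg.header.stamp
        out.header.frame_id = WORLD_FRAME
        # NOTE: copied unrotated for simplicity; strictly it should be rotated too
        out.pose.covariance = msg.pose.covariance
        out.pose.pose.position.x = world_T_base[0, 3]
        out.pose.pose.position.y = world_T_base[1, 3]
        out.pose.pose.position.z = world_T_base[2, 3]
        q = tft.quaternion_from_matrix(world_T_base)
        out.pose.pose.orientation.x = q[0]
        out.pose.pose.orientation.y = q[1]
        out.pose.pose.orientation.z = q[2]
        out.pose.pose.orientation.w = q[3]
        self.pub.publish(out)

if __name__ == '__main__':
    rospy.init_node('orb_pose_republisher')
    OrbPoseRepublisher()
    rospy.spin()
```

You'd then point the EKF's pose input at the republished topic and leave the frame_id handling to the filter as usual.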

Stats

Asked: 2018-08-22 09:18:23 -0500

Seen: 258 times

Last updated: Sep 05 '18