
robot_localization: sensor frame in differential mode

asked 2021-08-10 11:37:43 -0500

jcremona

I'm trying to fuse the output of a VIO package (odometry msgs) and GPS. I have been reading the source code of robot_localization and noticed that when using the differential mode, the resulting velocity is expressed in the base_link_frame (line). The output of my VIO package is expressed in the cam frame (i.e. the odometry messages correspond to the pose of the camera during navigation). Is there any way to fuse this output using differential mode? To implement this feature in robot_localization, would it be enough to rotate the resulting velocity, or am I missing something? I'm not sure what the rotation would be anyway.


1 Answer


answered 2021-11-04 05:20:42 -0500

Tom Moore

I believe your question is answered a few lines above. We compute the delta between the two poses (in this case, in the camera frame), then rotate that using the rotation between the sensor frame and the world frame. That should put the delta into your base_link_frame.
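In numpy terms, the differential computation looks roughly like the sketch below (the frame names and variables here are illustrative, not robot_localization's actual internals):

```python
import numpy as np

def rotation_z(yaw):
    """Rotation matrix for a yaw about the Z axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def differential_velocity(p_prev, p_curr, R_sensor_to_target, dt):
    """Delta between two consecutive sensor-frame positions, rotated
    out of the sensor frame and converted to a velocity estimate."""
    delta_sensor = p_curr - p_prev               # delta in the camera frame
    delta_target = R_sensor_to_target @ delta_sensor  # rotate out of the sensor frame
    return delta_target / dt

# Camera moved 1 m along its own X axis; the camera frame is yawed
# +90 degrees relative to the target frame, so the motion maps to Y.
v = differential_velocity(np.zeros(3),
                          np.array([1.0, 0.0, 0.0]),
                          rotation_z(np.pi / 2),
                          dt=1.0)
```

The point is that the delta itself is frame-relative, so only the rotation (not the translation) of the sensor transform matters when mapping it.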

However, this might not actually help you, especially if you have your camera's transform defined relative to your base_link_frame (and you probably do). r_l is not good at handling differential poses for sensors whose transforms are defined relative to the base_link_frame.

You have two options, I think:

  1. Modify the VIO package so that it outputs pose and velocity, and then fuse the velocity directly.
  2. As long as you are _only_ going to use the sensor in differential mode, I _think_ you could get away with defining the transform for the camera relative to your world_frame rather than your base_link_frame: create a static transform publisher and make it publish a rotation-only transform from your world_frame to your camera_frame, where the contents of that transform are the rotation of the camera VO data relative to your robot's base_link_frame. The one downside I can see to doing this is that you won't be able to counter the effects of false linear velocity resulting from turning in place, assuming the sensor is not placed at the centre of rotation.
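For option 2, the rotation-only transform reduces to a fixed quaternion. A sketch of computing it with plain numpy, assuming (purely as an example) the camera is yawed +90 degrees relative to base_link, and using made-up frame names `odom` and `camera_frame`:

```python
import numpy as np

def euler_to_quaternion(roll, pitch, yaw):
    """Convert ZYX Euler angles to a quaternion (x, y, z, w), the
    component order tf2 tools expect."""
    cr, sr = np.cos(roll / 2), np.sin(roll / 2)
    cp, sp = np.cos(pitch / 2), np.sin(pitch / 2)
    cy, sy = np.cos(yaw / 2), np.sin(yaw / 2)
    return (sr * cp * cy - cr * sp * sy,   # x
            cr * sp * cy + sr * cp * sy,   # y
            cr * cp * sy - sr * sp * cy,   # z
            cr * cp * cy + sr * sp * sy)   # w

# Hypothetical camera orientation relative to base_link: +90 deg yaw.
qx, qy, qz, qw = euler_to_quaternion(0.0, 0.0, np.pi / 2)

# These values would then go into a rotation-only static transform,
# e.g. (ROS 2 syntax, frame names are placeholders):
#   ros2 run tf2_ros static_transform_publisher --x 0 --y 0 --z 0 \
#       --qx <qx> --qy <qy> --qz <qz> --qw <qw> \
#       --frame-id odom --child-frame-id camera_frame
```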

Comments

Thanks for your reply, Tom. I had thought of using differential mode because I couldn't see any other way to correctly fuse a VIO (ORB-SLAM3) with GPS. But in general, what would be the correct way to fuse VIO and GPS? (when the output of the VIO is expressed in a frame other than base_link (e.g. in the camera frame))

jcremona  ( 2022-02-01 15:14:10 -0500 )

Any time you are fusing pose data from a sensor that is affixed to the robot and not at the robot's centroid, you need to handle that offset from the origin before you can fuse the pose data. But in this particular instance the offset in the world frame is changing. If the robot is facing 0 degrees, the sensor offset in the world frame might be along the X axis. If the robot is facing +90 degrees, then the offset in the world frame might be along the Y axis. Contrast this with the GPS, for which the measurements just need a fixed transform to be transformed into your world frame.

One option is to write a node that does a live computation of the base_link->VIO transform based on the robot's world-frame pose.
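The heading-dependent offset (which is exactly what such a node would recompute on every pose update) can be illustrated with a small 2D sketch; the 0.5 m offset is invented for the example:

```python
import numpy as np

def offset_in_world(yaw, offset_base):
    """Rotate a fixed base_link-frame sensor offset into the world
    frame for a given robot heading (2D)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s],
                  [s,  c]])
    return R @ offset_base

offset = np.array([0.5, 0.0])  # sensor mounted 0.5 m ahead of base_link

at_0deg = offset_in_world(0.0, offset)         # offset lies along world X
at_90deg = offset_in_world(np.pi / 2, offset)  # same offset now lies along world Y
```

A fixed GPS antenna offset, by contrast, only needs this rotation applied once per measurement with the current heading, which is why a single static transform suffices there.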

Tom Moore  ( 2022-02-11 11:10:31 -0500 )
