Setting up Transforms for robot_localization
My question is similar to this one. I'm having trouble understanding how the transforms should be set up for the case I'm working on.
My robot is a quad with a camera that can localise itself using markers. My goal is to fuse the pose from the SLAM package I'm using with the quad's IMU.
My SLAM package outputs a transform tree: odom > base_link > camera > marker. The published odom > base_link transform contains the pose of the robot in the world, so my world frame would be odom. Setting robot_localization's base_link frame to base_link causes the circular relationship that you mention and affects the published pose. If I set the base_link frame to something new like ekf_base instead, would that avoid the problem?
The pose itself is published on a topic called /fiducial_pose, which has a frame_id of 'camera' (not odom). If I feed this topic into robot_localization, will this be a problem? Does the frame_id of the pose data matter?
If I feed just this topic into the EKF in robot_localization as a pose0 topic in the config file, I should basically get a smoother version of the same pose, correct? Instead, my pose jumps all over the place, even worse than the original data I plugged in, and the covariances in /odometry/filtered jump all over the place too. My /vpose frequency is about 20 Hz, but I would like to use the EKF to raise that rate.
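For context, here is a minimal sketch of the config I'm describing. Frame names and rates are what I'm using; the pose0_config vector follows the 15-value layout from the robot_localization docs (x, y, z, roll, pitch, yaw, then velocities and accelerations):

```yaml
# Sketch of my ekf_localization_node params (values illustrative)
frequency: 50              # output rate; can be higher than the 20 Hz sensor
sensor_timeout: 0.1
two_d_mode: false

map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: odom          # the fiducial pose is absolute in odom

pose0: /fiducial_pose      # frame_id is 'camera', hence my question above
# Fuse absolute x, y, z, roll, pitch, yaw only
# (order: x y z, r p y, vx vy vz, vr vp vy, ax ay az)
pose0_config: [true,  true,  true,
               true,  true,  true,
               false, false, false,
               false, false, false,
               false, false, false]
pose0_differential: false
```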
My next step would be to add the IMU values, but I would like to know whether what I'm doing so far is correct before adding the IMU into the equation.
Thanks!
UPDATE 1:
Fixed the frames issue and now I have a working filter that takes into account IMU measurements and visual odometry.
The way I have it setup currently is:
The SLAM package publishes map > odom; one robot_localization instance acting as a state estimator (converting IMU acceleration data to pose estimates) publishes odom > base_link; and a second robot_localization instance fuses the pose from the state estimator with the visual odometry (with map as the world frame).
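A sketch of this two-instance setup, in case my description is unclear. Topic names and the imu0/odom0 config vectors are placeholders for what I'm running, not a recommendation:

```yaml
# Instance 1: state estimator, publishes odom -> base_link (IMU only)
ekf_odom:
  frequency: 50
  world_frame: odom
  odom_frame: odom
  base_link_frame: base_link
  imu0: /imu/data
  # fuse orientation, angular velocity, and linear acceleration
  # (order: x y z, r p y, vx vy vz, vr vp vy, ax ay az)
  imu0_config: [false, false, false,
                true,  true,  true,
                false, false, false,
                true,  true,  true,
                true,  true,  true]
  imu0_remove_gravitational_acceleration: true

# Instance 2: fuses the state estimator output with visual odometry,
# publishes map -> odom (world_frame set to map)
ekf_map:
  frequency: 50
  world_frame: map
  map_frame: map
  odom_frame: odom
  base_link_frame: base_link
  odom0: /ekf_odom/odometry/filtered   # output of instance 1
  odom0_config: [true,  true,  true,
                 true,  true,  true,
                 false, false, false,
                 false, false, false,
                 false, false, false]
  pose0: /fiducial_pose
  pose0_config: [true,  true,  true,
                 true,  true,  true,
                 false, false, false,
                 false, false, false,
                 false, false, false]
```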
I have two questions:
1) Is this the right way of doing it? Do I have to run two nodes this way, or is it possible to fuse the IMU and visual odometry in one node without a separate state estimation node? What would provide my odom > base_link transform in that case?
In this setup, I notice that when the visual odometry fails for even a second, the final EKF pose estimate immediately drifts off to very large values. I can understand why this happens (double integration of IMU acceleration), but I want to make sure I'm not missing anything else. Do you have any quick tips on slowing the drift by tweaking the EKF rate, timeout, or covariances?
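These are the knobs I've been experimenting with so far; the values are guesses on my part, and the comments reflect my current understanding rather than anything I've verified:

```yaml
# Parameters I've tried for limiting drift during dropouts (values illustrative)
sensor_timeout: 0.05     # my understanding: how long before the filter
                         # starts predicting without a correction
# Stop fusing linear acceleration so a dropout can't double-integrate away;
# keep orientation and angular velocity
# (order: x y z, r p y, vx vy vz, vr vp vy, ax ay az)
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              true,  true,  true,
              false, false, false]
```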
2) What do you think would be the best approach to using my own motion model? I believe you have used a 6DOF kinematic model of a rigid body in 3D space. I would like ...