Is there a proper way to set up multiple odom frames for robot_localization?

asked 2021-12-19 21:18:36 -0500 by mdpalm81

I am trying to use robot_localization on a differential drive robot with wheel odometry (from the ros_control package), two Hokuyo lidars (front and back, merged with ira_laser_tools, filtered with the laser_filters scan_to_scan_filter_chain to remove noise and interference, and fed into the AMCL package), an LPMS IMU, and a ZED camera.
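For context, the scan filtering is configured along these lines (a simplified sketch; the filter types and thresholds shown here are illustrative, not my exact values):

    scan_filter_chain:
    - name: shadow_filter
      type: laser_filters/ScanShadowsFilter
      params:
        min_angle: 0
        max_angle: 175
        neighbors: 5
        window: 1
    - name: range_filter
      type: laser_filters/LaserScanRangeFilter
      params:
        lower_threshold: 0.05   # drop returns closer than 5 cm
        upper_threshold: 25.0   # drop returns beyond 25 m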

The wheel odometry and the ZED camera can each be configured to publish a tf frame, but I have found that these frames cause problems with robot_localization because each of them tries to publish the odom -> base_link transform. I can run the sensor fusion by disabling these transforms, but I suspect I am losing important information by doing so.
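In case it matters, the transforms are disabled with roughly the following parameters (a simplified sketch; I believe these are the relevant options in diff_drive_controller, zed_wrapper, and ekf_localization_node, and the node/namespace names here are placeholders for my actual ones):

    # diff_drive_controller: stop broadcasting odom -> base_link itself
    mobile_base_controller:
      enable_odom_tf: false

    # zed_wrapper: stop broadcasting its own odometry transform
    zed:
      publish_tf: false

    # ekf_localization_node: the filter alone owns odom -> base_link
    ekf_se:
      world_frame: odom
      publish_tf: true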

The setup works fairly well without the wheel odometry and ZED camera publishing frame transforms, at least while the robot drives forward or turns, but the localization goes badly wrong when reversing. I am wondering whether the odometry frames these nodes would produce could help resolve the issue.
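For context, the wheel odometry is currently fused as velocities only, which I expected to handle reversing since vx is signed (a sketch of the relevant lines, using robot_localization's 15-element [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az] ordering; the topic name is a placeholder):

    odom0: /mobile_base_controller/odom
    odom0_config: [false, false, false,   # x, y, z
                   false, false, false,   # roll, pitch, yaw
                   true,  false, false,   # vx (signed, so reverse should be captured)
                   false, false, true,    # vroll, vpitch, vyaw
                   false, false, false]   # ax, ay, az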

The Question: Is there a proper way to fuse sensor information with multiple odometry frames?
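To make the question concrete, what I am hoping for is a single filter consuming both odometry sources, rather than each source publishing its own frame. Something like the following is what I have in mind (a sketch with placeholder topic and node names; odom1_differential is my guess at how to feed in the ZED pose without it needing a second odom frame):

    ekf_se:
      odom_frame: odom
      base_link_frame: base_link
      world_frame: odom

      odom0: /mobile_base_controller/odom   # wheel odometry
      odom0_config: [false, false, false, false, false, false,
                     true,  false, false, false, false, true,
                     false, false, false]   # vx and vyaw only

      odom1: /zed/odom                      # ZED visual odometry
      odom1_config: [true,  true,  false, false, false, true,
                     false, false, false, false, false, false,
                     false, false, false]   # x, y, yaw pose
      odom1_differential: true              # integrate as relative motion

      imu0: /imu/data                       # LPMS IMU
      imu0_config: [false, false, false, false, false, false,
                    false, false, false, false, false, true,
                    false, false, false]    # yaw rate only

If there is a more idiomatic way to express this, that is exactly what I am asking about.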

For added information, I am using an Nvidia Jetson TX2 to control the wheels and publish the wheel odometry. The AMCL, Hokuyo lidar, IMU, ZED camera, and robot_localization nodes all run on an Nvidia Jetson Xavier. Both machines run ROS Melodic on Ubuntu 18.04 Bionic.
