How to tell if odom localization is "good enough" for the nav2 controller
We're using the Nav2 stack on ROS 2 Galactic, and we've been having some serious problems with navigation. We're using VSLAM for global positioning and wheel encoders for local positioning, and we're seeing very glitchy behavior from the controller (lots of jerking around, and it has a hard time deciding that it's pointing in the right direction). We also have trouble where the robot deviates by up to 12 inches from the path our SMAC planner finds around corners, often causing it to collide with obstacles. We are not using the robot_localization package because we are not sure if it is necessary.
Using RVIZ, we can tell that the position the global planner is getting seems quite good, but we don't know how to verify whether the odometry position we're feeding into the controller is "good enough" for it to follow the path smoothly and accurately. Setting the fixed frame in RVIZ to "odom" doesn't really give enough information: visually, the position looks fine, but we can't see how the controller is reacting to it.
RVIZ lets us "see" what the global planner is seeing, which helps diagnose issues with it. Is there a setting within RVIZ, or even a dedicated tool, we can use to "see" what the controller is seeing? Or does the situation we've described point to some obvious problem with our configuration?
What controller? Also, if you get your robot and make it follow a path around a known space (e.g. start from a pose, drive around for a few minutes, then end back at the pose) how much odometric drift are you seeing?
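A minimal rclpy sketch for quantifying that loop-closure drift, assuming your raw odometry is published on `/odom` (adjust the topic name for your robot): start the node, drive the loop back to your starting pose, then Ctrl+C to print how far the odometry *thinks* you ended from where you started.

```python
# Drift-check sketch (assumption: wheel odometry on /odom as nav_msgs/Odometry).
import math
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


class DriftCheck(Node):
    def __init__(self):
        super().__init__('drift_check')
        self.start = None
        self.last = None
        self.sub = self.create_subscription(Odometry, '/odom', self.cb, 10)

    def cb(self, msg):
        # Remember the very first pose, keep updating the latest one.
        if self.start is None:
            self.start = msg.pose.pose
        self.last = msg.pose.pose


def yaw(q):
    # Planar yaw angle from a quaternion.
    return math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                      1.0 - 2.0 * (q.y * q.y + q.z * q.z))


def main():
    rclpy.init()
    node = DriftCheck()
    try:
        rclpy.spin(node)  # Ctrl+C once the robot is back at the start pose
    except KeyboardInterrupt:
        pass
    if node.start and node.last:
        dx = node.last.position.x - node.start.position.x
        dy = node.last.position.y - node.start.position.y
        dyaw = yaw(node.last.orientation) - yaw(node.start.orientation)
        dyaw = math.atan2(math.sin(dyaw), math.cos(dyaw))  # normalize to [-pi, pi]
        print(f'translation drift: {math.hypot(dx, dy):.3f} m, '
              f'yaw drift: {math.degrees(dyaw):.1f} deg')


if __name__ == '__main__':
    main()
```

If the physical robot is back where it started but the printed drift is large (especially in yaw), the controller is being fed a pose that diverges quickly, which would explain the jerking.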
Without an IMU, your orientation might not be great. Fusing IMU + encoders is the most common use of robot_localization for mobile robots, since that helps substantially.
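For reference, here's a sketch of what that fusion might look like as a robot_localization `ekf_node` parameter file. The topic names (`/wheel/odometry`, `/imu/data`) and the choice of which fields to fuse are assumptions you'd adapt to your sensors:

```yaml
# Sketch: ekf_node fusing wheel odometry velocities + IMU yaw/yaw-rate.
ekf_filter_node:
  ros__parameters:
    frequency: 30.0
    two_d_mode: true            # planar robot: ignore z, roll, pitch
    publish_tf: true
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom           # this instance publishes odom -> base_link

    odom0: /wheel/odometry      # assumed topic name
    # [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az]
    odom0_config: [false, false, false,
                   false, false, false,
                   true,  true,  false,
                   false, false, true,
                   false, false, false]

    imu0: /imu/data             # assumed topic name
    imu0_config: [false, false, false,
                  false, false, true,
                  false, false, false,
                  false, false, true,
                  true,  false, false]
```

Fusing velocities from the encoders and absolute yaw + yaw rate from the IMU is the usual starting point; the IMU keeps the heading honest between encoder updates, which is exactly the failure mode that causes controller jerking.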
Thanks for the tip, Steve! Getting our robot to navigate in a stable fashion has been an ongoing struggle, unfortunately. We decided to go down the sensor fusion route, but we've yet to get it fully stable, mostly because most of our odometry sources have issues like not following REP-105, not producing covariance matrices, etc. In simulation, where we did get fusion working (because the simulated sensors have none of the issues of real life), we had much more success turning with DWB than with RPP; it's much less prone to jerking around or missing the target heading. But DWB still struggles on the tight corners we have to make into narrow lanes, probably because of our robot's elongated footprint. I'll likely post a separate question about that soon.
If anyone does have a tip for viewing/debugging the controller, I'm still interested in finding ways to debug it. The controllers seem to have all sorts of options for publishing more information than just the trajectory you can view in RVIZ, but I'm not sure how to use them productively.
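For what it's worth, DWB does expose a set of debug publishers you can toggle from the controller config. The parameter names below come from the Nav2 DWB documentation; the `FollowPath` plugin name is an assumption based on the default bringup config:

```yaml
# Sketch: enabling DWB's debug publishers under the controller server.
controller_server:
  ros__parameters:
    FollowPath:
      publish_evaluation: true     # dwb_msgs/LocalPlanEvaluation with per-critic scores
      publish_trajectories: true   # all candidate trajectories as markers for RVIZ
      publish_local_plan: true     # the trajectory DWB actually picked
      publish_cost_grid_pc: true   # critic cost grid as a point cloud for RVIZ
```

With `publish_evaluation` on, each control cycle publishes the scores every critic assigned to the evaluated trajectories, which is usually the fastest way to see *why* the controller picked a jerky command rather than just *that* it did.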