External camera as SLAM sensor/input for navigation
Hi everyone!
If this question has been asked somewhere else, please point me there.
Consider the following challenge: onboard navigation is not accurate enough, so we'd like to enhance it using external cameras fixed in the environment.
Given that the external camera's transform to a global frame is known, and that we can estimate (with some variance) the transform from the camera to the robot, would it be possible to use that as an input to the robot's navigation algorithm?
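Just to make the geometry concrete: chaining the known camera-in-map transform with the estimated robot-in-camera transform yields the robot's pose in the map frame. Here's a minimal 2D sketch with numpy; all numbers are made up for illustration.

```python
import numpy as np

def make_transform(yaw, tx, ty):
    """Build a 2D homogeneous transform (rotation about z + translation)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

# Known from calibration: pose of the external camera in the global (map) frame.
T_map_cam = make_transform(np.pi / 2, 2.0, 1.0)

# Estimated with some variance: pose of the robot in the camera frame.
T_cam_robot = make_transform(0.0, 0.5, 0.0)

# Chaining the two gives the robot's pose in the map frame.
T_map_robot = T_map_cam @ T_cam_robot

x, y = T_map_robot[0, 2], T_map_robot[1, 2]
yaw = np.arctan2(T_map_robot[1, 0], T_map_robot[0, 0])
```

In ROS this composition is exactly what TF does for you once both transforms are broadcast; the uncertainty on the camera-to-robot estimate is what you'd carry into the covariance of the published pose.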
For illustration purposes, let's imagine the image below was obtained by the external camera and we're interested in navigating a single robot...
How would one go about integrating that into Nav2? Any hints or resources would be appreciated.
Mike, thanks for your comment.
My understanding is that you can find the robot's pose in any arbitrary frame, provided there's a connected path between the two frames in the transform tree (TF tree).
I'm familiar with the motion, sensing, and resampling loop of the MCL algorithm, but not with how it is implemented in ROS.
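For anyone following along, that motion/sensing/resampling loop can be sketched in a few lines. This is a toy 1D particle filter, not the ROS AMCL implementation; the robot, noise levels, and seed are all hypothetical, and the "sensor" here plays the role an external camera observation could play.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D MCL: a robot moves along a hallway; an external sensor reports
# its position with Gaussian noise. All parameters are made up.
N = 500
particles = rng.uniform(0.0, 10.0, N)   # initial belief: uniform over the hallway
true_pose = 3.0

for step in range(20):
    # 1) Motion update: command 0.1 m forward, with motion noise on each particle.
    true_pose += 0.1
    particles += 0.1 + rng.normal(0.0, 0.05, N)

    # 2) Sensor update: weight each particle by the likelihood of the measurement.
    z = true_pose + rng.normal(0.0, 0.2)             # noisy external observation
    weights = np.exp(-0.5 * ((particles - z) / 0.2) ** 2)
    weights /= weights.sum()

    # 3) Resampling: draw a new particle set proportional to the weights.
    particles = rng.choice(particles, size=N, p=weights)

estimate = particles.mean()   # point estimate: mean of the particle cloud
```

In AMCL the same three steps run over odometry and laser scans; the question above amounts to swapping (or augmenting) the sensor-update step with the external camera's pose estimate.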