Is the nav stack usable for an "eagle eye" camera scenario?
We are building a simple robot without any sensors, just two motors. To detect the position and orientation of the robot, we use a camera mounted above the table the robot runs on.
I want to use the navigation stack for that setup, hoping it makes it easier to compute the best route for the robot. So far I've only read that I need a LaserScan or PointCloud sensor on the robot to fulfill the nav stack's setup requirements.
So now I'm asking myself whether it is a good idea to use the nav stack for my navigation scenario (changing/moving targets on the table and changing obstacles).
Is it enough to set up tf / map / odom / amcl / move_base / base_controller? I keep reading that a LaserScan or PointCloud is important for the nav stack, but I don't have any data like that. On the other hand, I know exactly where my robot is :-)
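To make the question more concrete, here is a minimal sketch of how I imagine turning the camera detection into a pose the nav stack could consume instead of amcl. The calibration constants, frame conventions, and function names are my own assumptions, not anything from the nav stack itself; in a real node the result would be stuffed into a nav_msgs/Odometry message and broadcast as a map -> base_link tf.

```python
import math

# Hypothetical calibration values (assumptions): image scale in pixels
# per meter, and where the map origin sits in the camera image.
PIXELS_PER_METER = 200.0
ORIGIN_PX = (320.0, 240.0)

def pixel_to_map(u, v):
    """Convert the robot's detected position in image pixels to map
    coordinates in meters. The camera looks straight down at the table,
    so a plain scale-and-offset transform suffices (no perspective
    correction)."""
    x = (u - ORIGIN_PX[0]) / PIXELS_PER_METER
    y = (ORIGIN_PX[1] - v) / PIXELS_PER_METER  # image y axis points down
    return x, y

def yaw_to_quaternion(yaw):
    """Planar heading as an (x, y, z, w) quaternion, the form a
    geometry_msgs/Pose orientation field expects."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))
```

With values like these I could, in principle, publish odometry directly from the camera and skip the laser-based localization entirely, which is really the core of my question.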