Hi,

I think this is a multi-faceted problem. First, make sure you know how to implement high-level control with ardrone_autonomy (which is fairly straightforward; see https://github.com/AutonomyLab/ardrone_autonomy#sending-commands-to-ar-drone). Then subscribe to the camera stream (either the bottom camera or the front camera; I believe you cannot receive both at the same time) from the node that does your OpenCV processing. That same node can generate the high-level commands based on what your OpenCV algorithm detects. If you then transform everything correctly from the camera frame to the drone's body frame, you can publish the result as cmd_vel commands. Regarding the two cameras: if I remember correctly, ardrone_autonomy exposes a service (I believe /ardrone/togglecam) to switch which camera is transmitted over Wi-Fi.
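To make the last step concrete, here is a minimal sketch of turning an OpenCV detection into a velocity command. The proportional gain and the idea of steering from the target's horizontal offset are my own assumptions, not anything prescribed by ardrone_autonomy; the ROS glue (subscriber, cv_bridge, publisher) is shown only in comments so the control logic itself stays runnable on its own.

```python
# Sketch: map the horizontal pixel offset of a detected target to a yaw-rate
# command. In a real node you would wrap this in rospy, roughly:
#   sub = rospy.Subscriber("/ardrone/image_raw", Image, image_cb)
#   pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
# converting each frame with cv_bridge before running your OpenCV detection.

def offset_to_yaw_rate(cx, image_width, gain=0.5, max_rate=1.0):
    """Proportional controller (gain is an assumed value, tune for your drone).

    cx: x-coordinate in pixels of the target centroid found by OpenCV.
    Returns a yaw rate clamped to [-max_rate, max_rate]; positive yaw is
    counter-clockwise (ROS convention), so a target left of the image
    centre yields a positive rate that turns the drone toward it.
    """
    half = image_width / 2.0
    error = (cx - half) / half        # normalised offset in [-1, 1]
    rate = -gain * error              # steer toward the target
    return max(-max_rate, min(max_rate, rate))

# In the image callback you would then fill a geometry_msgs/Twist:
#   twist = Twist()
#   twist.angular.z = offset_to_yaw_rate(cx, frame.shape[1])
#   pub.publish(twist)
```

The same pattern extends to the other axes (e.g. linear.y from the vertical offset when using the bottom camera), as long as you keep the camera-to-body frame transform straight.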

Those are just my first thoughts, but if you go that route I think it should be quite doable. Read through the ardrone_autonomy documentation and also have a look at the tum_ardrone package. It uses PTAM for localization, which is an older method but a good starting point.

Best Regards, Marc