Visual servoing with ROS
I'm trying to develop a visual servoing application using ROS, an AR.Drone and the ardrone_autonomy package. My plan is to track a target made of four red dots and use it as the reference for the error function, but I'm having trouble tracking those dots. Here's what I've tried so far:
1 - Use cv::HoughCircles and track the dots' centroids.
2 - Use some blob detection.
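To give a bit more context, here's a simplified sketch of what my detection step looks like with the HoughCircles approach (the HSV thresholds and Hough parameters below are just placeholder values, not what I actually have tuned):

    // Simplified sketch of the red-dot detection step.
    // HSV thresholds and Hough parameters are placeholders to be tuned.
    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Point2f> detectRedDots(const cv::Mat& bgr)
    {
        cv::Mat hsv, mask1, mask2, mask;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

        // Red wraps around the hue axis, so combine two hue ranges.
        cv::inRange(hsv, cv::Scalar(0, 100, 100),   cv::Scalar(10, 255, 255),  mask1);
        cv::inRange(hsv, cv::Scalar(160, 100, 100), cv::Scalar(179, 255, 255), mask2);
        cv::bitwise_or(mask1, mask2, mask);

        // Smooth the binary mask a bit, then look for circles in it.
        cv::GaussianBlur(mask, mask, cv::Size(9, 9), 2);
        std::vector<cv::Vec3f> circles;
        cv::HoughCircles(mask, circles, cv::HOUGH_GRADIENT, 1,
                         mask.rows / 8, 100, 20, 5, 50);

        // Keep only the circle centers (the dots' centroids).
        std::vector<cv::Point2f> centers;
        for (const auto& c : circles)
            centers.emplace_back(c[0], c[1]);
        return centers;
    }

The blob-detection variant is similar, just with a blob detector run on the same red mask instead of HoughCircles.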
I can segment the dots with both methods, but I can't track them correctly. Let me explain: as the AR.Drone moves, the image changes (obviously) and the reference dots change position in the image. I then lose track of which dot is which, so the error function eventually breaks down during servoing.
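To make clear why the ordering matters: the error I want to feed the controller is just the stacked difference between the current and desired dot positions (standard image-based visual servoing), so index i has to refer to the same physical dot in every frame. A minimal sketch of that, assuming the dots come back as a vector of four centers like above:

    // Sketch of the IBVS-style error: stack the (u, v) differences of the
    // four dots. This is only meaningful if current[i] and desired[i] are
    // the same physical dot, which is exactly what I can't guarantee once
    // the detection order changes between frames.
    #include <opencv2/core/core.hpp>
    #include <vector>

    cv::Mat featureError(const std::vector<cv::Point2f>& current,
                         const std::vector<cv::Point2f>& desired)
    {
        CV_Assert(current.size() == 4 && desired.size() == 4);
        cv::Mat e(8, 1, CV_32F);
        for (int i = 0; i < 4; ++i) {
            e.at<float>(2 * i)     = current[i].x - desired[i].x;
            e.at<float>(2 * i + 1) = current[i].y - desired[i].y;
        }
        return e;  // garbage if current[i] and desired[i] are different dots
    }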
Right now I want to focus on the control problem rather than on programming and computer vision. So I'd like to know if you have any suggestions for a different approach, or any advice I may be missing on the approach I'm using. Is there an off-the-shelf solution for this so I can concentrate on the control side?
Thank you.