Drone Feedback: Object Detection / Color Detection

asked 2017-10-04 10:52:55 -0500

mikerosz

I have been working on code where an A.R. Drone 2.0 will detect a color and put a red dot in the middle of the image. I am using the drone's video stream. The goal is for the drone to detect a white gutter and fly straight over it from one point to the other, essentially following a line. I noticed that when I changed the BGR to 0, 0, 255, the entire gutter is distinguished, but other white spots are detected as well. Is there a way to isolate my detection so it only sees the gutter? Maybe using shapes: once the gutter is detected, put a bounding box around it. And my final question is how do I tell my drone to follow the red dot, or maybe a drawn line? I looked at Python AR.Drone libraries but don't know how to apply them. This is my code.

import numpy as np    
import cv2        


# open the camera    
cap = cv2.VideoCapture('tcp://192.168.1.1:5555')


def nothing(x):    
    pass

cv2.namedWindow('result')


# Starting with 100's to prevent error while masking    
h,s,v = 100,100,100            
# Creating track bar    
cv2.createTrackbar('h', 'result',0,179,nothing)    
cv2.createTrackbar('s', 'result',0,255,nothing)    
cv2.createTrackbar('v', 'result',0,255,nothing)    

while True:    
    # read a frame from the camera stream    
    ret, frame = cap.read()    
    if not ret:    
        # no frame decoded (stream hiccup); try again    
        continue

    #converting to HSV

    hsv = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)                   
    # get info from track bar and apply to result    
    h = cv2.getTrackbarPos('h','result')    
    s = cv2.getTrackbarPos('s','result')    
    v = cv2.getTrackbarPos('v','result')

    # Normal masking algorithm    
    lower_blue = np.array([h,s,v])    
    upper_blue = np.array([180,255,255])        

    mask = cv2.inRange(hsv,lower_blue, upper_blue)            
    result = cv2.bitwise_and(frame,frame,mask = mask)            
    cv2.imshow('result',result)            
    #find center    
    cnts=cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)[-2]            
    center=None        

    if len(cnts)>0:    
        c=max(cnts, key=cv2.contourArea)    
        ((x,y),radius)=cv2.minEnclosingCircle(c)    
        M=cv2.moments(c)    
        # guard against division by zero for degenerate contours    
        if M["m00"] > 0:    
            center=(int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))    
            if radius>10:    
                # draw a filled red dot at the contour's centroid    
                cv2.circle(frame, center,5,(0,0,255),-1)

    # color detection limits    
    lB = 5    
    lG = 50    
    lR = 50    
    hB = 15    
    hG = 255    
    hR = 255    
    lowerLimits = np.array([lB, lG, lR])    
    upperLimits = np.array([hB, hG, hR])            
    # Our operations on the frame come here    
    thresholded = cv2.inRange(frame, lowerLimits, upperLimits)    
    outimage = cv2.bitwise_and(frame, frame, mask = thresholded)                    
    cv2.imshow('original', frame)            
    # Display the resulting frame    
    cv2.imshow('processed',outimage)


    # Quit the program when Q is pressed    
    if cv2.waitKey(1) & 0xFF == ord('q'):    
         break       

# When everything done, release the capture    
print('closing program')    
cap.release()
cv2.destroyAllWindows()
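On the "follow the red dot" part: a common starting point is proportional control on the dot's horizontal offset from the image centre. This is only a sketch; the gain, deadband, and the final drone command call are assumptions (the AR.Drone Python libraries expose movement commands, but the exact method names vary by library):

```python
def roll_command(center_x, frame_width, gain=0.5, deadband_px=20):
    """Map the red dot's horizontal offset from the image centre to a
    normalized roll command in [-1, 1]: negative rolls left, positive
    rolls right, and inside the deadband the drone flies straight."""
    error_px = center_x - frame_width / 2.0
    if abs(error_px) < deadband_px:
        return 0.0
    # proportional control, clipped to the drone's command range
    cmd = gain * error_px / (frame_width / 2.0)
    return max(-1.0, min(1.0, cmd))

print(roll_command(100, 640))  # dot left of centre -> negative roll
print(roll_command(320, 640))  # dot centred -> 0.0, fly straight
```

Inside the main loop, after `center` is computed, you would feed `roll_command(center[0], frame.shape[1])` into whatever roll/strafe call your chosen library provides, while commanding a small constant forward pitch.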

Comments

The first question is an OpenCV question and should be asked on their site: http://answers.opencv.org/questions/ . For the second one, if you want to restructure your question so that it's a ROS based question then maybe we can help you. Otherwise checkout https://robotics.stackexchange.com/

jayess  ( 2017-10-04 11:03:06 -0500 )