2d-face-recognition-basic-implementation
Preface: Face recognition is moving fast these days; this entry on answers.ros alone has had over 2000 hits since I first posted it. This answer is my attempt at a simple yet effective approach to face recognition that I hope some find useful. It is very much a proof of concept, and I hope those who give it a try will extend it and share their improvements with me and others.
Overview: The 2d face recognition methodology I present here is a straightforward implementation that combines a modified version of the pi_face_tracker (pi_vision package) with the objects_of_daily_use_finder package.
The pi_face_tracker code is used to "find" a person's head, crop it down to an roi (region of interest), then publish that image over a rostopic (i.e., via the modified ros2opencv.py file in the pi_face_tracker package).
Setup: Link to my modified files for pi_face_tracker and objects_of_daily_use_finder: http://code.google.com/p/2d-face-recognition-basic/downloads/list
First, install the pi_vision package (which gives you pi_face_tracker and ros2opencv), install the objects_of_daily_use_finder package, and install any dependencies they require: http://www.ros.org/wiki/pi_face_tracker http://www.ros.org/wiki/objects_of_daily_use_finder
Back up the ros2opencv.py file located in the ros2opencv/src folder of the pi_vision package. Download my modified version of the file and put it in the src folder, replacing the original version.
Download the two launch files and put them in the launch folder of the objects_of_daily_use_finder package.
Create two directories in the objects_of_daily_use_finder directory: faces_database and faces_images
Edit both launch files to give the FULL path to both of these directories (a relative path with ~ will not work)
Acquisition and Training of face images: Run: roslaunch ros2opencv openni_node.launch. This will launch the driver for the kinect.
cd to the faces_images folder and make a directory named for the person you want to recognize, for example: mkdir scott. Then cd into that person's named directory (VERY important, otherwise you will have to move all of your images). Note: you will repeat this step for each new person you want to learn and recognize. You do not need to rename the images, as they will all be referenced by the parent folder that has their name.
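The directory layout above can also be set up with a short script. A minimal sketch; the package path and the names "scott" and "alice" are assumptions for illustration, so adjust them to your own install and people:

```python
import os

# Assumed location of the objects_of_daily_use_finder package; adjust for your install.
pkg_dir = os.path.expanduser("~/ros_workspace/objects_of_daily_use_finder")

# One sub-folder per person; the images inside are identified by the folder name.
for person in ["scott", "alice"]:
    os.makedirs(os.path.join(pkg_dir, "faces_images", person), exist_ok=True)
```

Using exist_ok=True lets you re-run the script safely when you add a new person later.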
Run: rosrun image_view image_view image:=/camera/rgb/image_color. This will open image_view, which lets us view raw images from the kinect. Have a person face the kinect. While the image_view window is active, right-click your mouse; this will save a jpg image to the folder you started image_view from. Have the person turn to different angles and make different facial expressions, and each time right-click the mouse again to get a new image.
Cropping the images: Install an image editing program (I used GIMP). Open each file one by one, select a rectangle around the head region, then crop to the selection and resave the image.
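If you have collected many images, the manual GIMP step gets tedious; a crop is just an array slice, so a small script can do it in batch. A minimal sketch with NumPy (the file loading, e.g. with OpenCV's imread, is omitted, and the coordinates here are placeholders you would supply per image):

```python
import numpy as np

def crop_head(image, x, y, w, h):
    """Return the w-by-h head region whose top-left corner is (x, y).

    NumPy images are indexed rows first, so y comes before x in the slice.
    """
    return image[y:y + h, x:x + w]

# Demo on a synthetic 480x640 RGB "frame" standing in for a saved jpg.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
head = crop_head(frame, x=250, y=80, w=120, h=160)
print(head.shape)  # (160, 120, 3)
```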
Training: Run: roslaunch objects_of_daily_use_finder 2d_build_faces_db.launch. This will do the training and place the files needed for recognition in the faces_database directory you created earlier (as long as you properly edited the paths in the launch files before running this step).
Detection/Recognition: Now for the fun part. Start the pi_face_tracker code: roslaunch pi_face_tracker face_tracker_kinect.launch Start the objects_of_daily_use_finder code: roslaunch objects_of_daily_use_finder 2d_detect_faces.launch
Usage: Currently, the code sets the roi for face tracking when you draw a rectangle around the head of the person in the pi_face_tracker interface. You will see a small window pop up that shows you what is being published to the objects_of_daily_use_finder; all of the code that affects what is sent to the objects_of_daily_use_finder is in the ros2opencv.py file. You will see another window open for the objects_of_daily_use_finder itself. It will find what it thinks is the best match in the database for the image it is receiving. Note: the attempted matching is continuous, even if you are not feeding it a proper image of a person's head. To determine whether you have a good match, examine the rostopic output from the objects_of_daily_use_finder for the match confidence level, which ranges from 0 (poor match) to 2 (good match). Try it out and see what levels are high enough for you to consider it a match. For further details, see the pi_face_tracker (pi_vision) wiki and the objects_of_daily_use_finder wiki on ros at: http://www.ros.org/wiki/pi_face_tracker http://www.ros.org/wiki/objects_of_daily_use_finder
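Because the matcher runs continuously, a single high-confidence frame can be a fluke. One way to choose a threshold and cut false positives is to require several recent readings above it before declaring a match. A small sketch; the 1.2 threshold and window sizes are tuning guesses of mine, not values from the packages:

```python
from collections import deque

class MatchFilter:
    """Declare a match only when enough recent confidence values clear the bar.

    Confidence from objects_of_daily_use_finder ranges from 0 (poor) to
    2 (good); threshold, window, and required below are assumptions to tune.
    """

    def __init__(self, threshold=1.2, window=10, required=6):
        self.threshold = threshold
        self.required = required
        self.recent = deque(maxlen=window)  # rolling record of pass/fail

    def update(self, confidence):
        """Feed one confidence reading; return True once it looks like a match."""
        self.recent.append(confidence >= self.threshold)
        return sum(self.recent) >= self.required

f = MatchFilter()
readings = [0.3, 1.5, 1.6, 1.4, 1.7, 1.8, 1.3, 1.9]
results = [f.update(c) for c in readings]
print(results)  # six False, then True, True
```

In a live system you would call update() from the callback of whatever rostopic the finder publishes its confidence on.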
Opportunities to improve the code: Any methodology that allows you to identify a face (head), crop to that image (roi), and then publish it on a topic will work as a feed into the objects_of_daily_use_finder code. One other option would be to use the openni skeleton tracker to find a person and then use that information to set an roi as a way to crop the image to the head. Also, the current code does not "move" the roi as the person moves their head, so if they move around, you have to draw a new rectangle around their head in the pi_face_tracker gui in order to adjust the roi.
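On the "moving roi" point: if you can obtain an updated head position each frame (say, the centroid of the tracked feature points, or a skeleton joint from openni), keeping the roi on the head amounts to recentering the rectangle and clamping it to the frame. A hedged sketch of that idea, not code from either package:

```python
def recenter_roi(roi, cx, cy, frame_w, frame_h):
    """Recenter an (x, y, w, h) roi on a new head position (cx, cy),
    clamped so the rectangle stays entirely inside the frame."""
    x, y, w, h = roi
    new_x = min(max(cx - w // 2, 0), frame_w - w)
    new_y = min(max(cy - h // 2, 0), frame_h - h)
    return (new_x, new_y, w, h)

# The head drifted toward (600, 50), near the top-right of a 640x480 frame;
# the roi is clamped so it does not leave the image.
print(recenter_roi((250, 80, 120, 160), 600, 50, 640, 480))  # (520, 0, 120, 160)
```

Wiring this into ros2opencv.py so the roi follows the head each frame would remove the need to redraw the rectangle by hand.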
Questions and Feedback: Please use answers.ros and post questions as comments on this thread if you run into any problems, so others can benefit. You can also e-mail me directly at bellscotcv@gmail.com
If you make enhancements, please share them with me and the others here so we can all benefit from the open-source model. If you like this answer and find the code useful, please vote it up! Best Regards, -Scott