After the first and second parts of this mini-series, where our infant robot learned to detect human emotions through audio and facial features, we now want it to know and recognize the people it talks to (just like a human infant would). For this third and final part of my mini-series on AI & robotics, I am demonstrating an experimental face recognition project to explore that potential.
Here’s a quick overview of the project:
Project Description: Using the face-recognition library to detect face locations and encodings in a frame and compare them to known encodings in order to recognize people in real time.
Data Set: Since this project focuses on recognizing specific people, there is no standard data set to download. Instead, you supply images of the people you want the robot to recognize.
With that in mind, let’s divide the project into two parts:
- Creating face encodings: finding face locations and computing encodings from images of known people (downloaded from Google or stored locally)
- Real-time face recognition: comparing the faces found in each frame to the known encodings and picking the best match
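The two parts above can be sketched with the face_recognition library roughly as follows. This is a minimal sketch, not the full project code: it assumes a hypothetical `known_faces/` folder where each `.jpg` file name doubles as the person's name (e.g. `known_faces/alice.jpg`), and it assumes `face_recognition` and `opencv-python` are installed. The `best_match` helper and the 0.6 tolerance reflect the library's usual default for `compare_faces`.

```python
# Sketch of both stages: building known encodings, then recognizing
# faces in webcam frames. Folder layout and names are assumptions.
from pathlib import Path


def best_match(known_names, distances, tolerance=0.6):
    """Pure helper: return the name of the closest known face,
    or 'Unknown' if no distance falls within the tolerance."""
    if len(distances) == 0:
        return "Unknown"
    best = min(range(len(distances)), key=lambda i: distances[i])
    return known_names[best] if distances[best] <= tolerance else "Unknown"


def build_encodings(folder="known_faces"):
    """Stage 1: compute one encoding per image of a known person."""
    import face_recognition  # imported lazily; heavy optional dependency

    names, encodings = [], []
    for path in sorted(Path(folder).glob("*.jpg")):
        image = face_recognition.load_image_file(str(path))
        found = face_recognition.face_encodings(image)
        if found:  # skip images where no face was detected
            names.append(path.stem)
            encodings.append(found[0])
    return names, encodings


if __name__ == "__main__":
    # Stage 2: compare faces in each webcam frame to the known encodings.
    import cv2
    import face_recognition

    names, known = build_encodings()
    video = cv2.VideoCapture(0)
    while True:
        ok, frame = video.read()
        if not ok:
            break
        # OpenCV frames are BGR; the face_recognition library expects RGB.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        locations = face_recognition.face_locations(rgb)
        for loc, enc in zip(locations,
                            face_recognition.face_encodings(rgb, locations)):
            distances = list(face_recognition.face_distance(known, enc))
            name = best_match(names, distances)
            top, right, bottom, left = loc
            cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
            cv2.putText(frame, name, (left, top - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
        cv2.imshow("Face Recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    video.release()
    cv2.destroyAllWindows()
```

Keeping the matching logic in a small pure function like `best_match` makes the decision rule easy to test and tune separately from the camera loop; using `face_distance` rather than a bare `compare_faces` lets us pick the single closest person when several known encodings fall within tolerance.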