Welcome back. In this lecture, we'll cover off the concept of face collections and associated storage-based APIs. The Rekognition service allows you to create server-side containers called collections. Within these collections, you can store faces or, more explicitly, a vector of facial features: one vector with multiple facial attributes per face.
An example of where you might use collections is authenticating users by facial recognition. You would create and populate a collection with a set of faces, one per person within your company. When each employee arrives at work, a capture of their face is taken and a search is done within the collection to find a match. Depending on the outcome, match or no match, the employee is or isn't allowed into the office.
Creating a collection is easy. You simply call the CreateCollection API operation and provide a string as an identifier for the new collection. Additionally, you can call the ListCollections and/or DeleteCollection operations to manage the collections you have already created. You can set up multiple collections. Each collection is later referenced by the CollectionId assigned at the time of creation.
To add faces to the collection, you call the IndexFaces operation on an image containing faces. Each detected face within the image will result in the facial features and metadata being stored within a vector which then itself is added to the collection.
When you call the IndexFaces operation, you pass in the CollectionId for the specific collection you want to add into. You can call the IndexFaces operation on multiple different image files with the same CollectionId. When doing so, make sure to specify a unique ExternalImageId value, as this will help later on when performing searches on a collection and knowing which image a matching face originally came from. As expected, there are also list and delete operations, ListFaces and DeleteFaces respectively, that allow you to manage the faces within a particular collection.
The most important thing you do with collections is search on them. There are two approaches to searching: the first is to search by an image, and the second is to search by face ID (the SearchFaces operation). We'll cover off the first approach; the second is a variation of the first.
Given an input image containing a face, you can search an existing collection for any matching faces. To perform this type of search, you make a call to the SearchFacesByImage API operation, passing in the CollectionId and an image supplied either as an S3 object location or as Base64-encoded bytes.
The matching faces within the collection will be returned, each with a similarity score. Special mention goes to the ExternalImageId attribute tracked on each and every matching face object: this is an attribute that you assign when calling the IndexFaces operation, and it can be used to determine which image the matching face belongs to.
That concludes our lecture on collections and the associated storage-based APIs available within the Rekognition service. In the next lecture, we'll review the different ways you can interface with the Rekognition service. Go ahead and close this lecture and we'll see you shortly in the next one.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy, where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, and Kubernetes.