
Demonstration - Facial Analysis Web App


Overview

Difficulty: Beginner
Duration: 1h 11m

Description

In this demonstration we will show you how to use Amazon Rekognition, together with S3 and CloudFront for hosting, to build a simple facial analysis static web application. The demonstration will utilise the following AWS services and tools:

  • Rekognition
  • AWS JavaScript SDK
  • Cognito
  • S3
  • CloudFront
  • Route 53
  • AWS Certificate Manager (ACM)

Transcript

- [Instructor] Welcome back. In this lecture we'll provide a demonstration in which we build a static web app that interfaces with Rekognition. This web app will allow you to take a photo of yourself from within the browser and submit it to Rekognition for facial analysis. Rekognition will return results which will then be displayed in the same web page. Let's take a quick look at the architecture we're proposing.

In this demonstration we'll deploy our static web app files into an S3 bucket, then set up a CloudFront distribution in front of it. We'll configure Amazon Cognito and create an identity pool. Within this identity pool, we'll enable access for unauthenticated identities. As this is a demonstration, we'll bypass setting up authentication; unauthenticated identities will be good enough for our purposes.

We'll then set up IAM policies, or permissions, that will allow this user to upload files to an S3 bucket and then call the Rekognition service to perform a DetectFaces operation on them. The static web app is already built and hosted in CloudAcademy's machine learning GitHub repo. It leverages the AWS JavaScript SDK to integrate with both the S3 service, for uploading photos, and the Rekognition service, for performing facial analysis on those photos.

Right, let's begin. First, let's head over to GitHub and clone the repository that contains the static web app. Copy the URL here and browse to it within your browser. Next, under the clone or download button, copy the URL. We'll use this URL in a terminal session. Here we'll perform a git clone, pasting in the URL. The clone is now running and should complete fairly quickly. So the clone has just completed. Let's now list the contents of the current directory.
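For reference, the terminal steps just described amount to the following (the repository URL is whatever you copied from the clone or download button):

```bash
# Clone the repository containing the static web app, then list the contents.
git clone <repository-url>
ls
```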

We can see that we have a single directory within this. Let's change into this directory and list the contents again. We'll navigate down into the demo-SDK directory. Listing the contents again, let's now open this up in Visual Studio Code. We do so by typing code . (code, space, dot) for the current directory. This will open up Visual Studio Code and show us the structure of our folder. Again, we see our index.html file. Opening this up, we can see the contents.

Let's now browse the assets directory, drilling down into the JavaScript folder. Here we can see two files. At the top of the script.js file, we have three configuration variables that we need to set. The first one is the bucket name into which we'll upload our photos, the second one is the region, and the third is the Cognito identity pool ID. We'll now switch over into our AWS console and set these up.
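As a sketch, the top of script.js looks something like this. The variable names follow the AWS JavaScript SDK browser samples; the transcript only confirms an "album bucket name" variable, so treat the exact names as assumptions:

```javascript
// Illustrative configuration variables at the top of script.js.
// The placeholder values get filled in during the steps that follow.
var albumBucketName = 'YOUR_BUCKET_NAME';      // S3 bucket the photos are uploaded into
var bucketRegion = 'YOUR_BUCKET_REGION';       // e.g. us-east-1
var IdentityPoolId = 'YOUR_IDENTITY_POOL_ID';  // Cognito identity pool ID
```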

First of all, let's navigate to S3. Within S3 we'll click on the create bucket button and set up a new bucket. We'll call it cloudacademy-rek, for Rekognition. Clicking on the create button creates the bucket for us. We'll take the name of our new bucket and paste it back into our script.js file. Let's do this now. Swapping back into Visual Studio Code, under the album bucket name variable, paste in the name of the new bucket that we just created in S3. Again, we just created cloudacademy-rek.

Okay, so that sets up our bucket. Next we need to specify the region in which the bucket was created. In our case it was us-east-1. Finally, we need to set up a new Cognito identity pool and then take the identity pool ID and bring it back into the script.js file. Let's head over to the Cognito service and set up a new identity pool. Within Cognito we'll click on the manage federated identities button.

Next, we need an identity pool name. We'll configure a new pool and call it cloudacademyrek, all one word, no spaces, in lower case. We'll enable access to unauthenticated identities. Finally, click the create pool button in the bottom right-hand corner. This begins creating our identity pool. Part of that process is to set up two Identity and Access Management (IAM) roles: the first will be used by authenticated users, and the second by unauthenticated users.

Go ahead and click the allow button. This will complete the creation of the IAM roles. Here we can see both have been created successfully. Next we'll need to copy the identity pool ID. Here we can see it highlighted in red. Copy this string and then paste it back into our script.js file in Visual Studio Code. Jumping back into Visual Studio Code, we've now updated the identity pool ID. We now need to make sure that the settings we've changed in the script.js file are saved.

Okay, that's great. Next, we'll jump back into our terminal session and we'll zip up our directory structure. In the project root directory, we run zip -r web.zip . (that is, zip -r, the name of the zip file, and the current directory). That will zip up the current directory structure, and nothing more, into the web.zip archive. Listing the contents of our directory, we see that the new zip file has been created. Okay, at this stage let's jump back into the AWS console. On the home page, we'll browse down to the build a solution section. Here we'll click on host a static website.
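The zip step, as a one-liner:

```bash
# From the project root: archive the current directory, and nothing more, into web.zip.
zip -r web.zip .
ls
```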

This will begin our wizard that will walk us through the process of hosting our static website. We'll give our new website a name; in this case we'll call it CloudAcademyRekognitionWebApp. We'll then browse to our web.zip archive file. Let's do that now. Selecting web.zip, it will then be parsed by the wizard. Finally, we click on the create your website button. This begins the process of provisioning a CloudFront distribution whose origin is an S3 bucket into which our web.zip file is unzipped to host our static webpages. Eventually, we get to see the CloudFront distribution, which we see now. If we take a close look at the URL, we can see that our CloudFront distribution is being created with this DNS name, which is auto-generated by the CloudFront service.

What we'll do next is navigate to Route 53 and create our own DNS record, which we'll map, using a CNAME record, back to that auto-generated CloudFront DNS value. Within Route 53, we click on create record set for our particular hosted zone. We'll create a new record, a CNAME record, and set the name to be rekognition. The value will be set to the auto-generated DNS name that CloudFront generated for our CloudFront distribution. So we copy this value and, back within Route 53, under value, we paste it in.
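Conceptually, the finished record looks like this (the CloudFront domain shown is a placeholder for the auto-generated value):

```text
rekognition.democloudinc.com.  CNAME  dXXXXXXXXXXXXX.cloudfront.net.
```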

Clicking the create button completes the process for this new record set. Now, the key point here is that it's a CNAME record which points back to the auto-generated DNS name that CloudFront created. The next thing we'll do is go into AWS Certificate Manager and provision a new certificate, because we want to use HTTPS on our CloudFront distribution. So within AWS Certificate Manager, click on the get started button. We'll give it a domain name, which needs to be exactly the value that we used within our record set. In our case, rekognition.democloudinc.com.

So we'll copy that value and then paste it here. We click the next button and we're taken to the validation method. Here we'll use DNS validation. Clicking on the review button, we confirm the domain name that we want for our certificate and the validation method, then click on the confirm button. We see that our certificate is in a pending validation state. Here we can see the DNS record required to be hosted in our zone for us to complete the validation. We can do this by clicking on the create record in Route 53 button, which will do this seamlessly for us. This is the great thing about working with Route 53 and ACM: they are well-integrated services.

If we refresh within Route 53, we can see that that record has been created. Okay, let's go back into Certificate Manager now and refresh the page to see the current state of our validation. Eventually, the status for the certificate should reach the issued state. In this case it has, so our certificate is now ready to be used. What we'll do next is go back to Route 53 and delete the validation record. It's no longer required, as our certificate has completed the validation process.

We'll go back into the website hosting wizard to see where we're at, and we can see that it's almost complete, but not quite. In the meantime, let's navigate into CloudFront itself and have a look at what our distribution looks like. Here we can see we have a single distribution whose current status is still in progress. So we'll give it a little bit more time and eventually that should complete. Okay, jumping the demonstration ahead.

We can see that the CloudFront distribution has now gone to a deployed status. If we navigate into the distribution, clicking on edit, what we want to do is set up an alternate domain name. This is the same one that we set up in Route 53, so we copy our record, rekognition.democloudinc.com, and paste it in back within CloudFront for the particular distribution. We'll also configure an SSL certificate, the same one that we just provisioned within ACM. Clicking on yes, edit confirms our changes. Again, this will be published out to all the points of presence within the CloudFront network. Jumping the demo ahead, we can see that our latest changes have been deployed.

Let's now take the alternate domain name that we've assigned to our CloudFront distribution and open it up in a new incognito window. Hitting enter, we can see that we get our webpage back, but it's being served over HTTP. We get an error, a permission denied error to be exact. The reason is that the browser is trying to use the camera from within the HTML, which is a privileged operation and needs to be performed over HTTPS. Browsing to the site over HTTPS instead, we're asked to allow this operation. Once we do that, we can see that we now have a webpage with camera access.
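Although the transcript doesn't show the app's camera code, the behaviour described (camera access as a privileged operation requiring a secure context) matches the standard getUserMedia API; a minimal sketch, with illustrative element IDs:

```javascript
// Request camera access; this only succeeds in a secure (HTTPS) context.
navigator.mediaDevices.getUserMedia({ video: true })
  .then(function (stream) {
    document.getElementById('video').srcObject = stream; // live camera preview
  })
  .catch(function (err) {
    console.error('Camera access denied or unavailable:', err);
  });
```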

Let's now open developer tools. We'll click on the camera button to take a picture of ourselves. In here we'll see some new web traffic being generated, and a few of the calls are resulting in 403s, or Forbidden errors. Taking a closer look at that particular piece of traffic, we realize that we actually need to go back and set up a couple of things. So, going back into S3, we'll go into permissions and configure CORS, or cross-origin resource sharing. Here we'll specify some extra methods; in particular, we'll allow POSTs. Clicking save commits the change.
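As a sketch, the CORS configuration amounts to something like the following (the S3 console at the time of recording used XML rather than JSON, and the exact origins and methods are assumptions):

```json
[
  {
    "AllowedOrigins": ["https://rekognition.democloudinc.com"],
    "AllowedMethods": ["GET", "POST"],
    "AllowedHeaders": ["*"]
  }
]
```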

Secondly, we'll jump into IAM, where we need to configure the unauthenticated role and attach privileges to allow it to upload files into our S3 bucket. For the purposes of this demonstration, we'll just give it S3 full access. In a real project, you would lock this down, but full access will do for this demo. To ensure that we pick up our new privileges, we'll start a new incognito window and again browse to our website being served via CloudFront.

Here again we are asked to allow permission to use the camera, which we do. Next we'll bring up our developer tools again so that we can watch the traffic when we click the capture button within our web app. Clicking the capture button, we can begin to see the Ajax calls being generated by the application. It now looks like the preflight CORS check has passed and that we're on to the stage where we're submitting data up to the S3 bucket.

Okay, that's completed successfully. So at this stage our photo should be successfully uploaded into our S3 bucket, but the following call to the Rekognition service has failed. Let's go back into IAM and update it again, but before we do, let's check out our S3 bucket and see whether our file was indeed uploaded. Clicking the refresh button, we can see that our first file has successfully arrived. That's great, so we know that we can upload the file. Next we need to sort out this IAM permission to give us access to the Rekognition service. Back within IAM, we'll attach a second policy. In this case, we'll give it access to the Rekognition service. Again, for the purposes of this demonstration, we'll just give it full access to the Rekognition service; we probably wouldn't do this in production. We'll close the current browser and start a new incognito session.
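In the demo we simply attach the AWS managed AmazonS3FullAccess and AmazonRekognitionFullAccess policies to the unauthenticated role. A production setup would be scoped down; a hedged sketch of what a tighter policy might look like (the bucket name and action list are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::cloudacademy-rek/*"
    },
    {
      "Effect": "Allow",
      "Action": "rekognition:DetectFaces",
      "Resource": "*"
    }
  ]
}
```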

Again, we'll navigate to our web app, rekognition.democloudinc.com, and grant it permission to use the camera; again, all looks good. We'll bring up developer tools to again watch the traffic. Scrolling down to the camera button, we click it to take a photo. This generates a number of Ajax calls: first a CORS preflight request, then an upload of the photo just taken to the S3 bucket. Then, if all goes well, we call the Rekognition service and perform a DetectFaces operation. This tells Rekognition to take the photo from the S3 bucket, perform an analysis on it, and return the results.

Here we can see the results, including the bounding box of the face within the photo and the age range, in this case a low of 35 and a high of 52. We can see that smile is false and that eyeglasses are true. Gender is male, and the person in question has neither a beard nor a mustache. So that's great: that's our end-to-end system working. Let's jump back into Visual Studio Code and take a closer look at our web app. In particular, we'll focus on the script.js file. Opening this up, again we see at the top of the file our three configuration variables.

Our bucket name, the region in which the bucket resides, and the identity pool ID. Next, we call update on the AWS.config object. We specify the region and the credentials, in this case built from the identity pool ID. Following on from there, we establish a new S3 object. Again, we specify the bucket name that we're going to be uploading our photos to. The next section queries the browser DOM and establishes a number of pointers to various elements within the DOM.
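This section of script.js follows the standard AWS JavaScript SDK browser pattern; a minimal sketch, assuming the configuration variables shown earlier:

```javascript
// Configure the SDK with the region and Cognito-based credentials.
AWS.config.update({
  region: bucketRegion,
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: IdentityPoolId
  })
});

// Establish a new S3 service object that defaults to our photo bucket.
var s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  params: { Bucket: albumBucketName }
});
```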

The real magic of the application takes place in the take snapshot function, which gets triggered when the user clicks the take photo button. The first thing we do is get a handle to the canvas onto which the current photo has been captured. Calling toDataURL converts the canvas data into a base64-encoded image representation. We then take that data URL and convert it into blob data, which we pass as part of a parameter into the S3 upload method.

This method takes a callback function. The callback function here calls the Rekognition service: if the upload was successful, the Rekognition service performs a DetectFaces operation on the image now hosted within the S3 bucket. The rekognition.detectFaces method also takes a callback function.

Here, the callback function's role is to set the HTML of the results element to the facial analysis response data returned from the DetectFaces call. For the convenience of the end user, this data is formatted and color-coded before it is rendered to the screen.
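Pulling those pieces together, the flow just described looks roughly like the following sketch. Function, helper, and element names here are illustrative; the app's actual code lives in the GitHub repo:

```javascript
// Convert a base64 data URL into blob data suitable for upload (illustrative helper).
function dataUrlToBlob(dataUrl) {
  var bytes = atob(dataUrl.split(',')[1]);
  var buffer = new Uint8Array(bytes.length);
  for (var i = 0; i < bytes.length; i++) { buffer[i] = bytes.charCodeAt(i); }
  return new Blob([buffer], { type: 'image/jpeg' });
}

// Triggered when the user clicks the take photo button.
function takeSnapshot() {
  var canvas = document.getElementById('canvas');      // canvas holding the captured photo
  var blob = dataUrlToBlob(canvas.toDataURL('image/jpeg'));
  var key = 'photo-' + Date.now() + '.jpg';

  // Upload the photo to S3; the callback then calls Rekognition.
  s3.upload({ Key: key, Body: blob, ContentType: 'image/jpeg' }, function (err) {
    if (err) { return console.error('Upload failed:', err); }
    var rekognition = new AWS.Rekognition();
    rekognition.detectFaces({
      Image: { S3Object: { Bucket: albumBucketName, Name: key } },
      Attributes: ['ALL']                              // age range, smile, eyeglasses, gender, etc.
    }, function (err, data) {
      if (err) { return console.error('DetectFaces failed:', err); }
      // Set the HTML of the results element to the facial analysis response.
      document.getElementById('results').innerHTML = JSON.stringify(data.FaceDetails, null, 2);
    });
  });
}
```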

Okay, that concludes our first demonstration. Go ahead and close this lecture and we will see you shortly in the next demonstration.

About the Author


Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.