In this demonstration we will show you how to use Amazon Rekognition, together with S3 and CloudFront for hosting, to build a simple facial analysis static web application. This demonstration will utilise the following AWS services and tools:
- Amazon Rekognition
- Amazon S3
- Amazon CloudFront
- Amazon Cognito
- AWS Identity and Access Management (IAM)
- Route 53
- AWS Certificate Manager (ACM)
Welcome back. In this lecture, we'll provide a demonstration in which we build a static web app that interfaces with Rekognition. This web app will allow you to take a photo of yourself from within the browser and submit it to Rekognition for facial analysis. Rekognition will return results which will then be displayed in the same web page. Let's take a quick look at the architecture we're proposing.
In this demonstration, we'll deploy our static web app files into an S3 bucket and then set up a CloudFront distribution in front of it. We'll configure Amazon Cognito and create an identity pool. Within this identity pool, we'll enable access to unauthenticated identities. As this is a demonstration, we'll bypass setting up authentication; unauthenticated identities will be good enough for our purposes.
Right, let's begin. First, let's head over to GitHub and browse to the repository that contains the static web app. Next, under the Clone or download button, copy the repository URL. We'll use this URL in a terminal session, where we'll perform a git clone and paste in the URL. The clone is now running and should complete fairly quickly.
So the clone has just completed. Let's now list the contents of the current directory. We can see that we have a single directory within this. Let's change into this directory and list the contents again. We'll navigate down into the demo-SDK directory. Listing the contents again, let's now open this up in Visual Studio Code. We do so by typing code and dot for the current directory. This will open up Visual Studio Code and show us the structure of our folder.
First of all, let's navigate to S3. Within S3, we'll click on the Create bucket button and set up a new bucket. We'll call it cloudacademy-rek, for Rekognition. Clicking on the Create button creates the bucket for us. We'll take the name of our new bucket and paste it back into our script.js file. Let's do this now. Swapping back into Visual Studio Code, under the album bucket name variable, paste in the name of the new bucket that we just created in S3. Again, we just created cloudacademy-rek. Okay, so that sets up our bucket.
Next, we need to specify the region in which the bucket was created. In our case, it was us-east-1. Finally, we need to set up a new Cognito identity pool and then take the identity pool ID and bring it back into the script.js file. Let's head over to the Cognito service and set up a new identity pool. Within Cognito, we'll click on the Manage Federated Identities button.
Next, in identity pool name, we'll configure a new pool and call it cloudacademyrek, all one word, no spaces, in lower case. We'll enable access to unauthenticated identities. Finally, click the Create Pool button in the bottom right-hand corner. This begins creating our identity pool. Part of that process is to set up two identity and access management roles. The first will be used by authenticated users and the second by unauthenticated users. Go ahead and click the Allow button. This will complete the creation of the IAM roles. Here we can see both have been completed successfully.
Next, we'll need to copy the identity pool ID. Here we can see it highlighted in red. Copy this string, and then we'll paste it back into our script.js file. Jumping back into Visual Studio Code, we now update the identity pool ID. We now need to make sure that the settings we've changed in the script.js file are saved. Okay, that's great.
Next, we'll jump back into our terminal session and we'll zip up our directory structure. In the project root directory, we run the command zip -r web.zip . (that is, zip with the -r recursive flag, the archive name web.zip, and a dot for the current directory). That will zip up the current directory structure and nothing more into the web.zip archive. Listing the contents of our directory, we see that the new zip file has been created.
Okay, at this stage, let's jump back into the AWS console. On the home page, we'll browse down to the Build a solution subsection. Here, we'll click on host a static website. This will begin our wizard that will walk us through the process of hosting our static website. We'll give our new website a name. In this case, we'll call it CloudAcademyRekognitionWebApp. We'll then browse to our web.zip archive file. Let's do that now. Selecting web.zip, it will then be processed by the wizard.
Finally, we click on the Create your website button. This begins the process of provisioning a CloudFront distribution whose origin is an S3 bucket into which our web.zip file is unzipped, hosting our static web pages. Eventually, we get to see the CloudFront distribution, which we see now. So if we take a close look at the URL, we can see that our CloudFront distribution has been created with this DNS name, which is auto-generated by CloudFront.
What we'll do next is navigate to Route 53 and create our own DNS record, which we'll map using a CNAME record back to that auto-generated CloudFront DNS value. Within Route 53, we click on the create record set for our particular hosted zone. We'll create a new record, a CNAME record, and set the name to be rekognition, the value of which will be set to the auto-generated DNS name that CloudFront generated for our distribution. So we copy this value, and back within Route 53, under value, we'll paste it in. Clicking the create button completes the process for this new record set.
Now, the key point here is that it's a CNAME record which points back to the auto-generated DNS name that CloudFront created. The next thing we'll do is we'll go into the certificate manager and we'll provision a new certificate because we want to use HTTPS on our CloudFront distribution. So within the AWS certificate manager, click on the Get Started button. We'll give it a domain name, which needs to be exactly the value that we used within our record set. In our case, rekognition.democloudinc.com. So we'll copy that value and then paste it here.
We click the Next button and we're taken to the validation method. Here we'll use DNS validation. We click on the Review button, confirm the domain name that we want for our certificate and the validation method, and then click on the Confirm button. We see that our certificate is in a pending validation state. Here we can see the DNS record that needs to be hosted in our zone for us to complete the validation. We can do this by clicking on the Create record in Route 53 button, which will do this seamlessly for us.
This is the great thing about working with Route 53 and ACM, that they are well-integrated services. If we refresh within Route 53, we can see that that record has been created.
Okay, let's go back into certificate manager now and we'll refresh this page to see the current state of our validation. Eventually, the status for the certificate should reach the issued status. In this case, it has. So our certificate is now ready to be used.
What we'll do next is we'll go back to Route 53 and we can delete the validation record. It's no longer required as our certificate has completed the validation process. We'll go back into the website hosting wizard to see where we're at and we can see that we're almost completed, but not quite. In the meantime, let's navigate into CloudFront itself and have a look to see what our distribution looks like. Here we can see we have a single distribution where the current status of the distribution is still in progress. So we'll give it a little bit more time and eventually, that should complete.
Okay, jumping the demonstration ahead, we can see that the CloudFront distribution has now gone to a deployed status. If we navigate into the distribution, clicking on edit, what we'll want to do is set up an alternate domain name. This is the same one that we set up in Route 53. So we copy our record, rekognition.democloudinc.com, and back within CloudFront for the particular distribution, we'll paste it. We'll also configure an SSL certificate. The same one that we just recently provisioned within ACM. Clicking on Yes, Edit confirms our changes. Again, this will be published out to all the points of presence within the CloudFront network.
Again, jumping the demo ahead, we can see that our latest changes have been deployed. Let's now take the alternate domain name that we've assigned to our CloudFront distribution and open it up in a new incognito window. Pressing enter, we can see that our webpage comes back, but it's being served over HTTP. We get an error, a permission denied error, to be exact. The reason is that the browser is trying to use the camera from within the HTML, which is a privileged operation and needs to be performed over HTTPS. Browsing to the site over HTTPS instead, we're prompted to allow this operation. Once we do that, we can see that we now have a webpage with camera access.
Let's now navigate into developer tools. We'll click on the camera button to take a picture of ourselves. In here we'll see some new web traffic being generated, and a few of the calls are resulting in 403s, or forbiddens. Taking a closer look at this particular piece of traffic, we realize that we actually need to go back and set up a couple of things.
So going back into S3, we'll go into permissions, and we'll need to configure CORS, or cross-origin resource sharing. Here we'll specify some extra methods. In particular, we'll allow PUT and POST. Clicking save commits the change.
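The CORS rule set we need looks something like the following. It's expressed here as a JavaScript object in the shape the AWS SDK's `putBucketCors` call expects, so the pieces are easy to name; the wildcard origin is demo-only, and in production you'd restrict it to your CloudFront domain:

```javascript
// Sketch of the S3 CORS configuration the browser upload needs.
// AllowedOrigins is wide open here for the demo; in production you would
// restrict it to your site, e.g. https://rekognition.democloudinc.com.
var corsRules = [
  {
    AllowedOrigins: ['*'],
    AllowedMethods: ['GET', 'PUT', 'POST'],  // PUT/POST permit the photo upload
    AllowedHeaders: ['*'],
    MaxAgeSeconds: 3000                      // how long the browser caches the preflight
  }
];

// The same rules could be applied programmatically instead of via the console:
// s3.putBucketCors(
//   { Bucket: 'cloudacademy-rek', CORSConfiguration: { CORSRules: corsRules } },
//   function (err) { if (err) console.error(err); });

console.log('Allowed methods:', corsRules[0].AllowedMethods.join(', '));
```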
Secondly, we'll jump into IAM, and we'll need to configure the unauthenticated role and attach privileges to allow it to upload files into our S3 bucket. For the purposes of this demonstration, we'll just give it S3 full access. In a real project, you would lock this down, but full access will do for this demo.
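For reference, a locked-down alternative to S3 full access might look like the policy below: write-only access scoped to the demo bucket. This is a sketch of what "locking it down" could mean, not what the demo actually attaches (the demo uses the managed full-access policy):

```javascript
// A least-privilege sketch for the unauthenticated Cognito role:
// allow uploads into the demo bucket only, nothing else.
var s3UploadPolicy = {
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Action: ['s3:PutObject'],                       // upload only, no read/list/delete
      Resource: ['arn:aws:s3:::cloudacademy-rek/*']   // objects in the demo bucket
    }
  ]
};

console.log(JSON.stringify(s3UploadPolicy, null, 2));
```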
To ensure that we pick up our new privileges, we'll start a new incognito window and, again, browse to our website that is being served via CloudFront. Here again, we are requested to allow permission to use the camera, which we do.
Next, we'll bring up our developer tools again so that we can watch the traffic when we click the capture button within our web app. Clicking the capture button within our web app, we can begin to see the AJAX calls being generated by the application, and it now looks like the pre-flight CORS check has passed and that we're on to the stage where we're submitting data up to the S3 bucket.
Okay, that's completed successfully. So at this stage, our photo should be successfully uploaded into our S3 bucket, but the following call to the Rekognition service has failed. Let's go back into IAM and update it again. But before we do, let's check out our S3 bucket and see if our file was indeed uploaded. Clicking the refresh button, we can see that our first file has successfully arrived. That's great. So we know that we can upload the file.
Next, we need to sort out this IAM permission to give us access to the Rekognition service. Back within IAM, we'll attach a second policy. In this case, we'll give it access to the Rekognition service. Again, for the purpose of this demonstration, we'll just give it full access to the Rekognition service. We probably wouldn't do this in production. We'll close the current browser and start a new incognito session.
Again, we'll navigate to our web app, rekognition.democloudinc.com. We'll grant it permission to use the camera; again, all looks good. We'll bring up developer tools to again watch the traffic. Scrolling down to the camera button, we click it to take a photo. This generates a number of AJAX calls: first a CORS preflight request, then an upload of the photo just taken to the S3 bucket. Then, if all goes well, we call the Rekognition service and perform a detect faces operation. This tells Rekognition to take the photo from the S3 bucket, perform an analysis on it, and return the results.
Here we can see the results, including the bounding box of the face within the photo, the age range, in this case, a low of 35 and a high of 52. We can see that a smile is false and that eyeglasses are true. Gender is male and that the person in question does not have a beard, nor a mustache. So that's great. That's our end-to-end system working.
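The attributes we just read out come back in the DetectFaces response under a FaceDetails array. Trimmed down to just the fields mentioned above (real responses also carry confidence scores and many more attributes), the shape is roughly:

```javascript
// Trimmed sketch of a DetectFaces response, limited to the attributes
// called out in the demo; confidence scores omitted for brevity.
var response = {
  FaceDetails: [
    {
      BoundingBox: { Width: 0.4, Height: 0.5, Left: 0.3, Top: 0.2 },
      AgeRange: { Low: 35, High: 52 },
      Smile: { Value: false },
      Eyeglasses: { Value: true },
      Gender: { Value: 'Male' },
      Beard: { Value: false },
      Mustache: { Value: false }
    }
  ]
};

// Reading the results back out, as the web page does before rendering them:
var face = response.FaceDetails[0];
console.log('Age range: ' + face.AgeRange.Low + ' to ' + face.AgeRange.High);
console.log('Eyeglasses: ' + face.Eyeglasses.Value);
```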
Let's jump back into Visual Studio Code and take a closer look at our web app. In particular, we'll focus on the script.js file. Opening this up, again we see at the top of the file our three configuration variables: our bucket name, the region in which the bucket resides, and the identity pool ID.
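Those three settings at the top of script.js look something like this. The bucket name and region match what we created earlier in the demo; the identity pool ID shown is a placeholder, and you'd substitute the ID copied from the Cognito console:

```javascript
// Configuration at the top of script.js. The identity pool ID below is a
// placeholder -- use the one Cognito generated for your pool.
var albumBucketName = 'cloudacademy-rek';   // S3 bucket created earlier
var bucketRegion = 'us-east-1';             // region the bucket resides in
var IdentityPoolId = 'us-east-1:00000000-0000-0000-0000-000000000000';

console.log('Uploading to ' + albumBucketName + ' in ' + bucketRegion);
```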
Next, we call update on the AWS.config object. We specify the region and the credentials, in this case based on the identity pool ID. Following on from that, we establish a new S3 object. Again, we specify the bucket name that we're going to be uploading our photos to. The next section queries the browser DOM and establishes a number of pointers to various objects within the DOM.
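That initialization, the config update and the S3 client, can be sketched as follows, assuming the AWS SDK for JavaScript v2 that the app uses in the browser (this is the standard pattern for Cognito identity pool credentials):

```javascript
// Initialization section of script.js (AWS SDK for JavaScript v2).
// albumBucketName, bucketRegion and IdentityPoolId are the settings
// defined at the top of the file.
AWS.config.update({
  region: bucketRegion,
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: IdentityPoolId   // unauthenticated access was enabled on this pool
  })
});

// S3 client bound to the demo bucket, so upload calls don't need
// to repeat the Bucket parameter.
var s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  params: { Bucket: albumBucketName }
});
```

Because the identity pool allows unauthenticated identities, the browser obtains temporary credentials without any sign-in step; those credentials carry whatever permissions the unauthenticated IAM role was given.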
The real magic of the application takes place in the take snapshot function, which gets triggered when the user clicks the take photo button. The first thing we do is get a handle to the canvas onto which the current photo has been captured. Calling toDataURL converts the canvas data into a base64-encoded image representation. We then take that data URL and convert it into blob data, which we pass as part of a parameter into the S3.upload method. This method takes a callback function, and the callback function here calls the Rekognition service.
If the upload was successful, the Rekognition service then performs a detect faces operation on the image now hosted within the S3 bucket. The rekognition.detectFaces method also takes a callback function. Here, the callback function's role is to set the innerHTML on the Rek element to the facial analysis response data returned from the detectFaces call. For the convenience of the end user, this data is formatted and color-coded before it is rendered to the screen.
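A condensed sketch of that upload-then-analyze flow follows. The dataURLToBytes helper is a hypothetical stand-in for the data-URL-to-blob conversion described above (names and structure are ours, not necessarily the repo's); the SDK calls in the comments follow the documented S3 upload and DetectFaces request shapes:

```javascript
// Hypothetical helper: decode the base64 payload of a canvas data URL
// ('data:image/png;base64,AAAA...') into raw bytes for upload.
function dataURLToBytes(dataURL) {
  var base64 = dataURL.split(',')[1];
  // atob is available in browsers (and modern Node); Buffer is the Node fallback.
  return typeof atob === 'function'
    ? Uint8Array.from(atob(base64), function (c) { return c.charCodeAt(0); })
    : Buffer.from(base64, 'base64');
}

// In the app, the flow is roughly:
//   var dataURL = canvas.toDataURL('image/png');
//   s3.upload(
//     { Key: photoKey, Body: dataURLToBytes(dataURL), ContentType: 'image/png' },
//     function (err, data) {
//       if (err) return console.error(err);
//       rekognition.detectFaces(
//         { Image: { S3Object: { Bucket: albumBucketName, Name: photoKey } },
//           Attributes: ['ALL'] },
//         function (err, res) { /* render res.FaceDetails into the page */ });
//     });
```

Note that DetectFaces never receives the image bytes directly here; it is pointed at the object just uploaded to S3, which is why both the upload and the Rekognition permissions had to be in place on the unauthenticated role.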
Okay, that concludes our first demonstration. Go ahead and close this lecture and we will see you shortly in the next demonstration.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).