
The AWS Build - Setting up the AWS services we need to deliver on our requirements

2h 7m



In this group of live videos, we tackle a practical scenario to help you learn real-world cloud consulting skills.

This is a unique and engaging live video format where we join the Cloud Academy AWS, Azure, and Google Cloud Platform teams in a real-time work situation. The team listens to a customer brief, discusses and defines technical requirements, and then evaluates which of the public cloud platforms could best deliver on the customer's requirements.

From this course, you will learn how cloud professionals solve real-world business problems with each of the three public cloud platforms. It is highly recommended for anyone interested in becoming a cloud architect, specialist, or consultant!

Applying your cloud skills in real-world situations is essential for a cloud professional. Real-life projects require you to evaluate requirements, define priorities, and use your knowledge of cloud services to come up with recommendations and designs that best meet customers' requirements. As a cloud professional you often have to think on your feet, process information quickly, and demonstrate design ideas quickly and efficiently.

In this course, we work through a customer scenario that will help you learn how to approach and solve a business problem with a cloud solution. The scenario requires us to build a highly available campaign site for an online competition run by loungebeer.com - a craft beer brand launching a new product into the market at the US Super Bowl event.

In these interactive discussions we join the team as they evaluate the business requirements, define the project constraints, and agree on the scope and deliverables for the solution. We then work through the technical requirements we will use to evaluate how each of the three cloud platforms - Google Cloud Platform, AWS, and Microsoft Azure - could be used to meet them.

We follow each of the platform teams as they define solution architectures for Google Cloud Platform, AWS and Microsoft Azure. We then regroup to run a feature and price comparison before the team builds a proof of concept for our solution design. 

This group of lectures will prepare you for thinking and reacting quickly, prioritizing requirements, discussing design ideas, and coming up with cloud design solutions.

02/2018 - DynamoDB now supports encryption at rest, so that could potentially influence our choice of database in this scenario.




- [Instructor] Let's now walk through the process of setting up our environment. The first thing we'll do is set up Amazon Cognito. We need to establish a federated identity for the block of JavaScript we use on the front end. We create a new identity pool; here, we'll call ours LoungeBeer2017. We enable access to unauthenticated identities, which allows the JavaScript that runs on the front end to access the Kinesis API endpoint. Recall that our JavaScript will be creating messages and sending them to the Kinesis Firehose stream; it is this identity that will give our JavaScript the access it requires. On this identity, we need to establish an IAM role and give it the appropriate permissions to access that particular endpoint. Clicking the Allow button completes the IAM role creation, and here we can see that both roles have been created successfully. We can now move on and set up the Kinesis Firehose stream. Within the AWS console, let's select the AWS Kinesis service. We click on the Firehose console button, and here we will begin to set up our Firehose stream. Clicking Create Delivery Stream, we choose a destination; in our case, we're going to stream to Amazon S3. We give the stream a name; we'll call it LoungeBeer2017. We configure our stream to send its messages into a new bucket; in our case, we'll create this new bucket called LoungeBeer2017 and click the Create Bucket button. OK, we need to ensure that our bucket name is all in lowercase letters, so let's do that. Hitting the Create Bucket button again, and we're good to go. Click Next, and our Firehose stream is almost ready to be provisioned. Let's change the data compression to GZIP so that our messages are compressed when they are put into the S3 bucket. We now establish the IAM role that the Firehose stream will use to access the S3 bucket. This will allow the stream to write the messages out to the bucket as they come in.
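The console steps above correspond to the Firehose CreateDeliveryStream API. As a rough sketch, the equivalent configuration might look like the following - the account ID and role name are placeholders, the bucket and stream names follow the walkthrough, and the buffering values shown are the service defaults:

```json
{
  "DeliveryStreamName": "LoungeBeer2017",
  "S3DestinationConfiguration": {
    "RoleARN": "arn:aws:iam::123456789012:role/firehose-s3-role",
    "BucketARN": "arn:aws:s3:::loungebeer2017",
    "CompressionFormat": "GZIP",
    "BufferingHints": { "SizeInMBs": 5, "IntervalInSeconds": 300 }
  }
}
```

A file like this could be passed to `aws firehose create-delivery-stream --cli-input-json file://stream.json` to script what we just did by hand in the console.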
As you can see now, the Kinesis console has autogenerated the IAM policy that we require for the Firehose stream to have access to our new bucket. Inside the resource list you can see that the new bucket we created, loungebeer2017, has been placed in the resource section. We click on Allow, and the IAM role with its new policy is all configured and wired up for us. Unfortunately, in this case, since I've already created a previous role with the same name, I need to rename this current role; let's do that. In this case, I'll just add a prefix, LoungeBeer2017. Clicking Allow again, and the role is created successfully. Clicking the Allow button now takes us back to the Configuration screen; let's finalize the configuration by clicking the Next button. Here we have the Review section; everything looks good, so let's complete the creation of the delivery stream. We now see that our delivery stream has been created successfully, so let's click on it. In this section we can see the details that we configured; additionally, we can see the Monitoring and the S3 logs via their respective tabs. Let's pause and do a quick review of what we've just established. First, we created a Cognito unauthenticated identity; then we created a Kinesis Firehose stream and configured it to write the messages it receives to an S3 bucket. We'll now move on and start to configure our front-end code; in this case, we'll update our JavaScript block with our Cognito and Kinesis configuration attributes. The first thing we need to do is copy in our new identity pool ID, so let's go back to Cognito and pull out the identity pool ID for the new identity pool that we created at the start. Remembering that we created LoungeBeer2017, we click on this identity pool, and if we go into the Edit section, we get access to the identity pool ID; let's copy that.
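The autogenerated policy for the Firehose role typically resembles the following sketch. The bucket ARNs match the walkthrough's bucket; the exact action list the console generates may vary:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::loungebeer2017",
        "arn:aws:s3:::loungebeer2017/*"
      ]
    }
  ]
}
```

Note that both the bucket ARN and the `/*` object ARN appear, since listing the bucket and writing objects into it are authorized against different resource types.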
We'll go back to our JavaScript code and paste in the new identity pool ID. In our case, I'm simply going to take a copy of the existing one, comment it out, and create a new one. Additionally, we need to update the region; again, we comment out the existing one, create a copy of it, and update it to us-west-2. In our JavaScript, we have two event listeners. The first is attached to a button on the front end that, when clicked, sends exactly one message to our Kinesis Firehose stream. The second is attached to a second button which, when clicked, sends a batch of 100 messages to the same Kinesis Firehose stream. With this in mind, we update the name of the Kinesis stream within both event listeners. OK, we've made all the necessary edits to our JavaScript code, so let's save this file and move on to review the IAM policy that is attached to the Cognito unauthenticated identity. In IAM, we'll do a search for the unauthenticated role that is attached to our unauthenticated identity. Clicking on Show Policy, we can see that it has just two actions, neither of which is related to Kinesis; let's leave this as is. I'm now going to jump onto the command line and start up a local web server to serve our front-end code base. But before I do that, let's take a quick look at the contents of the directory. Within it we have these files; let's dump out the kinesis-example.js file. Here you can see the identity pool ID that we've just updated, and also the region that we've just reconfigured. OK, let's now start serving these files so that we can browse to them. Copying the URL, we'll open up Chrome and browse to it. I create a new Incognito window, paste in the URL, and press Enter. I've also opened up the debug tools so that we can see what gets transferred over the wire. As you can see, by browsing to this URL, a number of requests have been made.
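A minimal sketch of how the front-end wiring described above might look, using the AWS SDK for JavaScript v2. The identity pool ID, payload shapes, and button variable names are placeholders, not the course's actual values; the SDK calls themselves are shown in comments since they require a browser page with the SDK loaded:

```javascript
// Configuration the transcript updates by hand (placeholder pool ID).
const config = {
  region: 'us-west-2',
  identityPoolId: 'us-west-2:00000000-0000-0000-0000-000000000000', // from Cognito > Edit identity pool
  streamName: 'LoungeBeer2017'
};

// Build the PutRecord parameters for a single message.
// Records are newline-terminated so they stay delimited once concatenated in S3.
function buildRecord(streamName, payload) {
  return {
    DeliveryStreamName: streamName,
    Record: { Data: JSON.stringify(payload) + '\n' }
  };
}

// Build the PutRecordBatch parameters for `count` messages
// (PutRecordBatch accepts up to 500 records per call).
function buildBatch(streamName, count) {
  const records = [];
  for (let i = 0; i < count; i++) {
    records.push({ Data: JSON.stringify({ seq: i }) + '\n' });
  }
  return { DeliveryStreamName: streamName, Records: records };
}

// In the page, the two event listeners would call the SDK roughly like this:
//   AWS.config.region = config.region;
//   AWS.config.credentials = new AWS.CognitoIdentityCredentials({
//     IdentityPoolId: config.identityPoolId
//   });
//   const firehose = new AWS.Firehose();
//   sendOneBtn.addEventListener('click', () =>
//     firehose.putRecord(buildRecord(config.streamName, { event: 'click' }), console.log));
//   sendBatchBtn.addEventListener('click', () =>
//     firehose.putRecordBatch(buildBatch(config.streamName, 100), console.log));
```

The unauthenticated Cognito credentials are what sign these requests, which is why the unauthenticated role's policy matters in the next step.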
Let's click the first button and send one message. You'll notice on the console that we've encountered an error. This was intentional, so let's copy this ARN, go back to our IAM role, and update the policy on it to include the action that is currently missing. So let's edit this policy and add in this additional action. We'll click the Apply Policy button, and this will confirm the update. We then go back to our browser and reload the page. We clear our console, do our reload, and click the Send 1 button again. This time, the send to the Kinesis stream has succeeded, as confirmed by the response printed out in the console. Let's look at the Network tab and see what happened. Here you can see the headers that were used by the JavaScript when it communicated with the Kinesis stream API endpoint. We can also see the request payload. Let's now send a batch of 100 by clicking the second button. Again, we encounter an error. As we can see, we do not have permission to use the firehose:PutRecordBatch action, so let's go back and update our policy. We click on Edit Policy and add in the new required action, in this case firehose:PutRecordBatch. We apply the policy and come back to our browser. We reload the browser again. Clicking on the Send 100 button, we now see that our batch send to the Kinesis stream has succeeded, as indicated by a zero for FailedPutCount, and additionally by the fact that we have an array of 100 responses. Let's now jump back into the AWS console and look at the Firehose monitoring. Currently we've got it configured to show the monitoring for the last hour. We'll now also take a look at our S3 bucket, where we should be able to start seeing messages being written out from the stream. So we do a refresh on this bucket, and we notice that nothing's coming through yet.
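After both edits, the statement on the unauthenticated role would look something like the following sketch. The account ID is a placeholder; the stream name and actions match the walkthrough:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:us-west-2:123456789012:deliverystream/LoungeBeer2017"
    }
  ]
}
```

Scoping the resource to the single delivery stream ARN, rather than `*`, keeps the unauthenticated identity from writing to any other stream in the account.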
What we need to do is go back to our Firehose setup, where we'll soon realize that Firehose is configured to buffer the incoming messages. We can see from the monitoring that messages are actually arriving, but they're not being written out; the reason, as mentioned, is that they're being buffered in the Firehose stream. So let's change the buffering and make the stream write out to the S3 bucket more frequently. In this case, we'll set the S3 buffer size down to 1 MB and the buffer interval down to 60 seconds. We'll commit that change and send in some more records. That should be enough. OK, jumping back to our AWS Kinesis console and then into our S3 bucket, we can now see that some of the messages are starting to arrive. If we drill down into the lowest folder, we now see files, each representing a gzipped file of the messages received by Kinesis and written out to S3. Let's download one of these files. We'll then open up the gzip file, and it gets decompressed for us. Finally, we'll open this up in our text editor; in our case, I'm going to use Atom. And here we can see a copy of all of the data that was written into the Kinesis stream from our JavaScript AWS Kinesis SDK.

About the Author
Andrew Larkin
Head of Content

Andrew is an AWS certified professional who is passionate about helping others learn how to use and benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Passions outside of work are cycling and surfing, and having a laugh about the lessons learned trying to launch two daughters and a few start-ups.