Creating a Highly Available Campaign Website - Scenario

Microsoft Azure - Solution Design

Overview
Difficulty: Beginner
Duration: 2h 7m
Students: 437

Description

In this group of live videos, we tackle a practical scenario to help you learn real-world cloud consulting skills.

This is a unique and engaging live video format where we join the Cloud Academy AWS, Azure, and Google Cloud Platform teams in a real-time work situation. The team listens to a customer brief, discusses and defines technical requirements, and then evaluates which of the public cloud platforms could best deliver on the customer's requirements.

From this course, you will learn how cloud professionals go about solving real-world business problems with cloud solutions, tackling the same brief with each of the three public cloud platforms. It is highly recommended for anyone interested in learning how to become a cloud architect, specialist, or consultant!

Applying your cloud skills in real-world situations is essential for a cloud professional. Real-life projects require you to evaluate requirements, define priorities, and use your knowledge of cloud services to come up with recommendations and designs that best meet customers' requirements. As a cloud professional, you often have to think on your feet, process information quickly, and demonstrate design ideas quickly and efficiently.

In this course, we work through a customer scenario that will help you learn how to approach and solve a business problem with a cloud solution. The scenario requires us to build a highly available campaign site for an online competition run by loungebeer.com - a craft beer brand launching a new product into the market at the US Super Bowl event.

In these interactive discussions, we join the team as they evaluate the business requirements, define the project constraints, and agree on the scope and deliverables for the solution. We then work through the technical requirements and evaluate how each of the three cloud platforms - Google Cloud Platform, AWS, and Microsoft Azure - could be used to meet them.

We follow each of the platform teams as they define solution architectures for Google Cloud Platform, AWS and Microsoft Azure. We then regroup to run a feature and price comparison before the team builds a proof of concept for our solution design. 

This group of lectures will prepare you for thinking and reacting quickly, prioritizing requirements, discussing design ideas, and coming up with cloud design solutions.

Update 02/2018 - DynamoDB now supports encryption at rest, which could potentially influence our choice of database in this scenario:
https://aws.amazon.com/blogs/aws/new-encryption-at-rest-for-dynamodb/

For planning tools, see:
https://www.agilebusiness.org/content/moscow-prioritisation-0
http://www.allaboutagile.com/prioritization-using-moscow/

For more information on whitelisting, see:
https://cloudacademy.com/course/understanding-aws-authentication-authorization-accounting/authorization-in-aws-1/

Transcript

- What I was thinking is serverless because really it is a bare bones API that you can post data to. I was thinking of using a serverless Azure function with an HTTP trigger, that's going to be the API. That takes the post request, dumps that into an event hub and then that queues it up. From there, there's another serverless function that's just a maybe five line function that just processes that queue and stores that to Cosmos DB. Cosmos DB does encryption at rest which is one of our requirements.
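To make the first half of that pipeline concrete, here is a minimal sketch of an HTTP-triggered function that forwards sign-ups to an event hub. This is an illustration only, written against the current Node.js programming model for Azure Functions; the "signups" hub name and "EventHubConnection" app setting are assumptions, not taken from the actual project:

```typescript
import { app, output, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";

// Event Hub output binding; hub name and connection app setting are placeholders.
const signupQueue = output.eventHub({
  eventHubName: "signups",
  connection: "EventHubConnection",
});

// HTTP trigger: the bare-bones API the campaign form posts to.
app.http("saveData", {
  methods: ["POST"],
  authLevel: "anonymous",
  extraOutputs: [signupQueue],
  handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
    const body = (await request.json()) as { email?: string };
    // Queue only the whitelisted email property; everything else is dropped.
    context.extraOutputs.set(signupQueue, { email: body.email });
    return { status: 200, jsonBody: { status: "OK" } };
  },
});
```

The HTTP trigger and the Event Hub output binding are the only moving parts here; the platform manages everything else.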

- It makes it easy because you can query with SQL, which we all know and love so it makes it easy. Let me share my screen.

- That's high availability, Ben. We've got no issues with availability with that design?

- There shouldn't be. The assets will be hosted with Azure Storage - cloud storage - with a CDN in front of it. That's going to handle edge locations for that. We can have the actual functions deployed in multiple regions if need be. We can handle that. For this case, I'm not sure, I think we'll have high enough availability with just a ... Let me know when you can see my screen here.

- Okay, we've got it.

- [Ben] Okay, cool. Let me show you the proof of concept. This is just a very crude bootstrap form and if I type in fake@example.com, again this is a very crude proof of concept. Now, what happens is when I click this, it's going to send off an API call - a POST request - to this save data function. All this does is take the request and de-serialize it into this object here, and it's going to queue it up into an event hub. This gives us ultra high availability. If for some reason something should go wrong with this function, we can always take something else and serve the API up from that. The API and the processing of the event hub are decoupled. Once this sends it to an event hub, there's another function here. This is the one processing the event and saving it to Cosmos. This is where we take the request from the user and just return the response of OK.

- Are we doing any transformation of the post string?

- [Ben] Yeah, just a little bit. We're taking it and transforming it into this user data object. What this does is basically whitelist only this email property. My assumption was basically, if you submit the form, you're approving the legal agreement - you're saying you're old enough for this campaign, etc. You can add new properties here and edit the HTML as well. This is the basic form here. Once you click submit, it takes the email address - this is Vue.js - and it's going to just turn this into a string. It's going to send this off to our endpoint here. On success, it's just going to show the success message; on error, the error message. If you needed to add new properties, you could add them in the form here. New input, just edit this JSON here. Jumping back, yeah this one's the same data. I thought it was in the other one at first. We have the HTTP request that comes in. Even if somebody puts a million properties on there, once we de-serialize this object, email is the only thing that remains.
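Ben's demo gets its whitelisting for free from typed deserialization: whatever arrives in the request body, only the declared properties survive. A hypothetical TypeScript equivalent of that step (UserData and toUserData are illustrative names, not from the demo):

```typescript
interface UserData {
  email: string;
}

// Copy only the whitelisted property; everything else in the payload is dropped.
function toUserData(payload: unknown): UserData {
  const raw = (payload ?? {}) as Record<string, unknown>;
  return { email: String(raw.email ?? "") };
}

// toUserData({ email: "fake@example.com", admin: true, junk: 1 })
// => { email: "fake@example.com" } - even a million extra properties disappear
```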

- Yep, nice.

- [Ben] All this does, output event message. This is going to take this object that we've turned into this user data and it's going to dump this onto our event hub. We have basically an event hub queue that's sitting there listening. Those are extremely available. We don't have any availability issues in there. Then it just returns an HTTP 200 and a JSON message with a status of OK. This processes the queue. This is set up based on a trigger. Any time a new message comes into the event hub, it's going to trigger this. We can actually fire up the console here. You can see this in action. Clear. If I submit this again, fake2. The HTTP trigger processes the request, it's going to grab that, put it in the event hub, and then the event hub grabs that and puts it in Cosmos, so now we have it stored in the db. And if I run this query, you can see - let me scroll down here - there's fake, there's fake2. Just to show I'm not hoodwinking you.
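The second function Ben shows - the queue processor - can be sketched the same way: an Event Hub trigger with a Cosmos DB output binding. Again this is an assumption-laden illustration (the database, container, and connection names are placeholders), not the actual demo code:

```typescript
import { app, output, InvocationContext } from "@azure/functions";

// Cosmos DB output binding; database, container, and connection names are placeholders.
const signupStore = output.cosmosDB({
  databaseName: "campaign",
  containerName: "signups",
  connection: "CosmosConnection",
});

// Fires whenever a new message lands on the event hub.
app.eventHub("processSignup", {
  eventHubName: "signups",
  connection: "EventHubConnection",
  cardinality: "one",
  extraOutputs: [signupStore],
  handler: async (message: unknown, context: InvocationContext) => {
    // The message is already the whitelisted { email } object; just persist it.
    context.extraOutputs.set(signupStore, message);
  },
});
```

The verification query Ben runs is plain Cosmos DB SQL, along the lines of SELECT * FROM c.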

- That's awesome. Encryption at rest, brilliant. Plus we can SQL it. That sounds really powerful.

- It's not bad. It's easy to implement. If you check out this one, we have 19 lines of code, and here we have - we'll call it 23 - some white space. Ease of maintenance, ease of accessing the data after the fact. Here are the actual files. There's really not much code for this when all is said and done. It needs a CDN to sit in front of it to make it more highly available. Overall, it's pretty easy to use.

- That is great! One of the things we were looking at with AWS was the WAF web application firewall service. Do we have anything similar with an Azure implementation?

- There are offerings for that, but in this case I opted to not use one. That was intentional. There's just nothing there to protect in our worst case scenario. We're not taking any data and really doing anything with it, other than de-serializing it through a mechanism that's going to get rid of any untrusted data, and we're dumping it into a database that ... It's not even a SQL database per se, so SQL injection's not really a thing, nor is cross-site scripting. There's very little in the way of attack vectors, and so there's almost no value in sticking a firewall in front of it.

- Okay, okay. Could you just walk us through the components you used again?

- In particular, the standout is Azure Functions.

- Yeah.

- Azure Functions give you an HTTP trigger. It's kind of like the API Gateway in AWS.

- Yep.

- [Ben] It's allowing me to create an API. There's an Azure function that also listens to the event hub, so Event Hub is the other component. Event Hub is just a very highly scalable queuing mechanism. I can send messages to it and they will sit there for a time of my choosing, between one and seven days. That gives us a lot of flexibility. If for some reason our processing component went down, we can still store messages for up to seven days.
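For reference, sending to an event hub directly from code looks roughly like this, using the @azure/event-hubs SDK (the connection setting and hub name are placeholders). The one-to-seven-day retention Ben mentions is a property of the event hub itself, not something the sender sets:

```typescript
import { EventHubProducerClient } from "@azure/event-hubs";

// Namespace connection string and hub name are placeholders.
const producer = new EventHubProducerClient(
  process.env.EVENTHUB_CONNECTION!,
  "signups"
);

async function enqueue(email: string): Promise<void> {
  const batch = await producer.createBatch();
  batch.tryAdd({ body: { email } }); // retention (1-7 days) is configured on the hub
  await producer.sendBatch(batch);
}
```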

About the Author

Students: 50,474
Courses: 76
Learning paths: 28

Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession" as everything AWS starts with the customer. His passions outside of work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.