Implementation and Deployment
In this course, we apply the design principles, components, and services we learned in the previous courses in this learning path to design and build a highly available, scalable application. We then apply our optimization principles to identify ways to increase durability and cost efficiency to ensure the best possible solution for the end customer.
- Identify the appropriate techniques and methods using AWS services to implement a highly available, cost-efficient, fault-tolerant cloud solution.
This course is for anyone preparing for the Solutions Architect–Associate for AWS certification exam. We assume you have some existing knowledge and familiarity with AWS, and are specifically looking to get ready to take the certification exam.
You'll need basic knowledge of core AWS functionality. If you don't have it yet, we recommend our Fundamentals of AWS Learning Path. We also assume you have completed all the other courses, labs, and quizzes that precede this course in the Solutions Architect–Associate on AWS learning path.
This Course Includes
- 4 Video Lectures
- Real-world application of concepts covered in the exam
What You'll Learn
| Lecture Group | What you'll learn |
| --- | --- |
| Solution Design | How to apply what you've learned about designing solutions to a real-world scenario |
| Solution Architecture | Architecting a solution in the real world |
| Implementation | Implementing a solution you've designed and architected for the real world |
| Optimizing for High Availability | Optimizing your real-world solution for high availability |
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Okay, Cloud Academy ninjas. Let's review some of the key points we have to keep in mind around domain two, implementation and deployment.

First, subnets. If a subnet has an Internet Gateway and a route to that internet gateway, then it's a public subnet. If a subnet doesn't have an internet gateway, or more importantly a route to that gateway, then it's a private subnet. If a subnet only routes traffic to the Virtual Private Gateway, or VGW, then it's a VPN-only subnet. Route tables are the rules for where traffic is allowed to go, and a route table is what enables EC2 instances in different subnets to talk to each other.

Now, let's make sure we're clear on IP addressing in the VPC. Public IP addresses are owned and assigned by AWS, and they can be automatically assigned to instances launched within the VPC. An EIP, or Elastic IP address, is also an AWS-owned public IP address, but it's one that you can allocate to your account. By default we're allowed five Elastic IP addresses per region, which happens to match the default limit of five VPCs per region. EIPs are free while they're in use, but we do pay for an EIP that's allocated and not being used. Sounds confusing, I know, but this is just to prevent people from reserving a whole lot of Elastic IP addresses. So basically, you're only charged for them if you don't use them.

A few things to keep in mind about the VPC. The minimum subnet size you can have in a VPC is a /28 CIDR, and the maximum size you can have for a VPC is a /16. You can't change the CIDR block of a VPC once it's been created. And you're going to need two public subnets and two private subnets if you have a high-availability design that uses two availability zones. So if you've got a question about how many subnets you need, it's two public and two private, four subnets in total. Finally, network ACLs, or NACLs, are associated with a VPC subnet to control traffic flow.
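Those CIDR limits are easy to sanity-check with Python's standard `ipaddress` module. A minimal sketch (the example addresses are made up; the "5 reserved addresses per subnet" figure is the standard AWS reservation of the network address, VPC router, DNS, one future-use address, and the broadcast address):

```python
import ipaddress

# Smallest subnet AWS allows in a VPC is a /28; largest VPC CIDR is a /16.
smallest = ipaddress.ip_network("10.0.0.0/28")
largest = ipaddress.ip_network("10.0.0.0/16")

print(smallest.num_addresses)  # 16 total addresses in a /28
print(largest.num_addresses)   # 65536 total addresses in a /16

# AWS reserves 5 addresses in every subnet, so usable hosts = total - 5.
print(smallest.num_addresses - 5)  # 11 usable addresses in a /28
```

Running the numbers like this makes the exam trivia stick: a /28 only leaves you 11 usable addresses once AWS takes its five.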
By creating an IGW and a route to that IGW, you're creating a public subnet. You can only have one internet gateway for each VPC. When you create a VPC, all subnets can communicate with each other by default.

If you're setting up a NAT instance, remember you need to disable the source/destination checks on that instance to enable traffic flow. An Elastic IP address remains associated with an instance when the instance is stopped. In an EC2-Classic network, the Elastic IP address is disassociated from the instance on a stop/start event, but not in a VPC. A stop and start of an EBS-backed EC2 instance generally changes the underlying host computer; it's not guaranteed, but it usually does. And if you attach an elastic network interface, an ENI, that is associated with a different subnet, then the instance will be dual-homed.

Reserved instances enable cost savings if you need to run instances full time, or, with scheduled reserved instances, on a recurring schedule, say regular applications that run nine to five, Monday to Friday. With reserved instances, you can change an instance type within the same instance family, and you can change the availability zone, but that's generally all. On-demand instances, on the other hand, give you the flexibility to handle spikes in traffic, so if you've got a retail site or a marketing campaign, that may suit on-demand. Spot instances are a really cost-effective way to provide compute when we're not time-constrained, so a non-urgent requirement that can tolerate an interruption, say batch processing or number crunching, suits spot instances well. And Dedicated Instances ensure your application will not run on hardware used by any other customer, but of course Dedicated Instances can cost you a lot more.
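The public/private/VPN-only rule above boils down to "what targets appear in the subnet's route table". Here's a toy Python model of that classification logic; the route-target strings and the function itself are made up for illustration and are not a real AWS API:

```python
# Toy classifier for the subnet rules described above.
# Route targets are simplified to their AWS-style ID prefixes:
#   "local"  -> intra-VPC routing (every route table has this)
#   "igw-*"  -> Internet Gateway, "vgw-*" -> Virtual Private Gateway
def classify_subnet(route_targets):
    """Classify a subnet as public, vpn-only, or private."""
    if any(t.startswith("igw-") for t in route_targets):
        return "public"      # a route to an IGW makes it public
    has_vgw = any(t.startswith("vgw-") for t in route_targets)
    only_vgw = all(t.startswith(("vgw-", "local")) for t in route_targets)
    if has_vgw and only_vgw:
        return "vpn-only"    # traffic only routes to the VGW
    return "private"         # no route to an IGW

print(classify_subnet(["local", "igw-0abc1234"]))  # public
print(classify_subnet(["local", "vgw-0def5678"]))  # vpn-only
print(classify_subnet(["local"]))                  # private
```

If an exam question describes a route table, mentally running it through this decision order (IGW first, then VGW-only, else private) gives the answer.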
Now, the instance type itself defines the virtual hardware, and it's the AMI that determines the initial software state. The instance type and the AMI are the two things you need when you start up a new instance. If an instance has multiple security groups, then the rules from all of those groups are aggregated as far as the instance is concerned. And if you enable enhanced networking, you get more packets per second, lower latency, and less jitter.

If you're using SSL, the SSL certificate must specify the name of the website it's being used for, either as the subject name or as a value in the SAN (Subject Alternative Name) extension of the certificate. If it doesn't, connecting clients will see an error.

Now, a VPN is a private connection between network resources using an IPsec tunnel, so IPsec encrypts the traffic passed between the network endpoints. To set up a VPN you need a few things. First, you need a VGW, or Virtual Private Gateway, on the AWS side of the network. You also need a CGW, or customer gateway, which is generally a physical device or a software appliance on the customer side of the network. And the VPN connection needs to be initiated from the customer side. By default we get two VPN tunnels with an IPsec connection, which adds another layer of durability to your networking. Now, IPsec encrypts the transport, but it doesn't increase the resilience of your connection. To do that, we use Direct Connect, which is a dedicated connection between your network and AWS; traffic sent over Direct Connect does not rely on the internet, so it's more resilient and less prone to traffic fluctuations.

Okay, a few things to keep in mind with the services. Simple Workflow Service is an important one to remember: it enables coordinated tasks across distributed components, and it's really useful for coordinating tasks in, say, an order-processing application.
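That point about multiple security groups being aggregated is worth seeing concretely: the instance's effective permissions are simply the union of the rules from every attached group. A minimal sketch, with made-up group names and rules (not a real AWS API):

```python
# Simplified model of security-group aggregation: an instance with
# several groups attached gets the union of all their inbound rules.
# The rules below are hypothetical (protocol, port) pairs.
def effective_rules(*groups):
    """Return the union of rules across all attached security groups."""
    merged = set()
    for rules in groups:
        merged.update(rules)
    return merged

web_sg = {("tcp", 80), ("tcp", 443)}    # web traffic group
admin_sg = {("tcp", 22), ("tcp", 443)}  # admin access group

allowed = effective_rules(web_sg, admin_sg)
print(sorted(allowed))  # [('tcp', 22), ('tcp', 80), ('tcp', 443)]
```

Because security-group rules are allow-only, union is the right mental model: attaching an extra group can only open traffic, never block it.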
Now, an SWF workflow is a collection of activities performed by Simple Workflow Service actors, and an SWF actor can be a worker, a workflow starter, or a decider. Each workflow runs in a domain, and you can have multiple domains per account, but workflows in different domains can't interact with each other.

Remember that Simple Notification Service is an asynchronous push notification service, which is different from Simple Queue Service, a queuing service. Simple Notification Service enables a publisher to send notifications to an individual subscriber or groups of subscribers, and it can use HTTP, HTTPS, SMS, email, email-JSON, Amazon SQS, or even a Lambda function as a protocol. The key elements to remember for Simple Notification Service are the publisher, the subscriber, and the topic.

A Simple Queue Service visibility timeout is the period of time during which SQS prevents other applications or services from receiving a message, because another component has already picked that message up. The default visibility timeout is 30 seconds, and the maximum you can set is 12 hours. By default, the ordering of SQS messages is not guaranteed; however, SQS now supports first-in, first-out (FIFO) queues, which enable you to enforce message ordering. By default, SQS messages are retained for four days, and you can configure retention for up to 14 days. If you need to store messages for longer, pull them out and store them in Amazon S3.

SQS long polling is another very important feature. It allows an application to poll an SQS queue with a wait time you can set of up to 20 seconds; if there's no message in the queue, SQS holds the request open rather than returning an empty response. So long polling reduces empty responses and the CPU cycles spent polling the queue.
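The visibility-timeout behaviour is easier to remember once you've traced it through. Here's a tiny in-memory model of the semantics described above; it is a hypothetical sketch, not the real SQS service or the boto3 client, and the message body is made up:

```python
import time

class ToyQueue:
    """In-memory sketch of SQS visibility-timeout semantics.
    A received message becomes invisible to other consumers until the
    timeout expires or the message is deleted."""

    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # message id -> (body, invisible_until)
        self.next_id = 0

    def send(self, body):
        self.messages[self.next_id] = (body, 0.0)
        self.next_id += 1

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for mid, (body, invisible_until) in self.messages.items():
            if now >= invisible_until:
                # Hide the message from other consumers for the timeout.
                self.messages[mid] = (body, now + self.visibility_timeout)
                return mid, body
        return None  # nothing visible right now

    def delete(self, mid):
        self.messages.pop(mid, None)

q = ToyQueue(visibility_timeout=30)
q.send("order-123")
print(q.receive(now=0.0))   # (0, 'order-123') -- consumer A gets it
print(q.receive(now=10.0))  # None -- still invisible to consumer B
print(q.receive(now=31.0))  # (0, 'order-123') -- timeout expired, it reappears
```

The reappearance at 31 seconds is exactly why consumers must delete a message after processing it: if they don't, SQS assumes the consumer failed and makes the message visible again.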
So if you get a question about how to reduce polling cycles, increasing your long-polling wait time may help. Okay, those are a few things to keep in mind for our exam preparation. Let's get into the next domain.
Head of Content
Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Outside of work, his passions are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.