Designing for high availability, fault tolerance and cost efficiency
AWS Services That Enable High Availability
The AWS exam guide indicates that 15% of the SysOps Administrator–Associate exam questions may cover designing highly available, fault-tolerant, cost-efficient, scalable systems. This course teaches you to recognize and explain the core architecture principles of high availability, fault tolerance, and cost optimization. We then step through the core AWS components that, used together, enable highly available solutions, so you can recognize and explain how to design and monitor highly available, cost-efficient, fault-tolerant, scalable systems.
- Identify and recognize cloud architecture considerations such as functional components and effective designs
- Define best practices for planning, designing, and monitoring in the cloud
- Develop to client specifications, including pricing and cost
- Evaluate architectural trade-off decisions when building for the cloud
- Apply best practices for elasticity and scalability concepts to your builds
- Integrate with existing development environments
This course is for anyone preparing for the Solutions Architect–Associate for AWS certification exam. We assume you have some existing knowledge and familiarity with AWS and are specifically looking to get ready to take the certification exam.
Prerequisites
Basic knowledge of core AWS functionality is assumed. If you haven't already completed it, we recommend our Fundamentals of AWS learning path. We also recommend completing the other courses, quizzes, and labs in the Solutions Architect–Associate for AWS certification learning path.
This Course Includes
- 11 video lectures
- Detailed overview of the AWS services that enable high availability, cost efficiency, fault tolerance, and scalability
- A focus on designing systems in preparation for the certification exam
What You'll Learn
|Lecture Group|What you'll learn|
|---|---|
|Designing for High Availability, Fault Tolerance and Cost Efficiency|How to combine AWS services together to create highly available, cost-efficient, fault-tolerant systems.|
|Designing for Business Continuity|How to recognize and explain Recovery Time Objective (RTO) and Recovery Point Objective (RPO), and how to recognize and implement AWS solution designs to meet common RTO/RPO objectives.|
|Ten AWS Services That Enable High Availability|Regions and Availability Zones, VPCs, ELB, SQS, EC2, Route53, EIP, CloudWatch, and Auto Scaling|
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Elastic Compute Cloud, or EC2, provides virtual processing power through cloud-based virtual machines. Amazon EC2 is a core part of the syllabus, so let's go through it step by step.

To launch an instance, you first select an Amazon Machine Image, or AMI. The AMI defines what operating system and software will be on the instance when it launches. There are four types of AMI. First, there are AWS-published AMIs, which are basically generic machines with an operating system pre-installed. Amazon EC2 currently supports Amazon Linux, Ubuntu, Windows Server, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Fedora, Debian, CentOS, Gentoo Linux, Oracle Linux, and FreeBSD. Then there are AWS-partner-published AMIs, found in the AWS Marketplace, which generally have software packages pre-installed and optimized for particular workloads. There are also community AMIs, which have been built or snapshotted from an existing, optimized EC2 instance that someone wanted to share with partners or the community. And finally, you can snapshot your own virtual machines and save those as private AMIs.

A single AMI can be used to launch one or thousands of instances. Once you have created an AMI, replacing a failed instance is very simple: you launch a replacement instance that uses the same AMI as a template. This can be done through an API invocation, through scriptable command-line tools, or through the AWS Management Console. Creating Amazon Machine Images of customized machines enables you to launch and relaunch instances quickly, thereby increasing availability and durability. From an ideal machine configuration, you can save what we call a golden image as a template to use in Auto Scaling launch configurations and groups. AMIs are private by default. They can be shared with team members or made public, and they can also be copied across regions.
EC2 allows you to pick configurations, called instance types, that best match the type of processing needed. Performance testing is important when selecting instance types. It's best practice to assess the requirements of your applications and select the appropriate instance family as a starting point for application performance testing. Start evaluating the performance of your applications by first identifying how your application's needs compare to different instance families, i.e., is the application compute-bound or memory-bound? Second, size your workload to identify the appropriate instance size.

Okay, so a few things to consider when we're choosing an instance type. Network support is important: every instance type is rated low, moderate, or high for networking, and there is 10-gigabit-per-second network performance as well if we want high-end networking. Generally, the larger the instance type, the better the network performance. Some instance types offer enhanced networking, an option you can choose that provides additional improvements to network performance.

Another major factor to consider when choosing is storage. There are two types of storage that you can select with Amazon Machine Images, and it's really important that you know the difference between them. First we have instance store volumes, and second we have Elastic Block Store (EBS) volumes. Instance store volumes come with the machine, basically. Because instance store is temporary, these disks are described as ephemeral storage; ephemeral is just another way to say temporary. So if you see instance store or ephemeral store, think temporary, and "I'm likely to lose what is on that disk if this machine terminates." Okay, so how does this affect our AMIs? All AMIs are categorized as either backed by Amazon EBS or backed by instance store.
With EBS-backed instances, the root device for your instance is an Amazon EBS volume created from an Amazon EBS snapshot. With instance-store-backed instances, the root device for your instance is an instance store volume created from a template stored in Amazon S3. Instances that use an instance store volume for the root device automatically have instance store available; the root volume contains the root partition, and you can store additional data on it. Any data on an instance store volume is deleted when the instance fails or terminates.

Instance-store-backed instances can't be stopped; they're either running or terminated. So, strange but true, and also a great tongue twister: you can't stop an instance that's launched from an instance-store-backed AMI with ephemeral storage. Here we can see I've launched an instance-store-backed instance; the paravirtual setting shows it's instance-store backed. When we try to stop or start it, you can see the Stop and Start options are grayed out, and all we can do is terminate or reboot. You can add persistent storage to that instance by attaching one or more Amazon EBS volumes.

Instances that use an Amazon EBS volume for their root device automatically have an Amazon EBS volume attached when they start. That volume appears in your list of volumes like any other, and these instances don't use any available instance store volumes by default. So if a question comes up about an S3-backed instance, it's about ephemeral storage: data on an instance store is lost when the instance is terminated, but instance store data persists through an OS reboot. Amazon-EBS-backed AMIs also launch faster than instance-store-backed AMIs; when you launch an instance-store-backed AMI, all the parts have to be retrieved from Amazon S3 before the instance is available.
With an EBS-backed AMI, only the parts required to boot the instance need to be retrieved from the snapshot before the instance is available. There's also a difference in maximum volume size: with EBS-backed you can have up to 16 terabytes, while instance-store-backed root volumes are limited to 10 gigabytes. Another thing to consider is that you can't upgrade an instance-store-backed instance. The attributes are fixed for the life of the instance, and remember, you can't stop it, you can only terminate it. With an EBS-backed instance, on the other hand, the instance type, the kernel, the RAM disk, and the user data can all be changed while the instance is stopped.

You can create AMIs for both, but it's much easier from an Amazon-EBS-backed instance: the CreateImage API action creates your Amazon-EBS-backed AMI and also registers it. There's an option in the AWS Management Console to create an image from a running instance, but this applies only to EBS-backed instances. If you want to create an AMI for an instance-store-backed instance, you have to create the AMI on the instance itself using the Amazon EC2 AMI tools.

So why would you use an instance-store-backed instance, I hear you ask? Well, instance-store-backed instances are good when you just need temporary data storage, or you have an architecture that provides redundancy, such as Hadoop's HDFS framework, for example. They can also be really useful if you need to encrypt temporary data. And there is also a cost consideration with EBS-backed instances: storage can get expensive if you have to launch 500 EBS-backed instances where each instance needs a 20-gig EBS volume, for example. Anyway, let's get back to launching our instance from our AMI. Once we've selected the AMI we want to use and the instance type, we're presented with a number of options for startup. The first switch asks us how many instances we want to launch.
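The EBS-backed versus instance-store-backed distinctions above can be captured in a small lookup table. This is just an illustrative Python sketch of what's described in the lecture; the field names are mine, not an AWS API.

```python
# Sketch: EBS-backed vs instance-store-backed behaviors, side by side.
# Values summarize the lecture; sizes are the limits quoted there.
ROOT_DEVICE = {
    "ebs": {
        "can_stop": True,                  # stop/start supported
        "max_root_size_gb": 16000,         # up to 16 TB
        "boot_source": "EBS snapshot",
        "type_changeable": True,           # changeable while stopped
    },
    "instance-store": {
        "can_stop": False,                 # running or terminated only
        "max_root_size_gb": 10,
        "boot_source": "template in S3",
        "type_changeable": False,          # attributes fixed for life
    },
}

def can_stop(root_device_type: str) -> bool:
    """Return True if an instance with this root device can be stopped."""
    return ROOT_DEVICE[root_device_type]["can_stop"]
```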
The purchasing option is our next switch, and you may be wondering, what is a Spot instance? Allow me to introduce you to the wonderful world of Spot instances. As you can imagine, AWS constantly overprovisions compute resources to ensure instances will always be available to On-Demand customers. Spot instances allow us to buy some of that unused overhead at a lower price. The downside is that you can't guarantee when you'll have these instances, as On-Demand customers always come first. But if your processing job is not time-constrained, i.e., you don't mind waiting for a while, Spot instances are ideal, as they're way cheaper than the On-Demand price. Spot instances suit flexible, interruption-tolerant applications, for example a data-crunching job or processing a queue of video transcoding jobs, and they can lower your instance costs significantly. A Spot instance allows you to specify the maximum price you're willing to pay per instance hour, and if your bid is higher than the current Spot price, your Spot instance is launched and you'll be charged the current Spot price, not your bid.

The next switch is choosing a network, so we have to choose which network we want to launch our instance into, and then we get to choose our subnet. We can choose to auto-assign an IP address. Remember, this is the public IP address; alternatively we can use an elastic IP address, or just the private IP address that is assigned to the instance by AWS. The next switch is the IAM role, and IAM roles for EC2 automatically deploy and rotate the AWS credentials for you when you start the instance. That takes away the need to store your AWS access keys with your application. Roles are fantastic for limiting the amount of access an instance will have: you create a role in the IAM console, then you just assign it to your instance. The next switch is shutdown behavior.
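The Spot rule just described can be sketched as a small function. This is illustrative Python with made-up prices, not an AWS API call.

```python
def spot_request(bid_per_hour: float, current_spot_price: float) -> dict:
    """Sketch of the Spot pricing rule: if your maximum bid is at or above
    the current Spot price, the instance launches and you are charged the
    current Spot price, not your bid."""
    if bid_per_hour >= current_spot_price:
        return {"launched": True, "charged_per_hour": current_spot_price}
    return {"launched": False, "charged_per_hour": None}

# A bid of $0.10 against a Spot price of $0.04 launches and pays $0.04/hour.
result = spot_request(0.10, 0.04)
```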
Shutdown behavior allows you to specify what the instance does when an OS-level shutdown is performed: you can either terminate or stop the instance. This applies to EBS-backed instances. Remember, instance-store instances, which have ephemeral storage, can only be terminated; there's no stop behavior supported.

The next switch enables termination protection, which protects instances from being accidentally terminated. Once you enable this, you won't be able to terminate the instance from the API or from the AWS Management Console. The flag is the disableApiTermination attribute, and it controls whether the instance can be terminated using the console, CLI, or API. The other options we have are monitoring, where we can enable CloudWatch monitoring, and the option to have our instance run on dedicated hardware, which of course comes at a higher cost. So if we're not using the shared, virtualized tenancy of AWS, if we do need our instance to run on dedicated hardware, we can choose that here, but the instance costs will be higher.

Okay, under the Advanced Details tab is one of my favorite features, the user data, which allows us to add a bootstrap routine to our instance. The benefit of this is that you can configure a machine to start itself and configure itself, basically. The first line identifies this user data as an executable script. There are many ways you can run scripts during bootstrapping, including fetching this type of executable file from an S3 bucket. But here, first we install the Apache httpd web server and support for PHP; the -y option installs the updates without asking for confirmation. We use the httpd start command to start the Apache web server, and we use the chkconfig command to configure the Apache web server to start at system boot. Then we use the s3 cp command to recursively copy the contents of our S3 bucket to our HTML directory.
The Amazon Linux Apache document root is /var/www/html, which is owned by root by default. To allow our user to manipulate files in this directory, we need to modify the ownership and permissions of that directory. So we add a www group to the instance, then we give that group ownership of the /var/www directory along with write permissions, and we add the user we set for this instance to the www group. That means this instance will start, provision itself as a web server, and then get the latest HTML files or script files from our S3 bucket. This is just one simple example of bootstrapping; there are many things we can do to provision a new instance with this powerful feature.

Once we've got our configuration set, we can go through and add our storage. Remember, when we're adding volumes to an instance, we're talking about Elastic Block Store volumes. The ephemeral storage that is part of an instance is not something we can set; it's launched with the instance itself, and it won't persist if the instance is terminated. EBS volumes, by contrast, will persist through a reboot and through a stop or start, and they can even outlive a terminate if the Delete on Termination flag is disabled, which we'll come to in a moment. We've got a number of options for the type of EBS volume we want to set: we can set the size of the volume, and we can set the type. There are three key types. There's magnetic, which is the cheapest and generally a little slower. There are general-purpose solid-state drives, which are GP2 volumes. And there's provisioned IOPS, which allows us to set the input/output performance we'd like for this volume; that's very useful if we're running a database instance that needs a lot of high I/O activity. Again, it comes at an additional cost, so if we're just looking at running an instance with a basic volume, we could use magnetic.
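Pulling the bootstrapping steps above together, the user-data script could be built like this. This is a hedged sketch: the bucket name and the ec2-user account are placeholders, not from the course, and the exact commands vary by distribution.

```python
# Sketch of the user-data bootstrap routine walked through above, built as
# a string you could pass when launching the instance.
def build_user_data(bucket: str = "my-site-bucket") -> str:
    return "\n".join([
        "#!/bin/bash",                        # marks this as an executable script
        "yum install -y httpd php",           # Apache web server + PHP, -y skips prompts
        "service httpd start",                # start Apache now
        "chkconfig httpd on",                 # start Apache at every system boot
        "groupadd www",                       # web content group
        "usermod -a -G www ec2-user",         # add our user to the group (placeholder user)
        "chown -R root:www /var/www",         # group ownership of the doc root tree
        "chmod 2775 /var/www",                # group write + setgid on the directory
        f"aws s3 cp s3://{bucket}/ /var/www/html/ --recursive",  # pull site content
    ])

user_data = build_user_data()
```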
We don't even have to attach an EBS volume if we've chosen an instance type where we can use the ephemeral storage, but of course if we want data to persist after the instance has stopped, then we're going to need an EBS volume. Now, the other flag here is the Delete on Termination flag, and this is set as active by default. If we disable it, this EBS volume will persist after the instance is terminated; if we don't disable it, this EBS volume will be deleted when the instance is terminated. So just to clarify: if the Delete on Termination flag is left in its default position, the EBS volume will be deleted if the instance is terminated. If the instance is rebooted or stopped, the EBS volume persists, but on termination the volume is deleted along with the instance unless we change this flag. This default is really a housekeeping exercise, because as you can imagine, if you're launching 500 instances and each has a 40-gigabyte EBS volume attached, you've got a lot of volumes left lying around once those instances terminate. So AWS is just trying to save us money here.

Okay, the next option we have is to add a tag. Tags are really just an easy way for us to organize content, and they're very useful when you want to set rules for groups of instances. The next step is to choose our security group. We need to set a security group for an instance when we launch it, so configuring a security group is a prerequisite for launch. If we create a new security group, we're defaulted to an SSH rule that allows the TCP protocol to connect to this instance on port 22. Another option of note here is the ability to set the source IP address: this is the range of IP addresses that will be able to access your new instance on the port that you've specified.
The best practice here is to set that to your own IP address only, or a narrow range, so that you restrict access to your new instance. You can set whatever rule you like, and of course you can add other rules; if you want to have access from another instance or a subnet, you can set that rule here in this security group. If you've got a pre-configured security group, you just select it from the existing security groups. If we're launching a Windows instance, our default rule will be for RDP, the Remote Desktop Protocol, with the RDP port configured as open, which allows us to log in as administrator and start our machine. So once we've done that, we can review and launch our instance.

Now once our instance is launched, how do we address it? AWS allocates a private IP address to each new machine. You can also choose to assign a public IP address or an elastic IP address to the instance. The number of elastic IP addresses we can use in AWS is limited per account, currently five per region, and that limit is in place because the number of public IPv4 addresses available worldwide is actually quite limited. The elastic IP address is an IP address assigned by AWS that can be swapped out behind the scenes from instance to instance, or to a load balancer, which means you can improve your availability: if a machine fails, the elastic IP address is transferred to another machine. So instances can be addressed using the public DNS name, the public IP address, or the AWS elastic IP address. To connect to an instance, we basically put that address in and connect to the machine. For a Linux instance, we use the private half of the key pair to connect to the instance via SSH. For a Windows instance, we use the private half of the key pair to decrypt the randomly initialized local administrator password.
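The default SSH and RDP rules described above can be expressed as the kind of ingress-rule structures the EC2 API expects, for example the IpPermissions entries passed to authorize_security_group_ingress in boto3. The CIDR ranges below are placeholders; best practice, as noted, is to narrow them to your own IP.

```python
# Sketch: security group ingress rules for SSH (Linux) and RDP (Windows).
# 203.0.113.0/24 is a documentation range used here as a placeholder.
ssh_rule = {
    "IpProtocol": "tcp",
    "FromPort": 22,                               # SSH for Linux instances
    "ToPort": 22,
    "IpRanges": [{"CidrIp": "203.0.113.0/24"}],   # restrict; avoid 0.0.0.0/0
}

rdp_rule = {
    "IpProtocol": "tcp",
    "FromPort": 3389,                             # RDP for Windows instances
    "ToPort": 3389,
    "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
}
```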
We then use an RDP desktop client to connect, and we log in as administrator using that decrypted password. Okay, let's review a few key points now that we've got our instance launched. You pay only for what you use, and there is no minimum fee with EC2. EC2 is billed by the second for On-Demand, Reserved, and Spot instances running Amazon Linux or Ubuntu. So EC2 usage is billed in one-second increments, with a minimum billable unit of 60 seconds. Also note that provisioned storage for EBS volumes is billed in per-second increments, with a 60-second minimum as well. Per-second billing is available across all regions and Availability Zones. For all other instance types, usage is billed per instance hour, rounded up to the next hour. So with per-second billing, you really do only pay for what you use. This is especially useful if you manage instances running for irregular periods of time, such as dev testing, where you might start an instance for a couple of minutes and then turn it off again. As always, pricing is continually changing, so make sure you check the cost of your EC2 instance in the Simple Monthly Calculator or on the website. One service I do like using is EC2Instances.info. It's an unofficial site, so it's not maintained or managed by AWS, but it gives you a breakdown of the cost per second, per hour, per week, per month, even per year. Very, very useful.

EC2 billing can be quite complex, so let's just step through this. When you terminate an instance, the state changes to shutting-down or terminated, and you are no longer charged for that instance. When you stop an instance, it enters the stopping state and then the stopped state, and you are not charged hourly usage or data transfer fees for your instance after you stop it, but AWS does charge for the storage of any Amazon EBS volumes.
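The per-second billing rule above is easy to sketch: bill the actual seconds, but never fewer than 60. The hourly price in the example is hypothetical, not an AWS list price.

```python
def billable_seconds(runtime_seconds: int) -> int:
    """Per-second billing with a 60-second minimum, as described above for
    Amazon Linux and Ubuntu On-Demand, Reserved, and Spot usage."""
    return max(60, runtime_seconds)

def linux_on_demand_cost(runtime_seconds: int, price_per_hour: float) -> float:
    # price_per_hour is whatever the pricing page quotes for the instance type
    return billable_seconds(runtime_seconds) * price_per_hour / 3600

# 45 seconds of runtime still bills the 60-second minimum.
```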
Now, each time you transition an instance from stopped to running, AWS charges a full instance hour, even if these transitions happen multiple times within a single hour, unless you are using an Amazon Linux or Ubuntu instance, in which case billing is in per-second increments; you're charged per second for those two instance types. When you reboot an instance, it doesn't start a new instance billing hour.

Let's go through the differences between rebooting, stopping and starting, and terminating. In terms of the host, the instance stays on the same host when we reboot, but the instance may run on a new host computer when we stop and start it, with the emphasis on may. When we terminate, the host is no longer relevant. In terms of public and private IP addresses, when we reboot, the addresses stay the same. With EC2-Classic, a stopped and started instance gets a new private and a new public IP address. With EC2-VPC, the instance keeps its private IP address and gets a new public IP address, unless it has an elastic IP address, an EIP, which doesn't change during a stop or start. With elastic IP addresses, the EIP also remains associated with the instance when you reboot it. For instance store volumes, the data is preserved when we reboot, but erased when we stop and start, and erased when we terminate. So remember: with instance store volumes, the data is gone when you stop or terminate the instance. The root device volume is preserved during a reboot and during a stop or start event, but by default the volume is deleted during termination. And with billing: during a reboot, the instance billing hour doesn't change; each time an instance transitions from stopped to running, AWS starts a new instance billing hour; and when you terminate an instance, you stop incurring charges for that instance as soon as its state changes to shutting-down. Okay, let's just talk about instance recovery for a minute.
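The reboot, stop/start, and terminate differences just listed can be summarized in a small table. This is a Python sketch of the lecture's comparison; the public-IP column describes EC2-VPC behavior without an EIP attached.

```python
# Sketch: what happens to the host, addressing, storage, and billing for
# each lifecycle action described above.
LIFECYCLE = {
    "reboot": {
        "host": "same", "public_ip": "kept",
        "instance_store": "kept", "ebs_root": "kept",
        "new_billing_hour": False,
    },
    "stop_start": {
        "host": "may move", "public_ip": "changes",
        "instance_store": "erased", "ebs_root": "kept",
        "new_billing_hour": True,
    },
    "terminate": {
        "host": "n/a", "public_ip": "released",
        "instance_store": "erased", "ebs_root": "deleted by default",
        "new_billing_hour": False,
    },
}
```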
As of November 2018, you can pause and resume instances using the new hibernate function for On-Demand and Reserved instances. You can do this from the AWS console, from the CLI, or via the API. When hibernating your instances, the EBS root volume and any other attached EBS data volumes are persisted between sessions. How cool is that? Even the data in RAM is saved to your EBS root volume. When you resume the instance, your EBS root device is restored to its original state, which includes the RAM contents, previously attached data volumes are reattached, and the instance retains its instance ID. While instances are in hibernation, you're only paying for the EBS volumes and elastic IP addresses attached to them. One thing to remember is that the EBS root volume of the AMI used to launch the instance must be encrypted. This ensures protection of sensitive contents in memory, of course, when they get copied to the root volume.

Now, you can create an Amazon CloudWatch alarm using the StatusCheckFailed_System metric that monitors and automatically recovers an EC2 instance if it becomes impaired due to a hardware failure. A recovered instance is identical to the original instance: it keeps the instance ID, the private IP addresses, the elastic IP addresses, and all the instance metadata. Terminated instances, however, cannot be recovered. When the StatusCheckFailed_System alarm is triggered and the recovery action is initiated, you'll be notified by the Amazon SNS topic that you selected when you created the alarm, and it tells you the associated recovery actions. During instance recovery, the instance is migrated during an instance reboot, and any data that is in memory is unfortunately lost. When the process is complete, information is published to the SNS topic you created when you made the alarm.
Now, anyone who has subscribed to that SNS topic will receive an email notification that includes the status of the recovery attempt and any further instructions, so that might be something you only send to a select group of people. You will notice an instance reboot on the recovered instance's dashboard. Some of the problems that often cause system status checks to fail are: loss of network connectivity, loss of system power, software issues on the physical host, and hardware issues on the physical host that impact network reachability. So anything to do with the underlying infrastructure is a perfect candidate for a StatusCheckFailed_System alarm. The recover action can also be triggered when an instance is scheduled by Amazon Web Services to stop or retire due to degradation of the underlying hardware, or an upgrade, for example. Just keep in mind that the recover action is supported only on instances with the following characteristics. One, they need to be running in a VPC, so not EC2-Classic instances. Two, they need to be using shared tenancy, not dedicated hardware, i.e., the tenancy attribute is set to default. Three, they're using EBS volumes only and are not configured to use instance store volumes. Currently it's supported on the C3, C4, C5, M3, M4, M5, R3, R4, T2, and X1 instance types, but by the time you watch this, other instance types will likely have been added. So that's it for recovery.

Okay, let's talk billing models for EC2. There are four different cost models. First, there's On-Demand. With On-Demand pricing you pay hourly for however long you run your EC2 instance, at a price set per instance type. If your EC2 instance does not run the full hour, you are still billed for the full hour. The second option is Spot pricing. Spot pricing is marketplace pricing based on supply and demand: you are bidding for unused AWS capacity.
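Going back to instance recovery for a moment, the eligibility rules above can be sketched as a simple check. The family list is the one quoted in the course and has grown since.

```python
# Sketch of the recover-action eligibility rules: in a VPC, shared (default)
# tenancy, EBS volumes only, and a supported instance family.
RECOVER_FAMILIES = {"c3", "c4", "c5", "m3", "m4", "m5", "r3", "r4", "t2", "x1"}

def recover_supported(instance_type: str, in_vpc: bool,
                      tenancy: str, ebs_only: bool) -> bool:
    family = instance_type.split(".")[0]   # e.g. "m4.large" -> "m4"
    return (in_vpc                          # not EC2-Classic
            and tenancy == "default"        # shared tenancy, not dedicated
            and ebs_only                    # no instance store volumes
            and family in RECOVER_FAMILIES)

# An m4.large in a VPC with default tenancy and EBS-only storage qualifies.
```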
With Spot, there is no guarantee that you will get an instance, and when you do, there is no guarantee that you will have it for any length of time. This makes Spot pricing useful in situations where jobs are not time-constrained, i.e., they can spin up and shut down without a negative impact on the system they're interacting with. Keep in mind, Spot instances can be terminated at any time.

Third, Reserved instances. Reserved pricing offers discounted hourly rates per instance type, with an upfront commitment of either one year or three years. The upfront commitment comes in the form of a one-time payment, which offers the steepest hourly discount, a partial upfront payment, or no upfront payment at all. RIs suit predictable usage, where you can safely expect a certain level of compute will be required.

Fourth, Scheduled instances are like Reserved instances, except you reserve the capacity in advance so that you know it is available when you need it, and you pay for the time the instances are scheduled even if you do not use them. Scheduled Reserved instances enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. Scheduled instances are a good choice for workloads that do not run continuously but do run on a regular schedule, for example an application that runs during business hours, or a batch processing job that runs at the end of the week.

In addition to the hourly pricing, EC2 instances are subject to data transfer charges. This varies per region, but essentially data coming into the EC2 instance from the internet is not charged, while data sent out from the EC2 instance to the internet is charged per gigabyte, in terabyte tiers. Check the Simple Monthly Calculator for the latest prices on all instance types.

So let's look at the instance families. The T series can handle low-traffic websites, development environments, and so on.
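The Reserved-versus-On-Demand decision above comes down to a breakeven calculation. Here is a sketch with hypothetical prices, not AWS list prices.

```python
def cheaper_option(hours_per_year: float,
                   on_demand_per_hour: float,
                   ri_upfront: float,
                   ri_per_hour: float) -> str:
    """Compare a year of On-Demand against a 1-year Reserved instance.
    All prices are caller-supplied; nothing here is an AWS quote."""
    on_demand_cost = hours_per_year * on_demand_per_hour
    ri_cost = ri_upfront + hours_per_year * ri_per_hour
    return "reserved" if ri_cost < on_demand_cost else "on-demand"

# Running 24/7 (8760 hours) usually favors the RI; a few hours a week
# usually favors On-Demand.
```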
Essentially, the T series suits any process that does not require a ton of CPU power. You get burstable CPU via CPU credits that are earned hourly based on the size of the instance, and the burst limits are also based on the size of the instance.

The EC2 A1 instance, which was released at re:Invent 2018, runs on the AWS Nitro System and uses an ARM-based Graviton processor. The AWS Nitro System improves the performance of virtualized environments by offloading virtualization functions to dedicated hardware and software. By offloading this processing overhead, the Nitro System can deliver better virtualized processing performance. The main benefit of the A1 instances is cost efficiency. ARM processors generally use less power, which is why you see the ARM processor used extensively in smartphones and tablets. Using the ARM processor in the A1 instance provides high-performance processing at a reduced operating cost, which means the A1 is potentially a cheaper alternative to the T2 or M5 instance for general processing requirements.

The M series is perfect for small or medium-sized databases. It accomplishes this with the right balance of memory, CPU, and storage, and it uses solid-state drives with fast I/O performance. These features make it a very popular choice for many different types of systems.

The compute-optimized C series offers the best price for pure CPU performance. The C4 instances use processors built specifically for AWS hardware and EC2 instances. This family works best for jobs that are CPU-intensive, be it batch processing, video encoding, or other workhorse-type tasks.

Memory-optimized instances offer the best price per gigabyte of RAM of all the instance families. Think high-performance databases or caching when considering this family of EC2 instances. Not only do you get a lot of memory, you get fast memory.
The R series, for example the R3 instances, is designed for memory-intensive applications such as high-performance databases, distributed memory caches, in-memory analytics, large SAP deployments, SharePoint, and other enterprise applications.

AWS also provides high-performance instance types for specialist applications and use cases. The P2 instances are intended for general-purpose GPU compute applications; they have task-specific GPUs and are EBS-optimized. They use high-frequency Intel Xeon E5 Broadwell processors and high-performance NVIDIA K80 GPUs, which provide up to 12 gigabytes of GPU memory each. Those chips support the GPUDirect peer-to-peer GPU communication channel, and they also provide enhanced networking using the EC2 Elastic Network Adapter, which, incidentally, can give you up to 20 gigabits of aggregate network bandwidth within a placement group. The G2 instances are optimized for graphics-intensive applications and provide high-performance NVIDIA GPUs. They use a high-frequency Intel Xeon Sandy Bridge processor, and each GPU features an on-board hardware video encoder, which can be useful for video capture and encoding. The F1 instances are really high-end processors: they allow you to customize your hardware acceleration using field-programmable gate arrays, or FPGAs. They come with a high-frequency Intel Broadwell processor and support EC2 enhanced networking.

The high-performance instances are constantly upgraded, so check the AWS EC2 site for more detail on the latest features, such as FPGA support. It's unlikely that you would get a specific question on these recently released high-end instance types in the certification exam; however, if you are building graphics-intensive apps, it is good to know about this performance band. Along with the F1 instances, AWS also provides the Amazon EC2 Elastic GPU.
Now, that allows you to easily attach low-cost graphics acceleration to the current generation of EC2 instances, and you can choose the GPU resources that are sized for the workload that you need. Lastly, the storage-optimized instance family brings a choice between low-cost IOPS and the lowest cost per gigabyte of storage. The I series delivers high IOPS at a low cost. These instances are designed for the fast, random I/O performance that is ideal for NoSQL databases like Cassandra and MongoDB, scale-out transactional databases, data warehousing, Hadoop, and cluster file systems. D2 instances feature up to 48 terabytes of HDD-based local storage, deliver high disk throughput, and offer the lowest price per unit of disk throughput on EC2. So, how do we choose between these instance families? For applications that benefit from a low cost per CPU, you should try compute-optimized instances first. Applications that require the lowest cost per gigabyte of memory should use memory-optimized instances, such as the R series.
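The family-selection guidance above can be boiled down to a simple lookup. This is purely illustrative, reflecting this lecture's recommendations rather than any official AWS decision tool:

```python
# Illustrative mapping of workload profiles to the EC2 instance families
# discussed in this lecture. Not an official AWS selection tool.
INSTANCE_FAMILY_GUIDE = {
    "general_purpose": "T2/M5",       # burstable / balanced workloads
    "cpu_intensive": "C-series",      # batch processing, video encoding
    "memory_intensive": "R-series",   # high-performance databases, caching
    "gpu_compute": "P2",              # general-purpose GPU compute
    "graphics": "G2",                 # graphics-intensive applications
    "high_iops_storage": "I-series",  # NoSQL databases like Cassandra/MongoDB
    "dense_storage": "D2",            # data warehousing, Hadoop
}

def suggest_family(workload: str) -> str:
    """Return a suggested instance family for a named workload profile,
    falling back to the general-purpose default."""
    return INSTANCE_FAMILY_GUIDE.get(workload, "T2/M5 (general default)")
```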
If you're running a database, you should also take advantage of EBS optimization, or instances that support placement groups. For applications with high inter-node network requirements, you should choose instances that support enhanced networking. Okay, let's talk placement groups. A placement group is a way to associate instances together so that they have the very best networking connectivity between them. A placement group is, in essence, a cloud cluster. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. When working with placement groups, it is best to choose an instance type that supports enhanced networking. There are two types of placement groups currently offered by AWS: spread placement and cluster placement. A spread placement group is a group of instances that are each placed on distinct underlying hardware. Spread placement groups are recommended for applications that have a small number of critical instances that should, or could, be kept separate from each other. Launching instances in a spread placement group reduces the risk of the simultaneous failures that might occur when instances share the same underlying hardware, so it's risk mitigation. Spread placement groups provide access to distinct hardware, and are therefore good for mixing instance types or launching instances over time. Keep that in mind. A spread placement group can span multiple availability zones, and you can have a maximum of seven running instances per availability zone per group. That number is likely to change, but for now, it's a maximum of seven running instances per availability zone, per group. So, just remember: spread placement groups are multi-AZ, can support mixed instance types, and are good for reducing single points of failure in your design. Now, a cluster placement group is where EC2 instances are grouped together in one availability zone.
This provides the lowest latency within that single availability zone. To get the lowest latency and the highest packets-per-second network performance with a cluster placement group, use the same instance type for all members of the placement group, and ideally, launch them at the same time. So remember: the cluster placement group is single-AZ, uses the same instance types, ideally launched together, and is great for getting the fastest connectivity between your instances. Keep in mind there's no additional cost for using either type of placement group. Now, a few common design scenarios pop up in exam questions. If you stop an instance in a placement group and then start it again, it still runs in the placement group. However, an instance start will fail if there isn't enough capacity for that instance. So if you start or launch an instance in a spread placement group, and there's insufficient unique hardware to fulfill the request, then that request fails. Amazon EC2 will make more distinct hardware available over time, so one workaround is to try your request again later. If you receive a capacity error when launching an instance in a cluster placement group that already has instances running in it, stop and start all the instances in that placement group, and try the launch again.
Restarting the instances may migrate them to hardware that has capacity for the requested instances. Okay, so that wraps up placement groups. Let's move on. Amazon Lightsail makes it really easy to launch image-based applications, so we can launch WordPress or a LAMP stack basically at the click of a button. All of the image information is pre-configured; we just choose an instance plan. It's very easy to configure and set up in this way, so if you're looking to launch an instance really quickly, and you don't want to go through all of the configuration pieces, Lightsail makes it very easy to do. Amazon EC2 Systems Manager is a really useful instance management service for your EC2 instances. You'll find it at the bottom of the menu bar. EC2 Systems Manager helps you manage your instances from one place. You can audit instance state, automatically collect software inventory, apply OS patches, which I think is its best feature, and create system images. You can configure Windows and Linux operating systems, and the service capabilities help you define and track system configurations. It makes it easier to manage software compliance for your EC2 and your on-premises configuration, so it'll do both! You just select the instances you want to manage, and define the management tasks you want to perform. There's no additional cost for using EC2 Systems Manager. Accounts have a set of default limits per region. Most can be increased by logging a support ticket with AWS, explaining your use case and the anticipated volume. AWS Support can be very helpful in standing up and monitoring burst activity usage, so just asking is the best policy if you expect to exceed any of these limits. New AWS accounts may start with lower limits. Currently, we can run up to 20 on-demand instances, and we can purchase up to 20 reserved instances.
We can request spot instances per our dynamic spot limit, which is set per region. Certain instance types, e.g., xlarge, are limited per region, and some instance types are not available in all regions. Okay, so what does EC2 do for us for high availability and fault tolerance? With EC2, we can utilize Elastic IPs, Elastic Load Balancing, and Auto Scaling to create a highly available environment.
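Before we move on to the sample questions, the spread placement rule covered earlier (a maximum of seven running instances per availability zone, per group, at the time of this course) can be sketched as a simple pre-flight check. This is illustrative logic only; EC2 enforces the real limit itself at launch time:

```python
# Illustrative pre-flight check for the spread placement group rule:
# at most 7 running instances per Availability Zone per group (a limit
# AWS may change). EC2 enforces this itself; this just models the rule.
MAX_SPREAD_INSTANCES_PER_AZ = 7

def can_launch_in_spread_group(running_per_az: dict, az: str, count: int = 1) -> bool:
    """Return True if launching `count` more instances in `az` would stay
    within the per-AZ spread placement limit for this group."""
    return running_per_az.get(az, 0) + count <= MAX_SPREAD_INSTANCES_PER_AZ
```

Because a spread group can span multiple AZs, each zone gets its own allowance of seven, which is why spreading a fleet across zones raises the group's total capacity as well as its resilience.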
Okay, so EC2 is a key component of the exam. We've been through quite a lot, so as a way of summarizing, we'll walk through some sample questions from the Cloud Academy database and refresh ourselves on some of these key parts. What will happen to the data on a root volume when an EC2 instance that was launched from an S3-based AMI is terminated? Option one: data will be saved as an EBS snapshot. No, that's not going to happen. With an S3-based AMI, we're talking about ephemeral storage, which means the instance is built from an S3 template, and when the instance is terminated, the ephemeral storage is lost. Option two: data will be saved as an EBS volume. No, that's not going to happen. Option three: data will be automatically deleted. Yes, that's the correct option in this instance, because it's an instance store-backed instance using ephemeral storage, and that data is automatically deleted when the instance terminates. Remember, we can't stop it as we can with an EBS-backed instance, so the minute it's terminated, that data is lost. Next question. A user has launched an EC2 instance from an instance store-backed AMI. If the user restarts the instance, what will happen to the ephemeral storage data? Now, remember, we support a restart, okay? We just can't stop it; we can only terminate it. So: all data will be erased, and the ephemeral storage is released? No. The data is preserved? Correct. It is not possible to restart an instance launched from an instance store-backed AMI? That's not correct. All the data will be erased, but the ephemeral storage will stay connected? That's not correct, either. So the correct answer is: the data is preserved. Okay, next question. A user has launched an EC2 instance store-backed instance in the us-east-1a zone. The user created AMI #1 and copied it to the Europe region. After that, the user made a few updates to the application running in the us-east-1a zone.
The user then makes AMI #2 after the changes. If the user launches a new instance in Europe from the AMI #1 copy, which of the below statements is true? First: the new instance in the EU region will not have the changes made after the AMI copy. This is correct, because we've snapshotted and created an AMI, and then made changes to the instance that the AMI was made from. Those changes are not dynamically updated; we would need to create a new AMI to have those changes in our AMI snapshot. Option two: the new instance will have the changes made after the AMI copy, since AWS keeps updating the AMI. Incorrect. It is not possible to copy an instance store-backed AMI from one region to another? Incorrect. And: the new instance will have the changes made after the AMI copy, as AWS just copies the reference to the original AMI during the copying, so the copied AMI will have all the updated data. Incorrect. So, in this scenario, if we want any changes reflected in the AMI that we port to the other region, we have to take another copy of the AMI first. Okay, that brings us to the end of our EC2 section. This is core for the exam, so let's just go over some of the key things we've covered. We've talked about Amazon Machine Images and the types of Amazon Machine Image. We've got AWS ones, which come with an operating system; we've got partner ones, which often come with an operating system and some enhanced software; we've got community AMIs, which are snapshotted and shared, and which can save us time and money; and we've got our own private AMIs, which we may save as a golden image to use in our Auto Scaling group, and which we may wish to share with other members of our organization. Then we talked about selecting the instance type, which is where we set the virtual CPU, memory, and network combination for our machine.
We looked at the billing types that are available to us. We've got on-demand instances, which are convenient in that we can just spark those up at any time. We've got spot instances, which suit those flexible workloads where it doesn't matter if we're interrupted and we can compute without a time constraint; they can save us a significant amount of money, because they're cheaper than the on-demand price. Then we've got reserved instances, where we pay either partially or fully up-front for the instance itself; again, that provides a cheaper price, because we're committing to using that instance over a period of time. And we've got scheduled instances, which are reserved instances that can be scheduled for certain times or dates on the calendar. We talked about instance storage, and it's important that we know the difference between ephemeral storage and Elastic Block Store volumes. Ephemeral storage is temporary storage that comes with the instance, while an EBS volume is one which we can attach to an instance, and which can persist after the instance is terminated. We talked about the difference between EBS-backed instances and instance store-backed instances, and the differences in the way we use and launch them. And then we talked about the difference in instance states. It's really important you get this. Remember, there are three key actions: start, stop, and terminate. With EBS-backed instances, we can stop an instance, and when we stop it, any attached volumes persist during that reboot or stop period. If we have an instance store-backed instance, then we can only start or terminate it. We can't stop it, and if we terminate it, any ephemeral storage that's been used on that instance will be deleted. Okay, we talked about the launch parameters and how we can set user data to bootstrap our machine.
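As a quick footnote on user data: the raw EC2 API expects user data to be base64-encoded (the console and CLI do this encoding for you). A minimal sketch, where the bootstrap script itself is just a common hypothetical example, not one from this course:

```python
import base64

# A typical hypothetical bootstrap script: update packages and start a
# web server. EC2 runs user data as root on first boot.
bootstrap = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

# The raw EC2 API expects user data base64-encoded; the console and
# CLI handle this automatically, but it's worth knowing what's sent.
encoded = base64.b64encode(bootstrap.encode("utf-8")).decode("ascii")

# Decoding round-trips back to the original script.
assert base64.b64decode(encoded).decode("utf-8") == bootstrap
```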
User data is a very powerful way of setting the parameters for the machine: we can have it go off, fetch software, and install that software, and we can set parameters around how the machine provisions itself. That, again, saves us time and money, and allows us to spin up large numbers of machines without having to configure each one individually. And then we looked at how placement groups allow us to group instances together in the same availability zone, and how the machine types need to be the same for cluster placement groups. Finally, we looked at how all of these great features of EC2 help us design highly available, cost-efficient, scalable solutions. They give us more flexibility, they allow us to provision machines quickly, and they take out a lot of the undifferentiated heavy lifting of provisioning resources. Okay, let's get on to the next one.
About the Author
Head of Content
Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession" as everything AWS starts with the customer. Passions around work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start ups.