
Amazon EC2

Overview
Difficulty: Intermediate
Duration: 2h 45m
Students: 1959

Description

An introduction to the AWS components that help us develop highly available, cost-efficient solutions

In this course we will:
Understand the core AWS services, uses, and basic architecture best practices
Identify and recognize cloud architecture considerations, such as fundamental components and effective designs

Areas covered:

Elasticity and Scalability
Regions and AZs
Amazon VPC
Amazon Elastic Load Balancer
Amazon Simple Queue Service
Amazon EC2
Amazon Route53
Amazon Elastic IP Addresses
Amazon CloudWatch
Amazon Auto Scaling

Developing
Identify the appropriate techniques to code a cloud solution
Recognize and implement secure procedures for optimum cloud deployment and maintenance
Amazon APIs
Using Amazon SQS
Decoupling Layers
Using Amazon SNS
Using Amazon SWF
Using Cross-Origin Resource Sharing (CORS)

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

- [Instructor] Okay, service number five. Elastic Compute Cloud, or EC2, provides virtual processing power through cloud-based virtual machines. Amazon EC2 is a core part of the syllabus. We need to understand this clearly. So let's just go through it step by step. So, to launch an instance, you first select an Amazon Machine Image, or AMI. Now, that AMI defines what operating system and software will be on the instance when it launches. Now, there are four types of AMI. There's the AWS-published AMI. Those are basically generic machines with an operating system pre-installed. Amazon EC2 currently supports Amazon Linux, Ubuntu, Windows Server, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Fedora, Debian, CentOS, Gentoo Linux, Oracle Linux, and FreeBSD. Then there are the AWS partner-published AMIs. These are found in the AWS Marketplace, and they generally have a software package pre-installed and optimized for particular workloads. There are also community AMIs, which have been built or snapshotted from an existing EC2 instance that is optimized, and which you may want to share with partners or the community. You can also save snapshots of your own virtual machines as private AMIs. A single AMI can be used to launch one or thousands of instances. Once you have created an AMI, replacing a failed instance is very simple. You can launch a replacement instance that uses the same AMI as a template. This can be done through an API invocation, through scriptable command line tools, or through the AWS Management Console. Creating Amazon Machine Images of customized machines enables you to launch and relaunch instances quickly, thereby increasing availability and durability. From an ideal machine configuration, you can save what we call a golden image as a template to use for Auto Scaling launch configurations and in an Auto Scaling group. AMIs are private by default, but they can be shared with team members or made public.
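The API invocation mentioned above for relaunching a failed instance from its AMI can be sketched as follows. This is a minimal sketch, not the course's own code: it just builds the parameters you would pass to EC2's RunInstances API (for example, via boto3's `run_instances`), and the AMI ID used below is a hypothetical placeholder.

```python
def build_run_instances_params(ami_id, instance_type, count=1):
    """Build RunInstances parameters for relaunching from a golden-image AMI.

    MinCount/MaxCount let a single call launch one or thousands of
    instances from the same AMI.
    """
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }

# Hypothetical AMI ID and instance type, for illustration only.
params = build_run_instances_params("ami-0123456789abcdef0", "t2.micro")
print(params["ImageId"])
```

In practice you would pass this dictionary to an EC2 client, so that replacing a failed instance is a one-line, scriptable operation.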
They can also be copied across regions. EC2 allows you to pick configurations, called instance types, that best match the type of processing needed. Performance testing is important to selecting instance types. It's best practice to assess the requirements of your applications and select the appropriate instance family as a starting point for application performance testing. You should start evaluating the performance of your applications by first identifying how your application needs compare to different instance families, i.e., is the application compute-bound or memory-bound, and secondly, sizing your workload to identify the appropriate instance size. Okay, so a few things to consider when we're choosing an instance type. Now, network support is important. Every instance type is rated either low, moderate, or high for networking, or there's 10 gigabits per second network performance as well if we want high-end networking. Now, the larger the instance type, generally, the better the network performance. Some instance types offer enhanced networking, which is an option you can choose, and that provides additional improvements to network performance. Okay, another major factor to consider when you're choosing is storage. Now, there are two types of storage that you can select with Amazon Machine Images, and it's really important that you know the difference between these two. First, we have instance store volumes, and second, we have Elastic Block Store volumes. So instance store volumes come with the machine, basically. Because instance store is temporary, these disks are described as ephemeral storage. Ephemeral is just another way to say temporary. So if you see instance store or ephemeral store, be thinking temporary, and I'm likely to lose what is on that disk if this machine terminates. Okay, so how does this affect our AMIs? All AMIs are categorized as either backed by Amazon EBS or backed by instance store.
With EBS-backed instances, the root device for your instance is on an Amazon EBS volume that's created from an Amazon EBS snapshot. With instance store-backed instances, the root device for your instance is an instance store volume created from a template stored in Amazon S3. So, instances that use an instance store volume for the root device automatically have instance store available, the root volume contains the root partition, and you can store additional data on there. Now, any data on an instance store volume is deleted when the instance fails or terminates. Now, instance store-backed instances can't be stopped. They're either running, or they're terminated. So strange but true, and also a great tongue twister: you can't stop an instance that's launched from an instance store-backed AMI with ephemeral storage. So, here we can see I've launched an instance store-backed instance. Here's the root device setting that says it's instance store-backed. Now, when we try to stop or start this, you can see the options for stop and start are grayed out. All we can do is terminate or reboot. You can add persistent storage to that instance by attaching one or more Amazon EBS volumes. Now, instances that use an Amazon EBS volume for their root device automatically have an Amazon EBS volume attached when they start. That volume appears in your list of volumes like any other, and these instances don't use any available instance store volumes by default. So if a question comes up about an Amazon S3-backed instance, it's about ephemeral storage: data on an instance store is lost when the instance is terminated, but instance store data persists through an OS reboot. So Amazon EBS-backed AMIs launch faster than instance store-backed AMIs. When you launch an instance store-backed AMI, all the parts have to be retrieved from Amazon S3 before the instance is available.
With an EBS-backed AMI, only the parts required to boot the instance need to be retrieved from the snapshot before the instance is available. There's also a difference in how large a volume you can have. With EBS-backed, you can have up to 16 terabytes. With instance store-backed, you can only have 10 gigabytes. Another thing to consider is that you can't upgrade an instance store-backed instance. The attributes are fixed for the life of the instance. Remember, you can't stop it. You can only terminate it. Whereas with an EBS-backed instance, the instance type, the kernel, the RAM disk, and the user data can all be changed while the instance is in a stopped state. You can create AMIs for both, but it's much easier creating an AMI from an Amazon EBS-backed instance. The CreateImage API actually creates your Amazon EBS-backed AMI, and it also registers it. There's an option in the AWS Management Console to allow you to create an image from a running instance, but this applies only to EBS-backed instances. If you want to create an AMI of an instance store-backed instance, you have to create the AMI on the instance itself using the Amazon EC2 AMI tools. So, why would you use an instance store-backed instance, I can hear you ask. Well, instance store-backed instances are good when you just need temporary data storage or you have an architecture that provides redundancy, such as Hadoop's HDFS framework, for example. They can also be really useful if you need to encrypt temporary data, and there is also a cost consideration with EBS-backed instances. Storage can get expensive if you have to launch 500 EBS-backed instances where each instance needs a 20-gig EBS volume, for example. Anyway, let's get back to launching our instance from our AMI. Once we've selected the AMI we want to use and the instance type, we're presented with a number of options for startup. So, the first switch is asking us how many instances we want to launch.
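The EBS-backed versus instance store-backed differences covered above can be condensed into a small lookup table. This is a study-aid sketch using the figures quoted in this course (the 16-terabyte and 10-gigabyte limits), not an official AWS data structure:

```python
# Capability summary for the two root-device types discussed above.
ROOT_DEVICE_CAPABILITIES = {
    "ebs": {
        "can_stop": True,              # stop/start is supported
        "data_survives_stop": True,    # root volume persists through a stop
        "resizable_attributes": True,  # instance type etc. changeable when stopped
        "max_root_volume_gb": 16 * 1024,  # up to 16 TB, per the course
    },
    "instance-store": {
        "can_stop": False,             # running or terminated only
        "data_survives_stop": False,   # ephemeral storage
        "resizable_attributes": False, # fixed for the life of the instance
        "max_root_volume_gb": 10,      # 10 GB root volume, per the course
    },
}

def can_stop(root_device_type):
    """True if an instance with this root device type supports stop/start."""
    return ROOT_DEVICE_CAPABILITIES[root_device_type]["can_stop"]
```

So a quick check like `can_stop("instance-store")` reminds you why the stop option was grayed out in the console earlier.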
The purchasing option is our next switch. And you may be wondering, what is a spot instance? So, allow me to introduce you to the wonderful world of spot instances. As you can imagine, AWS is constantly over-provisioning compute resources to ensure that instances will always be available to on-demand customers. So, spot instances allow us to buy some of that unused overhead at a lower price. Now, the downside is that you can't guarantee when you'll have these instances, as on-demand customers are always going to come first. But if your processing job is not time-constrained, i.e., you don't mind waiting for a while, spot instances are ideal, as they're way cheaper than the on-demand price. So spot instances suit flexible, interruption-tolerant applications: a data crunching job or processing a queue of video transcoding jobs, for example. And they can lower your instance costs significantly. So, a spot instance allows you to specify the maximum price you're willing to pay per instance hour, and if your bid is higher than the current spot price, your spot instance is launched, and you'll be charged at that current spot price. The next switch is choosing a network. So we have to choose which network we want to launch our instance into, and then we get to choose our subnet. Now, we can choose to auto-assign an IP address. Remember, this is the public IP address. Or we can use an elastic IP address, or just use the private IP address that is assigned to an instance by AWS. Now, the next switch is the IAM role. IAM roles for EC2 automatically deploy and rotate the AWS credentials for you when you start that instance, and that takes away the need to store your AWS access keys with your application. So roles are fantastic for limiting the amount of access an instance will have. You create a role in the IAM console, then you just assign it to your instance. Now, the next switch is shutdown behavior.
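The spot-pricing rule described above, where your instance launches when your bid exceeds the current spot price and you are charged the spot price rather than your bid, can be sketched like this (prices below are made-up illustrations):

```python
def evaluate_spot_request(bid_price, current_spot_price):
    """Return (launched, hourly_charge) for a spot request.

    You are charged the current spot price, not your maximum bid.
    """
    if bid_price > current_spot_price:
        return True, current_spot_price
    return False, 0.0

# Bid $0.10/hour while the current spot price is $0.04/hour:
launched, charge = evaluate_spot_request(0.10, 0.04)
# launched is True, and the charge is the spot price, 0.04
```

Note the asymmetry: the bid only decides *whether* you launch; the market decides what you pay.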
This allows you to specify the instance behavior when an OS-level shutdown is performed. So, you can either terminate or stop an instance. Now, this applies to EBS-backed instances. Remember, with instance store instances, which have ephemeral storage, we can only terminate the instance. There's no stop behavior supported. Now, the next switch is enabling termination protection. So, you can protect instances from being accidentally terminated. Once you enable this, you won't be able to terminate this instance from the API or from the AWS Management Console. Auto Scaling, however, will still be able to terminate an instance if this instance is launched into an Auto Scaling group. The flag is the disableApiTermination attribute, and that controls whether the instance can be terminated using the console, CLI, or API. So, the other options we have are monitoring, where we can enable CloudWatch monitoring, and the option to choose, if we want, for our instance to run on dedicated hardware, which of course will come at a higher cost. So if we don't want to use the shared, virtualized tenancy of AWS and we need our instance to run on dedicated hardware, we can choose that from here, but the instance cost will be higher. Okay, under the advanced details tab is one of my favorite features, which is the user data, and this allows us to add a bootstrap routine to our instance. Now, the benefit of this is that you can configure a machine to start itself and configure itself, basically. The first line identifies the user data as an executable script. There are many ways you can run scripts during bootstrapping, including fetching this type of executable file from an S3 bucket. But here, first we install the Apache httpd web server and support for PHP. The -y option installs the updates without asking for confirmation.
We use the service httpd start command to start the Apache web server, and we use the chkconfig command to configure the Apache web server to start at system boot, then use the S3 copy command to recursively copy the contents of our S3 bucket to our HTML directory. The Amazon Linux Apache document root is /var/www/html, which is owned by root by default. To allow our user to manipulate files in this directory, we need to modify the ownership and permissions of that directory. So we add a www group to the instance, and then we give that group ownership of the /var/www directory and write permissions for the group. We then add the user we set for this instance to the www group. That means this instance will start, provision itself as a web server, and then get the latest HTML files or script files from our S3 bucket. So this is just one simple example of bootstrapping. There are many things we can do to provision a new instance with this powerful feature. So, once we've got our configuration set, we can go through and add our storage type. Remember, when we're adding volumes to an instance, we're talking about Elastic Block Store volumes. The ephemeral storage that is part of an instance is something we can't set; it's launched with the instance itself, and it's ephemeral storage that won't persist if the instance is terminated. Whereas with added EBS volumes, these volumes will persist during a reboot, they will persist during a stop or start, and they can even persist through a terminate, depending on the delete on termination flag. So we've got a number of options for the type of EBS volume that we want to set. We can set the size of the volume, and we can set the type. There are three key types. There's magnetic, which is the cheapest, and generally a little slower. We've got our general purpose solid-state drives, which are GP2 drives, and we've also got provisioned IOPS, which allows us to set what level of input and output we'd like to have for this volume.
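The bootstrap routine walked through above might look like the following user data script. This is a sketch, not the course's exact script: the bucket name is a hypothetical placeholder, and it assumes an Amazon Linux instance with the AWS CLI available and an instance role granting read access to the bucket.

```shell
#!/bin/bash
# The first line identifies this user data as an executable script.

# Install the Apache httpd web server and PHP support;
# -y applies the installs without asking for confirmation.
yum install -y httpd php

# Start Apache now, and configure it to start at system boot.
service httpd start
chkconfig httpd on

# Recursively copy the site content from our S3 bucket
# (hypothetical bucket name) into the Apache document root.
aws s3 cp s3://my-example-bucket/ /var/www/html/ --recursive

# /var/www is owned by root by default; add a www group, add our
# user to it, give the group ownership of /var/www, and grant
# group write permissions.
groupadd www
usermod -a -G www ec2-user
chown -R root:www /var/www
chmod -R g+w /var/www
```

With something like this in place, the instance starts, provisions itself as a web server, and pulls the latest content from S3 without any manual steps.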
And that's very useful if we're running a database instance where we need a lot of high I/O activity. Again, it comes at an additional cost. So if we're just looking at running an instance with a basic volume, then we could use magnetic. We don't even have to attach an EBS volume if we've chosen an instance type where we can use our ephemeral storage, but of course, if you want data to persist after the instance is stopped, we're going to need to use an EBS volume. Now, the other flag here is the delete on termination flag, and this is set as active by default. So if we disable this, then this EBS volume will persist after the instance is terminated. If we don't disable it, then this EBS volume will be deleted when the instance is terminated. So, just to clarify: if the delete on termination flag is left in its default position, this EBS volume will be deleted if the instance is terminated. If the instance is rebooted or stopped, the EBS volume will persist, but if the instance is terminated, this EBS volume will be deleted with it, unless we change this flag. This is done just as a housekeeping exercise, 'cause as you can imagine, if you're launching 500 instances and each of those has an EBS volume attached to it of 40 gigabytes, then you've got a lot of volumes left lying around once those instances terminate. So AWS is just trying to save us money here. So the next option we have is to add a tag, and tags are really just an easy way for us to organize content. They're very, very useful when you want to set rules for groups of instances. And then the next step is to choose our security group. So we need to set a security group for an instance when we launch it. Configuring a security group is a prerequisite for launch. If we create a new security group, then we're defaulted to an SSH rule that allows the TCP protocol to connect to this instance on port 22.
Another option of note here is the ability to set the source IP address. So, this is which range of IP addresses will be able to access your new instance on the port that you specified. The best practice here is to set that to be your own IP address only, or a range of addresses, so that you can restrict access to your new instance. You can set whatever rule you like for this, and of course you can add other rules. So if you want to have access from another instance or a subnet, then you can set that rule in here in the security group. If you've got a pre-configured security group, then you just select it from the existing security group checkbox. And if we're launching a Windows instance, then our default rule will be for RDP, which is the Remote Desktop Protocol, and the RDP port will be configured as open, which allows us to log in as administrator and start our machine. So, once we've done that, we can review and launch our instance. Now, once our instance is launched, how do we address it? AWS allocates a private address to each new machine. Now, you can also select to set a public IP address or an elastic IP address on the instance. Now, the number of elastic IP addresses that we can use in AWS is limited per account. It's currently five per region, and that limit is in place because the number of IPv4 addresses available worldwide is actually quite limited. Now, the elastic IP address is an IP address assigned by AWS that can be swapped out behind the scenes from instance to instance or to a load balancer, which means that you can improve your availability, because if a machine fails, the elastic IP address is transferred to another machine. So, instances can be addressed using the public DNS name, the public IP address, or the AWS elastic IP address. Now, if we're going to connect to an instance, we have to basically put that address in and then connect to the machine.
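The default SSH rule and the source-IP restriction described above map onto a single ingress rule. Here's a sketch of the parameters you would pass to EC2's AuthorizeSecurityGroupIngress API (for example, via boto3); the security group ID and source CIDR below are hypothetical placeholders:

```python
def build_ssh_ingress_params(group_id, source_cidr):
    """Allow TCP port 22 (SSH) only from the given source range.

    Best practice is your own IP as a /32, not 0.0.0.0/0.
    """
    return {
        "GroupId": group_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": source_cidr}],
        }],
    }

# Hypothetical group ID, locked down to a single workstation address.
params = build_ssh_ingress_params("sg-0123456789abcdef0", "203.0.113.25/32")
```

Swapping port 22 for 3389 gives you the equivalent RDP rule for a Windows instance.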
So if we're connecting to a Linux instance, we use the private half of the key pair to connect to the instance via SSH. If we're connecting to a Windows instance, we use the private half of the key pair to decrypt the randomly-initialized local administrator password. So we're going to use an RDP desktop client. It's gonna connect us. We're gonna use this decrypted password to log in as an admin. Okay, so let's just review a few key points, now that we've got our instance launched. You pay only for what you use, and there is no minimum fee with EC2. EC2 is billed by the second for the on-demand, reserved, and spot families of Amazon Linux and Ubuntu instance types. So EC2 usage is billed in one-second increments with a minimum billable unit of 60 seconds. Also note that provisioned storage for EBS volumes will be billed in per-second increments, with a 60-second minimum as well. Per-second billing is available across all regions and availability zones. For all other instance types, usage is billed per instance hour, which is rounded up to the next hour. So, with per-second billing, you really do only pay for what you use. This is especially useful if you manage instances running for irregular periods of time, such as dev testing, where you might just be starting an instance for a couple of minutes and then turning it off again. Now, as always, the pricing is continually changing, so make sure you check the cost of your EC2 instance in the Simple Monthly Calculator or on the website. One service I do like using is EC2instances.info. Now, it's an unofficial site, so it's not maintained or managed by AWS, but it does give you a breakdown of the cost per second, per hour, per week, per month, even per year. Very, very useful. EC2 billing can be quite complex, so let's just step through this. When you terminate an instance, the state changes to shutting down or terminated, and you are no longer charged for that instance.
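The billing rule above, one-second increments with a 60-second minimum for the per-second-billed types, and rounding up to the next full hour for everything else, can be sketched as a small function:

```python
import math

def billable_seconds(runtime_seconds, per_second_billing):
    """Return the billable duration in seconds.

    Per-second billing: 1-second increments, 60-second minimum.
    Hourly billing: rounded up to the next full instance hour.
    """
    if per_second_billing:
        return max(60, runtime_seconds)
    hours = max(1, math.ceil(runtime_seconds / 3600))
    return hours * 3600

# A 2-minute dev-test run on an Amazon Linux on-demand instance:
print(billable_seconds(120, True))   # 120 seconds billed
# The same 2-minute run on an hourly-billed instance type:
print(billable_seconds(120, False))  # 3600 seconds: one full hour
```

This is exactly why per-second billing matters for workloads like dev testing, where instances run for a couple of minutes at a time.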
When you stop an instance, it enters the stopping state and then the stopped state, and you are not charged hourly usage or data transfer fees for your instance after you stop it. But AWS does charge for the storage of any Amazon EBS volumes. Now, each time you transition an instance from stopped to running, AWS charges a full instance hour, even if these transitions happen multiple times within a single hour, unless you are using an Amazon Linux or Ubuntu instance type, in which case it's a per-second increment. So you're charged per second for those two instance types. When you reboot an instance, it doesn't start a new instance billing hour. Let's go through the differences between rebooting, stopping, starting, and terminating. So, in terms of the host, the instance stays on the same host when we reboot, but the instance may run on a new host computer when we stop or start. Underline 'may'. When we terminate, there's no impact. In terms of public and private IP addresses, when we reboot, the addresses stay the same. With EC2-Classic, the instance gets a new private and a new public IP address when we stop and start. With EC2-VPC, the instance keeps its private IP address, and the instance gets a new public IP address unless it has an elastic IP address, an EIP, which doesn't change during a stop or start. With elastic IP addresses, the EIP remains associated with the instance when you reboot it. For instance store volumes, when we reboot, the data is preserved. When we stop or start, the data is erased, and when we terminate, the data is erased. So remember that with instance store volumes: data gone when you stop it or terminate. The root device volume is preserved during a reboot, and the volume is preserved during a stop or start event, but the volume is deleted by default during termination. And with billing, during a reboot, the instance billing hour doesn't change. Each time an instance transitions from stopped to running, AWS starts a new instance billing hour.
When you terminate an instance, you stop incurring charges for that instance as soon as the state changes to shutting down. Okay, let's just talk about instance recovery for a minute. Now, you can create an Amazon CloudWatch alarm using the StatusCheckFailed_System metric that monitors and automatically recovers an EC2 instance if it becomes impaired due to a hardware failure. A recovered instance is identical to the original instance. It keeps the instance ID, the private IP addresses, the elastic IP addresses, and all the instance metadata. Now, terminated instances cannot be recovered. So when the StatusCheckFailed_System alarm is triggered and the recovery action is initiated, you'll be notified by the Amazon SNS topic that you selected when you created the alarm, and it tells you all the associated recovery actions. During instance recovery, the instance is migrated during an instance reboot, and any data that is in memory is unfortunately lost. When the process is complete, information is published to the SNS topic you created when you made the alarm. Now, anyone who is subscribed to that SNS topic will receive an email notification that includes the status of the recovery attempt and any further instructions. So, that might be something that you only send to a select group of people. You will notice an instance reboot on the recovered instance dashboard. So, some of the problems that often cause system status checks to fail: loss of network connectivity, loss of system power, software issues on the physical host, and hardware issues on the physical host that impact network reachability. So anything that's to do with the underlying infrastructure is going to be a perfect candidate for StatusCheckFailed_System alarms. The recover action can also be triggered when an instance is scheduled by Amazon Web Services to stop or retire due to degradation of the underlying hardware or an upgrade, for example.
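An alarm like the one described above might be configured with parameters along these lines. This is a sketch of what you could pass to CloudWatch's PutMetricAlarm API (for example, via boto3); the instance ID, SNS topic ARN, and the period/threshold values are illustrative assumptions, not the course's exact settings:

```python
def build_recovery_alarm_params(instance_id, region, sns_topic_arn):
    """Build PutMetricAlarm parameters for an EC2 auto-recovery alarm."""
    return {
        "AlarmName": "recover-" + instance_id,
        "Namespace": "AWS/EC2",
        "MetricName": "StatusCheckFailed_System",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Minimum",
        "Period": 60,                 # illustrative: check every minute
        "EvaluationPeriods": 2,       # illustrative: two failed periods
        "Threshold": 1.0,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        # Recover the instance, and notify subscribers of the SNS topic.
        "AlarmActions": [
            "arn:aws:automate:" + region + ":ec2:recover",
            sns_topic_arn,
        ],
    }

params = build_recovery_alarm_params(
    "i-0123456789abcdef0",  # hypothetical instance ID
    "us-east-1",
    "arn:aws:sns:us-east-1:123456789012:ops-alerts",  # hypothetical topic
)
```

The two alarm actions capture both halves of the story above: the recover action migrates the instance, and the SNS topic tells your chosen subscribers what happened.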
So just keep in mind that the recover action is supported on instances with the following characteristics. One, they need to be running in a VPC, so not EC2-Classic instances. They need to be using shared tenancy, so not dedicated hardware, i.e., the tenancy attribute is set to default. And they need to be using EBS volumes only, not configured to use instance store volumes. Currently, it's supported on C3, C4, C5, M3, M4, M5, R3, R4, T2, or X1 instance types, but by the time you watch this, there will likely be a group of other instance types added to the list. So, that's it for recovery. Okay, let's talk billing models for EC2. There are four different cost models, and they go like this. First, there's on-demand. With on-demand pricing, you pay hourly for however long you run your EC2 instance, at a price set per instance type. If your EC2 instance does not run the full hour, you are still billed for the full hour. The second option is spot pricing. Spot pricing is marketplace pricing based on supply and demand. You are bidding for unused AWS capacity. There is no guarantee that you will get a spot instance, and when you do, there is no guarantee that you will have it for any length of time. Now, this makes spot pricing useful in situations where jobs are not time-constrained, i.e., they can spin up and shut down without a negative impact on the system they're interacting with. Keep in mind, spot instances can be terminated. Third, reserved instances. Reserved pricing offers discounted hourly rates per instance type with an upfront commitment of either one year or three years. The upfront commitment comes in the form of a one-time payment, which offers the steepest hourly discount, a partial upfront payment, or no upfront payment at all. RIs suit predictable usage, where you can safely expect a certain level of compute will be required.
Scheduled instances are like reserved instances; however, you reserve the capacity in advance so that you know it is available when you need it. You pay for the time that the instances are scheduled, even if you do not use them. Scheduled reserved instances enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. Scheduled instances are a good choice for workloads that do not run continuously but do run on a regular schedule. For example, you can use scheduled instances for an application that runs during business hours, or for a batch processing job that runs at the end of the week. In addition to the hourly pricing, EC2 instances are subject to data transfer charges. This varies per region, but essentially, data coming into the EC2 instance from the internet is not charged, while data sent out from the EC2 instance to the internet is charged per gigabyte in tiered pricing. Check the Simple Monthly Calculator for the latest prices on all instance types. So let's look at the instance type families. The T series can handle low-traffic websites, development environments, et cetera. Essentially, any processes that do not require a ton of CPU power. You get burstable CPU via CPU credits that are earned hourly based on the size of the instance, and the maximum credit limits are also based on the size of the instance. The M series is perfect for small or medium-sized databases. It accomplishes this with the right balance of memory, CPU, and storage, and uses solid-state drives with fast I/O performance. These features make it a very popular choice for many different types of systems. M3 instances are general purpose instances that enable a higher number of virtual CPUs, which provides higher performance. M3 instances are recommended if you are seeking general purpose instances whose demands exceed what the burstable T series can provide.
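The burstable CPU credit mechanism described for the T series can be sketched as a simple accrual model: each instance size earns a fixed number of credits per hour, up to a size-dependent cap. The rates below are illustrative assumptions for a small burstable instance, not official AWS figures:

```python
def accrue_credits(balance, hours, earn_rate_per_hour, max_balance):
    """Accrue CPU credits hourly, capped at the size-dependent maximum."""
    for _ in range(hours):
        balance = min(max_balance, balance + earn_rate_per_hour)
    return balance

# Assumed rates for illustration: 6 credits/hour, capped at 144.
balance = accrue_credits(balance=0, hours=30,
                         earn_rate_per_hour=6, max_balance=144)
# After 30 mostly-idle hours the balance has hit the 144-credit cap.
```

Spending credits during a burst works the other way: sustained high CPU drains the balance faster than it accrues, which is why the T series suits low-traffic workloads.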
The compute optimized C series instance family offers the best price for pure CPU performance. The C4 instances use processors built specifically for AWS hardware and EC2 instances. This family works best with jobs that are CPU-intensive, be it batch processing, video encoding, or other compute-heavy tasks. Memory optimized instances offer the best price per gigabyte of RAM of all the instance families. Think high-performance databases or caching when considering this family of EC2 instances. Not only do you get a lot of memory; you get fast memory. The R series, or R3, instances are designed for memory-intensive applications such as high-performance databases, distributed memory caches, in-memory analytics, large SAP deployments, SharePoint, and other enterprise applications. AWS also provides high-performance instance types for specialist applications and use cases. The P2 instances are intended for general purpose GPU compute applications, they have task-specific GPUs, and they are EBS-optimized. They use a high-frequency Intel Xeon E5 Broadwell processor and high-performance Nvidia K80 GPUs. Now, each GPU can provide up to 12 gigabytes of GPU memory, those chips support the GPUDirect peer-to-peer GPU communication channel, and they also provide enhanced networking using the EC2 Elastic Network Adapter, which incidentally can give you up to 20 gigabits of aggregate network bandwidth within a placement group. The G2 instances are optimized for graphics-intensive applications, and they provide high-performance Nvidia GPUs and the high-frequency Intel Xeon Sandy Bridge processor, and each GPU features an onboard hardware video encoder, which can be useful for video capture and encoding. Now, the F1 instances are really high-end processors, and these allow you to customize your hardware acceleration using field-programmable gate arrays, or FPGAs. They come with a high-frequency Intel Broadwell processor, and they support EC2 enhanced networking.
So all high-performance instances are being constantly upgraded, so check the AWS EC2 site for more detail on the latest features, such as field-programmable gate array support. Okay, it's unlikely that you would get a specific question on the high-end processors in the certification exam, as these instance types are recently-released features. However, if you're building graphics-intensive apps, it is good to know about this performance band. So along with the F1 processor, AWS also provides Amazon EC2 Elastic GPUs. Now, those allow you to easily attach low-cost graphics acceleration to the current generation of EC2 instances. With an elastic GPU, you can choose GPU resources that are sized for the workload that you need. Lastly, the storage optimized instance family brings the choice between low-cost IOPS and the best cost per gigabyte of storage. The I series delivers high IOPS at a low cost. These instances are designed for fast, random I/O performance that is ideal for NoSQL databases like Cassandra and MongoDB, as well as scale-out transactional databases, data warehousing, Hadoop, and cluster file systems. D2 instances feature up to 48 terabytes of HDD-based local storage, deliver high disk throughput, and offer the lowest price per disk throughput performance on EC2. So, what options do we have for network- and storage-optimized instances? For applications that benefit from a low cost per CPU, you should try compute optimized instances first. For applications that require the lowest cost per gigabyte of memory, use memory optimized instances, the R or X classes. If you're running a database, you should also take advantage of EBS optimization or instances that support placement groups. For applications with high internode network requirements, you should choose instances that support enhanced networking. Okay, let's talk placement groups. Now, a placement group is a way to associate instances together so they have the very best networking connectivity between them.
A placement group is, in essence, a cloud cluster. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. And with placement groups, it is best to choose an instance type that supports enhanced networking. There are two types of placement groups currently offered by AWS: spread placement and cluster placement. Now, a spread placement group is a group of instances that are each placed on distinct underlying hardware. Spread placement groups are recommended for applications that have a small number of critical instances that should be kept separate from each other. Launching instances in a spread placement group reduces the risk of simultaneous failures that can occur when instances share the same underlying hardware, so it's risk mitigation. Spread placement groups provide access to distinct hardware, and are therefore good for mixing instance types or launching instances over time. All right, keep that in mind. A spread placement group can span multiple availability zones, and you can have a maximum of seven running instances per availability zone per group. That number is likely to change, but for now it's a maximum of seven running instances per availability zone per group. So, just remember: spread placement groups are Multi-AZ, can support mixed instance types, and are good for reducing single points of failure in your design. A cluster placement group, by contrast, groups EC2 instances together in one availability zone to provide the lowest latency within that single availability zone. To get the lowest latency and the highest packet-per-second network performance with a cluster placement group, use the same instance type for all members of the placement group and, ideally, launch them at the same time.
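The seven-instances-per-AZ rule for spread placement groups can be modeled in a few lines. A minimal sketch, assuming the documented default limit of seven (which may change); this is a local simulation, not an AWS call.

```python
MAX_SPREAD_PER_AZ = 7  # documented default limit per AZ per spread placement group

def can_launch_in_spread_group(running_per_az: dict, az: str, count: int = 1) -> bool:
    """Return True if `count` more instances fit in `az` under the spread limit."""
    return running_per_az.get(az, 0) + count <= MAX_SPREAD_PER_AZ

# A spread group can span multiple AZs, and the limit applies per zone:
group = {"us-east-1a": 7, "us-east-1b": 3}
```

So a group that is full in one zone can still accept launches in another, which is exactly why spread placement is described here as Multi-AZ risk mitigation.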
So, remember: a cluster placement group is Single-AZ, uses the same instance types, is ideally launched together, and is great for getting the fastest connectivity between your instances. Keep in mind, there's no additional cost for using either type of placement group. Now, a few common design scenarios that pop up in exam questions. If you stop an instance in a placement group and then start it again, it still runs in the placement group; however, the start will fail if there isn't enough capacity for that instance. If you start or launch an instance in a spread placement group and there's insufficient unique hardware to fulfill the request, the request fails. Amazon EC2 makes more distinct hardware available over time, so one workaround is to try your request again later. If you receive a capacity error when launching an instance in a cluster placement group that already has running instances, stop and start all the instances in that placement group and try the launch again. Restarting the instances may migrate them to hardware that has capacity for the requested instances. Okay, so that wraps up placement groups. Let's move on. Amazon Lightsail makes it really easy to launch image-based applications. We can launch WordPress or a LAMP stack basically at the click of a button. All of the image information is pre-configured, and we just choose an instance plan. It's very easy to configure and set up in this way. So if you're just looking to launch an instance really quickly, and you don't want to have to go through all the configuration pieces, Lightsail makes it very, very easy to do this. Next, Amazon EC2 Systems Manager is a really useful instance management service for your EC2 instances. You'll find it at the bottom of the menu bar.
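The stop-start-and-retry workaround for cluster placement capacity errors above looks roughly like this as control flow. A sketch under stated assumptions: `launch`, `stop_all`, and `start_all` stand in for the real EC2 calls, and `InsufficientCapacityError` is a hypothetical exception, not a real boto3 class.

```python
import time

class InsufficientCapacityError(Exception):
    """Hypothetical stand-in for EC2's insufficient-capacity launch error."""

def launch_with_retry(launch, stop_all, start_all, retries: int = 3, delay: float = 0.0):
    """Try to launch into a cluster placement group; on a capacity error,
    stop and start the group's existing instances (which may migrate them
    to hardware with room) and retry the launch."""
    for attempt in range(retries):
        try:
            return launch()
        except InsufficientCapacityError:
            if attempt == retries - 1:
                raise  # out of retries; surface the capacity error
            stop_all()   # stopping the group's instances...
            start_all()  # ...and starting them may move them to new hardware
            time.sleep(delay)
```

For a spread placement group, the analogous workaround from the lecture is simpler: just wait and retry later, since EC2 makes more distinct hardware available over time.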
EC2 Systems Manager helps you manage your instances from one place: you can audit instance state, automatically collect software inventory, apply OS patches, which I think is its best feature, and create system images. You can configure Windows and Linux operating systems, and the service's capabilities help you define and track system configurations. That makes it easier to manage software compliance for both your EC2 and your on-premises configurations, so it'll do both. You just select the instances you want to manage and define the management tasks you want to perform. There's no additional cost for using EC2 Systems Manager. Now, accounts have a set of default limits per region. Most can be increased by logging a support ticket with AWS explaining your use case and the anticipated volume. AWS Support can be very helpful in standing up and monitoring burst activity usage, so asking is the best policy if you expect to exceed any of these limits. New AWS accounts may start with lower limits. Currently, we can run up to 20 on-demand instances and purchase up to 20 reserved instances, and we can request spot instances per our dynamic spot limit, which is set per region. Certain instance types, e.g. xlarge, are limited per region, and some instances are not available in all regions. Okay, so what does EC2 do for us for high availability and fault tolerance? With EC2, we can utilize elastic IPs, elastic load balancers, and autoscaling to create a highly-available environment. Okay, so EC2 is a key component of the exam. Now, we've been through quite a lot, so as a way of summarizing, we'll walk through some sample questions from the Cloud Academy database and refresh ourselves on some of these key parts, okay? What will happen to the data on a root volume when an EC2 instance that was launched from an S3-based AMI is terminated? Option one, data will be saved as an EBS snapshot.
No, that's not going to happen. With an S3-based AMI, we're talking about ephemeral storage, which means the root volume is built from an S3 template, and when the instance is terminated, the ephemeral storage is lost. Option two, data will be saved as an EBS volume. No, that's not going to happen either. Option three, data will be automatically deleted. Yes, that's the correct option in this instance, because it's an instance store-backed instance, and that data is automatically deleted when the instance terminates. Remember, we can't stop it as we can with an EBS-backed instance, so the minute it's terminated, that data is lost. Next question. A user has launched an EC2 instance from an instance store-backed AMI. If the user restarts the instance, what will happen to the ephemeral storage data? Now, remember, we do support a restart, okay? We just can't stop it; we can only terminate it. Option one, all data will be erased and the ephemeral storage is released: no. Option two, the data is preserved: correct. Option three, it is not possible to restart an instance launched from an instance store-backed AMI: not correct. Option four, all the data will be erased but the ephemeral storage will stay connected: not correct either. So the correct answer is that the data is preserved. Okay, next question. A user has launched an EC2 instance store-backed instance in the us-east-1a zone. The user created AMI #1 and copied it to the Europe region. After that, the user made a few updates to the application running in us-east-1a, and then created AMI #2 after the changes. If the user launches a new instance in Europe from the AMI #1 copy, which of the below statements is true? First, the new instance in the EU region will not have the changes made after the AMI copy. Now, this is correct, because we've snapshotted and created an AMI, and then made changes to the instance that the AMI was made from. However, those changes are not dynamically updated.
We need to create a new AMI if we want those new changes reflected in our AMI snapshot. Option two, the new instance will have the changes made after the AMI copy since AWS keeps updating the AMI: incorrect. Option three, it is not possible to copy an instance store-backed AMI from one region to another: incorrect. Option four, the new instance will have the changes made after the AMI copy because AWS just copies a reference to the original AMI, so the copied AMI will have all the updated data: incorrect. So, in this scenario, if we want any changes reflected in the AMI that we port to the other region, we have to create a fresh AMI after the changes and copy that across. Okay, that brings us to the end of our EC2 section. This is core for the exam, so let's go over some of the key things we covered. We've talked about Amazon machine images and the types of AMI: AWS ones, which come with an operating system; partner ones, which often come with an operating system and some enhanced software; community AMIs, which are snapshotted and shared, and can save us time and money; and our own private AMIs, where we might save a machine image as a golden image to use in an autoscaling group, or to share with other members of our organization. Then we talked about selecting the instance type, which is where we set the virtual CPU, memory, and network combination for our machine. We looked at the billing types available to us. We've got on-demand instances, which are convenient in that we can spark them up at any time. We've got spot instances, which suit flexible workloads where it doesn't matter if we're interrupted, or where we can compute without a time constraint. They can save us a significant amount of money, because they're cheaper than the on-demand price.
Then we've got reserved instances, where we pay either partially or fully upfront for the instance itself; again, that gives us a cheaper price, because we're committing to using that instance over a period of time. And then we've got scheduled instances, which are reserved instances that can be scheduled for a certain time or date in the calendar. We talked about instance storage, and it's important that we know the difference between ephemeral storage and Elastic Block Store volumes. Ephemeral storage is temporary storage that comes with the instance, while an EBS volume is one we can attach to an instance and which can persist after the instance is terminated. We talked about the difference between EBS-backed instances and instance store-backed instances, and the variations in how we use and launch them. And then we talked about the difference in instance states. It's really important you get this. Remember, there are three key states: start, stop, and terminate. With EBS-backed instances, we can stop an instance, and when we stop it, any attached volumes persist through that reboot or stop period. With an instance store-backed instance, we can only start or terminate it; we can't stop it, and if we terminate it, any ephemeral storage used on that instance is deleted. Okay, we talked about the launch parameters and how we can set user data to bootstrap our machine, a very powerful way of setting the parameters for the machine. We can have it go off, fetch software, and install that software using the user data, and we can set parameters around how the machine provisions itself, which again saves us time and money and allows us to spin up large numbers of machines without having to configure each one individually.
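A user-data script is just text handed to the instance at launch, typically a bash script run as root at first boot. A minimal sketch of building one in Python; the package names are illustrative, and the yum-based commands assume an Amazon Linux-style AMI.

```python
def build_user_data(packages):
    """Render a simple bash bootstrap script suitable for EC2 user data.
    Assumes an Amazon Linux-style AMI where yum is the package manager."""
    lines = ["#!/bin/bash", "yum update -y"]
    lines += [f"yum install -y {pkg}" for pkg in packages]
    return "\n".join(lines)

# The rendered string would be passed as the user data at launch time,
# e.g. via the console, the CLI, or an SDK's launch parameters.
script = build_user_data(["httpd"])
```

Because the script is generated programmatically, the same template can bootstrap hundreds of identical machines, which is exactly the time-and-money saving described above.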
And then we looked at how placement groups allow us to group instances together: cluster placement groups keep instances in the same availability zone, where the machine types should be the same, for the lowest latency, while spread placement groups separate critical instances onto distinct hardware. Finally, we looked at how all of these great features of EC2 help us design highly-available, cost-efficient, scalable solutions. They give us more flexibility, they allow us to provision machines quickly, and they take out a lot of the undifferentiated heavy lifting of provisioning resources. Okay, let's get onto the next one.
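To cement the instance-state rules from this section, here is a toy model of what happens to root-volume data on each store type. It is illustrative only, not an AWS API; it simply encodes the behavior described in the lecture.

```python
class Instance:
    """Toy model of EC2 root-storage behavior across state changes."""

    def __init__(self, backed_by: str):
        assert backed_by in ("ebs", "instance-store")
        self.backed_by = backed_by
        self.data = "app-data"

    def stop(self):
        if self.backed_by == "instance-store":
            # Instance store-backed instances support only start and terminate.
            raise RuntimeError("instance store-backed instances cannot be stopped")
        # EBS-backed: attached volumes persist through a stop.

    def reboot(self):
        pass  # data survives a reboot on both store types

    def terminate(self):
        if self.backed_by == "instance-store":
            self.data = None  # ephemeral storage is deleted on terminate
```

Walking the exam questions through this model gives the same answers: a reboot preserves ephemeral data, a stop is only possible on EBS-backed instances, and a terminate wipes the ephemeral store.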

About the Author


Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Passions outside work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.