
EC2: advanced topics

Contents

The AWS Solutions Architect Associate Level Certification
Overview about AWS
AWS Elastic Compute Cloud
AWS Simple Storage Service
AWS Identity and Access Management
Overview

Difficulty: Advanced
Duration: 52m
Students: 3513
Description

This lecture will explore the more advanced tools offered by AWS Elastic Compute Cloud (EC2).

We will start by discussing placement groups, and how instances should be launched into them to avoid insufficient capacity errors.

You will create a placement group for hands-on practice from the EC2 dashboard, populate it with several instances, and select a VPC network.

After launching the group, you will delve into the command line for increased efficiency. You will learn about AWS's Python-based command line interface (CLI), and how to install it from a Linux terminal.

You will learn how to obtain an access key ID and secret access key from any AWS account page, and how to enter them in the CLI once installation is complete.

In the CLI, we will discuss the most commonly used commands: create (as in aws ec2 create-volume), delete (as in aws ec2 delete-key-pair), and describe (as in aws ec2 describe-vpcs). You will view several examples of how each command responds.

Next, you will use the curl tool to look at the metadata of a running instance, including its security-groups category.

Finally, we will review the pricing models offered through AWS EC2, explaining why you might use spot instances instead of the on-demand model, and how to maximize savings using reserved instances.

Transcript

We'll now explore some of the more advanced tools offered by AWS EC2. A placement group is a logical grouping of instances within a single availability zone. You'll want to group instances this way if you have a particular need for low-latency, 10 gigabit per second connectivity within your network. To avoid the risk of getting an insufficient capacity error, you should launch all the instances you'll need in the placement group at one time, and use the same instance type for all instances in the group. If for some reason you stop a placement group instance and then start it up again, it'll still run as long as there's enough available capacity. I should note that only xlarge instance types can be launched into placement groups, and that AWS recommends that you launch your placement group instances from a specially created AMI.

Creating a placement group is actually very simple. From the EC2 dashboard, click on Placement Groups, then on Create Placement Group, choose a name that's unique within your AWS account, and click Create. Now let's populate our group with a couple of instances. We'll click on Instances, then on Launch Instance. We'll select an AMI, then an appropriate instance type; let's go with the compute-optimized c3.xlarge. Now we'll set the number of instances at two, and select our placement group. Next, we'll select a network, going with the default VPC, and we'll enable auto-assign public IP. You'll notice that we've been given two extra storage volumes for each instance. These are part of the c3.xlarge instance type profile.

We'll add a tag, click next, and create a new security group that limits SSH access to my own IP, and review.

And launch. Just about anything you can do within your AWS account can be done more efficiently from the command line. AWS has created its own Python-based command line interface (CLI) that can provide direct, authenticated access from any connected terminal, anywhere in the world. Before we quickly demonstrate installing the CLI, you should first get your access key ID and secret access key from any AWS account page. Click on your account name at the top right of the screen, and then on Security Credentials. Then follow the directions to create a new access key.

Note the access key ID and the secret access key for later use. Now, let's move to a Linux terminal and make sure you've got at least Python 2.6.3 installed. We'll run wget to download the current AWS CLI package directly from Amazon's official S3 bucket. Now we'll unzip the package and, using sudo, run the installer. We'll run aws configure to enter the access key we saved earlier. We'll enter the access key ID and the secret access key. We'll use us-east-1 for the region, and table as our output format, which could also be text or json. Now that it's installed, let's take a quick look at how the AWS CLI could be used. While you should look through the documentation on your own, I think it's fair to say that most of the key commands are either create, as in say, aws ec2 create-volume, or delete, as in aws ec2 delete-key-pair, or describe, as in aws ec2 describe-vpcs. Let's try one or two examples. We can list all the current key pairs associated with our account by typing aws ec2 describe-key-pairs. Now we'll create a new key pair named newpair: aws ec2 create-key-pair --key-name newpair. We can copy the private key material from the terminal by highlighting it and using Shift+Ctrl+C. Let's do another describe to check if we were successful.
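
To make that easier to follow, here is roughly what the whole sequence looks like in a terminal. This is only a sketch: the bundle URL and install paths reflect the AWS-documented bundled installer at the time of this course, and the group name (my-cluster) and AMI ID in the last two commands are placeholders. The final pair of commands shows how the placement group launch we performed earlier in the console could be scripted as well.

    # download, unpack, and install the bundled AWS CLI
    wget https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
    unzip awscli-bundle.zip
    sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

    # store the access key ID, secret access key, default region, and output format
    aws configure

    # list existing key pairs, then create one called newpair
    aws ec2 describe-key-pairs
    aws ec2 create-key-pair --key-name newpair

    # the earlier placement group work, scripted: create the group,
    # then launch two c3.xlarge instances into it in a single request
    aws ec2 create-placement-group --group-name my-cluster --strategy cluster
    aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type c3.xlarge \
        --count 2 --placement GroupName=my-cluster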

And delete our new pair using aws ec2 delete-key-pair. Each AWS instance is built on its own unique configuration, but since you might need to create multiple instances following a similar profile, you could benefit from direct access to instance metadata. So, for instance, if you regularly create instances that, except for certain key configuration details, share most profile elements, you could automate the process at the metadata level by, say, creating a single script with a few fields that could be updated at run time from configuration data. But you'll need to know how the metadata is stored and how to get to it. Using the curl tool, let's take a look at the metadata from within a running instance.
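
Here is roughly what that looks like from inside a running instance. The first command lists the available metadata categories, and the two variable assignments sketch how a launch script might capture individual values; the exact categories available depend on the instance.

    # list the top-level metadata categories
    curl http://169.254.169.254/latest/meta-data/

    # a script could capture values like these to clone an instance's profile
    AMI_ID=$(curl -s http://169.254.169.254/latest/meta-data/ami-id)
    INSTANCE_TYPE=$(curl -s http://169.254.169.254/latest/meta-data/instance-type)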

No matter what your instance IP address is, by the way, you'll always use this exact URL. That is, http://169.254.169.254/latest/meta-data/, which lists the categories of metadata that are available. Let's now try the security groups category, so we'll add /security-groups to the end of the URL. And we'll see the name of our group, which for some reason is partly obscured at the head of my command prompt. Describing the full usage of metadata is way beyond the scope of this video, but at least you do now know that metadata exists and where it lives. We can't conclude this video without first discussing the various available instance pricing models.

Launching an instance directly from the dashboard, as we've been doing, will use the normal on-demand model. Spot instances, on the other hand, allow you to bid on unused Amazon EC2 instances and run them whenever your bid exceeds the current spot price, which will rise and fall according to supply and demand. The spot instance pricing model is potentially the most cost-effective option for obtaining compute resources. To request spot instances, select an instance type that qualifies for the spot market, like a compute-optimized type, and go to the configure page. Click the Request Spot Instances box next to the purchasing options, and enter the maximum price you'd like to bid. You can add this instance to an existing launch group, if you'd like. You can set start and expiration times for your bid if you need to, and, if you want, make your request persistent, meaning that should your instance terminate, a new bid will immediately be entered to replace it.
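
The same request can also be made from the CLI. This is a sketch only: the bid price, AMI ID, and instance type below are placeholders, and the persistent type mirrors the persistent-request option just described.

    # bid up to $0.05 per hour for one c3.xlarge spot instance
    aws ec2 request-spot-instances --spot-price "0.05" --instance-count 1 \
        --type persistent \
        --launch-specification '{"ImageId":"ami-xxxxxxxx","InstanceType":"c3.xlarge"}'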

Reserved instances require a relatively low one-time payment, but allow you to maximize savings by purchasing instances over longer terms that meet your particular needs. AWS itself offers reserved instances for one or three year terms. Shorter terms can be available through Reserved Instance Marketplace sellers. Reserved instances are available in light, medium, and heavy utilization to help you balance the amount you pay up front with your effective hourly price.

From the EC2 dashboard, click on Reserved Instances, click Purchase Reserved Instances, and set your preferences using the drop-down menus. Click Search to display the instances matching your needs that are currently available. To select an offering, click Add to Cart.
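
If you prefer the command line, the same search-and-purchase flow is available there too. A rough sketch; the offering ID below is a placeholder you would copy out of the describe output.

    # list reserved instance offerings matching an instance type
    aws ec2 describe-reserved-instances-offerings --instance-type c3.xlarge

    # purchase one instance from a specific offering
    aws ec2 purchase-reserved-instances-offering \
        --reserved-instances-offering-id 438012d3-example --instance-count 1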

About the Author
David Clinton
Linux SysAdmin
Students: 11905
Courses: 12
Learning Paths: 4

David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.

Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.

Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.

His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.