
EC2 and EBS

Contents

The AWS Solutions Architect Associate Level Certification
Overview about AWS (9m 26s)
AWS Elastic Compute Cloud
AWS Simple Storage Service
AWS Identity and Access Management
Overview

Difficulty: Advanced
Duration: 52m
Students: 3513
Description

AWS certifications are among the cloud industry's most valuable, and the Solutions Architect Associate Level exam is the jumping-off point for the whole series. Once you've mastered associate-level material, you'll be comfortable designing and deploying fault-tolerant and highly available systems on AWS, estimating costs and building cost-control mechanisms, and choosing just the right blend of services for any project you're asked to provision.

The first course of this three-part certification exam preparation series, besides offering a good general overview of AWS, focuses on the core AWS services: EC2, EBS, S3 and IAM. The second course will discuss networking and include a practical design and deployment of a real-world application infrastructure. The third and final course will explore data management and application and services deployment.

Who should take this course?

This is an advanced course that's aimed at people who already have some experience with AWS and a familiarity with the general principles of architecting cloud solutions.

Where will you go from here?

The self-testing quizzes of the AWS Solutions Architect Associate Level preparation material are a great follow-up to this series and a pretty good indicator of your readiness to take the AWS exam. Also, since you're studying for the AWS certification, check out the AWS Certifications Study Guide on our blog.

Transcript

The primary data storage media used by AWS instances are managed through the Elastic Block Store (EBS), which provides block-level storage devices, much the same way you'd use unformatted disk drives on your local computers. At least one volume must be attached to every instance; at the very least, it's where the operating system is hosted. You can add further volumes either during or after the initial configuration of your instance, and these volumes can be used the same way you might isolate data into separate partitions. You can also gain access to data created on other AWS instances by creating a snapshot of an existing volume and then attaching a volume built from that snapshot to a different instance. Let's go to EC2 and try this ourselves. Clicking on Volumes displays a drive called Template Instance Volume, although you can't see the whole name in the dashboard. This volume is currently attached to an instance I already have running. I created a file called customfile so we will later be able to confirm that we're actually working with the right volume. Now let's create a snapshot, that is to say, let's make an exact copy of this volume the way it is right now, so we can later use it for a different purpose. For now we'll just give it a name, newsnap.
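The same snapshot can also be taken from the AWS CLI. Here's a minimal sketch, assuming the CLI is already configured with appropriate credentials; the volume ID below is a placeholder, not a value from the demo.

    # Look up the volume by its Name tag (the tag value comes from the demo above)
    aws ec2 describe-volumes \
      --filters "Name=tag:Name,Values=Template Instance Volume" \
      --query "Volumes[].VolumeId"

    # Snapshot the volume and tag the snapshot "newsnap" (volume ID is a placeholder)
    aws ec2 create-snapshot \
      --volume-id vol-0123456789abcdef0 \
      --description "newsnap" \
      --tag-specifications 'ResourceType=snapshot,Tags=[{Key=Name,Value=newsnap}]'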

We'll now click on Instances and launch a new instance. For simplicity, we'll select the Ubuntu 14.04 AMI, stick with the smallest t2.micro instance type, and instruct AWS to assign a public IP so we'll be able to access the instance from the outside. We won't add any new volumes beyond the default, which, by the way, is set to 8GB, the minimum needed for this AMI. We'll tag the instance "new task" to make identifying it easier later. We'll stick with the default security group, but restrict SSH access to sessions originating from my own IP.
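Launching from the CLI looks roughly like the sketch below. The AMI, subnet, and security group IDs are placeholders (the Ubuntu 14.04 AMI ID varies by region), and the tag is written without a space to keep the shorthand syntax simple.

    # Launch one t2.micro Ubuntu instance with a public IP address
    aws ec2 run-instances \
      --image-id ami-0123456789abcdef0 \
      --instance-type t2.micro \
      --count 1 \
      --key-name MyKey \
      --security-group-ids sg-0123456789abcdef0 \
      --subnet-id subnet-0123456789abcdef0 \
      --associate-public-ip-address \
      --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=new-task}]'

Restricting SSH to your own IP is a property of the security group itself, set for example with aws ec2 authorize-security-group-ingress and a /32 CIDR for your address.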

I'll review and then click Launch, and then confirm that I have access to the key pair called MyKey. Let's move to the instances dashboard, where we see that our new instance, new task, is still loading. The volume associated with the third item, called template instance, was the source of our snapshot. The second item, as you can see, has been terminated and will eventually disappear from this list. Now that new task is up and running, its public IP address is displayed below, so we'll be able to get in. In the meantime, let's go back to Snapshots. With our current and only snapshot selected, we'll click on Actions and then Create Volume. When creating a new volume, we get to choose between disk types. A General Purpose SSD, which is now the default choice, provides the superior disk access speeds of solid state drives, but is capped at 3000 IOPS, or 3 IOPS per gigabyte, up to 1000 gigabytes. IOPS, which stands for Input/Output Operations Per Second, is a unit of measurement for benchmarking storage drives. The third choice, Magnetic, is cheaper, but will run at the far slower magnetic hard drive speeds, generally in the low hundreds of IOPS. Choosing Provisioned IOPS, the second menu item, allows you to more finely control the number of IOPS you apply against your drive and, by extension, more finely control your costs. So, for instance, if you were creating a 100GB Provisioned IOPS drive, you could configure it at 30 IOPS per gigabyte, which would equal 3000 IOPS. In our case, since we're creating an 8GB drive as a General Purpose SSD, 8GB, by the way, because that was the size of the volume from which we took our snapshot, the access speeds will range between a baseline of 24 IOPS, which is 3 times 8 gigabytes, and bursts of up to 3000 IOPS.

Now we move to Volumes and identify the volume we just created. The blue ball and the "available" state on the top line item, since it's the only one available right now, tell us that this is our volume. The other two volumes are the root drives of the two instances we currently have running. We must select our volume, click on Actions, and then Attach Volume. Clicking inside the instance box will display a list of available instances.
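Creating the volume from the snapshot can also be done from the CLI. This is only a sketch; the snapshot ID and availability zone are placeholders, and the availability zone has to match the one your instance is running in.

    # Create an 8 GB gp2 (General Purpose SSD) volume from the snapshot.
    # Baseline performance: 3 IOPS/GB x 8 GB = 24 IOPS, burstable to 3000 IOPS.
    aws ec2 create-volume \
      --snapshot-id snap-0123456789abcdef0 \
      --volume-type gp2 \
      --availability-zone us-east-1a

Omitting --size makes the new volume the same size as the snapshot; you can only make it larger than the snapshot, never smaller.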

We'll select new task and note that once the volume shows up within our Linux instance, it might be found at /dev/xvdf rather than /dev/sdf. This information will make our lives a whole lot easier later. We'll leave that to continue cooking on its own, and SSH into our new task instance.
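The same attachment can be made from the CLI; the volume and instance IDs here are placeholders.

    # Attach the volume to the instance as /dev/sdf.
    # Ubuntu's Xen-based virtual device driver will expose it as /dev/xvdf.
    aws ec2 attach-volume \
      --volume-id vol-0abcdef1234567890 \
      --instance-id i-0abcdef1234567890 \
      --device /dev/sdf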

From the directory on my own computer where my private key is stored, we'll use -i to identify mykey.pem as the key file from the key pair, and connect as ubuntu@ followed by the IP address we were given. ubuntu is the default login for an Amazon Ubuntu instance. I've already created a directory called /media/newdrive, so that's where we'll mount our volume. First, let's run lsblk, that is, list block devices, to confirm that our volume is actually where it's supposed to be. We can see that it was indeed named xvdf, or in the case of this particular partition, xvdf1. We'll mount /dev/xvdf1 to /media/newdrive.
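Put together, the session looks roughly like this; the IP address is a placeholder for the public IP AWS assigned to the instance.

    # Connect to the instance (replace the IP with your instance's public address)
    ssh -i mykey.pem ubuntu@203.0.113.10

    # On the instance: confirm the device name, then mount the partition
    lsblk
    sudo mkdir -p /media/newdrive
    sudo mount /dev/xvdf1 /media/newdrive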

We'll cd, or move, to the home directory and then to the ubuntu directory that's below it. Running ls to list all the files in this directory shows us only one file, customfile, the file I created when this was still a volume on our original instance. You should also be aware that volumes are available only in the region in which they were created. However, you can create a snapshot of any volume in your account and then copy it to a different region, making the copy available there. Here we've got an existing volume. With it selected, click the Actions button, and then Create Snapshot. We'll give the snapshot a name and then click Create. Now we go to the Snapshots page, select our new snapshot, and click Copy. We select the region we'd like to copy it to, and then click Copy.
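From the CLI, the cross-region copy might look like this sketch; the snapshot ID and the two regions are placeholders.

    # Copy a snapshot from us-east-1 to us-west-2.
    # copy-snapshot is issued against the destination region.
    aws ec2 copy-snapshot \
      --region us-west-2 \
      --source-region us-east-1 \
      --source-snapshot-id snap-0123456789abcdef0 \
      --description "Copy of newsnap"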

About the Author

David Clinton
Linux SysAdmin
Students: 11905
Courses: 12
Learning Paths: 4

David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.

Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.

Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.

His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.