The AWS Solutions Architect Associate Level Certification
AWS Elastic Compute Cloud
AWS Simple Storage Service
AWS Identity and Access Management
AWS certifications are among the cloud industry's most valuable, and the Solutions Architect Associate Level exam is the jumping-off point for the whole series. Once you've mastered associate-level material, you'll be comfortable designing and deploying fault-tolerant and highly available systems on AWS, estimating costs and building cost control mechanisms, and choosing just the right blend of services for any project you're asked to provision.
The first course of this three-part certification exam preparation series, besides offering a good general overview of AWS, focuses on the core AWS services: EC2, EBS, S3 and IAM. The second course will discuss networking and include a practical design and deployment of a real-world application infrastructure. The third and final course will explore data management and application and services deployment.
Who should take this course?
This is an advanced course that's aimed at people who already have some experience with AWS and a familiarity with the general principles of architecting cloud solutions.
Where will you go from here?
The self-testing quizzes of the AWS Solutions Architect Associate Level preparation material are a great follow-up to this series and a pretty good indicator of your readiness to take the AWS exam. Also, since you're studying for the AWS certification, check out the AWS Certifications Study Guide on our blog.
Let's take a big-picture look at EC2, the set of AWS tools designed to configure, provision, secure, and manage the virtual computers you use for pretty much all AWS projects. The term AWS uses to describe each of these virtual computers is "instance." As you'll soon see, a running AWS instance is made up of a number of critical components: a copy of a specially optimized operating system image, called an AMI; a hardware profile, or instance type, which acts as a virtual host for the operating system; a root storage device, optionally along with multiple additional EBS block devices; and at least one security group, which is a collection of software policy settings that control access to and from your instance.

Since the best way to learn cloud skills is to actually dive in and work with each service, we'll quickly create an actual instance. You'll first have to select a base operating system for your instance. The Quick Start tab on the left displays a collection of preconfigured and optimized operating systems called AMIs, or Amazon Machine Images, that are provided and supported by AWS. Besides Amazon's own branded Linux AMI, there's also a good variety of popular Linux distributions and various flavors of Windows Server. The My AMIs tab would display any AMIs that you've created or customized on your own; I obviously don't have any such AMIs associated with this account right now.
AWS Marketplace offers specialized and sometimes proprietary versions of AMIs, often made available at a cost by commercial vendors. Community AMIs are freely available customized operating systems, usually maintained by the open source community, that aren't officially supported by Amazon itself. You can narrow down the displayed choices (and there's actually a huge number of them in total) by selecting "Search Criteria." For this video, however, we'll choose Amazon's own Linux distribution. We now have to select a hardware profile for our instance. Depending on our needs, we can set ourselves up with a virtual machine representing a very simple barebones system, or push for more memory, more CPU power, or both. AWS offers a very wide variety of profiles.
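If you'd rather browse AMIs from the command line interface than the console, a sketch like the following works against a configured AWS CLI. The name pattern here is an assumption, not the only valid one; Amazon's AMI naming varies by family and region, so adjust it to whatever image line you're actually after.

```shell
# Sketch: list Amazon-owned Linux AMIs from the CLI.
# The "amzn2-ami-hvm-*" name pattern is an assumption; adjust it
# for the AMI family and region you want. Requires configured credentials.
aws ec2 describe-images \
  --owners amazon \
  --filters "Name=name,Values=amzn2-ami-hvm-*" "Name=state,Values=available" \
  --query 'Images[*].[ImageId,Name,CreationDate]' \
  --output table
```

The `--query` flag trims the otherwise very verbose JSON down to the three fields you'd actually use to pick an image.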
Since this instance won't be used for anything intense, we'll go for the t2.micro. All AWS instances must be associated with some network. If your AWS account is a bit older, then when creating a new instance you may have to choose between EC2-Classic, the default VPC, or a custom VPC of your own. In newer accounts, the EC2-Classic option no longer exists. What's the difference? We'll deal with networks and VPCs in much greater detail in the second course of videos in this series, which will focus on networking. In the meantime, you can think of EC2-Classic instances as computers living within the limits of the vast AWS network, where they are, by default at least, forced to accept IP addressing and access rule patterns that fit the larger environment. VPCs, on the other hand, are designed by default to exist much more independently, a lot like the way the computers in your own office or data center can be freely isolated or opened up according to your needs. Therefore, for instance, you can customize your VPC's IP addresses and subnets much more precisely than with EC2-Classic. EC2, like all AWS services, is subject to service limits.
That is, only so many resources are made available to a standard AWS account. So, for instance, if you're using EC2-Classic, you can have no more than five Elastic IP addresses and 500 security groups. As you can see, these details are all available as part of Amazon's documentation. Because of the practical, structural differences between EC2-Classic and VPCs, service limits for VPCs cover a great many more categories. But this is something we'll discuss in the Solutions Architect Associate Level Certification networking series. You can now choose the drives that will be attached to your instance. The size of your root drive, the one that will contain your operating system kernel, can be edited; the default value is generally the minimum needed for your current configuration. Additional Elastic Block Store (EBS) volumes can also be added to act as extra partitions.
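Some of the account-level limits mentioned above can be checked from the CLI rather than the documentation. This sketch uses `describe-account-attributes`; exactly which attributes come back depends on your account, and the EC2-Classic ones only appear on accounts that still support that platform.

```shell
# Sketch: inspect account-level EC2 limits and supported platforms.
# Output varies by account; EC2-Classic attributes only appear on
# accounts that still support EC2-Classic. Requires configured credentials.
aws ec2 describe-account-attributes \
  --attribute-names supported-platforms max-instances max-elastic-ips
```

The `supported-platforms` attribute is also the quickest way to find out whether the EC2-Classic option will appear in your launch wizard at all.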
To keep your instances organized and let you easily identify them, you might want to add a tag: a key, perhaps "Name," and its value, let's say "Webserver." You can now specify a security group whose rules will control network access both to and from your instance. You could select a group that already exists on your account, or you could create a new group. Let's create something new, making only a simple edit to the default rule by restricting SSH access to sessions originating from behind my own local IP address, which Amazon automatically uses to populate the field. We'll obviously have much more to say about security groups in our networking series. We'll now review our instance configuration and click the "Launch" button. Since I already have a key pair installed on this account, AWS asks me to confirm that I still have the private key on my own computer, without which password-less SSH access would be impossible. Launching an instance starts the process of booting a copy of the specified AMI and loading the operating system. Until this process is complete, the Instance State column in the instances dashboard will read "pending." It's from this point that you'll be billed for each hour or partial hour the instance is running, even if you're not currently logged in or running any services. Once the boot-up is complete, the indicator will switch to "running." If necessary, you can stop or reboot your instance from the dashboard under "Actions" and "Instance State," or from the command line interface.
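The console walkthrough above has a CLI equivalent. Every ID in this sketch (the AMI, the security group, the key pair name) is a placeholder you'd replace with real values from your own account, not something you can copy verbatim.

```shell
# Sketch: launch a tagged t2.micro from the CLI.
# ami-xxxxxxxx, sg-xxxxxxxx, and my-key-pair are placeholders.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --security-group-ids sg-xxxxxxxx \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Webserver}]'

# Watch the instance state move from "pending" to "running".
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=Webserver" \
  --query 'Reservations[*].Instances[*].[InstanceId,State.Name]'
```

Tagging at launch time with `--tag-specifications` saves a second API call, and the tag filter in `describe-instances` is the same trick the console's search box uses.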
When you click "Stop," the instance state value will change to "stopping" and eventually "stopped." You won't be billed for the instance while it's stopped but you will still be billed for storage on any EBS volumes that are attached to it.
Data on instance store volumes will be lost when you stop and restart. While stopped, you can edit many of the instance's attributes. But you should remember that you'll be charged for a full hour each time you restart your instance, even multiple times within a single hour.
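Because partial hours are rounded up under the hourly billing described above, estimating billed hours is just a ceiling division of run time by 60 minutes. A quick shell sketch of that arithmetic:

```shell
# Sketch: estimate billed hours under hourly billing, where any
# partial hour is rounded up to a full hour.
minutes=130                               # total running time in minutes
billed_hours=$(( (minutes + 59) / 60 ))   # integer ceiling of minutes/60
echo "$billed_hours"                      # 130 minutes -> 3 billed hours
```

This is also why the stop/restart warning matters: each restart opens a fresh billing hour, so three restarts inside one wall-clock hour bill as three hours.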
Rebooting from the AWS dashboard, the command line interface, or the API, as opposed to running shutdown or reboot from within an SSH session, has the advantage of retaining the instance's public IP address and the data not only on attached EBS volumes but also on its instance store volumes. Terminating an instance will permanently shut it down and permanently destroy any data on any attached instance store volumes.
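All of these lifecycle actions are one-liners in the CLI. The instance ID below is a placeholder, and the comments restate the consequences discussed above; note that terminate, unlike stop, cannot be undone.

```shell
# Sketch: instance lifecycle from the CLI (i-xxxxxxxx is a placeholder).
aws ec2 stop-instances --instance-ids i-xxxxxxxx       # attached EBS storage still billed
aws ec2 start-instances --instance-ids i-xxxxxxxx      # instance store data from before the stop is gone
aws ec2 reboot-instances --instance-ids i-xxxxxxxx     # keeps public IP and instance store data
aws ec2 terminate-instances --instance-ids i-xxxxxxxx  # permanent; instance store data destroyed
```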
David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.
Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.
Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.
His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.