
Compute - Summary

Overview

Difficulty: Beginner
Duration: 3h 8m
Students: 9,501
Rating: 4.8/5

Description

The ‘Foundations for Solutions Architect–Associate on AWS’ course is designed to walk you through the AWS compute, storage, and service offerings you need to be familiar with for the AWS Solutions Architect–Associate exam. This course provides you with a snapshot of each service, covering just what you need to know, and so gives you a good, high-level starting point for exam preparation. It includes coverage of:

Compute
Amazon Elastic Compute Cloud (EC2)
Amazon EC2 Container Service (ECS)
AWS Lambda
Amazon Lightsail
AWS Batch

Storage and Database
Amazon Simple Storage Service (S3)
Amazon Elastic Block Store (EBS)
Amazon Relational Database Service (RDS)
Amazon Glacier
Amazon DynamoDB
Amazon ElastiCache
Amazon Redshift
Amazon Elastic MapReduce (EMR)

Services
Amazon Simple Queue Service (SQS)
Amazon Simple Notification Service (SNS)
Amazon Simple Workflow Service (SWF)
Amazon Simple Email Service (SES)
Amazon CloudSearch
Amazon API Gateway
Amazon AppStream
Amazon WorkSpaces
AWS Data Pipeline
Amazon Kinesis
AWS OpsWorks
AWS CloudFormation

Course Objectives

  • Review AWS services relevant to the Solutions Architect–Associate exam
  • Illustrate how each service can be used in an AWS-based solution

Intended Audience

This course is for anyone preparing for the Solutions Architect–Associate for AWS certification exam. We assume you have some existing knowledge and familiarity with AWS, and are specifically looking to get ready to take the certification exam.

Prerequisites

If you are new to cloud computing, I recommend you take our introduction to cloud computing courses first. These courses will give you a basic introduction to the Cloud and to Amazon Web Services. There are two courses I recommend: What is Cloud Computing? and Technical Fundamentals for AWS.

The What is Cloud Computing? lecture is part of the Introduction to Cloud Computing learning path. I recommend taking this learning path if you want a good basic understanding of why you might consider using AWS Cloud Services. If you feel comfortable with the Cloud but would like to learn more about Amazon Web Services, then I recommend completing the Technical Fundamentals for AWS course to build your knowledge of Amazon Web Services and the value the services bring to customers.

If you have any questions or concerns about where to start please email us at support@cloudacademy.com so we can help you with your personal learning path. 

OK, so on to our certification learning path!

Solution Architect Associate for AWS Learning Path 

This Course Includes:

  • 7 video lectures
  • Snapshots of 24 key AWS services

What You'll Learn

Compute Fundamentals

Amazon Elastic Compute Cloud (EC2)
Amazon EC2 Container Service (ECS)
AWS Lambda

Storage Fundamentals

Amazon Simple Storage Service (S3)
Amazon Elastic Block Store (EBS)
Amazon Relational Database Service (RDS)
Amazon Glacier
Amazon DynamoDB
Amazon ElastiCache
Amazon Redshift
Amazon Elastic MapReduce (EMR)

Services at a Glance

Amazon Simple Queue Service (SQS)
Amazon Simple Notification Service (SNS)
Amazon Simple Workflow Service (SWF)
Amazon Simple Email Service (SES)
Amazon CloudSearch
Amazon API Gateway
Amazon AppStream
Amazon WorkSpaces
AWS Data Pipeline
Amazon Cognito
Amazon Kinesis
AWS OpsWorks
AWS CloudFormation

 

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Hello, and welcome to this final lecture, where I just want to quickly summarize what we have learned throughout the course.

I started off by covering what is meant by compute, explaining that compute can be considered the brains and processing power required by applications and systems to carry out computational tasks via a series of instructions. So essentially, compute is closely related to common server components that many of you will be familiar with, such as CPUs and RAM.

Following this, we then started to get into the meat of the AWS compute resources, to provide you with an understanding of the fundamentals of the different AWS compute services and features, which are:

  • Elastic Compute Cloud (EC2)
  • Elastic Load Balancing and Auto Scaling
  • the Amazon EC2 Container Service
  • AWS Elastic Beanstalk
  • AWS Lambda
  • AWS Batch
  • and finally, Amazon Lightsail

During the EC2 lecture, we learned that it is one of the most common compute services in use today, and what it provides from a compute perspective. I also discussed the different components of this service, covering:

  • Amazon Machine Images, AMIs
  • Instance types
  • the Instance Purchasing Options
  • Tenancy
  • User Data
  • Storage Options
  • and Security

I then performed a demonstration that showed you how to create a new EC2 instance from within the AWS Management Console.
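If you'd rather script that launch than click through the console, the same steps can be expressed with the AWS SDK for Python (boto3). This is a minimal sketch, not the course's demo: the AMI ID, key pair name, and security group ID are placeholder values you would swap for your own.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single instance. ImageId, KeyName, and SecurityGroupIds
# below are hypothetical placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # the AMI: template for the instance
    InstanceType="t2.micro",            # instance type: CPU/RAM profile
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",              # key pair for SSH access
    SecurityGroupIds=["sg-0123456789abcdef0"],
    # User data runs once at first boot, e.g. to bootstrap a web server.
    UserData="#!/bin/bash\nyum install -y httpd\nsystemctl start httpd",
)

print("Launched", response["Instances"][0]["InstanceId"])
```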

Following EC2, we looked at how Elastic Load Balancing and Auto Scaling relate to EC2, allowing you to create a highly scalable and load-balanced architecture. I explained that the main function of ELB is to direct and route traffic destined for your fleet of EC2 instances, distributing it evenly across them, which helps to maintain the high availability and resiliency of your environment.

Auto Scaling, meanwhile, is a mechanism that automatically increases or decreases your resources to meet demand, based on custom-defined metrics and thresholds. I also talked through how to create an ELB using the following steps (a scripted equivalent is sketched after this list):

  • defining the load balancer
  • assigning security groups
  • configuring security settings
  • configuring a health check
  • adding your EC2 instances
  • and adding tags
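In boto3 terms, those wizard steps map onto a handful of API calls. The sketch below builds an Application Load Balancer via the elbv2 API; the subnet, security group, VPC, and instance IDs are all hypothetical placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Define the load balancer and assign security groups.
lb = elbv2.create_load_balancer(
    Name="my-web-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# Configure a health check via a target group.
tg = elbv2.create_target_group(
    Name="my-web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckPath="/health",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Add your EC2 instances as targets.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0"}],
)

# A listener routes incoming traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# Add tags to the load balancer itself.
elbv2.add_tags(ResourceArns=[lb_arn], Tags=[{"Key": "env", "Value": "demo"}])
```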

When we discussed Auto Scaling, I pointed out some of the main advantages of using it. As Auto Scaling provides automatic provisioning based on custom-defined thresholds, your infrastructure will start to manage itself, saving you from having to monitor it and perform manual deployments. This will ultimately provide a better experience for your users: if there is always enough capacity within your environment, it's unlikely your end users will experience performance problems, which might otherwise put them off using your services again. There is also cost reduction: with the ability to automatically reduce the number of resources you have when demand drops, you stop paying for those resources, as you only pay for an EC2 resource while it is up and running. I then demonstrated how to create an Auto Scaling group from an existing launch configuration (a scripted version follows).
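Scripted, that demonstration looks roughly like the following. The launch configuration is created first so the group has something to reference; the AMI and subnet IDs are placeholders, and the target-tracking policy at the end illustrates the custom-defined thresholds mentioned above.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# The launch configuration tells the group what to launch.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="my-launch-config",
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t2.micro",
)

# The Auto Scaling group: where to launch and how many instances to keep.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    LaunchConfigurationName="my-launch-config",
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # hypothetical subnets
)

# A target-tracking policy scales on a threshold, here keeping
# average CPU utilization at roughly 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```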

The next service we looked at was the Amazon EC2 Container Service, ECS. This can be defined as a service that allows you to run Docker-enabled applications, packaged as containers, across a cluster of EC2 instances, without requiring you to manage a complex and administratively heavy cluster management system. As a result, there is no need to install any management software for your cluster, nor any monitoring software.

All of this and more is taken care of by the service, allowing you to focus on building great applications and deploying them across your scalable cluster.
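As a rough illustration of that workflow in boto3, the sketch below creates a cluster, registers a task definition describing a container, and runs it as a service. All names, the image, and the counts are hypothetical; the cluster would also need EC2 container instances registered to it before tasks could actually be placed.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# The cluster is the pool of EC2 capacity your containers run across.
ecs.create_cluster(clusterName="demo-cluster")

# The task definition describes the Docker container to run.
ecs.register_task_definition(
    family="demo-web",
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",   # any Docker-enabled application image
            "memory": 256,             # hard memory limit in MiB
            "essential": True,
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
        }
    ],
)

# The service keeps the desired number of copies of the task running.
ecs.create_service(
    cluster="demo-cluster",
    serviceName="demo-web-service",
    taskDefinition="demo-web",
    desiredCount=2,
)
```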

Following ECS, I gave an overview of Elastic Beanstalk, which is an AWS-managed service that takes your uploaded web application code and automatically provisions and deploys the required resources within AWS to make the web application operational. The components that make up Elastic Beanstalk are:

  • applications
  • application versions
  • environments
  • environment configurations
  • and configuration templates

I then covered how Elastic Beanstalk operates a very simple workflow process for your application deployment, in four simple steps (sketched in code after this list).

  1. Firstly, you create an application
  2. next, you upload a version of your application to Elastic Beanstalk
  3. the environment is then created by Elastic Beanstalk with the appropriate resources to run your code
  4. any management of your application can then take place
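Those four steps correspond almost one-to-one with the API. A minimal sketch, assuming the application bundle has already been zipped and uploaded to an S3 bucket; the bucket, key, and solution stack name are placeholders, and valid stack names can be listed with list_available_solution_stacks().

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# 1. Create the application.
eb.create_application(ApplicationName="demo-app")

# 2. Upload an application version (a zip already sitting in S3).
eb.create_application_version(
    ApplicationName="demo-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "demo-app-v1.zip"},
)

# 3. Elastic Beanstalk creates the environment and provisions the
#    resources needed to run the code.
eb.create_environment(
    ApplicationName="demo-app",
    EnvironmentName="demo-app-env",
    VersionLabel="v1",
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",
)

# 4. Management then takes place against the environment, e.g.
#    deploying a new version:
# eb.update_environment(EnvironmentName="demo-app-env", VersionLabel="v2")
```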

Next was an overview of AWS Lambda, which is a service that lets you run your own code in response to events in a scalable and highly available serverless environment.

To reiterate, serverless means that you do not need to worry about provisioning and managing your own compute resources to run your own code; instead, this is managed and provisioned by AWS. Although it's named serverless, it does, of course, require servers, or at least compute power, to carry out your code requests. But because the AWS user does not need to be concerned with what compute power is used, or where it comes from, it's considered serverless from the user's perspective.

I explained what Lambda functions are, and the elements that form them: required resources, maximum execution timeout, IAM role, and handler name. Once we understood what Lambda functions were, I covered how to create them, and the three simple steps to do so (a code sketch follows this list):

  1. selecting a blueprint
  2. configuring the triggers
  3. and configuring the function
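Those elements map directly onto a function's code and configuration. The sketch below writes a tiny handler, zips it in memory, and creates the function; the IAM role ARN is a placeholder, and the handler itself is purely illustrative.

```python
import io
import zipfile

import boto3

# The handler code: Lambda invokes lambda_handler for every event.
HANDLER_CODE = b"""
def lambda_handler(event, context):
    # "event" carries the trigger's payload; "context" exposes runtime
    # details such as the time remaining before the timeout.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": "Hello, " + name}
"""

# Lambda's Code parameter accepts a zip file; build one in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("handler.py", HANDLER_CODE)

lam = boto3.client("lambda", region_name="us-east-1")
lam.create_function(
    FunctionName="hello-lambda",
    Runtime="python3.12",
    Role="arn:aws:iam::123456789012:role/lambda-exec-role",  # placeholder
    Handler="handler.lambda_handler",  # "file.function" naming convention
    Code={"ZipFile": buf.getvalue()},
    Timeout=30,       # maximum execution timeout, in seconds
    MemorySize=128,   # the "required resources" element
)
```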

Following Lambda, I introduced AWS Batch, which is used to manage and run batch computing workloads within AWS. Batch computing is primarily used in specialist use cases that require a vast amount of compute power across a cluster of compute resources to complete batch processing, executing a series of jobs or tasks. To understand how AWS Batch works, I covered some of its key parts (a job-submission sketch follows this list):

  • Jobs. A job is classed as the unit of work that is to be run by AWS batch
  • Job definitions. These define specific parameters for the jobs themselves
  • Job queues. Jobs that are scheduled are placed into a job queue until they run
  • Job scheduling. The job scheduler takes care of when a job should be run and from which compute environment
  • And finally, compute environments: the environments containing the compute resources required to carry out the job
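In API terms, those parts look roughly like this. The sketch assumes a compute environment and a job queue named "demo-queue" have already been set up; the image, command, and resource values are placeholders.

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Job definition: the specific parameters jobs of this type run with.
batch.register_job_definition(
    jobDefinitionName="demo-job-def",
    type="container",
    containerProperties={
        "image": "busybox",
        "command": ["echo", "hello from AWS Batch"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "1"},
            {"type": "MEMORY", "value": "512"},  # MiB
        ],
    },
)

# Job: the unit of work, placed into an existing job queue. The job
# scheduler then decides when it runs and in which compute environment.
batch.submit_job(
    jobName="demo-job",
    jobQueue="demo-queue",        # assumed to already exist
    jobDefinition="demo-job-def",
)
```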

Finally, we looked at Amazon Lightsail, which provides a Virtual Private Server, a VPS. It has been designed to be simple, quick, and very easy to use for small-scale use cases. A Lightsail VPS provides you with the following features:

  • the virtual instance itself
  • an operating system
  • optional pre-installed applications
  • solid state drives
  • data transfer allowance
  • DNS management
  • and static IP addresses

An Amazon Lightsail instance can be launched and configured all from a single page, making this a simple solution.
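That single page reduces to a single API call. In the sketch below, the blueprint ID (the OS plus an optional pre-installed application) and bundle ID (the sizing tier) are illustrative; the real options can be listed with get_blueprints() and get_bundles().

```python
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# One call covers the instance, operating system, optional
# application, and sizing bundle.
lightsail.create_instances(
    instanceNames=["demo-vps"],
    availabilityZone="us-east-1a",
    blueprintId="wordpress",   # OS image with a pre-installed application
    bundleId="nano_2_0",       # CPU/RAM/SSD/data-transfer allowance tier
)

# A static IP address can then be allocated and attached.
lightsail.allocate_static_ip(staticIpName="demo-ip")
lightsail.attach_static_ip(staticIpName="demo-ip", instanceName="demo-vps")
```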

You should now have a good understanding of the different AWS compute services and features available, allowing you to select the most appropriate service for your project. Each has its advantages, providing something of value that differs from the others.

Be sure to give the labs I've mentioned throughout this course a go, as they will really help to cement what we have covered and help you understand how some of these services are put together.

Thank you for taking the time to view this course, and if you have any feedback, positive or negative, I would very much appreciate your comments.

That now brings us to the end of this course. I wish you continued success with any future development and learning of cloud computing.

Thank you!

About the Author

Students: 63,068
Courses: 91
Learning paths: 40

Andrew is an AWS certified professional who is passionate about helping others learn how to use and benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Outside of work, his passions are cycling and surfing, and having a laugh about the lessons learnt while trying to launch two daughters and a few start-ups.