Compute Fundamentals of AWS for Cloud Practitioner

Course Summary

The course is part of this learning path

AWS Cloud Practitioner Certification Preparation
Overview
Difficulty: Beginner
Duration: 56m
Students: 1621

Course Description

To be properly prepared for the AWS Certified Cloud Practitioner exam, understanding the fundamentals of AWS is critical. The Compute category covers the key resources that provide the processing power applications and systems need to execute their instructions. These resources span a range of services and features, including:

  • Amazon Elastic Compute Cloud (EC2)
  • Elastic Load Balancing
  • Auto Scaling
  • AWS Elastic Beanstalk
  • AWS Lambda


This course covers the fundamentals of each of these compute services and features so that you can select the most appropriate service for your projects and implementations. Each has its own advantages, providing something of value that differs from the others, all of which will be discussed.

Learning Objectives

By the time you complete this course, you should be able to:

  • Describe the basic functions that each service performs within a cloud IT environment
  • Recognize basic components and features of each compute service
  • Understand how each service utilizes the benefits of cloud computing, such as scalability or elasticity

Intended Audience

This course is designed for:

  • Anyone preparing for the AWS Certified Cloud Practitioner
  • Managers, sales professionals, and anyone in other non-technical roles

Prerequisites

Before taking this course, it is recommended that you have a general understanding of basic cloud computing concepts.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

- [Instructor] Hello, and welcome to this final lecture where I just want to quickly summarize what we have learned throughout the course. I started off by covering what is meant by compute, explaining that compute can be considered the brains and processing power required by applications and systems to carry out computational work via a series of instructions. Essentially, compute is closely related to common server components that many of you will already be familiar with, such as CPUs and RAM.

Following this we then started to get into the meat of the AWS compute resources to give you an understanding of the fundamentals of the different AWS compute services and features. During the EC2 lecture we learned that it is one of the most common compute services in use today and what it provides from a compute perspective.

I also discussed the different components of the service, which covered Amazon Machine Images (AMIs), instance types, the instance purchasing options, tenancy, user data, storage options, and security.
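As a rough illustration, those EC2 components map onto the parameters of an instance launch request. The sketch below shows how they might appear in a boto3-style `run_instances` call; every ID here is a made-up placeholder, and the exact values would depend on your own account and requirements:

```python
# Hypothetical launch parameters illustrating the EC2 components above.
# All IDs are placeholders, not real resources.
launch_params = {
    "ImageId": "ami-0123456789abcdef0",   # Amazon Machine Image (AMI)
    "InstanceType": "t3.micro",           # instance type (CPU/RAM profile)
    "MinCount": 1,
    "MaxCount": 1,
    "Placement": {"Tenancy": "default"},  # tenancy: shared vs dedicated hardware
    "UserData": "#!/bin/bash\nyum update -y",      # user data run at first boot
    "BlockDeviceMappings": [              # storage options (an EBS volume here)
        {"DeviceName": "/dev/xvda",
         "Ebs": {"VolumeSize": 8, "VolumeType": "gp3"}},
    ],
    "SecurityGroupIds": ["sg-0123456789abcdef0"],  # security: firewall rules
}
```

Purchasing options (On-Demand, Reserved, Spot) are chosen separately from the launch parameters themselves, which is why they don't appear in the sketch.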

Following EC2 we looked at how Elastic Load Balancing and Auto Scaling have a relationship with EC2, allowing you to create a highly scalable and load-balanced architecture. I explained that the main function of ELB is to direct and route traffic destined for your fleet of EC2 instances, distributing it evenly to help maintain the high availability and resiliency of your environment, whereas Auto Scaling is a mechanism that automatically increases or decreases your resources to meet demand, based on custom-defined metrics and thresholds.
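To make the Auto Scaling idea concrete, here is a minimal sketch of threshold-based scaling logic. This is not an AWS API, just illustrative code: the 70%/30% CPU thresholds are hypothetical examples of the custom-defined metrics and thresholds mentioned above.

```python
def scaling_decision(avg_cpu, desired, min_size, max_size,
                     scale_out_at=70, scale_in_at=30):
    """Return a new desired capacity based on average CPU utilisation.

    The thresholds are hypothetical; an Auto Scaling group enforces
    its minimum and maximum sizes in the same way as the caps below.
    """
    if avg_cpu > scale_out_at:
        return min(desired + 1, max_size)   # scale out, capped at max
    if avg_cpu < scale_in_at:
        return max(desired - 1, min_size)   # scale in, floored at min
    return desired                          # within the band: no change
```

For example, `scaling_decision(85, desired=2, min_size=1, max_size=4)` would grow the fleet to 3 instances, while an idle fleet already at its minimum stays put.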

I also talked through how to create an ELB using the following steps: defining the load balancer, assigning security groups, configuring security settings, configuring a health check, adding your EC2 instances, and adding tags. When we discussed Auto Scaling I pointed out some of its main advantages. Because Auto Scaling provides automatic provisioning based on custom-defined thresholds, your infrastructure starts to manage itself, saving you from having to monitor it and perform manual deployments.
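Those ELB creation steps can be sketched as the parameters you would supply to the API. The dictionaries below are plain Python for illustration; all names and IDs are placeholders, and the exact call shapes depend on the load balancer type and SDK you use:

```python
# 1. Define the load balancer (name, listener, where it lives)
load_balancer = {
    "LoadBalancerName": "my-web-elb",
    "Listeners": [{"Protocol": "HTTP", "LoadBalancerPort": 80,
                   "InstanceProtocol": "HTTP", "InstancePort": 80}],
    "Subnets": ["subnet-0123456789abcdef0"],
    # 2. Assign security groups
    "SecurityGroups": ["sg-0123456789abcdef0"],
    # 6. Add tags
    "Tags": [{"Key": "Environment", "Value": "demo"}],
}

# 4. Configure a health check. (Step 3, configuring security settings,
#    applies to HTTPS listeners, e.g. choosing a certificate, and is
#    omitted from this HTTP-only sketch.)
health_check = {"Target": "HTTP:80/health", "Interval": 30, "Timeout": 5,
                "HealthyThreshold": 2, "UnhealthyThreshold": 2}

# 5. Add your EC2 instances (placeholder ID)
instances = [{"InstanceId": "i-0123456789abcdef0"}]
```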

This ultimately provides a better experience for your users: if there is always enough capacity within your environment, your end users are unlikely to experience performance problems that might put them off using your services again. It also reduces cost: with the ability to automatically reduce the number of resources you run when demand drops, you stop paying for those resources, since you only pay for an EC2 resource while it is up and running.
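As a back-of-the-envelope illustration of that cost benefit (the hourly rate below is hypothetical, not a real AWS price):

```python
HOURLY_RATE = 0.10   # hypothetical On-Demand price per instance-hour
HOURS_IN_MONTH = 730

def monthly_cost(avg_running_instances):
    """Monthly cost of a fleet whose size averages avg_running_instances."""
    return avg_running_instances * HOURS_IN_MONTH * HOURLY_RATE

peak_sized_fleet = monthly_cost(4)     # fixed fleet sized for peak demand
auto_scaled_fleet = monthly_cost(2.5)  # fleet that scales in off-peak

# → 292.0 vs 182.5: scaling in saves 109.5 per month in this example
```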

I gave an overview of Elastic Beanstalk, which is an AWS managed service that takes your uploaded web application code and automatically provisions and deploys the required resources within AWS to make the web application operational. The components that make up Elastic Beanstalk are applications, application versions, environments, environment configurations, and configuration templates. I then covered the very simple four-step workflow Elastic Beanstalk uses for application deployment.

Firstly, you create an application. Next, you upload a version of your application to Elastic Beanstalk. Elastic Beanstalk then creates the environment with the appropriate resources to run your code. Finally, any ongoing management of the application can take place. Next was an overview of AWS Lambda, which is the service that lets you run your own code in response to events in a scalable and highly available serverless environment. To reiterate, serverless means that you do not need to worry about provisioning and managing your own compute resources to run your code.
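To give a feel for the Lambda model just described, here is a minimal Python handler sketch: AWS invokes the function identified by the configured handler name with an event and a context object, and you never provision the underlying compute. The event payload shape here is made up for illustration.

```python
def lambda_handler(event, context):
    """Minimal handler: triggered by an event, returns a response.

    'lambda_handler' is the kind of handler name you would configure
    for the function; the event fields are hypothetical.
    """
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Locally you could call `lambda_handler({"name": "AWS"}, None)` to exercise it; in Lambda itself, the service supplies both arguments for you.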

Instead, this is managed and provisioned by AWS. Although it's called serverless, it does of course require servers, or at least compute power, to carry out your code requests; but because the AWS user does not need to be concerned with what compute power is used or where it comes from, it's considered serverless from the user's perspective. I explained what Lambda functions are and the elements that form them: required resources, maximum execution timeout, IAM role, and handler name. Once we understood what Lambda functions were, I covered how to create them in three simple steps: select a blueprint, configure the triggers, and configure the function.

Be sure to give the labs a go that I've mentioned throughout this course, as they will really help to embed what we've covered and help you understand how some of the services fit together. Thank you for taking the time to view this course, and if you have any feedback, positive or negative, I would very much appreciate your comments. That now brings us to the end of this course. I wish you continued success with your future cloud computing learning. Thank you.

About the Author

Students: 44315
Labs: 1
Courses: 50
Learning paths: 31

Stuart has been working within the IT industry for two decades, covering a huge range of topic areas and technologies, from data centre and network infrastructure design to cloud architecture and implementation.

To date, Stuart has created over 40 courses relating to cloud, most within the AWS category, with a heavy focus on security and compliance.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.