Understanding the fundamentals of AWS is critical if you want to deploy services and resources within the AWS Cloud. The Compute category comprises the key resources that provide the processing power applications and systems need to carry out computational tasks via a series of instructions. These resources cover a range of different services and features:
- EC2 - Amazon Elastic Compute Cloud
- ECS - Amazon Elastic Container Service
- ECR - Amazon Elastic Container Registry
- EKS - Amazon Elastic Container Service for Kubernetes
- AWS Elastic Beanstalk
- AWS Lambda
- AWS Batch
- Amazon Lightsail
This course will cover the fundamental elements of all of these Compute services and features, allowing you to select the most appropriate service for your project and implementations. Each has its own advantages, providing something of value that's different from the others, all of which will be discussed.
Topics covered within this course consist of:
- What is Compute: This lecture explains what 'Compute' is and what is meant by Compute resources and services
- Amazon Elastic Compute Cloud (EC2): This is one of the most common Compute services; as a result, this is likely to be the longest lecture, as it covers many elements of EC2 to ensure you are aware of how it's put together and how it works
- Amazon ECS (EC2 Container Service): Within this lecture you will gain a high-level overview of what the EC2 Container Service is and how it relates to Docker
- Amazon Elastic Container Registry: In this lecture you will consider how this service links closely with ECS to provide a secure location to store and manage your Docker images
- Amazon Elastic Container Service for Kubernetes (EKS): Here you will look at how EKS provides a managed service, allowing you to run Kubernetes across your AWS infrastructure without having to take care of running the Kubernetes control plane
- AWS Elastic Beanstalk: This lecture will provide an overview of the service, showing you how it’s used to automatically deploy applications using EC2 and a number of other AWS services
- AWS Lambda: This lecture covers the Lambda ‘serverless’ service, where you will explore what serverless means and how this service is used to run your own code in response to events
- AWS Batch: Here you will get a high-level overview of this service, which relates to batch computing
- Amazon Lightsail: Finally we will look at Amazon Lightsail, a Virtual Private Server solution used for small-scale projects and use cases
If you want to learn the differences between the different Compute services, then this course is for you!
With demonstrations provided, along with links to a number of our labs that allow you to gain hands-on experience in using many of these services, you will gain a solid understanding of the Compute services used within AWS.
If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.
Hello, and welcome to this final lecture, where I just want to quickly summarize what we have learned throughout this course.
I started off by covering what is meant by compute resources, whereby I explained that compute resources can be considered the brains and processing power required by applications and systems to carry out computational tasks via a series of instructions. Essentially, compute is closely related to common server components that many of you will already be familiar with, such as CPUs and RAM.
Following this, we started to get into the meat of the AWS compute resources, to provide you with an understanding of the fundamentals of the different compute services and features: Elastic Compute Cloud (EC2), Amazon ECS (the EC2 Container Service), Amazon Elastic Container Registry (ECR), Amazon Elastic Container Service for Kubernetes (EKS), AWS Elastic Beanstalk, AWS Lambda, AWS Batch, and Amazon Lightsail.
During the EC2 lecture, we learnt that it is one of the most common Compute services in use today, and what it provides from a compute perspective. I also discussed the different components of the service, covering Amazon Machine Images (AMIs), instance types, instance purchasing options, tenancy, user data, storage options, and security. I then performed a demonstration showing you how to create a new EC2 instance from within the console.
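As a flavour of what that demonstration covered, here is a minimal sketch of launching an instance programmatically with boto3, the AWS SDK for Python. The AMI ID, key pair name, and tag values are hypothetical placeholders you would replace with your own:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single On-Demand instance from an AMI (placeholder values).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                  # an existing key pair name
    UserData="#!/bin/bash\nyum update -y",  # user data runs at first boot
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-instance"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```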
The next service we looked at was the Amazon EC2 Container Service, known as ECS. This can be defined as a service that allows you to run Docker-enabled applications packaged as containers across a cluster of EC2 instances, without requiring you to manage a complex and administratively heavy cluster management system. As a result, there is no need to install any management or monitoring software for your cluster. All of this and more is taken care of by the service, allowing you to focus on building great applications and deploying them across your scalable cluster.
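For illustration only, the boto3 sketch below creates a cluster and runs a task on it; the task definition name is a hypothetical placeholder, and it assumes container instances have already been registered to the cluster:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Create a cluster; ECS provisions the management layer for you.
ecs.create_cluster(clusterName="demo-cluster")

# Run a containerised task on the cluster's EC2 container instances.
# "demo-task:1" is a hypothetical, already-registered task definition.
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="demo-task:1",
    count=1,
    launchType="EC2",
)
```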
Following ECS, I gave an overview of the Elastic Container Registry, known as ECR. ECR is a fully managed service that provides a secure location to store and manage your Docker images, which can be distributed and deployed across your applications. With ECR, you do not need to provision any infrastructure to create a registry of Docker images; this is all provided and managed by AWS. The service is primarily used by developers, allowing them to push, pull, and manage their library of Docker images in this central and secure location. The main components of ECR are the registry, authorization tokens, repositories, repository policies, and images.
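As a hedged sketch of that developer workflow, the snippet below creates a repository and retrieves an authorization token with boto3; the repository name is illustrative, and the actual push and pull of images would then be done with the Docker CLI:

```python
import base64
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

# Create a repository to hold a Docker image (name is illustrative).
repo = ecr.create_repository(repositoryName="demo-app")
print(repo["repository"]["repositoryUri"])

# Retrieve a temporary authorization token, used to 'docker login'
# before pushing or pulling images. The token decodes to "user:password".
token = ecr.get_authorization_token()
auth = token["authorizationData"][0]
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
```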
Next, I looked at the Elastic Container Service for Kubernetes, known as EKS. EKS provides a managed service, allowing you to run Kubernetes across your AWS infrastructure without having to take care of provisioning and running the Kubernetes management infrastructure, referred to as the control plane. You, the AWS account owner, only need to provision and maintain the worker nodes. The control plane dictates how Kubernetes and your clusters communicate with each other, and tracks the state of all Kubernetes objects by continually monitoring them. Worker nodes run as on-demand EC2 instances and include the software needed to run containers managed by the Kubernetes control plane. Each node is created from a specific AMI, which also ensures that Docker, the kubelet, and the AWS IAM Authenticator are installed for security controls. To get started with EKS, you must perform the following steps: create an EKS service role, create an EKS cluster VPC, install kubectl and the AWS IAM Authenticator, create your EKS cluster, configure kubectl for EKS, provision and configure worker nodes, and configure the worker nodes to join the EKS cluster.
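To give a feel for the "create your EKS cluster" step, here is an illustrative boto3 sketch; the role ARN, subnet IDs, and security group are placeholders for the resources created in the earlier steps:

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Create the managed control plane; AWS runs and scales it for you.
# The role ARN, subnets, and security group below are placeholders.
eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-service-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-0abc", "subnet-0def"],  # subnets in two AZs
        "securityGroupIds": ["sg-0123"],
    },
)

# Once the cluster status is ACTIVE, you would configure kubectl
# and join your worker nodes to it.
status = eks.describe_cluster(name="demo-cluster")["cluster"]["status"]
print(status)
```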
Following on from EKS, I covered Elastic Beanstalk, which is an AWS managed service that takes your uploaded web application code and automatically provisions and deploys the required resources within AWS to make the web application operational. The components that make up Elastic Beanstalk include application versions, environments, environment configurations, environment tiers, configuration templates, platforms, and applications. I then covered how AWS Elastic Beanstalk operates as a very simple workflow for application deployment, in four simple steps: first, you create an application; next, you upload an application version to Elastic Beanstalk; the environment is then created by Elastic Beanstalk with the appropriate resources to run your code; and finally, any ongoing management of your application can take place.
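Those steps map closely onto the Elastic Beanstalk API. The boto3 sketch below is illustrative only; the S3 bucket, key, and solution stack name are placeholders:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Step 1: create the application.
eb.create_application(ApplicationName="demo-app")

# Step 2: register an application version (source bundle already in S3).
eb.create_application_version(
    ApplicationName="demo-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-bucket", "S3Key": "demo-app-v1.zip"},
)

# Step 3: Elastic Beanstalk creates the environment and its resources.
eb.create_environment(
    ApplicationName="demo-app",
    EnvironmentName="demo-app-env",
    VersionLabel="v1",
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",  # placeholder
)
```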
Next was an overview of AWS Lambda, which is a service that lets you run your code in response to events in a scalable and highly available serverless environment. To reiterate, serverless means that you do not need to worry about provisioning and managing your own compute resources to run your code; instead, this is managed and provisioned by AWS. Although it's called serverless, it does, of course, require a server, or at least compute power, to carry out your code requests. But because the AWS user does not need to be concerned with what compute is used or where it comes from, it's considered serverless from the user's perspective. I explained some of the components of Lambda, which included the Lambda function, the event source, downstream resources, and log streams. Once we understood what Lambda functions were, I covered how to create them in three simple steps: selecting a blueprint, configuring your triggers, and configuring the function details.
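As a simple illustration of the function itself, here is a minimal Python handler of the kind you would configure in those steps; the "name" event field is just an example of triggering data:

```python
import json

# A minimal Lambda function: AWS invokes lambda_handler for each event.
# 'event' carries the triggering data (e.g. an S3 or API Gateway payload),
# and 'context' carries runtime information such as the request ID.
def lambda_handler(event, context):
    name = event.get("name", "world")  # illustrative event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```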
Following Lambda, I introduced AWS Batch, which is used to manage and run batch computing workloads within AWS. Batch computing is primarily used in specialist use cases that require a vast amount of compute power across a cluster of compute resources to complete batch processing, executing a series of jobs or tasks. To understand how AWS Batch works, I covered some of its key parts: Jobs, where a Job is classed as a unit of work to be run by AWS Batch; Job Definitions, which define specific parameters for the Jobs themselves; Job Queues, into which Jobs are placed and held until they are scheduled to run; Job Scheduling, where the Job Scheduler takes care of when a Job should be run and from which Compute Environment; and Compute Environments, which contain the compute resources used to carry out the Jobs.
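To tie those parts together, here is a hedged boto3 sketch of submitting a Job; the queue and job definition names are hypothetical and assumed to have been created beforehand, along with a compute environment:

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Submit a unit of work (a Job) to an existing Job Queue.
# "demo-queue" and "demo-job-def" are hypothetical, pre-created resources;
# the Job Scheduler decides when and in which Compute Environment it runs.
job = batch.submit_job(
    jobName="nightly-report",
    jobQueue="demo-queue",
    jobDefinition="demo-job-def",
)
print(job["jobId"])
```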
Finally, we looked at Amazon Lightsail, which provides a Virtual Private Server designed to be simple, quick, and very easy to use at a low price point, for small-scale use cases by small businesses or single users. With its simplicity and small-scale uses, it's commonly used to host simple websites, small applications, and blogs. You can run multiple Lightsail instances together, allowing them to communicate, and it's even possible, if required, to connect them to other AWS resources and to your existing VPC running within AWS via a peering connection. An Amazon Lightsail instance can be launched and configured from a single page, making it a very simple solution.
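For completeness, an illustrative boto3 sketch of launching a Lightsail instance; the blueprint and bundle IDs shown are examples of the kinds of values Lightsail offers:

```python
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# Launch a small virtual private server from a blueprint (OS/app image)
# and a bundle (the instance's size and price plan). IDs are examples.
lightsail.create_instances(
    instanceNames=["demo-blog"],
    availabilityZone="us-east-1a",
    blueprintId="wordpress",
    bundleId="nano_2_0",
)
```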
You should now have a good understanding of the different AWS Compute services and features available, allowing you to select the most appropriate service for your project. Each has its advantages, providing something of value that's different from the others. If you have any feedback on this course, positive or negative, please contact us by sending an email to support@cloudacademy.com. Your feedback is greatly appreciated. Thank you for your time, and good luck with your continued learning of cloud computing.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.