
Terminology

Contents

Introduction
1. Overview (1m 23s)
2. Terminology (12m 34s)
Services at a Glance
Difficulty: Beginner
Duration: 3h 8m
Students: 9,448
Rating: 4.8/5

Description

The ‘Foundations for Solutions Architect–Associate on AWS’ course is designed to walk you through the AWS compute, storage, and service offerings you need to be familiar with for the AWS Solutions Architect–Associate exam. This course provides you with a snapshot of each service and, by covering just what you need to know, gives you a good, high-level starting point for exam preparation. It includes coverage of:

Compute
Amazon Elastic Compute Cloud (EC2)
Amazon EC2 Container Service (ECS)
AWS Lambda
Amazon Lightsail
AWS Batch

Storage and Database
Amazon Simple Storage Service (S3)
Amazon Elastic Block Store (EBS)
Amazon Relational Database Service (RDS)
Amazon Glacier
Amazon DynamoDB
Amazon ElastiCache
Amazon Redshift
Amazon Elastic MapReduce (EMR)

Services
Amazon Simple Queue Service (SQS)
Amazon Simple Notification Service (SNS)
Amazon Simple Workflow Service (SWF)
Amazon Simple Email Service (SES)
Amazon CloudSearch
Amazon API Gateway
Amazon AppStream
Amazon WorkSpaces
Amazon Data Pipeline
Amazon Kinesis
Amazon OpsWorks
Amazon CloudFormation

Course Objectives

  • Review AWS services relevant to the Solutions Architect–Associate exam
  • Illustrate how each service can be used in an AWS-based solution

Intended Audience

This course is for anyone preparing for the Solutions Architect–Associate for AWS certification exam. We assume you have some existing knowledge of and familiarity with AWS, and that you are specifically looking to get ready to take the certification exam.

Pre-Requisites

If you are new to cloud computing, I recommend you do our introductory cloud computing courses first. These courses will give you a basic introduction to the cloud and to Amazon Web Services. The two courses I recommend are What is Cloud Computing? and Technical Fundamentals for AWS.

The What is Cloud Computing? lecture is part of the Introduction to Cloud Computing learning path. I recommend doing this learning path if you want a good basic understanding of why you might consider using AWS cloud services. If you feel comfortable with the cloud but would like to learn more about Amazon Web Services, then I recommend completing the Technical Fundamentals for AWS course to build your knowledge of Amazon Web Services and the value these services bring to customers.

If you have any questions or concerns about where to start please email us at support@cloudacademy.com so we can help you with your personal learning path. 

Okay, so on to our certification learning path!

Solution Architect Associate for AWS Learning Path 

This Course Includes:

  • 7 video lectures
  • Snapshots of 24 key AWS services

What You'll Learn

Compute Fundamentals

Amazon Elastic Compute Cloud (EC2)
Amazon EC2 Container Service (ECS)
AWS Lambda

Storage Fundamentals

Amazon Simple Storage Service (S3)
Amazon Elastic Block Store (EBS)
Amazon Relational Database Service (RDS)
Amazon Glacier
Amazon DynamoDB
Amazon ElastiCache
Amazon Redshift
Amazon Elastic MapReduce (EMR)

Services at a Glance

Amazon Simple Queue Service (SQS)
Amazon Simple Notification Service (SNS)
Amazon Simple Workflow Service (SWF)
Amazon Simple Email Service (SES)
Amazon CloudSearch
Amazon API Gateway
Amazon AppStream
Amazon WorkSpaces
Amazon Data Pipeline
Amazon Cognito
Amazon Kinesis
Amazon OpsWorks
Amazon CloudFormation


If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

About the Author

Students: 59,760
Courses: 93
Learning paths: 38

Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything at AWS starts with the customer. His passions outside work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.

In this course we'll review some of the terminology you are expected to know when taking the AWS Certified Solutions Architect exam. While most of these concepts are commonplace, a consistent vocabulary will help you better understand the questions and scenarios.

The goal of any design should be a loosely coupled architecture. Individual parts of the infrastructure have no knowledge of how the other parts work; communication happens through well-defined services such as Amazon Simple Workflow Service or Amazon Simple Queue Service. And ideally, you can replace any part of the system that provides a service with another component or system that provides a similar service.

A stateless system is exactly what it sounds like: a system that does not store any state. The output of the system depends solely on the inputs into it. Protocols like UDP are stateless, meaning you can push packets that stand on their own and don't require the results of a previous packet in order to succeed.

Fault tolerance is what enables a system to continue running despite a failure. This could be a failure of one or more components of the system, or maybe of a third-party service. In the realm of AWS, this could mean operating your system in multiple Availability Zones: if an Availability Zone outage occurs, your system continues operating in the other Availability Zones. The goal of fault tolerance is to be completely transparent to users, with no loss of data or functionality.

High availability means having little or no downtime of your systems. The gold standard in high availability is the five nines, or 99.999%, which equates to less than five and a half minutes of downtime per year. Not every system has to be built to that gold standard; the availability goals depend on the purpose of the system as well as the operating budget.

Your data is at rest when it's stored on some sort of storage medium, such as an Amazon EBS volume, an Amazon S3 bucket, or a database. You'll most likely hear data at rest mentioned in reference to encryption. Your data is in transit when it is being transferred from one machine to another. HTTP traffic is a classic example of data in this state. You'll hear this term used mostly in discussions on how to secure your data during transport.

You will often hear vertical scaling referred to as scaling up. Scaling up means an increase in capacity on a single resource. For example, adding additional memory to a server to increase the number of processes the server can run is vertical scaling. In the world of AWS, this could take the form of upgrading to a larger instance type. Horizontal scaling, also known as scaling out, involves adding more physical or virtual resources. Scaling horizontally is exactly what a service like Auto Scaling does in AWS: it adds additional servers based on resource utilization, the time of day, or, say, a major event.

Content delivery networks, or CDNs, replicate your content to points of presence, or servers, all around the world to improve performance and availability by being closer to end users' locations. AWS offers a CDN service called Amazon CloudFront, which has edge locations in many locations around the globe.
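To make the five-nines figure mentioned above concrete, here is a minimal Python sketch (the availability levels shown are just illustrative) that converts an availability percentage into the maximum downtime it allows per year:

```python
# Convert an availability percentage into the downtime it allows per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # ignoring leap years for simplicity


def downtime_minutes_per_year(availability_percent: float) -> float:
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)


for availability in (99.0, 99.9, 99.99, 99.999):
    minutes = downtime_minutes_per_year(availability)
    print(f"{availability}% uptime -> {minutes:.2f} minutes of downtime per year")
```

At five nines this works out to roughly 5.3 minutes per year, which is where the "less than five and a half minutes" figure comes from.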
A network perimeter is a boundary between two or more portions of a network. It can refer to the boundary between your VPC and your own network, or to the boundary between what you manage versus what AWS manages. The important part is to understand where these boundaries are located and who has responsibility over each segment.

Synchronous processing refers to processes that wait for a response after making a request. A synchronous process will block itself from other activities until either the response is received or a predefined timeout occurs. An asynchronous process will make the request and immediately begin processing other requests; when a response finally becomes available, the process handles it. This is how long-running activities are handled. AWS offers services such as Amazon Simple Queue Service and Amazon Simple Notification Service that can help in the overall implementation of asynchronous processing.

Eventual consistency is a consistency model used in distributed computing to achieve high availability. Eventually consistent information guarantees that if no new updates are made to a given item of data, eventually all accesses to that item will return the last updated value. It may mean that a request made from another location, immediately after an object is updated, returns an older version of that object. Eventual consistency is a key factor in making distributed computing work properly, and you need to be clear on what it means when designing your systems to ensure the right outcome for system users.

Now, RESTful web services are HTTP- and HTTPS-based application programming interfaces that interact with other applications through standard HTTP methods. The client makes a request with the applicable input parameters; the server processes the request and returns a response that is consumed by the client. A common data format exchanged in these services is JSON.

JSON stands for JavaScript Object Notation. It's a human-readable, open-standard data format that is easily generated and parsed by nearly all modern programming languages, not just JavaScript. You should become familiar with this format: you should be able to read it, understand what it means, and manipulate and write it in JSON syntax. Security policies are just one of the many AWS artifacts written in JSON format; without understanding it, you would run the risk of leaving systems exposed.

Amazon Simple Notification Service allows you to send push notification messages directly to apps on mobile devices. Push notification messages sent to a mobile endpoint can appear in a mobile app as a message alert, a badge update, or even a sound alert. A push notification service maintains a connection with each app and associated mobile device that is registered to use that service. When an app and mobile device register, the push notification service returns a device token. Amazon Simple Notification Service uses the device token to create a mobile endpoint to which it can send direct push notification messages. In order for Amazon SNS to communicate with the different push notification services, you need to submit your push notification service credentials to Amazon SNS to be used on your behalf.

WAF stands for Web Application Firewall. AWS WAF helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules. AWS WAF is a pay-as-you-go service: you pay for the rules you deploy and for how many web requests your web application receives.
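Because the lecture stresses being able to read, understand, and write JSON, here is a minimal Python sketch before we move on. The policy below is a simplified, hypothetical IAM-style document (the bucket name is a placeholder), intended only to illustrate the JSON structure you will encounter:

```python
import json

# A simplified, hypothetical IAM-style policy; the bucket name is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

# Serialize the Python dictionary into a JSON string, the form AWS tooling expects...
policy_json = json.dumps(policy, indent=2)
print(policy_json)

# ...and parse it back to show the round trip.
parsed = json.loads(policy_json)
print(parsed["Statement"][0]["Action"])  # ['s3:GetObject']
```

The same read, manipulate, and write pattern applies to any JSON you meet on the exam, whether it is a security policy, a CloudFormation template, or an API response.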
The OSI model stands for the Open Systems Interconnection model. It is a standard that defines the layers of a communication system. There are seven layers in the model: the physical layer, the data link layer, the network layer, the transport layer, the session layer, the presentation layer, and the application layer. Each layer has its own set of responsibilities. Traffic starting at an upper layer, such as L7, the application layer, uses L6, the presentation layer, to communicate, and this continues all the way down to layer one, the physical layer. In AWS you have some level of control from L2 and up, with most of that control performed through AWS managed services. You should understand how each service corresponds to the OSI model and what you have control over versus what Amazon controls.

Network Address Translation, or NAT for short, is a method for placing all systems on a network behind a single IP address. Each system on the network has its own private IP address, but externally, traffic originating from any of those systems appears to come from the same IP address. This is how a network that is assigned one IP address from an internet service provider can have multiple systems connected to internet resources without each needing to be assigned its own public IP address. NAT is a common service and is fully available to VPCs in AWS.

Routing tables are a collection of rules that specify how Internet Protocol traffic should be directed to reach an endpoint. A common route in a routing table directs all traffic headed outside of your network through a router; this is how a system can reach websites. Another route might direct all traffic in a certain range to another network over a virtual private network connection. AWS lets you manage your own routing tables for your VPC.

An access control list, commonly referred to as an ACL, defines permissions that are attached to an object. In the world of AWS, you can attach network ACLs to subnets, which will allow or deny protocols to and from various endpoints. ACLs can also be attached to S3 buckets to control access to the objects they contain. ACLs are crucial to understanding how to properly secure your environment.

Firewalls are software or hardware systems that control incoming and outgoing network traffic. You manage a set of rules to permit or block traffic based on endpoints and protocols. AWS implements this kind of firewall using security groups, which can be attached to one or more EC2 instances, to Elastic Load Balancers, and more. Security groups are part of the first line of defense in securing your environment.

A load balancer works to distribute traffic across a number of servers. It can be a physical or virtual resource. Traffic is directed to registered servers based on algorithms that typically seek an even load or a round-robin style of distribution. A client may be directed to different servers on each request; sticky sessions allow a client to stay with a single server for the lifetime of its session. Features such as server health checks ensure traffic stops being sent to a server that does not respond within defined thresholds.
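As a rough illustration of the security-group-as-firewall idea above, here is a hedged boto3 sketch that adds an inbound HTTPS rule to an existing security group. The group ID is a placeholder, and the snippet assumes AWS credentials and a default region are already configured:

```python
import boto3

ec2 = boto3.client("ec2")

# Permit inbound HTTPS (TCP 443) from anywhere into a hypothetical security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}
            ],
        }
    ],
)
```

Outbound rules and subnet-level network ACLs are managed separately; this simply shows the basic shape of a single security group rule.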
BGP, or Border Gateway Protocol, is a standardized exterior gateway protocol designed to exchange routing and reachability information among autonomous systems on the internet. The Border Gateway Protocol is used within an IPsec tunnel, between the inside IP addresses, to exchange routes between the VPC and your onsite network. Each BGP router has an autonomous system number, or ASN, and your ASN has to be provided to AWS when the customer gateway is created.

Okay, that concludes our terminology lecture. Let's move on and learn a little bit more about the compute fundamentals of AWS.