
Learning Path Primer
Terminology and Definitions

This course serves as a "primer" for the Solution Architect Professional learning path. The objective of the course is to refresh our understanding of baseline concepts before we explore the more advanced topics relevant to the AWS professional certification domains.

In our first lesson, we'll review some of the terminology and key concepts inherent to the SA Professional exam. This first lesson will help us understand questions and scenarios better by ensuring we have a consistent vocabulary.

Next, we review aspects of high availability, security, and business continuity relevant to the professional certification domains.

We finish with a high-level review of some of the AWS services relevant to the Solution Architect Professional domains.


In this course, we'll review some of the terminology you are expected to know when taking the AWS Certified Solutions Architect Professional exam. While most of these concepts are commonplace, a consistent vocabulary will help you better understand the questions and scenarios.

The goal of any design should be a loosely coupled architecture. Individual parts of the infrastructure have no knowledge of how the other parts work. They communicate through well-defined services such as Amazon Simple Workflow Service or Amazon Simple Queue Service, and ideally you can replace any part of the system that is providing a service with another component or system that provides a similar service.

A stateless system is exactly what it sounds like: a system that is not storing any state. The output of the system depends solely on the inputs into it. Protocols like UDP are stateless, meaning you can push packets that stand on their own and don't require the results of a previous packet in order to succeed.

Fault tolerance is what enables a system to continue running despite a failure. This can be a failure of one or more components of the system, or perhaps a third-party service. In the realm of AWS, this could mean operating your system in multiple Availability Zones. If an Availability Zone outage occurs, your system continues operating in the other Availability Zones. The goal of fault tolerance is to be completely transparent to users, with no loss of data or functionality.

High availability means having little or no downtime on your systems. The gold standard in high availability is "five nines," or 99.999% uptime, which equates to less than five and a half minutes of downtime per year. Not every system has to be built to that gold standard; the availability goals depend on the purpose of the system as well as the operating budget.
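The "nines" arithmetic is easy to verify for yourself. A minimal sketch in Python (the availability targets shown are just common examples):

```python
# Allowed annual downtime for a given availability target.
# A quick sanity check for the "nines" cited in SLA discussions.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes (non-leap year)

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Return the maximum minutes of downtime per year for a target."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99, 99.999):
    print(f"{target}% -> {downtime_minutes_per_year(target):.2f} min/year")
# 99.999% works out to about 5.26 minutes per year.
```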
Your data is at rest when it's being stored on some sort of storage medium, such as an Amazon EBS volume, an Amazon S3 bucket, or a database. You'll most likely hear "data at rest" when it's in reference to encryption.

Your data is in transit when it is being transferred from one machine to another. HTTP traffic is a classic example of data in this state. You'll hear this term used mostly in discussions on how to secure your data during transport.

You'll often hear vertical scaling referred to as scaling up. Scaling up means an increase in capacity on a single resource. For example, adding additional memory to a server to increase the number of processes the server can run is vertical scaling. In the world of AWS, this could take the form of upgrading to a larger instance type.

Horizontal scaling, also known as scaling out, involves adding more physical or virtual resources. Scaling horizontally in AWS is exactly what a service like Auto Scaling does: it adds additional servers based on resource utilization, the time of day, or, say, a major event.

Content delivery networks, or CDNs, replicate your content to points of presence, or servers, all around the world to improve performance and availability by being closer to the end user's location. AWS offers a CDN service called Amazon CloudFront, which has edge locations in multiple global locations.

A network perimeter is a boundary between two or more portions of a network. It can refer to the boundary between your VPC and your network, or to the boundary between what you manage versus what AWS manages. The important part is to understand where these boundaries are located and who has responsibility over each segment.

Synchronous processing refers to processes that wait for a response after making a request. A synchronous process will block itself from other activities until either the response is received or a predefined timeout occurs.
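That blocking behavior can be sketched in a few lines of Python; here `concurrent.futures` simulates a slow backend call, and the caller blocks on the result with a predefined timeout (the delay and timeout values are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def slow_backend() -> str:
    time.sleep(0.2)          # simulated work on the remote side
    return "response"

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_backend)
    try:
        # Block here, just like a synchronous caller would,
        # until the response arrives or the timeout expires.
        result = future.result(timeout=1.0)
        print(result)        # response
    except TimeoutError:
        print("timed out")
```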
An asynchronous process will make the request and immediately begin processing other requests; when a response is finally made available, the process will handle it. This is how long-running activities are handled. AWS offers services such as Amazon Simple Queue Service and Amazon Simple Notification Service that can help in the overall implementation of asynchronous processing.

Eventual consistency is a consistency model used in distributed computing to achieve high availability. Eventually consistent information guarantees that, if no new updates are made to a given item of data, eventually all accesses to that item will return the last updated value. It may mean that a request made from another location to a just-updated object returns an older version of that object. Eventual consistency is a key factor in ensuring distributed computing works properly. You need to be clear on what eventual consistency means when designing your systems, to ensure the right outcome for system users.

Now, RESTful web services are HTTP- and HTTPS-based application programming interfaces that interact with other applications through standard HTTP methods. The client makes a request with the applicable input parameters, and the server processes the request and returns a response that is consumed by the client.

A common data format exchanged in these services is JSON, which stands for JavaScript Object Notation. It's a human-readable, open-standard data format that is easily generated and parsed by nearly all programming languages, not just JavaScript. You should become familiar with this format: be able to read it, understand what it means, and manipulate and write JSON syntax. Security policies are just one of the many AWS artifacts written in JSON format; without understanding it, you would run the risk of leaving systems exposed.

Amazon Simple Notification Service allows you to send push notification messages directly to apps on mobile devices.
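These push messages are themselves JSON documents: SNS can deliver platform-specific payloads when a message is published with `MessageStructure="json"`, where the message body is a JSON string whose keys name each platform. A minimal sketch using Python's standard `json` module (the alert text and values are illustrative):

```python
import json

# Apple push payloads use the "aps" dictionary; the nested document is
# embedded in the outer SNS message as a JSON-encoded string.
apns_payload = {"aps": {"alert": "Your order has shipped", "badge": 1}}

message = {
    "default": "Your order has shipped",  # fallback for other protocols
    "APNS": json.dumps(apns_payload),     # Apple push: nested JSON string
}

sns_message = json.dumps(message)         # what you'd publish to SNS
print(json.loads(sns_message)["default"]) # Your order has shipped
```

Being able to read and write structures like this by hand pays off across AWS, since IAM policies use the same notation.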
Now, push notification messages sent to a mobile endpoint can appear in a mobile app as a message alert, a badge update, or even a sound alert. Push notification services maintain a connection with each app and the associated mobile device registered to use that service. When an app and mobile device register, the push notification service returns a device token. Amazon Simple Notification Service uses the device token to create a mobile endpoint, to which it can send direct push notification messages. In order for Amazon SNS to communicate with the different push notification services, you need to submit your push notification service credentials to Amazon SNS to be used on your behalf.

WAF stands for Web Application Firewall. AWS WAF helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules. AWS WAF is a pay-as-you-go service: you pay for the rules you deploy and for the number of web requests your web application receives.

The OSI model stands for the Open Systems Interconnection model. It is a standard that defines the layers of a communication system. There are seven layers in the model: the physical layer, the data link layer, the network layer, the transport layer, the session layer, the presentation layer, and the application layer. Each layer has its own set of responsibilities. Traffic starting at an upper layer, such as layer 7 (the application layer), would use layer 6 (the presentation layer) to communicate, and this continues all the way down to layer 1. In AWS, you have some level of control from layer 2 and up; most of that control is performed through AWS managed services. You should understand how each service corresponds to the OSI model and what you have control over versus what Amazon controls.
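A simple lookup table helps keep the seven layers straight when mapping AWS services to them; for example, an Application Load Balancer operates at layer 7 while a Network Load Balancer operates at layer 4:

```python
# The seven OSI layers, numbered from the physical layer up.
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
}

print(OSI_LAYERS[7])  # Application  (e.g., Application Load Balancer)
print(OSI_LAYERS[4])  # Transport    (e.g., Network Load Balancer)
```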
Network Address Translation, or NAT for short, is a method for placing all systems on a network behind a single IP address. Each system on the network has its own private IP address; externally, traffic originating from any of those systems appears to come from the same IP address. This is how a network that is assigned one IP address from an internet service provider can have multiple systems connected to internet resources without each needing to be assigned its own public IP address. NAT is a common service and is fully available to VPCs in AWS.

Routing tables are a collection of rules that specify how internet protocol traffic should be directed to reach endpoints. A common route in a routing table will direct all traffic headed outside of your network through a router; this is how a system can reach websites. Another route might direct all traffic in a certain range to another network over a virtual private network connection. AWS lets you manage your own routing tables for your VPC.

An access control list, commonly referred to as an ACL, defines permissions that are attached to an object. In the world of AWS, you can attach network ACLs to subnets, which will grant or deny protocols to and from various endpoints. ACLs can also be attached to S3 buckets to control access to the objects they contain. ACLs are crucial to understanding how to properly secure your environment.

Firewalls are systems, either software or hardware, that control incoming and outgoing network traffic. You manage a set of rules to permit or block traffic based on endpoints and protocols. AWS implements this via security groups, which can be attached to one or more EC2 instances, to Elastic Load Balancers, and more. Security groups are part of the first line of defense in securing your environment.

A load balancer works to distribute traffic across a number of servers. It can be a physical or virtual resource.
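One of the simplest distribution algorithms a load balancer may use is round robin, where each incoming request is handed to the next server in rotation. A sketch (the server names are placeholders):

```python
from itertools import cycle

# Round-robin distribution: requests rotate through the server pool,
# evening out the load over time.
servers = ["server-a", "server-b", "server-c"]    # placeholder names
rotation = cycle(servers)

assignments = [next(rotation) for _ in range(6)]  # six incoming requests
print(assignments)
# ['server-a', 'server-b', 'server-c', 'server-a', 'server-b', 'server-c']
```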
Traffic is directed to registered servers based on algorithms that typically seek an even load, such as a round-robin style distribution. The client may be directed to a different server on each request. Sticky sessions allow a client to stay with a single server for the lifetime of its session. Features such as server health checks ensure traffic stops being sent to a server that does not respond within defined thresholds.

BGP, or Border Gateway Protocol, is a standardized exterior gateway protocol designed to exchange routing and reachability information among autonomous systems on the internet. With AWS, BGP runs inside an IPsec tunnel, between the inside tunnel IP addresses, to exchange routes between the VPC and your on-site network. Each BGP router has an autonomous system number, or ASN; your ASN has to be provided to AWS when the customer gateway is created.

Okay, so that brings to a close our refresher on some of the terminology we need to be aware of for the Solution Architect Professional certification exam. We'll be going into each of these concepts in more detail during the other courses. Now let's refresh ourselves on some of the services that we need to be aware of in the next lesson.

About the Author
Andrew Larkin
Head of Content
Learning Paths

Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.
