In this introduction, we outline the content of the learning path, introduce common terminology you are required to know for the Solutions Architect Associate curriculum, and provide a high-level introduction to some of the AWS services relevant to the exam.
In this course, we'll review some of the terminology you are expected to know when taking the AWS Certified Solutions Architect exam. While most of these concepts are commonplace, a consistent vocabulary will help you understand the questions and scenarios.

The goal of any design should be a loosely coupled architecture, in which individual parts of the infrastructure have no knowledge of how the other parts work. They communicate through well-defined services, such as Amazon Simple Workflow Service or Amazon Simple Queue Service. Ideally, you can replace any part of a system that provides a service with another component or system that provides a similar service.

A stateless system is exactly what it sounds like: a system that does not store any state. The output of the system depends solely on its inputs. Protocols like UDP are stateless, meaning you can send packets that stand on their own and don't require the results of a previous packet in order to succeed.

Fault tolerance is what enables a system to continue running despite a failure.
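The stateless-versus-stateful distinction above is easy to demonstrate in a few lines of code. A minimal sketch (the function and class names are hypothetical): a stateless handler's output depends only on its inputs, so any instance can serve any request, while a stateful one ties each request to one instance's history.

```python
# A stateless handler: output depends only on the input, so any
# instance of it can serve any request interchangeably.
def stateless_tax(amount, rate):
    return round(amount * rate, 2)

# A stateful counterpart: the result depends on prior calls, so
# requests are tied to this particular instance's history.
class StatefulCounter:
    def __init__(self):
        self.total = 0.0

    def add(self, amount):
        self.total += amount
        return self.total

# Two "servers" running the stateless function always agree...
assert stateless_tax(100.0, 0.1) == stateless_tax(100.0, 0.1)

# ...while two stateful instances diverge as soon as their histories differ.
a, b = StatefulCounter(), StatefulCounter()
a.add(5.0)
print(a.add(1.0), b.add(1.0))  # → 6.0 1.0
```

This is why stateless systems scale horizontally so easily: a load balancer can send any request to any server without caring what came before.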
This can be a failure of one or more components of the system, or perhaps of a third-party service. In the realm of AWS, this could mean operating your system in multiple Availability Zones: if an Availability Zone outage occurs, your system continues operating in the other Availability Zones. The goal of fault tolerance is to be completely transparent to users, with no loss of data or functionality.

High availability means having little or no downtime in your systems. The gold standard in high availability is the five nines, or 99.999%, which equates to less than five and a half minutes of downtime per year. Not every system has to be built to that gold standard; the availability goals depend on the purpose of the system, as well as the operating budget.

Your data is at rest when it is stored on some sort of storage medium, such as an Amazon EBS volume, an Amazon S3 bucket, or a database. You will most likely hear "data at rest" in reference to encryption. Your data is in transit when it is being transferred from one machine to another. HTTP traffic is a classic example of data in this state; you'll hear this term used mostly in discussions on how to secure your data during transport.

You'll often hear vertical scaling referred to as scaling up. Scaling up means increasing the capacity of a single resource. For example, adding memory to a server to increase the number of processes it can run is vertical scaling. In the world of AWS, this could take the form of upgrading to a larger instance type. Horizontal scaling, also known as scaling out, involves adding more physical or virtual resources. Scaling horizontally in AWS is exactly what a service like Auto Scaling does: it adds additional servers based on resource utilization, which may spike at a certain time of day or, say, during a major event.
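The downtime figures for availability targets like the five nines mentioned above fall out of simple arithmetic, as this small sketch shows (using a non-leap 365-day year):

```python
# Maximum downtime allowed per year at a given availability level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def max_downtime_minutes(availability):
    return (1 - availability) * MINUTES_PER_YEAR

print(f"{max_downtime_minutes(0.99999):.2f}")  # five nines  → 5.26 minutes/year
print(f"{max_downtime_minutes(0.999):.1f}")    # three nines → 525.6 minutes/year
```

Five nines allows roughly 5.26 minutes of downtime per year, matching the "less than five and a half minutes" figure; each nine you drop multiplies the allowance by ten, which is why the budget and purpose of a system drive its availability goal.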
Content delivery networks, or CDNs, replicate your content to points of presence, or servers, around the world to improve performance and availability by being closer to the end user's location. AWS offers a CDN service called Amazon CloudFront, which has edge locations in multiple global locations.

A network perimeter is a boundary between two or more portions of a network. It can refer to the boundary between your VPC and your on-premises network, or to the boundary between what you manage and what AWS manages. The important part is to understand where these boundaries are located and who has responsibility for each segment.

Synchronous processing refers to processes that wait for a response after making a request. A synchronous process will block itself from other activities until either the response is received or a predefined timeout occurs. An asynchronous process will make the request and immediately begin processing other requests.
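The synchronous-versus-asynchronous distinction can be sketched with Python's standard thread pool. Here `slow_service` is a hypothetical stand-in for any remote call that takes time to respond:

```python
import concurrent.futures
import time

def slow_service(x):
    """Stand-in for a remote call that takes a while to respond."""
    time.sleep(0.2)
    return x * 2

# Synchronous: the caller blocks until the response arrives.
start = time.monotonic()
result = slow_service(21)
sync_elapsed = time.monotonic() - start          # ~0.2 seconds of waiting

# Asynchronous: submit the request, keep doing other work,
# and collect the response only when it is actually needed.
start = time.monotonic()
with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(slow_service, 21)
    other_work = sum(range(1000))                # proceeds immediately
    submitted_after = time.monotonic() - start   # far less than 0.2 seconds
    result_async = future.result()               # block only here

assert result == result_async == 42
assert submitted_after < sync_elapsed            # other work ran before the reply
```

The synchronous caller spends the whole service time blocked; the asynchronous caller gets that time back for other requests, which is the property queue-based AWS services like SQS are built around.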
OSI stands for the Open Systems Interconnection model, a standard that defines the layers of a communications system. There are seven layers in the model: the physical layer, the data link layer, the network layer, the transport layer, the session layer, the presentation layer, and the application layer. Each layer has its own set of responsibilities. Traffic starting at an upper layer, such as L7, the application layer, uses L6, the presentation layer, to communicate, and this continues all the way down to Layer 1, the physical layer. In AWS, you have some level of control from L2 and up, and most of that control is exercised through AWS managed services. You should understand how each service corresponds to the OSI model, and what you control versus what Amazon controls.

Network Address Translation, or NAT for short, is a method for placing all systems on a network behind a single IP address. Each system on the network has its own private IP address; externally, traffic originating from any of those systems appears to come from the same public IP address. This is how a network assigned a single IP address by an Internet Service Provider can have multiple systems connected to internet resources without each needing its own public IP address. NAT is a common service and is fully available to VPCs in AWS.

Routing tables are a collection of rules that specify how Internet Protocol traffic should be directed to its endpoints. A common route in a routing table directs all traffic headed outside of your network through a router; this is how a system can reach websites. Another route might direct all traffic in a certain range to another network over a virtual private network connection. AWS lets you manage your own routing tables for your VPC.

An access control list, commonly referred to as an ACL, defines permissions that are attached to an object.
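A route table lookup like the ones described above works by longest-prefix match: of all the routes whose range contains the destination, the most specific one wins. A toy sketch using Python's standard `ipaddress` module; the CIDR ranges and target names (`igw-example`, `vgw-example`) are illustrative placeholders, not real AWS identifiers:

```python
import ipaddress

# A toy VPC-style route table: destination CIDR -> target.
routes = {
    "10.0.0.0/16":   "local",        # traffic staying within the VPC
    "172.16.0.0/12": "vgw-example",  # on-premises range via a VPN gateway
    "0.0.0.0/0":     "igw-example",  # everything else via an internet gateway
}

def lookup(dest_ip):
    """Return the target of the most specific (longest-prefix) matching route."""
    dest = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in routes.items():
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1]

print(lookup("10.0.1.5"))      # → local       (10.0.0.0/16 beats 0.0.0.0/0)
print(lookup("172.16.9.9"))    # → vgw-example (sent over the VPN)
print(lookup("203.0.113.7"))   # → igw-example (default route to the internet)
```

Note how 10.0.1.5 matches both 10.0.0.0/16 and 0.0.0.0/0, but the /16 route wins because its prefix is longer; that is the rule AWS route tables also follow.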
In the world of AWS, you can attach network ACLs to subnets, which will allow or deny protocols to and from various endpoints. ACLs can also be attached to S3 buckets to control access to the objects they contain. ACLs are crucial to understanding how to properly secure your environment.

Firewalls are the software or hardware that control incoming and outgoing network traffic. You manage a set of rules to permit or block traffic based on endpoints and protocols. AWS implements this via security groups, which can be attached to one or more EC2 instances, to Elastic Load Balancers, and more. Security groups are part of the first line of defense in securing your environment.

A load balancer distributes traffic across a number of servers. It can be a physical or virtual resource. Traffic is directed to registered servers based on algorithms that typically seek an even load, or a round-robin style distribution, so the client may be directed to a different server on each request. Sticky sessions allow a client to stay with a single server for the lifetime of its session. Features such as server health checks ensure traffic stops being sent to a server that does not respond within defined thresholds.

BGP, or Border Gateway Protocol, is a standardized exterior gateway protocol designed to exchange routing and reachability information among autonomous systems on the internet. BGP is used within an IPsec tunnel, between the inside IP addresses, to exchange routes between the VPC and your onsite network. Each BGP router has an autonomous system number, or ASN, which you must provide to AWS when the customer gateway is created.

Okay, that concludes our terminology lecture. Let's move into learning a little bit more about the compute fundamentals of AWS.
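Round-robin distribution, sticky sessions, and health checks can all be illustrated in a toy balancer. This is a sketch, not how any real load balancer is implemented; the class and server names are hypothetical, and it assumes at least one server stays healthy:

```python
import itertools

class RoundRobinBalancer:
    """Toy balancer: round-robin by default, sticky when a session id is given."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)  # round-robin rotation
        self._sticky = {}                       # session id -> pinned server
        self._healthy = set(servers)

    def mark_unhealthy(self, server):
        # A health check would drive this; unhealthy servers stop getting traffic.
        self._healthy.discard(server)

    def route(self, session_id=None):
        # Sticky session: keep the client on its pinned server while it is healthy.
        if session_id in self._sticky and self._sticky[session_id] in self._healthy:
            return self._sticky[session_id]
        # Otherwise take the next healthy server in the rotation.
        server = next(s for s in self._cycle if s in self._healthy)
        if session_id is not None:
            self._sticky[session_id] = server   # pin (or re-pin) the session
        return server

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
print(lb.route(), lb.route())        # → web-1 web-2  (round-robin)
print(lb.route(session_id="alice"))  # → web-3        (pinned to this session)
print(lb.route(session_id="alice"))  # → web-3        (sticky, no rotation)
```

If `web-3` were then marked unhealthy, the next request for "alice" would fall through to the rotation and be re-pinned to a healthy server, mirroring how health checks and stickiness interact in practice.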
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.