Design a Multi-Tier Solution
Domain One of the AWS Solutions Architect Associate exam guide (SAA-C02) requires us to be able to design a multi-tier architecture solution, so that is our topic for this course.
The objective of this course is to prepare you for answering questions related to this domain. We’ll cover the need-to-know aspects of how to design multi-tier solutions using AWS services.
By the end of this course, you will be well prepared for answering questions related to Domain One in the Solution Architect Associate exam.
You need to be familiar with a number of technology stacks that are common to multi-tier solution design for the Associate certification: LAMP, MEAN, serverless, and microservices are all relevant patterns to know for the exam.
What is Multi-Tier Architecture?
A business application generally needs three things. It needs something to interact with users, often called the presentation tier; it needs something to process those interactions, often called the logic or application tier; and it generally needs somewhere to store the data from that logic and those interactions, commonly called the data tier.
When Should You Consider a Multi-Tier Design?
The key thing to remember is that the benefit of multi-tier architecture is that the tiers are decoupled, which enables each tier to be scaled up or down independently to meet demand. This ability to handle burst activity is a major benefit of building applications in the cloud.
When Should We Consider Single-Tier Design?
Single tier generally implies that all your application services run on one machine or instance. A single-tier deployment is generally cost-effective and easy to manage, but speed and cost are about the only benefits. Single tier suits development or test environments where small teams need to build and test quickly.
Design a Multi-Tier Solution
First we review the design of a multi-tier architecture pattern using instances and Elastic Load Balancers. Then we’ll review how we could create a similar solution using serverless services or a full microservices design.
AWS services we use:
The Virtual Private Cloud
Subnets and Availability Zones
Elastic Load Balancers
Security groups and NACLs
AWS WAF and AWS Shield
Amazon API Gateway
AWS Secrets Manager
We review sample exam questions to apply and solidify our knowledge.
Review of the content covered to help you prepare for the exam.
Okay, let's look at serverless architecture patterns. Our previous architecture was based on instances, with a fleet of instances running in an Auto Scaling group. Another architectural design pattern we can deploy is a serverless design using AWS Lambda and Amazon API Gateway. Now by serverless we mean managed computing: AWS Lambda provides compute resources as a service, i.e. you don't need to provision an instance, you don't need to create Auto Scaling groups or define auto-scaling rules, and you don't even need to install code interpreters with AWS Lambda. That's all taken care of for you. Now the logic tier of our three-tier architecture usually represents the brains of the application, i.e. that's where the computing is done, so to speak. So the logic tier is where using Amazon API Gateway and AWS Lambda can provide the most benefit compared to server-based implementations. Because Lambda and API Gateway are managed services, scaling is done automatically; you don't need to provision hardware or software, or scale vertically or horizontally. The scaling and much of the security are taken care of for you by AWS. In short, using these two services makes it really easy to build highly available, scalable and secure solutions. So let's look at how we do this. If we use AWS Lambda instead of provisioning EC2 instances, there are no operating systems to choose, secure, patch or manage. We don't have to size, monitor or scale the instances at all, and we don't need to worry about over-provisioning or under-provisioning those instances.
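To make the logic tier concrete, here is a minimal sketch of a Lambda handler behind an API Gateway proxy integration. The event shape follows the standard proxy integration; the greeting logic and field names in the response body are illustrative only.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    API Gateway passes the HTTP request to Lambda as the `event` dict;
    the handler returns a dict that API Gateway maps back to an HTTP
    response (status code, headers, body).
    """
    # Query-string parameters may be absent, so default to an empty dict.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

There is no server to size or patch here: AWS runs this function on demand and scales the number of concurrent executions automatically.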
Now if we use API Gateway to manage communication between code functions and services, that again simplifies how we deploy, monitor and secure our APIs. Both services drastically reduce the amount of infrastructure management we have to do. Deploying code on AWS Lambda means you don't have to define multiple Availability Zones; as a managed service, where the code runs is left to AWS. However, in some designs you will still need to use a VPC with public and private subnets.
So using AWS Lambda for your logic tier means it integrates directly with your AWS data tier. You need to ensure that the data tier is appropriately isolated in a private subnet. For your Lambda function to access resources that you don't want made public, say a private database instance, you can place the function inside the VPC and configure an elastic network interface, or ENI, to access your internal resources. Running Lambda in the VPC means that databases and other storage media your business logic depends on can remain inaccessible from the internet. The VPC also ensures that the only way to interact with your data from the internet is through the APIs you've defined and the Lambda functions you've written. Using Lambda as your logic tier doesn't limit the data storage options available in your data tier. Plus we get improved API performance via caching and content delivery, which means we don't need to create, manage and pay for Elastic Load Balancers between our tiers. Okay, big saving there. In a serverless multi-tier architecture, each API you create is integrated with a Lambda function that executes our business logic. The code entry points in AWS Lambda are called handlers, and you can configure API Gateway to trigger handler functions. So those two are tightly integrated, and generally it is one Lambda function per API, or one Lambda function per API method.
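Attaching a function to a VPC is a one-time configuration step. A sketch of it with boto3 might look like the following; the function name, subnet IDs and security-group ID are all hypothetical placeholders, and running this requires AWS credentials with the appropriate permissions.

```python
import boto3  # AWS SDK for Python

lambda_client = boto3.client("lambda")

# Attach an existing function to private subnets so its ENI can reach a
# private database instance. IDs below are placeholders for illustration.
lambda_client.update_function_configuration(
    FunctionName="tickets-handler",  # hypothetical function name
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],  # private subnets
        "SecurityGroupIds": ["sg-0a1b2c3d"],
    },
)
```

Once attached, the function can only reach the internet through a NAT gateway in the VPC, which is exactly the isolation property we want for the data tier.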
When a handler is triggered by an event, say another function completes or an HTTPS request is made to an API Gateway listener, that handler runs. This design enables you to be more granular in how you expose your application functionality. Inside the Lambda function, the handler can reach out to any of the other dependencies you have: for example, other methods you've uploaded in your code, native binaries, external web services, other libraries, or even other Lambda functions. Each Lambda function assumes an IAM role that is assigned when the function is first deployed. The IAM role defines the other AWS services and resources your Lambda function can interact with; that could be Amazon S3, or a DynamoDB table, for example. Design-wise, you need to include services like AWS Key Management Service (AWS KMS) to encrypt environment variables, and you need to consider using services like AWS Secrets Manager to keep credentials or API keys safe when they're not being used. One rule of thumb: do not store sensitive information inside a Lambda function. Our presentation layer is a static website where our content is hosted in an Amazon S3 bucket. Again we have content distributed by Amazon CloudFront. However, in this design we have implemented the AWS Certificate Manager service so that we can use a custom SSL/TLS certificate.
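The "don't store secrets in the function" rule usually means fetching credentials from AWS Secrets Manager at runtime. A minimal sketch, assuming the secret is stored as a JSON string; the client is passed in as a parameter (so the helper can be exercised without live AWS access), and the secret name used in the comment is a hypothetical example.

```python
import json

def get_database_credentials(secrets_client, secret_id):
    """Fetch credentials from AWS Secrets Manager instead of hard-coding
    them in the Lambda deployment package or environment.

    `secrets_client` is a boto3 Secrets Manager client, injected here so
    the helper is testable; `secret_id` is the secret's name or ARN.
    """
    response = secrets_client.get_secret_value(SecretId=secret_id)
    # Secrets Manager returns the secret as a string; we assume JSON here.
    return json.loads(response["SecretString"])

# In a real handler, create the boto3 client once outside the handler so
# it is reused across warm invocations, e.g.:
#   import boto3
#   secrets = boto3.client("secretsmanager")
#   creds = get_database_credentials(secrets, "prod/tickets-db")  # hypothetical name
```

The function's IAM role then only needs `secretsmanager:GetSecretValue` on that one secret, keeping permissions as narrow as the design requires.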
Now our logic layer is serverless, so we have Amazon API Gateway exposing three services: /weddings, /tickets and /info. The API Gateway endpoints are secured using a custom authorizer, so users can sign in using a third-party identity provider like Google or Facebook, which provides the user with an ID token. The token is then included in the API Gateway call, and our custom authorizer validates these tokens and generates an IAM policy containing API execution permissions. We then have AWS Lambda functions executing our logic, and each Lambda function is assigned its own IAM role to provide access to the appropriate data source. Now in our data tier, one of the benefits of using serverless functions is that our logic tier is tightly integrated with the AWS data services. In our design we are using Amazon S3 to host static content used by the /info service. We also have Amazon DynamoDB as our persistent data store for the /tickets and /weddings services. And we are using the Amazon ElastiCache service as a non-persistent data cache in front of DynamoDB for the /weddings service. Remember, Amazon ElastiCache improves our database performance: if the ElastiCache cache doesn't hold the data needed by the HTTP request, this is considered a cache miss, and the request is sent through to DynamoDB. Okay, so that's using serverless functions in our logic tier.
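The custom authorizer described above is itself a Lambda function: it receives the caller's token and must return an IAM policy that allows or denies invoking the API. Here is a sketch; `validate_token` is a hypothetical placeholder for real ID-token verification against your identity provider, not a production check.

```python
def generate_policy(principal_id, effect, method_arn):
    """Build the IAM policy document an API Gateway custom (Lambda)
    authorizer must return after validating the caller's ID token."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,          # "Allow" or "Deny"
                "Resource": method_arn,    # the API method being invoked
            }],
        },
    }

def validate_token(token):
    # Placeholder only: a real implementation verifies the JWT signature,
    # issuer, audience and expiry against the identity provider.
    return token.startswith("Bearer ")

def authorizer_handler(event, context):
    """Sketch of a TOKEN authorizer for API Gateway."""
    token = event.get("authorizationToken", "")
    effect = "Allow" if validate_token(token) else "Deny"
    return generate_policy("user", effect, event["methodArn"])
```

API Gateway caches the returned policy for a configurable TTL, so the authorizer doesn't run on every single request.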
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.