SAA-C03 Introduction
Decoupled Architecture
AWS Step Functions
Which services should I use to build a decoupled architecture?
Streaming Data
Mobile Apps
AWS Machine Learning Services
Design a Multi-Tier Solution
When To Go Serverless
Design considerations
AWS Migration Services
SAA-C03 Review
Domain One of the AWS Solutions Architect Associate exam guide (SAA-C03) requires us to be able to design a multi-tier architecture solution, so that is our topic for this section.
We cover the need-to-know aspects of how to design multi-tier solutions using AWS services.
Learning Objectives
- Learn some of the essential services for creating multi-tier architectures on AWS, including the Simple Queue Service (SQS) and the Simple Notification Service (SNS)
- Understand data streaming and how Amazon Kinesis can be used to stream data
- Learn how to design a multi-tier solution on AWS, and the important aspects to take into consideration when doing so
- Learn how to design cost-optimized AWS architectures
- Understand how to leverage AWS services to migrate applications and databases to the AWS Cloud
When talking about the AWS Snow Family, I want to answer two simple questions:
- What is the Snow Family?
- What does it consist of?
So firstly, what is it? The Snow Family consists of a range of physical hardware devices that are all designed to enable you to transfer data into AWS from the edge or beyond the cloud, such as your data center. They can also be used to transfer data out of AWS, for example from Amazon S3 back to your data center.
It’s unusual when working with the cloud to be talking about physical devices or components; normally, your interactions and operations with AWS happen programmatically via a browser or the command line interface. The Snow Family is different: instead, you are sent a piece of hardware packed with storage and compute capabilities to perform the required data transfer outside of AWS. When the transfer is complete, the device is sent back to AWS for processing, and the data is uploaded to Amazon S3.
You can perform data transfers from as little as a few terabytes using an AWS Snowcone all the way up to a staggering 100 petabytes using a single AWS Snowmobile. Of course, when we are talking about migrating and transferring data at this magnitude, using traditional network connectivity is sometimes simply not feasible from a time perspective. For example, if you needed to transfer just 1 petabyte of data over a 1 Gbps Direct Connect link, it would take 104 days, 5 hours, 59 minutes, and 59.25 seconds, not forgetting the cost of the data transfer fees too!
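You can verify that transfer-time figure yourself. A minimal sketch, assuming 1 PB means 2^50 bytes (the pebibyte convention the figure above uses) and a link running at full line rate with no protocol overhead; real-world transfers would be slower still:

```python
# Estimate how long it takes to push 1 PB over a 1 Gbps link.
PETABYTE_BYTES = 2**50          # 1 PiB, the convention behind the 104-day figure
LINK_BITS_PER_SECOND = 1e9      # 1 Gbps

seconds = PETABYTE_BYTES * 8 / LINK_BITS_PER_SECOND

days, rem = divmod(seconds, 86_400)
hours, rem = divmod(rem, 3_600)
minutes, secs = divmod(rem, 60)

print(f"{int(days)} days, {int(hours)} hours, {int(minutes)} minutes, {secs:.2f} seconds")
# → 104 days, 5 hours, 59 minutes, 59.25 seconds
```

Any realistic utilization factor below 100% pushes the total well past four months, which is exactly why shipping a physical device can beat the network at this scale.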
In addition to packing some serious storage capacity for data transfer, some of these devices also come fitted with compute power, allowing you to run EC2 instances that have been designed for the Snow Family. This enables your applications to run in often remote and difficult-to-reach environments, even without a data center in sight, and where persistent network connectivity or power is lacking. For example, the Snowcone supports battery packs, increasing its versatility. The ability to run EC2 instances makes it possible to use these devices at the edge to process and analyze data much closer to the source.
So let’s now take a look at what the Snow Family consists of to get a better understanding of what these devices are.
As you can see from this table, both from a physical and a capacity perspective, the Snowcone is the smallest, followed by the Snowball, and finally the Snowmobile. You may also notice that the Snowball comes in three variants: compute optimized, compute optimized with GPU, and storage optimized. Each targets a different use case; however, all three come in the same size of device.
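To make the capacity comparison concrete, here is an illustrative sketch that estimates how many devices a migration would need. The capacity figures are assumptions based on typical values at the time of writing (exact numbers vary by device generation, so check the current AWS specifications before planning a real transfer):

```python
import math

# Approximate usable capacities in terabytes (assumed, not authoritative).
CAPACITY_TB = {
    "Snowcone": 8,                          # HDD model
    "Snowball Edge Storage Optimized": 80,
    "Snowmobile": 100_000,                  # 100 PB
}

def devices_needed(dataset_tb: float, device: str) -> int:
    """Return how many devices of the given type the dataset would fill."""
    return math.ceil(dataset_tb / CAPACITY_TB[device])

# A hypothetical 500 TB migration:
print(devices_needed(500, "Snowball Edge Storage Optimized"))  # → 7
print(devices_needed(500, "Snowmobile"))                       # → 1
```

The point of the exercise: a dataset that would occupy dozens of Snowcones fits comfortably in a handful of Snowballs, and anything in the petabyte range is Snowmobile territory.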
AWS Storage Gateway allows you to provide a gateway between your own data center's storage systems, such as your SAN, NAS, or DAS, and Amazon S3, Amazon S3 Glacier, and Amazon FSx on AWS.
The Storage Gateway itself is a software appliance that is hosted within your own data center and allows integration between your on-premises storage and AWS. This connectivity allows you to massively scale your storage requirements both securely and cost-efficiently. The software appliance can be downloaded from AWS as a virtual machine, which can then be installed on your VMware or Microsoft hypervisors.
Storage Gateway offers different configurations and options, allowing you to use the service to fit your needs. It offers file, volume, and tape gateway configurations, which you can use to help with your DR and data backup solutions.
File gateways allow you to securely store your files as objects using Amazon S3 File Gateway, or you can use Amazon FSx File Gateway, which provides access to in-cloud Amazon FSx for Windows File Server shares.
Using the S3 File Gateway allows you to map drives to an S3 bucket as if the share were held locally on your own corporate network. When files are stored using the file gateway, they are sent to S3 over HTTPS and encrypted with S3's own server-side encryption, SSE-S3.
In addition to this, a local on-premises cache is also provisioned for your most recently accessed files, which optimizes latency and helps to reduce egress traffic costs. When your file gateway is first configured, you must associate it with your S3 bucket, which the gateway will then present as an NFS v3 or v4.1 file system to your internal applications.
This allows you to view the bucket as a normal NFS file system, making it easy to mount as a drive on Linux or map a drive to it in Microsoft Windows. Any files that are then written to these NFS file systems are stored in S3 as individual objects, with a one-to-one mapping of files to objects.
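That one-to-one mapping means a file's path relative to the mount point becomes its object key in the bucket. A small sketch of the idea, using hypothetical paths (the mount point and file names are examples, not anything the gateway requires):

```python
from pathlib import PurePosixPath

def object_key_for(mount_point: str, file_path: str) -> str:
    """Map a file under the gateway mount to its S3 object key (1:1 mapping)."""
    return str(PurePosixPath(file_path).relative_to(mount_point))

# A file written to the NFS share...
print(object_key_for("/mnt/share", "/mnt/share/reports/q1.csv"))
# ...appears in the associated bucket as the object "reports/q1.csv"
```

This is why tools that already speak S3 can read the same data the file share wrote, and vice versa.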
Volume gateways can be configured in one of two ways: stored volume gateways and cached volume gateways. Let me explain stored volume gateways first.
Stored volume gateways are often used as a way to back up your local storage volumes to Amazon S3 while ensuring your entire data library also remains locally on-premises for very low-latency data access. Volumes created and configured within the storage gateway are backed by Amazon S3 and are mounted as iSCSI devices that your applications can then communicate with.
During volume creation, these are mapped to your on-premises storage devices, which can either hold existing data or be new disks. As data is written to these iSCSI devices, it is actually written to your local storage solution, such as your own NAS, SAN, or DAS. However, the storage gateway then asynchronously copies this data to Amazon S3 as EBS snapshots.
Having your entire dataset remain local ensures you have the lowest possible latency when accessing your data, which may be required for specific applications or for security, compliance, and governance controls, while at the same time providing a backup solution governed by the same controls and security that S3 offers. Stored volume gateways also provide a buffer, which uses your existing storage disks, as a staging point for data that is waiting to be written to S3.
Cached volume gateways are different from stored volume gateways in that the primary data storage is actually Amazon S3 rather than your own local storage solution. However, cached volume gateways do utilize your local data storage as a buffer and as a cache for recently accessed data to help maintain low latency, hence the name, cached volumes.
Again, during the creation of these volumes, they are presented as iSCSI volumes that can be mounted by your application servers. The volumes themselves are backed by the Amazon S3 infrastructure, as opposed to your local disks as seen in the stored volume gateway deployment. As part of this volume creation, you must also select some local on-premises disks to act as your local cache and as a buffer for data waiting to be uploaded to S3.
Although all of your primary data used by applications is stored in S3 across volumes, it is still possible to take incremental backups of these volumes as EBS snapshots.
The final option with AWS Storage Gateway is the tape gateway, known as Gateway VTL (Virtual Tape Library). This again allows you to back up your data to S3 from your own corporate data center, but also to leverage Amazon S3 Glacier for data archiving. The Virtual Tape Library is essentially a cloud-based tape backup solution, replacing physical components with virtual ones. This functionality allows you to use your existing tape backup application infrastructure with AWS, providing a more robust and secure backup and archiving solution.
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.