‘Migrate and Modernize’ - Designing your solutions on AWS (Part 2)



Domain One of the AWS Solutions Architect Associate exam guide (SAA-C03) requires us to be able to design a multi-tier architecture solution, so that is our topic for this section.
We cover the need-to-know aspects of designing multi-tier solutions using AWS services.


Learning Objectives

  • Learn some of the essential services for creating multi-tier architectures on AWS, including the Simple Queue Service (SQS) and the Simple Notification Service (SNS)
  • Understand data streaming and how Amazon Kinesis can be used to stream data
  • Learn how to design a multi-tier solution on AWS, and the important aspects to take into consideration when doing so
  • Learn how to design cost-optimized AWS architectures
  • Understand how to leverage AWS services to migrate applications and databases to the AWS Cloud

If you’ve been working with different AWS storage services for any length of time, then you may have already come across this service. AWS DataSync is a service that allows you to easily and securely transfer data from your on-premises data center to AWS storage services.  It can also be used to manage data transfer between two different AWS storage services, so it’s a great service to help you migrate, replicate, and move data between different storage locations.

At the time of writing this course, AWS DataSync supports the ability to work with data stored on Network File System (NFS) shares, Server Message Block (SMB) shares, and any self-managed object storage, in addition to the following AWS services:

  • Amazon S3

  • Amazon Elastic File System

  • Amazon FSx for Windows File Server

  • AWS Snowcone
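To make this concrete, here is a minimal sketch of the request parameters you might assemble for DataSync's CreateTask API (for example, via boto3's `datasync` client). The ARNs and task name below are hypothetical placeholders, and the options shown are assumptions for illustration, not a complete configuration.

```python
# Build a CreateTask request that transfers data between two pre-registered
# DataSync locations (e.g. an on-premises NFS share and an S3 bucket).
# All ARNs are hypothetical placeholders.

def build_datasync_task(source_location_arn: str,
                        destination_location_arn: str,
                        task_name: str) -> dict:
    """Assemble the parameter dict for a DataSync CreateTask call."""
    return {
        "SourceLocationArn": source_location_arn,
        "DestinationLocationArn": destination_location_arn,
        "Name": task_name,
        "Options": {
            # Verify every transferred file against its source after the run.
            "VerifyMode": "POINT_IN_TIME_CONSISTENT",
            # Only copy files that have changed since the last task execution.
            "TransferMode": "CHANGED",
        },
    }

params = build_datasync_task(
    "arn:aws:datasync:us-east-1:111122223333:location/loc-source",
    "arn:aws:datasync:us-east-1:111122223333:location/loc-dest",
    "onprem-to-s3-migration",
)
# With boto3 you would then call:
#   boto3.client("datasync").create_task(**params)
print(params["Name"])
```

The actual transfer is then triggered by starting a task execution against the created task.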

When performing data transfer operations, whether from on-premises or between AWS storage services, DataSync supports AWS VPC endpoints, so it’s able to utilize the high-bandwidth, low-latency AWS network to its advantage. This helps both to simplify the management of the request and to automate your data transfer across secure infrastructure.  For more information on AWS endpoints, please see our AWS Networking lecture found here.

With data transfer speed being a key factor for a data transfer service, AWS DataSync comes with its own purpose-built data transfer network protocol, in addition to a parallel, multithreaded architecture, to perform data transfer rapidly. This means that each DataSync task has the potential of utilizing 10 Gbps over a network link between your own on-premises data center and your AWS environment.
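To get a feel for what a 10 Gbps link means in practice, a back-of-the-envelope estimate helps; the sketch below ignores protocol overhead and network contention, so treat the result as an optimistic lower bound.

```python
# Rough transfer-time estimate for a single DataSync task saturating a
# 10 Gbps link (ignores protocol overhead and contention).

def transfer_hours(data_tb: float, link_gbps: float = 10.0) -> float:
    """Hours to move `data_tb` terabytes over a `link_gbps` gigabit/s link."""
    bits = data_tb * 1e12 * 8            # terabytes -> bits
    seconds = bits / (link_gbps * 1e9)   # bits / (bits per second)
    return seconds / 3600

print(round(transfer_hours(10), 2))  # 10 TB at 10 Gbps → 2.22 (hours)
```

In other words, a fully utilized 10 Gbps task could move roughly 10 TB in a little over two hours.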

Obviously, when working with data, security is a key concern.  As a result, AWS DataSync provides two mechanisms for end-to-end security: encryption and data validation.

From an encryption perspective, encryption in transit is implemented by encrypting the data using the Transport Layer Security (TLS) protocol. When data reaches an AWS service, DataSync also supports the encryption-at-rest mechanisms offered by EFS and FSx for Windows File Server, in addition to the default encryption-at-rest option for Amazon S3.

The second mechanism, data validation, ensures that your data arrives at its destination in one piece, exactly as it was when it left the source, confirming that it wasn’t compromised or damaged in any way during transit. This additional check validates that the data written to the AWS storage service is a perfect match for the data at its source location.
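The principle behind this validation can be illustrated with a simple local sketch: hash the data at the source, hash what arrives at the destination, and confirm the two digests match. DataSync performs this kind of verification for you; the snippet below just demonstrates the idea with SHA-256.

```python
# Illustration of checksum-based data validation: if even one byte changes
# in transit, the source and destination digests will not match.
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

source = b"customer-records-2024.csv contents"
transferred = bytes(source)  # what arrived at the destination

assert digest(source) == digest(transferred), "data was altered in transit!"
print("validation passed")
```

A corrupted or tampered transfer would produce a different digest and fail the check.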

The AWS Transfer Family is specifically designed to help you securely transfer data both into and out of two of the most commonly used AWS storage services: Amazon S3 and the Elastic File System (EFS).  To learn more about Amazon S3 and Amazon EFS, please refer to the following content.

This file transfer can be completed using one of four supported protocols:

  1. SSH File Transfer Protocol, more commonly known as SFTP, providing encryption over SSH

  2. File Transfer Protocol Secure, referred to as FTPS, which uses TLS encryption

  3. File Transfer Protocol, FTP, which is an unencrypted connection

  4. Applicability Statement 2, known as AS2
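As a hedged sketch of how these protocols come into play, here are the kinds of parameters you might pass to the Transfer Family CreateServer API (boto3's `transfer` client). Only SFTP is enabled here, with service-managed users; all values are illustrative assumptions rather than a recommended configuration.

```python
# Parameter dict for a Transfer Family CreateServer call: an SFTP-only,
# publicly reachable server in front of Amazon S3.

server_params = {
    "Protocols": ["SFTP"],                      # could also include FTPS, FTP, or AS2
    "IdentityProviderType": "SERVICE_MANAGED",  # users are stored within the service
    "EndpointType": "PUBLIC",
    "Domain": "S3",                             # destination storage: "S3" or "EFS"
}
# With boto3 you would then call:
#   boto3.client("transfer").create_server(**server_params)
print(sorted(server_params))
```

Choosing SFTP-only avoids ever accepting an unencrypted FTP connection, which is usually the safer default.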

Being a fully managed service, this transfer of data is enabled without you having to provision any of your own server infrastructure; instead, the AWS Transfer Family utilizes its own file transfer protocol-enabled instances, simplifying the process and reducing the need to alter the configuration of your applications.  It is also a highly available service, operating in up to three different Availability Zones, backed by auto scaling to ensure your transfer requests are met without issue.

The Transfer Family utilizes Managed File Transfer Workflows (MFTW), which enable you to configure, run, and automate your file transfers, and to track the progress of each transfer from beginning to end. By utilizing MFTW you can configure specific processing actions to run on your data, such as tagging, copying, enabling encryption, and filtering, plus other common file-processing actions.
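A workflow is defined as an ordered list of steps. The sketch below assembles two such steps in the shape the CreateWorkflow API expects (a TAG step followed by a COPY step); the bucket name and key prefix are hypothetical placeholders.

```python
# Two workflow steps the Transfer Family could run on an uploaded file:
# first tag it, then copy it into a staging prefix in another bucket.

workflow_steps = [
    {   # tag the uploaded file so downstream jobs can find it
        "Type": "TAG",
        "TagStepDetails": {
            "Name": "tag-incoming",
            "Tags": [{"Key": "status", "Value": "unprocessed"}],
        },
    },
    {   # copy it into a processing prefix in a hypothetical staging bucket
        "Type": "COPY",
        "CopyStepDetails": {
            "Name": "copy-to-staging",
            "DestinationFileLocation": {
                "S3FileLocation": {"Bucket": "staging-bucket", "Key": "incoming/"}
            },
        },
    },
]
# With boto3 you would then call:
#   boto3.client("transfer").create_workflow(Description="...", Steps=workflow_steps)
print([step["Type"] for step in workflow_steps])
```

The workflow is then attached to a Transfer Family server so it runs for each matching transfer.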

To begin using the AWS Transfer family there are a number of steps to take.  

  • Of course, you must first have your destination storage configured and available, which will either be an S3 bucket, or an EFS File System.  

  • Next, through the use of IAM roles, you must grant the AWS Transfer Family the required permissions to the storage destination.

  • Once your storage destination is set up with the appropriate permissions applied, you can then configure a Transfer Family server using one of the supported protocols to manage your file transfers to that storage destination.  As a part of this configuration, you can also enable optional CloudWatch logging to help you monitor your transfer requests.

  • You must then add a user to the transfer server and associate it with the role you created previously, allowing access to your S3 bucket or Elastic File System.

  • You are then ready to complete the transfer using a client such as OpenSSH, WinSCP, Cyberduck, or FileZilla.  For additional information on how to do this for each client, please visit the following URL.
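The permission and user steps above can be sketched end to end. The trust policy lets the Transfer Family service assume your access role, and the user parameters attach that role to an SFTP user on the server. The server ID, role ARN, user name, and bucket path are all hypothetical placeholders.

```python
# Trust policy for the IAM role the Transfer Family assumes on a user's
# behalf, plus the CreateUser parameters that bind that role to a user.
import json

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "transfer.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

user_params = {
    "ServerId": "s-1234567890abcdef0",  # returned by create_server
    "UserName": "analytics-uploader",
    "Role": "arn:aws:iam::111122223333:role/transfer-access-role",
    "HomeDirectory": "/my-transfer-bucket/uploads",  # bucket path the user lands in
}
# With boto3 you would then call:
#   boto3.client("transfer").create_user(**user_params)
print(json.dumps(trust_policy["Statement"][0]["Principal"]))
```

With the user in place, a client such as FileZilla can connect to the server endpoint and transfer files straight into the configured home directory.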

About the Author

Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built  70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+  years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.