SAA-C03 Introduction
Decoupled Architecture
AWS Step Functions
Which services should I use to build a decoupled architecture?
Streaming Data
Mobile Apps
AWS Machine Learning Services
Design a Multi-Tier Solution
When To Go Serverless
Design considerations
AWS Migration Services
SAA-C03 Review
Domain One of the AWS Solutions Architect Associate exam guide (SAA-C03) requires us to be able to design a multi-tier architecture solution, so that is our topic for this section.
We cover the need-to-know aspects of how to design multi-tier solutions using AWS services.
Learning Objectives
- Learn some of the essential services for creating multi-tier architectures on AWS, including the Simple Queue Service (SQS) and the Simple Notification Service (SNS)
- Understand data streaming and how Amazon Kinesis can be used to stream data
- Learn how to design a multi-tier solution on AWS, and the important aspects to take into consideration when doing so
- Learn how to design cost-optimized AWS architectures
- Understand how to leverage AWS services to migrate applications and databases to the AWS Cloud
In this lecture, I want to cover some of the services that can be used to help you manage the migrate and modernize stage of your migration strategy, that is, the point at which you are actually moving your solutions and data into AWS. This lecture will be split into two sections: first, I shall look at the services that help you migrate your servers, databases, and applications, and then I shall focus on how to migrate data from your data centre to the AWS cloud.
The AWS Application Migration Service (MGN) is a great service to help you migrate your applications to AWS with minimal downtime and interruption, such as those running SAP, Oracle, and SQL Server. The Application Migration Service is the recommended solution for migrating your applications, and AWS suggests using it instead of CloudEndure.
The service works on a lift-and-shift basis, converting your existing physical and virtual machines to run natively on AWS infrastructure. An agent replicates your source servers to virtual instances in AWS while the source servers continue to run. This simplified approach helps keep costs to a minimum and simplifies the migration process.
Once your servers have been migrated, you can leverage the AWS cloud to further optimize your infrastructure through either refactoring or replatforming your applications.
To begin the migration, you will first need to configure a replication settings template, which defines how data replication will be managed from each source server. You will need to specify settings for a replication server, which is used to replicate data between your on-premises source servers and instances in AWS. These settings include the following (a short code sketch follows the list):
- The staging area subnet
- Instance type
- EBS volume type
- EBS encryption
- Security groups
- Data routing and throttling
- Resource tags
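To make these settings concrete, here is a minimal sketch of creating such a template with boto3's `mgn` client. The subnet, security group, and tag values are placeholders for illustration, and the exact parameter set should be checked against the boto3 documentation for your SDK version.

```python
import boto3

# A minimal sketch: creating an MGN replication settings template.
# All IDs below are placeholders; adjust the values to your environment.
mgn = boto3.client("mgn", region_name="us-east-1")

template = mgn.create_replication_configuration_template(
    stagingAreaSubnetId="subnet-0123456789abcdef0",                # staging area subnet
    replicationServerInstanceType="t3.small",                      # instance type
    defaultLargeStagingDiskType="GP3",                             # EBS volume type
    ebsEncryption="DEFAULT",                                       # EBS encryption
    replicationServersSecurityGroupsIDs=["sg-0123456789abcdef0"],  # security groups
    associateDefaultSecurityGroup=True,
    dataPlaneRouting="PRIVATE_IP",                                 # data routing
    bandwidthThrottling=0,                                         # throttling in Mbps (0 = unlimited)
    createPublicIP=False,
    useDedicatedReplicationServer=False,
    stagingAreaTags={"project": "migration"},                      # resource tags
)
print(template["replicationConfigurationTemplateID"])
```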
Once this template is created, you can then add your source servers, either Windows or Linux, which is done by installing the AWS Replication Agent. Once the agent is installed on your physical or virtual machines, each server will appear within the Application Migration Service, follow the migration workflow, and display its status at each stage of the migration.
This image shows how the agent integrates with the staging environment using your replication settings template, before the servers are tested and cut over to become fully migrated resources.
Launch settings within the service can be used to configure how each source server will be tested before being cut over to a migrated server. Here you will be able to configure settings such as the following (a short sketch follows the list):
- Instance type right-sizing
- Private IP addressing
- The transfer of server tags
- OS licensing
- Whether you want the instance to launch automatically once it's cut over, or to start it manually
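As a rough illustration of these launch settings, here is a minimal boto3 sketch that updates the launch configuration for a single source server. The source server ID is a placeholder for the ID MGN assigns once the agent registers.

```python
import boto3

# A minimal sketch: configuring how one MGN source server is launched.
mgn = boto3.client("mgn", region_name="us-east-1")

mgn.update_launch_configuration(
    sourceServerID="s-0123456789abcdef0",         # placeholder server ID
    targetInstanceTypeRightSizingMethod="BASIC",  # instance type right-sizing
    copyPrivateIp=True,                           # keep private IP addressing
    copyTags=True,                                # transfer server tags
    licensing={"osByol": False},                  # OS licensing (BYOL or AWS-provided)
    launchDisposition="STARTED",                  # launch automatically after cutover
)
```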
Of course, testing is an integral part of this service: you need to be sure that your application will run as expected before being cut over into your production environment. Testing can be managed on an individual server basis, or as a group of servers. Once your servers have been successfully tested, they can then be launched as cutover instances in your production environment based on the launch settings previously specified.
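In API terms, testing and cutover map onto two calls made against a list of source servers. A minimal sketch, with placeholder server IDs:

```python
import boto3

# A minimal sketch: test a group of servers, then cut them over.
mgn = boto3.client("mgn", region_name="us-east-1")

servers = ["s-0123456789abcdef0", "s-0fedcba9876543210"]  # placeholders

mgn.start_test(sourceServerIDs=servers)      # launch test instances
# ... validate the application on the test instances ...
mgn.start_cutover(sourceServerIDs=servers)   # launch cutover instances
```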
As you would probably expect from the name, the AWS Database Migration Service (DMS) has been designed to help you migrate your relational databases, NoSQL databases, and data warehouses from your own environment to AWS, with minimal downtime and security in mind. This is a very effective way to migrate your on-premises databases from an array of commercial and open-source engines.
The Database Migration Service is very flexible in its migration capabilities. You can migrate between the same database engines, such as Oracle to Oracle, but you can also migrate between different source and destination engines, making use of some of AWS's most cost-effective database solutions. An example of this would be migrating from an on-premises database such as SQL Server to Amazon Aurora, which is designed for very high performance and availability.
You may also have scenarios whereby you are looking to consolidate your database environments. Using the Database Migration Service, you can move multiple database workloads into a single Amazon Redshift environment, allowing you to scale to a petabyte-sized data warehouse and providing you with an opportunity to gain insights from and analyze your data.
If your source and target database engines are compatible with one another, then the migration process is simplified. If the databases share the same schema structure, data types, and code, then the operation becomes more of a single-step process, making the DMS service very efficient.
However, if the source and target database engines you have selected are different, perhaps because you want to make use of the capabilities of a different AWS database service, then additional steps will be required. For example, if the schemas of the source and target operate differently, you will need to perform a schema transformation before the migration can take place, using the AWS Schema Conversion Tool (AWS SCT).
When running a migration, you must specify your source and destination database targets. The DMS service will then create a replication instance, which is effectively an EC2 instance running one or more replication tasks. Endpoints are also created to connect to your source datastore and your destination target datastore. Replication tasks are then run from your replication instance to move data sets from your source to your destination targets.
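Here is a minimal boto3 sketch of those moving parts for the SQL Server to Aurora example above: a replication instance, source and target endpoints, and a replication task. All hostnames, identifiers, and credentials are placeholders.

```python
import boto3

# A minimal sketch of the DMS workflow: replication instance, endpoints, task.
dms = boto3.client("dms", region_name="us-east-1")

instance = dms.create_replication_instance(
    ReplicationInstanceIdentifier="migration-instance",
    ReplicationInstanceClass="dms.t3.medium",
    AllocatedStorage=50,
)

source = dms.create_endpoint(
    EndpointIdentifier="onprem-sqlserver",
    EndpointType="source",
    EngineName="sqlserver",
    ServerName="db.example.internal",   # placeholder hostname
    Port=1433,
    Username="dms_user",
    Password="example-password",
    DatabaseName="sales",
)

target = dms.create_endpoint(
    EndpointIdentifier="aurora-target",
    EndpointType="target",
    EngineName="aurora",
    ServerName="my-cluster.cluster-abc.us-east-1.rds.amazonaws.com",
    Port=3306,
    Username="admin",
    Password="example-password",
)

# In practice, wait for the replication instance to become available first.
dms.create_replication_task(
    ReplicationTaskIdentifier="sales-full-load",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn=instance["ReplicationInstance"]["ReplicationInstanceArn"],
    MigrationType="full-load",
    TableMappings='{"rules": [{"rule-type": "selection", "rule-id": "1", '
                  '"rule-name": "1", "object-locator": {"schema-name": "%", '
                  '"table-name": "%"}, "rule-action": "include"}]}',
)
```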
AWS Service Catalog is an organizational tool developed to make the provisioning and creation of IT stacks easier for both end users and IT admins.
These stacks can include almost everything under the AWS sun, such as EC2 instances, databases, software, and all the underlying infrastructure needed to create multi-tiered applications and architectures.
Service Catalog allows your end users to select the content that they need from a list of pre-approved services that your IT or Admin teams set up ahead of time. This helps to bring down those barriers of entry for resource creation, as well as helping to keep best practices and system security a key component of any deployment and migration.
With the ability to browse through a list of 'Products', which are just sets of pre-approved services, we can create and build with full confidence that we as developers are using only components that our security, administration, and leadership teams approve of.
A product is an IT service that you want to make available for deployment on AWS. A product can consist of just a single AWS resource, or of multiple items such as EC2 instances, their associated EBS volumes, the database you want them connected to, and all the monitoring capabilities you would expect from these services within the cloud.
A product can even be a package listed on the AWS Marketplace. For example, this could be helpful if you were using a database that AWS does not natively support but that is available on the Marketplace. In the end, a product is a service. It can range from something as small as a single instance doing basic web hosting to an enormous multi-tiered web application.
AWS Service Catalog requires you to upload AWS CloudFormation templates, and from these templates the service will add the entire stack into the catalog as a single product. Again, that product can be something as small as a single EC2 instance, or a very large multi-tiered web application.
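As a rough sketch of that workflow, here is how a CloudFormation template stored in S3 might be registered as a product with boto3. The names and the template URL are placeholders.

```python
import boto3

# A minimal sketch: registering a CloudFormation template as a product.
sc = boto3.client("servicecatalog", region_name="us-east-1")

product = sc.create_product(
    Name="three-tier-web-app",
    Owner="platform-team@example.com",       # the product owner
    Description="Pre-approved three-tier web application stack",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1.0",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        "Info": {
            # S3 URL of the uploaded CloudFormation template (placeholder)
            "LoadTemplateFromURL": "https://s3.amazonaws.com/example-bucket/web-app.yaml"
        },
    },
)
print(product["ProductViewDetail"]["ProductViewSummary"]["ProductId"])
```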
Once your products are created, you can add them to AWS Service Catalog portfolios. A portfolio is a collection of products, together with configuration information that helps determine who can use the products within it. Each portfolio requires a name, a description, and a product owner. That last one is very important, because if something goes wrong with an available product, it's important to know who to send your complaints to.
You can also share portfolios between other AWS accounts, and give the administrators of those accounts the ability to add their own products to your portfolio. This could be useful when you have teams that operate independently, that each deal with creating their own products.
One of the best features, and really the whole goal of Service Catalog, is that you have full control over what your end users have access to. From an administrator's perspective, that is incredibly powerful and can help you maintain high levels of both security and credibility.
AWS Service Catalog allows you to apply constraints to the products within your portfolios. These constraints let you limit the scope and abilities of your products based on predefined settings, and they can also add functionality, at least on the administrative side.
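Putting the last few ideas together, here is a minimal sketch that creates a portfolio, associates a product with it, and applies a template constraint restricting instance sizes. The product ID and the `InstanceType` template parameter it references are placeholders.

```python
import boto3

# A minimal sketch: portfolio, product association, and a template constraint.
sc = boto3.client("servicecatalog", region_name="us-east-1")

portfolio = sc.create_portfolio(
    DisplayName="approved-web-stacks",
    Description="Stacks approved for development teams",
    ProviderName="platform-team",   # the owner to send complaints to
)
portfolio_id = portfolio["PortfolioDetail"]["Id"]

sc.associate_product_with_portfolio(
    ProductId="prod-examplexyz",    # placeholder product ID
    PortfolioId=portfolio_id,
)

# Restrict end users to small instance types via a TEMPLATE constraint.
# Assumes the product's template exposes an "InstanceType" parameter.
sc.create_constraint(
    PortfolioId=portfolio_id,
    ProductId="prod-examplexyz",
    Type="TEMPLATE",
    Parameters='{"Rules": {"InstanceTypeRule": {"Assertions": [{"Assert": '
               '{"Fn::Contains": [["t3.micro", "t3.small"], '
               '{"Ref": "InstanceType"}]}, '
               '"AssertDescription": "Only small instance types are allowed"}]}}}',
)
```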
Ok, so now that we have covered the first section, I can move on to the next one, where we look at the migration of data, starting with AWS DataSync.
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.