Migrating to AWS From an End-of-Life Data Center

Reference Architecture

This course is part of these learning paths:

Solutions Architect – Professional Certification Preparation for AWS

Scenario: Migrating From an End-of-Life Data Center to AWS
Overview

Difficulty: Intermediate
Duration: 1h 16m
Students: 705
Rating: 4.8/5

Description

This course is a "live" scenario discussion in which the CloudAcademy team tackles a migration project. Our customer needs to migrate out of their current data center by a certain date, and they would also like to modernise their business application.

Our brief in the exercise is to deliver:
A target architecture which addresses the challenges described by the customer.

A migration plan detailing how to move the service to AWS with minimal interruption.

A recommendation on how to approach DR to achieve RPO of 24 hours and RTO of 4 hours.

An application optimization plan with a proposed enhancement roadmap.
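The RPO and RTO targets in the brief translate directly into backup and recovery arithmetic. A minimal sketch of that reasoning (the recovery-step durations below are illustrative assumptions, not figures from the course):

```python
# Rough DR planning helper: given a target RPO, work out the maximum
# allowable interval between backups, and sanity-check a recovery plan
# against the target RTO. Step durations are illustrative assumptions.

def max_backup_interval_hours(rpo_hours: float) -> float:
    """Worst-case data loss equals the gap between backups, so the
    backup interval must not exceed the RPO."""
    return rpo_hours

def meets_rto(recovery_steps_hours: list[float], rto_hours: float) -> bool:
    """Total time to restore service (DB restore + app spin-up + DNS
    cut-over + checks) must fit inside the RTO."""
    return sum(recovery_steps_hours) <= rto_hours

# Customer targets from the brief: RPO 24h, RTO 4h.
print(max_backup_interval_hours(24))           # daily snapshots suffice
# Hypothetical recovery plan: 2.5h DB restore, 0.5h app spin-up, 0.5h DNS.
print(meets_rto([2.5, 0.5, 0.5], rto_hours=4))
```

With a 24-hour RPO, daily snapshots are the minimum cadence; the RTO check then tells you whether a restore-from-snapshot plan is feasible or whether a warmer standby is needed.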

As a scenario, this series of lectures was recorded "live" and so is less structured than other CloudAcademy courses. As a cloud professional you often have to think and design quickly, so we have recorded some of the content this way to best emulate the conditions you might experience in a working environment. Watching the team approach this brief can help you define your own approach and style to problem solving.

Intended audience: This course discusses AWS services, so it is best suited to students with some prior knowledge of AWS.

Prerequisites: We recommend completing the Fundamentals of AWS learning path before beginning this course.

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

- [Presenter] Okay, let's start thinking about our reference architecture, and the services that we might look to use. The document manager and payment modules should be Lambda functions, with data being stored in DynamoDB. The document module keeps additional metadata relating to objects that are stored in Amazon S3. We could also leverage S3 metadata, but DynamoDB provides a richer set of features for storing and retrieving data. Also, separating these modules moves the majority of the public-facing APIs to serverless services. Now, we've also got the payment module to consider, and that consists of subscription information, which, at the moment, is fulfilled by a third-party digital wallet, so that can be exposed as a number of APIs via the API Gateway.

The batch processing and digitizer modules could be merged into a document processing service run by an Elastic Beanstalk worker environment, and we could use Beanstalk instead of Lambda, because jobs might take longer than five minutes to complete. We can use the Database Migration Service to migrate our schema and our data. We'll use Oracle RDS in the first instance, and then look to port our databases to either Amazon Aurora or PostgreSQL. We can use Route 53 to manage our domains and our routing to domains, especially during any cut-over. And we can use CloudFormer for a migration template and for helping with our CloudFormation scripting.

We can look to use the Elastic Load Balancer to reduce load on our web tier, and to handle some of our SSL offloading for us. And we can implement ElastiCache to reduce load on the web tier. CloudFront, of course, can increase our availability and our performance. And integrated with AWS WAF, or AWS Shield, we can increase our security and resilience. Now for our keys, we can use the CloudHSM appliance initially, and then look to move our keys to KMS to manage encryption keys going forward.
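The five-minute constraint mentioned above (Lambda's maximum execution time at the time of recording) can be captured as a simple routing rule. A sketch, where the module names and duration estimates are made-up assumptions for illustration:

```python
# Toy compute-target chooser reflecting the discussion above: short,
# API-driven jobs go to Lambda; anything that may outlive the (then)
# five-minute Lambda limit goes to an Elastic Beanstalk worker tier.
# Module names and duration estimates are illustrative assumptions.

LAMBDA_LIMIT_MINUTES = 5  # Lambda's execution cap when this was recorded

def compute_target(estimated_minutes: float) -> str:
    if estimated_minutes <= LAMBDA_LIMIT_MINUTES:
        return "lambda"
    return "beanstalk-worker"

modules = {
    "document-manager": 0.2,   # quick metadata reads/writes
    "payment": 0.5,            # third-party wallet API calls
    "digitizer-batch": 45.0,   # long-running document processing
}

for name, minutes in modules.items():
    print(name, "->", compute_target(minutes))
```

This is why the batch processing and digitizer modules end up merged into a Beanstalk-hosted service while the thinner API modules stay serverless.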
And short term, we can use a VPN to connect back to our data center, and then look to use a Direct Connect dedicated connection for that connectivity going forward. Okay, so that architecture could provide the expertise-please service with a more agile, cost-efficient solution. Now we can start to break this into swim lanes, and work out when we can do things. From that, we can prioritize and time-box it into stages.
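Route 53's weighted record sets are one way to do the gradual cut-over mentioned above. A pure-Python simulation of how weighted routing splits traffic (the endpoint names and weights here are hypothetical; in practice you would set them on the actual Route 53 records):

```python
import bisect

# Simulate Route 53 weighted routing for a cut-over: traffic is split
# between the old data center and AWS in proportion to record weights.
# Endpoint names and weights are hypothetical.

def route(weights: dict[str, int], r: float) -> str:
    """Pick an endpoint for one request, where r is a uniform draw in [0, 1)."""
    total = sum(weights.values())
    cumulative, bounds, names = 0, [], []
    for name, weight in weights.items():
        cumulative += weight
        bounds.append(cumulative / total)
        names.append(name)
    return names[bisect.bisect_right(bounds, r)]

# Early in the cut-over: 90% of traffic stays on the data center.
print(route({"datacenter": 90, "aws": 10}, 0.50))  # datacenter
# Late in the cut-over: weights flipped, most traffic lands on AWS.
print(route({"datacenter": 10, "aws": 90}, 0.50))  # aws
```

Shifting the weights in stages (90/10, 50/50, 10/90, 0/100) lets you watch error rates at each step and roll back by restoring the old weights, which is what makes DNS-level cut-over low-risk.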

- We've got 7 to 14 days of migration; we probably need to think, sort of, two, well, maybe two to six months for this type of optimization. We probably want a phase of 6 to 12 months for this type of re-engineering, so this is a 12 to 24 month plan, depending on the customer's appetite to do it. I think that is something that must happen, that's something that should happen, because if we're going to make this better, we need to do all of this. That's definitely a must. And then, that's probably gonna be a could. If you want to make your business better, this is how we go about it.

- [Man With Brown Hair] If you map those stages back to the...

- Yep.

- What we're seeing is we're going from left to right across those stages. We're increasing security. We're improving business.

- Yeah.

- Business side, bottom line. Operations is more efficient, there are fewer problems.

- Let's write those up in the blue marker.

- So increase security.

- Yep.

- Throughout the system.

- And reporting.

- [Man With Brown Hair] Operations is more efficient, less down time.

- Yep, better disaster recovery. We still need to reduce the cost by shifting off the current database; moving to spot or reserved instances can really add a lot of value.

- [Man With Brown Hair] The bottom line for the business is improved.

- Yeah, we can give them some assurance that the service will continue in seven months.

- [Man With Brown Hair] The platform becomes more resilient.

- Yeah.

- [Man With Brown Hair] It's cost-optimized and it's more agile.

- So I guess there's a bit of thinking to do around how we do this. We have to think through the resourcing, because if we are talking about small teams working in parallel, they're gonna need some development resource to do that, because they don't have any internal resource for it. The other thing we haven't thought about is keys. So everything is encrypted; they're running their own encryption service. They've got a SafeNet Luna encryption appliance.

- [Man With Brown Hair] So stage one we were gonna go with CloudHSM.

- Yeah.

- In terms of saving time. But when we get into stage two and stage three we can potentially start to use KMS.

- Yeah, yeah.

- [Man With Brown Hair] The cheapest solution.

- Good for a short-term fix, but we need to get onto KMS to save money long term. So that could definitely be part of our cost optimization.
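Whether the master key lives in the customer's Luna appliance, CloudHSM, or KMS, the common pattern behind this discussion is envelope encryption: each object is encrypted with its own data key, and only that data key is wrapped by the master key. A toy sketch of the flow, where the XOR "cipher" is purely a stand-in for AES so the key hierarchy is visible (KMS's GenerateDataKey operation performs the real version of steps 1–2):

```python
import os

# Toy envelope-encryption flow, mirroring the KMS data-key pattern.
# The XOR "cipher" is NOT real cryptography; it is a placeholder for
# AES used only to make the two-level key hierarchy visible.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_envelope(master_key: bytes, plaintext: bytes):
    data_key = os.urandom(32)                # 1. fresh per-object data key
    wrapped_key = xor(data_key, master_key)  # 2. wrap it under the master key
    ciphertext = xor(plaintext, data_key)    # 3. encrypt data with the data key
    return wrapped_key, ciphertext           # store both; discard the plain data key

def decrypt_envelope(master_key: bytes, wrapped_key: bytes, ciphertext: bytes) -> bytes:
    data_key = xor(wrapped_key, master_key)  # unwrap via the master key (HSM/KMS)
    return xor(ciphertext, data_key)

master = os.urandom(32)
wrapped, ct = encrypt_envelope(master, b"customer document")
print(decrypt_envelope(master, wrapped, ct))
```

The point of the pattern is that migrating from CloudHSM to KMS only changes who performs the wrap/unwrap of the small data keys; the bulk-encrypted objects never need to be re-encrypted.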

About the Author

Students: 62,088
Courses: 90
Learning paths: 40

Andrew is an AWS certified professional who is passionate about helping others learn how to use and benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything at AWS starts with the customer. His passions outside work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.