Introduction to the AWS Database Migration Service
The purpose of this course is to enable us to recognize and explain what the AWS Database Migration Service is and how it can be used to migrate data from one database to another. This course will also enable us to recognize and explain the AWS Schema Conversion Tool, which can be used to migrate a database structure, or schema.
Once we have a basic understanding of these two services and what they provide, we will learn to recognize when to use the AWS Database Migration Service and to identify common use cases where it may be applicable for database migrations. We will also learn to recognize and implement best practices when using the service, so we gain the most value from it.
Suggested prerequisites for this course:
A basic understanding of cloud computing.
A basic understanding of AWS relational database services.
To run the AWS Database Migration Service you will need an active AWS Account.
Following this lecture, we will be able to recognize and explain the steps we might take when planning and evaluating a database migration using the AWS Database Migration Service. With a database migration, we generally have a number of factors to consider which can impact the complexity or success of the migration project. It's important to perform a structured assessment to ensure adequate consideration is given prior to starting any work. So let's run through some of the key areas to consider, so you're prepared for your migration project.

Firstly, we need a good understanding of what our requirements are. We need to have an idea of what expertise we need, and we need to think about whether there are any conversions other than data to consider with this migration. We need a good handle on how long we think it's going to take to complete the migration, a good knowledge of our source database and of our network, an understanding of how we're going to connect to the AWS infrastructure, and a knowledge of our target database schemas.

Having a clear understanding of why we want to move from one database platform to the next is a key part of getting any database migration correct. What are the problems we have with the current database platform? What are the constraints we might be experiencing? And most importantly, what do we believe this new platform can provide that will be better than the baseline we're currently experiencing? We need to ask ourselves if the source database needs to be available after the migration. Are we planning on doing any testing of it? Ideally, we would already have some benchmark tests worked out that will help us prove that the requirements, or the baseline we experience with our current platform, will be met more effectively using the new database platform.
We need to think about what our high availability requirements might be: is this something we're going to need to run in a master-slave configuration, or across multiple Availability Zones, et cetera? We also need to ask ourselves whether we need to migrate all the data. If some part of the database can be moved to test our hypothesis, rather than migrating the entire collection, then that might be something to consider. And do we need to move all of the data to the same database? A migration is often an opportunity to split data up and to find more efficient ways of storing it, or accessing it.

One other key thing to ask ourselves is whether we have a good understanding of the benefits we'll get from the Amazon RDS managed service, things like automated backups and high availability. But we also need to make sure we're clear on the limits and the cost implications of using Amazon RDS. What are the storage size limits? Are we going to have any security issues? Are there any considerations around data sovereignty, et cetera, that we might need to factor in?

Another thing to think through is what we're going to do with our application during the migration process. Is it going to be part of our stress test? How do we plan to test access speeds and availability of our database once we've got it up and running? And, most importantly, what is our contingency plan if the migration fails for any reason?

Another consideration is the expertise we have available. To perform a useful assessment or migration, we're going to need a good knowledge of the database engine we're migrating from, and a good understanding of the database engine we're migrating to. And it's not just technical expertise that is useful for this stage of the project: a database migration can have a number of dependencies.
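If we decide to migrate only part of the database, the AWS Database Migration Service lets us express that choice through table-mapping rules. As a minimal sketch, the helper below builds a mapping document that includes only the tables in one schema; the schema name `sales` is purely illustrative, and real mappings can add transformation and filter rules as well.

```python
import json

def build_table_mapping(schema_name, table_pattern="%"):
    """Build a minimal DMS table-mapping document that includes only
    the tables matching table_pattern within schema_name."""
    return {
        "rules": [
            {
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-subset",
                "object-locator": {
                    "schema-name": schema_name,
                    "table-name": table_pattern,
                },
                "rule-action": "include",
            }
        ]
    }

# The serialized JSON is what a replication task consumes as its
# table-mapping configuration.
mapping_json = json.dumps(build_table_mapping("sales"), indent=2)
print(mapping_json)
```

Starting with a narrow selection like this is one way to test the hypothesis on a small slice of data before committing to a full migration.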
The better we understand both the business and technical requirements, the more likely we are to be able to provide a solution. It's useful to have an understanding of the constraints and limitations our current database platform is experiencing, so that we can create adequate test cases to prove that the new platform is going to be more capable of delivering on those requirements.

We should also ensure we understand the performance and design of the network we're migrating from, and how that network can be connected to AWS. We may need to consider a connection such as a VPN or AWS Direct Connect, which is a dedicated connection between your on-premises environment and the AWS environment, to provide the best possible connectivity to the AWS cloud during the migration. We also want to ensure we have thorough knowledge of, and access to, AWS Identity and Access Management (IAM) tools and services, so that we can provision the roles and permissions we'll need to create and connect to a public endpoint.

Now, conversion of code is a key consideration with any migration. The AWS Schema Conversion Tool can run a report to help you with this part of the assessment. Once connected to your source database, the conversion tool reverse engineers every object within the schema and checks whether the generated code can be run against the target instance without any change being required. If it can, it makes a note of this finding. If it can't, it makes a note about why it's not going to be possible. At the end, the assessment generates a report called the Database Migration Assessment Report, which you can access from the view menu of the conversion tool. There are two parts to the assessment report: the summary and the action items. The summary page gives an overall picture of the migration possibility.
It shows how many objects of each type of component it can convert, and how many it can't. The action items page goes deeper into the analysis. It lists every object that can't be converted to the target engine, along with the reason for the failure. The AWS Schema Conversion Tool highlights the particular command or syntax in the generated code that caused the problem; this code is viewable from the lower half of the action items screen. So the tool provides a useful way to calculate how much time and work may be required.

Now, time is another consideration. Migration projects can take anywhere from a couple of days to several months to complete, depending on how large and complex they are. So it's important to think through and calculate how much time you expect will be required for the proof of concept, or for the migration itself. A successful migration can often require several iterations, so it may be done in stages if you have a lot of data. The migration process can often take longer than you expect: there may be a lot of processing required to translate your data. We also need to ask whether we have a hard date that we need to work backwards from. If we're just doing a proof of concept, we probably don't have that constraint, but if we're working to a must-be-migrated-by date, then we need to factor that into our time equation. And the planning stage can often take a lot longer than the migration itself.

We also want a good knowledge of our source database. We want to know the size of our database, how many schemas and tables we actually have, and whether we have any large tables, large being anything over five gigabytes in size. Do we know what our transaction boundaries look like, and does our database have any data types or fields that may not be supported by the current AWS Database Migration Service?
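The five-gigabyte threshold mentioned above is easy to check once you have table sizes from your source database's catalog. A small sketch, with illustrative (not real) table names and sizes:

```python
LARGE_TABLE_THRESHOLD_GB = 5  # per the guidance above

def find_large_tables(table_sizes_gb):
    """Return the tables whose size exceeds the threshold, so they can be
    planned for separately (for example, migrated in their own task)."""
    return {name: size for name, size in table_sizes_gb.items()
            if size > LARGE_TABLE_THRESHOLD_GB}

# Illustrative sizes in gigabytes, not real data
sizes = {"orders": 42.0, "customers": 1.2, "order_items": 96.5, "audit_log": 4.9}
print(find_large_tables(sizes))
```

Flagging these tables early feeds directly into the time estimate: the largest tables usually dominate the full-load phase.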
Another consideration is how busy your source database is. What types of roles and user permissions do we need to migrate, and is there any complexity around that? Is your database the smallest it possibly can be?

We also need a good knowledge of our network and how we're going to connect to our AWS network. If we're migrating from an on-premises database, we're going to need to connect to our on-premises network and create a public-facing endpoint. That endpoint will need a dedicated port, and we'll need the security and routing around it to make that possible. Do we have an AWS EC2 security group already configured? Do we have a VPC configured with an internet gateway? And roughly how much bandwidth do we think we need to move all of this data?

The Database Migration Service creates only the tables and the primary keys in the target database, so you're going to have to recreate any other database keys or constraints. We'll also need to make sure we postpone or lock down any schema changes until after the migration has been made.

Best practice when looking at migrating any mission-critical or production database is to run a proof of concept first, so we can test our hypothesis about the database platform and evaluate any test criteria we've defined around data field types, performance, and the like. In that assessment, we want to define the framework of the migration, and discover or outline any aspects of the source and target environments that we may need to alter to make our migration successful. Which objects do we want to migrate? Do we need to migrate all of them? Are the data types we have in the current database platform compatible with those covered by the AWS Database Migration Service? Does the source system have the necessary capacity, and is it configured to support a migration? And, essentially, what prototype migration configuration shall we run with?
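The bandwidth question above lends itself to a back-of-envelope calculation. The sketch below assumes sustained throughput derated by an efficiency factor (the 70% default is an assumption to account for protocol overhead and contention, not a measured figure):

```python
def estimate_transfer_hours(data_gb, bandwidth_mbps, efficiency=0.7):
    """Rough transfer time for data_gb gigabytes over a link of
    bandwidth_mbps megabits per second, derated by an assumed
    efficiency factor."""
    megabits = data_gb * 8 * 1000  # GB -> megabits (decimal units)
    seconds = megabits / (bandwidth_mbps * efficiency)
    return seconds / 3600

# e.g. 500 GB over a 100 Mbps link at the default 70% efficiency
print(round(estimate_transfer_hours(500, 100), 1))  # → 15.9
```

An estimate like this quickly shows whether your existing internet link is adequate or whether a dedicated connection such as AWS Direct Connect is worth considering for the migration window.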
It's a good idea to start small: run a small test and see what type of performance we get from the various data types we have in the database. We then want to design our migration. One thing we need to factor in is how we're going to test the end-to-end migration; the more we can test that migration, the more accurately we can project how long it will take in real time. And then we want to schedule our migration.
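To make the "start small" advice concrete, here is a sketch of how a proof-of-concept full-load task might be assembled with boto3. The helper only builds the parameter set; the ARNs, the task identifier, and the call itself are placeholders, and actually creating the task requires credentials, real endpoints, and a replication instance.

```python
import json

def poc_task_params(source_arn, target_arn, instance_arn, table_mappings):
    """Assemble parameters for a small full-load proof-of-concept task.
    The keys mirror the boto3 DMS client's create_replication_task call."""
    return {
        "ReplicationTaskIdentifier": "poc-full-load",  # hypothetical name
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": "full-load",  # full load only for the PoC
        "TableMappings": json.dumps(table_mappings),
    }

# Placeholder ARNs and an empty mapping, for illustration only
params = poc_task_params("arn:src", "arn:tgt", "arn:inst", {"rules": []})

# To actually create the task (requires credentials and real ARNs):
# import boto3
# boto3.client("dms").create_replication_task(**params)
print(params["MigrationType"])
```

Keeping the parameter assembly separate from the API call makes it easy to review and iterate on the task configuration before each proof-of-concept run.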
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.