Why Large-Scale Enterprises Are Migrating to the Cloud
Regardless of the industry, just about every company has become a technology company out of necessity. Your enterprise breathes through IT, and if you’re limited to on-premises servers then you’re likely giving your competitors an advantage. Migrating to the cloud means breaking away from the technologies that are holding you and your enterprise back from the speed, scalability, and cost savings that the cloud has to offer.
Cloud migration is about moving your data, applications, and even infrastructure from your on-premises computers or infrastructure to a virtual pool of on-demand, shared resources that offer compute, storage, and network services at scale.
Rather than continuing to invest in aging, expensive infrastructure that can't keep pace with modern technology, migrating to the cloud is a choice for the future. Beyond the immediate benefits of cost savings and scalability, you're laying the foundation to respond more quickly to changes in the market, scale your growth, and drive innovation for the long term.
As you’re planning your cloud migration, understanding how to get there depends on your unique business model and goals as well as your current infrastructure and applications. You’ll need to rely on the skills and experience of your IT teams to understand the ins and outs of your current environment and the interdependencies of your applications to determine which applications to migrate and how. The “5 Rs of cloud migration” from Gartner are a great place to start when considering all of the options for migrating your applications to the cloud.
Whether it’s your initial migration or your fifth iteration, your cloud migration requires a strategy and planning to be successful. Here’s what you need to know.
The 5 “Rs” of Cloud Migration: Rehost, Refactor, Revise, Rebuild, and Replace
Rehost

Rehosting, also known as "lift and shift," is the process of moving your existing physical and virtual servers to a solution based on Infrastructure as a Service (IaaS). The key benefit of this approach is that systems can be migrated quickly with no modification to their architecture. This is often the path that companies take when they're new to cloud computing. When rehosting, you're essentially treating the cloud as just another data center, which means you're not getting the most out of the available cloud services.
Consider a web application as a simple example. Imagine you have an ASP.NET application running on Windows and you want to rehost it on AWS. You can create a Windows VM that meets the size requirements and deploy the application. With a change to the DNS record, you're pretty much ready to go live. In this way, rehosting is an easy way to move to the cloud. However, this solution isn't highly available or scalable, and it still requires you to manage OS patches.
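As a sketch of that final DNS cutover step, here is what a Route 53 change batch for pointing an A record at the new instance might look like. The domain and IP address are hypothetical, and no AWS call is made here; a real migration would pass this structure to boto3's route53 change_resource_record_sets.

```python
def build_dns_change(record_name, new_ip, ttl=300):
    """Build a Route 53 change batch that points an A record at a new host.

    This is the JSON structure Route 53 expects for a record update;
    constructing it locally lets you review the cutover before applying it.
    """
    return {
        "Comment": f"Cut over {record_name} to the rehosted instance",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": new_ip}],
                },
            }
        ],
    }

# Hypothetical values for illustration only.
change_batch = build_dns_change("app.example.com", "203.0.113.10")
```

Keeping the TTL low before the cutover (300 seconds here) means the old record expires quickly once the change is applied.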
Refactor

Refactoring is the process of running your applications on your cloud provider's infrastructure, also referred to as Platform as a Service (PaaS). With PaaS, developers can reuse the frameworks, languages, and containers in which they've already invested. For applications or workloads that can be refactored to leverage cloud capabilities, you'll be able to take advantage of cloud-native features offered by the PaaS infrastructure for reduced costs and increased scalability. However, the biggest disadvantages of this option include transitive risk, missing capabilities, and framework lock-in. One common issue developers run into here is that many PaaS options use ephemeral storage: anything written to the local file system can disappear when an instance is restarted or replaced. This typically requires a change to the codebase to save files to cloud storage rather than the local file system.
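A minimal sketch of that kind of change: routing uploads through a small storage abstraction rather than writing directly to disk keeps the rest of the codebase unchanged when you swap backends. The class names and bucket are illustrative; the object-store backend is stubbed, where a real one would call an SDK such as boto3.

```python
import os

class LocalStorage:
    """Stores files on the local disk -- fine on a server you own,
    but data is lost when a PaaS dyno or container is recycled."""
    def __init__(self, root):
        self.root = root
    def save(self, name, data):
        path = os.path.join(self.root, name)
        with open(path, "wb") as f:
            f.write(data)
        return path
    def load(self, name):
        with open(os.path.join(self.root, name), "rb") as f:
            return f.read()

class ObjectStorage:
    """Sketch of a cloud object-store backend (e.g. S3).
    Calls are stubbed; a real version would use put_object/get_object."""
    def __init__(self, bucket):
        self.bucket = bucket
        self._objects = {}          # stand-in for the remote bucket
    def save(self, name, data):
        self._objects[name] = data  # real code: s3.put_object(...)
        return f"s3://{self.bucket}/{name}"
    def load(self, name):
        return self._objects[name]  # real code: s3.get_object(...)

# Application code depends only on save/load, so swapping backends
# does not ripple through the codebase.
def handle_upload(storage, filename, data):
    return storage.save(filename, data)
```

The refactor then reduces to changing which storage object is constructed at startup, instead of hunting down every `open()` call.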
An example of refactoring to use PaaS might be to take an existing Ruby on Rails application and deploy it to Heroku or to take an existing Drupal application and modify it to run on Acquia Cloud or Pantheon. PaaS options will allow you to focus on the application without having to deal with the underlying OS.
Revise

Certain applications need to be modified more extensively before they can be migrated to the cloud. Some require added functionality, while others may need to be re-architected completely before they can be rehosted or refactored and eventually deployed to the cloud.

This can be a difficult option because modifying a large codebase to become more cloud-native can be time-consuming and expensive. An example would be taking a complex, monolithic Python-based application and moving it to Google App Engine. The design of your application will determine the extent of the changes required. You may find that you need to break it out into multiple applications and swap out components such as message queues to get the most out of the move.
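For example, putting a small interface in front of the message queue makes that swap tractable: callers depend on two methods, not on a broker. This is an illustrative sketch, not a prescribed design; the in-memory class stands in for whatever the monolith uses today, and a cloud version would wrap a managed service such as SQS or Pub/Sub behind the same methods.

```python
import queue

class InMemoryQueue:
    """The queue used inside the monolith today."""
    def __init__(self):
        self._q = queue.Queue()
    def publish(self, message):
        self._q.put(message)
    def consume(self):
        return self._q.get_nowait()

# A cloud-backed implementation would expose the same two methods,
# e.g. wrapping SQS send_message / receive_message calls.

def enqueue_order(q, order_id):
    # Application code only knows about publish/consume, so the
    # broker can change without re-architecting the callers.
    q.publish({"order_id": order_id})
```

Introducing seams like this one component at a time is often how a monolith gets broken into the multiple applications mentioned above.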
Rebuild

In this scenario, an application is re-architected: the original code is discarded and the application is rebuilt on a PaaS infrastructure. Rebuilding allows you to take advantage of more advanced and innovative features from your cloud provider to improve your application even further. A major drawback of this option is vendor lock-in.
For instance, if the provider makes a technical or pricing change that the customer cannot accept or that breaches the service level agreement (SLA), the customer may be forced to switch back to the previous application, potentially losing some or all of its application assets.
For example, you may rebuild your application so that it is completely serverless. By using technologies such as AWS Lambda, API Gateway, DynamoDB, S3, and others, you could run your application without having to manage servers for yourself. This sort of cloud-native application would be inexpensive to operate and highly scalable. However, it also means that you’re locked in to using a particular cloud vendor. This isn’t intrinsically bad, but it is a factor that you will need to consider.
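To make the serverless idea concrete, an AWS Lambda function is just a handler function that the platform invokes per request. Here is a minimal Python handler of the kind that might sit behind API Gateway; the event shape shown is a simplified assumption, and locally you can exercise it with a plain dict.

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler: there is no server to manage --
    the platform invokes this function per request and scales it."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# In production, API Gateway supplies the event and Lambda the context;
# here we call it directly with a hand-built event.
response = handler({"queryStringParameters": {"name": "cloud"}}, None)
```

Note how the handler's signature is itself a form of lock-in: porting it to another provider means adapting to that provider's event and response shapes.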
Replace

In this scenario, you completely replace the existing application(s) with software delivered as a service (SaaS). An advantage of the replace model is that it allows you to avoid IT development costs. However, you may encounter problems with data access, unpredictable data semantics, and vendor lock-in.
This can be a great option for minimizing the amount of services and applications that you need to manage. An example might be to replace your local database with a managed option such as Cloud Datastore, Cosmos DB, or Dynamo. This can be one of the easiest ways to bring up your SLA. These services are all known for their scalability and availability. In contrast, running a database yourself and dealing with data replication and failover can be a lot of work.
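One way to keep that replacement cheap is a thin data-access layer, so that only one class knows which database is behind it. This is a sketch under that assumption, using an in-memory SQLite database as the stand-in for the self-managed database; swapping in a managed service like DynamoDB or Cloud Datastore would mean rewriting only this class.

```python
import sqlite3

class UserRepository:
    """Thin data-access layer: callers never see the database driver,
    so replacing the self-managed DB with a managed service means
    changing only this class, not the application code."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )
    def add(self, user_id, name):
        self.conn.execute("INSERT INTO users VALUES (?, ?)", (user_id, name))
    def get(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

# Self-managed stand-in: an in-memory SQLite database.
repo = UserRepository(sqlite3.connect(":memory:"))
repo.add(1, "Ada")
```

The data-semantics caveat above still applies: a key-value store like Dynamo won't give you SQL joins, so the repository's query methods may need rethinking, not just rewiring.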
The Bottom Line: Migration Projects of Any Size Require Careful Planning
A successful cloud migration requires careful preparation and planning. There is no one-size-fits-all approach to migrating to the cloud. Your teams will need a deep knowledge of the infrastructure and applications that your business runs on in order to fully understand the complexity, challenges, and costs involved.
Some great content to get you started on Cloud Migration:
- Cloud Migration Risks & Benefits
- Real-Time Application Monitoring with Amazon Kinesis
- Google Cloud Functions vs. AWS Lambda: The Fight for Serverless Cloud Domination
- Google Vision vs. Amazon Rekognition: A Vendor-Neutral Comparison
- Amazon Route 53: Why You Should Consider DNS Migration
- What Exactly Is a Cloud Architect and How Do You Become One?
- Boto: Using Python to Automate AWS Services
- DevSecOps: How to Secure DevOps Environments
- Test Your Cloud Knowledge on AWS, Azure, or Google Cloud Platform