Why Large-Scale Enterprises Are Migrating to the Cloud
Regardless of industry, just about every company has become a technology company out of necessity. Your enterprise runs on IT, and if you’re limited to on-premises servers, you’re likely giving your competitors an advantage. Migrating to the cloud means breaking away from the technologies holding your enterprise back from the speed, scalability, and cost savings that the cloud has to offer.
Cloud migration means moving your data, applications, and even infrastructure from on-premises hardware to a virtual pool of on-demand, shared resources that offer compute, storage, and network services at scale.
Rather than continuing to invest in aging, expensive infrastructure that can’t keep pace with modern technology, migrating to the cloud is an investment in the future. Beyond the immediate benefits of cost savings and scalability, you are laying the foundation to respond more quickly to changes in the market, scale your growth, and drive innovation for the long term.
As you’re planning your cloud migration, understanding how to get there depends on your unique business model and goals as well as your current infrastructure and applications. You’ll need to rely on the skills and experience of your IT teams to understand the ins and outs of your current environment and the interdependencies of your applications to determine which applications to migrate and how. The “5 Rs of cloud migration” from Gartner are a great place to start when considering all of the options for migrating your applications to the cloud.
Whether it’s your initial migration or your fifth iteration, your cloud migration requires a strategy and planning to be successful. Here’s what you need to know.
The 5 “Rs” of Cloud Migration: Rehost, Refactor, Revise, Rebuild, and Replace
Rehosting is the process of moving your existing physical and virtual servers to a solution based on Infrastructure as a Service (IaaS). Also known as lift and shift, the key benefit of this approach is that systems can be migrated quickly with no modification to their architecture. This is often the path that companies take when they’re new to cloud computing. When rehosting, you’re basically treating the cloud as just another data center, which means you’re not getting the most out of the available cloud services.
Consider a web application as a simple example. Imagine you have an ASP.NET application running on Windows and you want to rehost it on AWS. You can create a Windows VM that meets the size requirements and deploy the application. With a change to the DNS record, you’re pretty much ready to go live. In this way, rehosting is an easy way to move to the cloud. However, this solution isn’t highly available or scalable, and it still requires you to manage OS patches.
Refactoring is the process of running your applications on your cloud provider’s infrastructure, also referred to as Platform as a Service (PaaS). PaaS means that developers are able to reuse the frameworks, languages, and containers in which they’ve already invested. For applications or workloads that can be refactored to leverage cloud capabilities, you’ll be able to take advantage of some cloud-native features offered by the PaaS infrastructure for reduced costs and increased scalability. However, the biggest disadvantages of this option include transitive risk, missing capabilities, and framework lock-in. One of the common issues that developers run into here is that many PaaS options use ephemeral storage. This typically requires a change to the codebase to use cloud storage, rather than the local file system, for saved files.
An example of refactoring to use PaaS might be to take an existing Ruby on Rails application and deploy it to Heroku or to take an existing Drupal application and modify it to run on Acquia Cloud or Pantheon. PaaS options will allow you to focus on the application without having to deal with the underlying OS.
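To make the ephemeral-storage issue concrete, here is a minimal Python sketch of the kind of change involved: file persistence is wrapped behind a small interface so the local-filesystem backend can be swapped for durable object storage. The `S3Storage` class and its bucket parameter are illustrative assumptions (it presumes boto3 is installed and AWS credentials are configured); the interface idea is the point.

```python
import os


class LocalFileStorage:
    """Writes files to local disk. This works on a long-lived server,
    but on many PaaS platforms the filesystem is ephemeral and data
    is lost when the container or dyno is recycled."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def save(self, name, data):
        with open(os.path.join(self.root, name), "wb") as f:
            f.write(data)

    def load(self, name):
        with open(os.path.join(self.root, name), "rb") as f:
            return f.read()


class S3Storage:
    """Hypothetical object-storage backend that survives instance
    recycling. Assumes boto3 is available and credentials are set up."""

    def __init__(self, bucket):
        import boto3  # assumption: boto3 installed
        self.s3 = boto3.client("s3")
        self.bucket = bucket

    def save(self, name, data):
        self.s3.put_object(Bucket=self.bucket, Key=name, Body=data)

    def load(self, name):
        return self.s3.get_object(Bucket=self.bucket, Key=name)["Body"].read()


def store_upload(storage, filename, data):
    """Application code depends only on the save/load interface,
    so the backend can change without touching business logic."""
    storage.save(filename, data)
```

Because the application talks to the interface rather than the filesystem directly, refactoring for a PaaS becomes a configuration change rather than a rewrite.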
With the revise option, certain applications need to be modified more extensively in order to migrate them to the cloud. Some will require added functionality, while others may need to be re-architected completely before they can be rehosted or refactored and eventually deployed to the cloud.
Revising can be a difficult option because modifying a large codebase to become more cloud-native is time consuming and expensive. An example would be taking a complex, monolithic Python-based application and moving it to Google App Engine. The design of your application will determine the extent of the changes required. You may find that you need to break it into multiple applications and swap out components such as message queues to get the most out of the move.
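A hedged sketch of the message-queue swap mentioned above: in the monolith, one handler calls another function directly, so both pieces must deploy and scale together; after revising, the handler only enqueues a message for a separate worker. The function names and message shape are hypothetical, and Python’s stdlib `queue` stands in for a managed broker such as SQS or Pub/Sub.

```python
import queue


def send_welcome_email(email):
    """Stand-in for real mail-sending logic."""
    return f"sent welcome email to {email}"


# Before revising: the web handler calls the mailer directly,
# coupling the two components into one deployable unit.
def handle_signup_monolith(email):
    return send_welcome_email(email)


# After revising: the handler only enqueues a message. A separate
# worker (which could become its own service) consumes the queue.
tasks = queue.Queue()


def handle_signup(email):
    tasks.put({"type": "welcome_email", "to": email})
    return "accepted"


def worker_drain():
    """Processes everything currently on the queue."""
    results = []
    while not tasks.empty():
        msg = tasks.get()
        if msg["type"] == "welcome_email":
            results.append(send_welcome_email(msg["to"]))
    return results
```

Once the handoff goes through a queue, swapping the in-process queue for a managed cloud queue is a localized change rather than a rewrite of the calling code.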
With the rebuild option, an application is re-architected: the original code is discarded and the application is rebuilt on PaaS infrastructure. Rebuilding allows you to take advantage of more advanced and innovative features from your cloud provider to improve your application even further. A major drawback of this option is vendor lock-in.
For instance, if the provider makes a technical or pricing change that the customer cannot accept or that breaches the service level agreement (SLA), the customer may be forced to switch back to the previous application, potentially losing some or all of its application assets.
For example, you may rebuild your application so that it is completely serverless. By using technologies such as AWS Lambda, API Gateway, DynamoDB, S3, and others, you could run your application without having to manage servers for yourself. This sort of cloud-native application would be inexpensive to operate and highly scalable. However, it also means that you’re locked in to using a particular cloud vendor. This isn’t intrinsically bad, but it is a factor that you will need to consider.
With the replace option, you swap out the existing application(s) entirely for software delivered as a service (SaaS). An advantage of the replace model is that it lets you avoid IT development costs. However, you may encounter problems with data access, unpredictable data semantics, and vendor lock-in.
This can be a great option for minimizing the number of services and applications that you need to manage. An example might be to replace your local database with a managed option such as Cloud Datastore, Cosmos DB, or DynamoDB. These services are known for their scalability and availability, which makes this one of the easiest ways to improve your SLA. In contrast, running a database yourself and dealing with data replication and failover can be a lot of work.
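As a sketch of what that database swap can look like in application code, the example below hides the store behind a tiny put/get interface. `InMemoryStore` stands in for a self-managed database; `DynamoStore` is a hypothetical managed-service backend (it assumes boto3 and an existing table with a string partition key named `pk`).

```python
class InMemoryStore:
    """Stand-in for a database you run and operate yourself."""

    def __init__(self):
        self._items = {}

    def put(self, key, item):
        self._items[key] = dict(item)

    def get(self, key):
        return self._items.get(key)


class DynamoStore:
    """Hypothetical managed backend. Assumes boto3 is installed,
    credentials are configured, and the table already exists with
    a string partition key named 'pk'."""

    def __init__(self, table_name):
        import boto3  # assumption: boto3 installed
        self.table = boto3.resource("dynamodb").Table(table_name)

    def put(self, key, item):
        self.table.put_item(Item={"pk": key, **item})

    def get(self, key):
        return self.table.get_item(Key={"pk": key}).get("Item")


def save_profile(store, user_id, profile):
    """Application code is written against the interface, so replacing
    the self-managed database with a managed one is a config change."""
    store.put(user_id, profile)
```

Replication, failover, and scaling then become the provider’s problem rather than a nightly operational chore for your team.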
The Bottom Line: Migration Projects of Any Size Require Careful Planning
A successful cloud migration requires careful preparation and planning. There is no one-size-fits-all approach to migrating to the cloud. Your teams will need a deep knowledge of the infrastructure and applications that your business runs on in order to fully understand the complexity, challenges, and costs involved.
Some great content to get you started on cloud migration:
Two New EC2 Instance Types Announced at AWS re:Invent 2018 – Monday Night Live
Google Cloud Certification: Preparation and Prerequisites
Understanding AWS VPC Egress Filtering Methods
S3 FTP: Build a Reliable and Inexpensive FTP Server Using Amazon’s S3
Microservices Architecture: Advantages and Drawbacks
What Are Best Practices for Tagging AWS Resources?
How to Optimize Amazon S3 Performance
How to Optimize Cloud Costs with Spot Instances: New on Cloud Academy
What are the Benefits of Machine Learning in the Cloud?
How to Use AWS CLI
AWS Summit Chicago: New AWS Features Announced
From Monolith to Serverless – The Evolving Cloudscape of Compute