With the average cost of downtime estimated at $8,850 per minute, businesses can’t afford to risk system failure. Full access to services and data anytime, anywhere is one of the main benefits of cloud computing.
By design, many of the public cloud's core services and its underlying infrastructure are replicated across different geographic zones. This helps ensure the durability and availability of your data and services and protects against downtime. However, outages happen. To protect against costly downtime, many companies spread their services across multiple providers to reduce the chances of failure.
But is a multi-cloud strategy the only solution for ensuring high availability?
In February 2017, an engineer’s typo caused a major service disruption on Amazon S3 in its US East region. The outage impacted many companies that relied on S3, and specifically those that relied on S3 exclusively in the region.
Workloads impacted by the S3 disruption fell into two categories: those considered "not mission critical" and those that lacked resilient architecture and sufficient chaos testing. Companies without a robust, well-tested architecture felt the impact most acutely. In this instance, replicating files on another cloud provider could have mitigated the effects of the disruption, but cross-cloud replication would also add more complexity, perhaps unnecessarily. Using a single cloud provider with cross-region replication is another solution.
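As a concrete illustration of the single-provider alternative, S3 supports cross-region replication natively. The sketch below builds a minimal replication configuration for boto3's `put_bucket_replication` call; the bucket names and IAM role ARN are hypothetical placeholders, and both buckets would need to exist in different regions with versioning enabled before the configuration could be applied.

```python
# Minimal sketch of an S3 cross-region replication configuration.
# The role ARN and bucket names below are hypothetical placeholders.

def build_replication_config(role_arn: str, dest_bucket_arn: str) -> dict:
    """Build a replication configuration that mirrors every object."""
    return {
        "Role": role_arn,
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = replicate all objects
                "Destination": {"Bucket": dest_bucket_arn},
            }
        ],
    }

config = build_replication_config(
    role_arn="arn:aws:iam::123456789012:role/s3-replication-role",
    dest_bucket_arn="arn:aws:s3:::my-backup-bucket-us-west-2",
)

# To apply (requires AWS credentials and existing, versioned buckets):
# import boto3
# boto3.client("s3").put_bucket_replication(
#     Bucket="my-primary-bucket-us-east-1",
#     ReplicationConfiguration=config,
# )
```

With this in place, objects written to the primary bucket are asynchronously copied to the destination region, so a regional S3 disruption leaves a readable copy elsewhere without involving a second provider.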
Let’s explore the technical feasibility of using multiple cloud providers to achieve high availability in three scenarios:
Application Distribution: To achieve high availability for the same functionality across different cloud providers, teams must abstract away vendor-specific functionality. This limits you to the features common to all of your selected platforms. At the individual service level, the differences between providers' implementations can create a lot of extra work in the form of abstraction layers.
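One way to picture the abstraction-layer cost: every provider-specific service ends up hidden behind a lowest-common-denominator interface like the hypothetical one below, and any provider feature that doesn't fit the interface is off the table. The in-memory backend is a stand-in so the sketch runs without cloud credentials; real backends would wrap S3, GCS, or Azure Blob clients.

```python
from abc import ABC, abstractmethod

# Hypothetical lowest-common-denominator interface for object storage.
# Each provider backend would implement only the operations every
# platform supports; provider-specific features (e.g., S3 object lock)
# are unavailable through it.

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend so the sketch runs without cloud credentials."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

store: ObjectStore = InMemoryStore()
store.put("report.csv", b"col1,col2\n1,2\n")
```

The application only ever sees `ObjectStore`, which is exactly the point and exactly the cost: swapping providers is easy, but every interface method must be written, tested, and maintained per provider.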
Containers: At the application level, containers could serve as a viable abstraction over the IaaS implementation differences across providers. This approach would require running the same container orchestrator on multiple platforms and limiting the use of underlying functionality (or accessing underlying functionality through a common interface). While using containers to run the same application across providers may be technically possible, the implementation is far from practical, making it more prone to human error and potential outages down the road. The potential increase in errors may be caused by differences in how data is replicated and differences in the IaaS offerings themselves.
Security and Compliance: Managing security for any single deployment across multiple public clouds will not be easy. Serving up virtual networks, firewall rules, monitoring, logging, and identity and access management can be difficult and time-consuming. Ensuring compliance across multiple providers adds a whole new level of complexity, especially at the rate that cloud providers release updates. Additional tooling, processes, and training will be required to ensure cross-platform consistency.
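Ensuring cross-platform consistency usually means normalizing each provider's security constructs into a common model and diffing them. The sketch below is purely illustrative: the rule shapes for the two providers are invented stand-ins, not real API responses, but they show the kind of translation-and-compare tooling a multi-cloud deployment requires.

```python
# Illustrative consistency check: normalize firewall rules from two
# providers into a common (protocol, port, cidr) form and diff them.
# Both input shapes are hypothetical stand-ins for real API responses.

def normalize_provider_a(rules):
    return {(r["IpProtocol"], r["FromPort"], r["CidrIp"]) for r in rules}

def normalize_provider_b(rules):
    return {(r["protocol"], int(r["port"]), r["sourceRange"]) for r in rules}

provider_a_rules = [
    {"IpProtocol": "tcp", "FromPort": 443, "CidrIp": "0.0.0.0/0"},
    {"IpProtocol": "tcp", "FromPort": 22, "CidrIp": "10.0.0.0/8"},
]
provider_b_rules = [
    {"protocol": "tcp", "port": "443", "sourceRange": "0.0.0.0/0"},
]

# Rules present on one platform but missing from the other indicate drift
# that would have to be caught and reconciled by custom tooling.
missing_from_b = normalize_provider_a(provider_a_rules) - normalize_provider_b(provider_b_rules)
```

Every new rule type, provider API change, or compliance requirement means extending this normalization layer, which is precisely the ongoing tooling burden described above.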
Is multi-cloud a solution for high availability?
New tooling or processes should be added to solve problems, not the side effects of other decisions. The tooling required to implement a multi-cloud deployment solves a side effect of using multiple platforms to accomplish what a single platform could do.
The bottom line is this: Multi-cloud could theoretically solve certain high availability issues, but it's more likely to add undue complexity. Instead, a better understanding of your technology and the implementation of best practices should be your starting point before you look for a multi-cloud solution.
This post is excerpted from our new whitepaper, Separating Multi-Cloud Strategy from Hype: An Objective Analysis of Arguments in Favor of Multi-Cloud.
You will learn:
- The reality vs. hype of multi-cloud deployments
- How to achieve high availability while avoiding vendor lock-in
- The advantages of a best-fit technology approach
- The arguments that should be driving your multi-cloud strategy