This module introduces the key concept of virtualization and the role this plays in effectively delivering cloud services, before defining the four different cloud deployment models.
The objectives of this course are to provide you with an understanding of:
- What virtualization is and how it relates to cloud computing.
- The four cloud deployment models and the application of each model.
The course is aimed at anybody who needs a basic understanding of what the cloud is, how it works and the important considerations for using it.
Although not essential, before you complete this course it would be helpful if you have a basic understanding of server hardware components and what a data center is.
We welcome all feedback and suggestions - please contact us at firstname.lastname@example.org to let us know what you think.
There are four main cloud deployment models that organizations can use – public cloud, private cloud, community cloud and hybrid cloud. In this video we’ll go through what each one means and who would use them.
As the name suggests, public clouds are available to anybody. Cloud providers offer a shared infrastructure which includes storage, database and network resources, and usually platform and software services. Users and organizations can access services on the public cloud from anywhere as long as they have an internet connection. However, this doesn’t mean that just anybody can connect to an organization’s resources running in the public cloud – it’s entirely possible (and highly recommended) for access to be restricted to authorized users or networks.
The server infrastructure is owned by the service provider, who manages and administers it, so organizations don't need to buy or maintain their own hardware. Services are generally offered on a utility pricing basis, which means organizations only pay for what they use, and provision can be scaled up or down to meet demand.
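The pay-for-what-you-use idea can be illustrated with a simple calculation. This is a minimal sketch with a made-up hourly rate, not a real provider's price list; working in whole cents keeps the arithmetic exact.

```python
def monthly_cost_cents(hours_used: int, cents_per_hour: int) -> int:
    """Utility pricing: you are billed only for the hours actually consumed."""
    return hours_used * cents_per_hour

# Illustrative, made-up rate: a small virtual machine at 5 cents per hour.
# Running it 10 hours a day for 30 days instead of around the clock:
on_demand = monthly_cost_cents(10 * 30, 5)  # 300 hours
always_on = monthly_cost_cents(24 * 30, 5)  # 720 hours
```

Here, shutting the machine down outside working hours would cut the bill by more than half, which is exactly the kind of saving the utility model enables compared with owning hardware that sits idle.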
Just as with any IT resources, reliability can be an issue with the public cloud and, whilst highly unusual, it's not unheard of to experience service interruptions that affect multiple users – like the Salesforce CRM disruption in 2016, which caused a roughly 10-hour storage outage. However, providers can switch provision to alternative servers to quickly alleviate most service disruptions.
With a public cloud, the consumer never sees the hardware and doesn't know the exact physical location of their data. However, they can specify the geographic region in which it resides to help reduce latency. It makes sense to host the cloud infrastructure as close as possible to the organization's customers or end users to provide the best overall performance.
Public cloud infrastructure
Before we look at the other three models, let’s look at how public cloud providers arrange their infrastructure, which has an impact on the speed of service, security and upgrades.
Providers group their resources into regions, each of which resides entirely within one country. For example, all of the major providers have at least one UK region. Regions are relevant for several reasons:
- Providers won't copy data from one region to another unless they're asked to;
- There can be price variations between regions – for example, US-based services are usually slightly cheaper than UK-based ones; and
- Not all services and features are available in all regions – newer services tend to become available in the provider's home country first.
The public cloud provider runs multiple, geographically distributed data centers (or groups of data centers) in each region. This enables an organization to provision multiple resources, like virtual machines, across multiple data centers to achieve better fault tolerance. If one of the data centers suffers an outage, it's unlikely to affect resources running in the others. In effect, every provider gives each organization built-in failover capacity for its resources.
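The idea of spreading resources across data centers can be sketched in a few lines of Python. This is not a real provider API – the zone names and the `place_vms` helper are purely illustrative – it just shows the round-robin placement that makes a single-data-center outage survivable.

```python
from itertools import cycle

def place_vms(vm_names, zones):
    """Assign each VM to a zone (data center) in round-robin order,
    so instances are spread evenly rather than bunched in one place."""
    zone_cycle = cycle(zones)
    return {vm: next(zone_cycle) for vm in vm_names}

# Hypothetical zones within one region (names are illustrative only).
zones = ["region-1a", "region-1b", "region-1c"]
placement = place_vms(["web-1", "web-2", "web-3", "web-4"], zones)
```

With this placement, losing any single zone still leaves at least two of the four web servers running – which is the fault-tolerance property the narration describes.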
A private cloud is the same as the public cloud in design. The key difference is that the private cloud infrastructure is provisioned for the exclusive use of a single organization comprising multiple consumers, for example different business units. It can be owned, managed and operated by the organization itself, a third-party service provider, or a combination of the two. It can also exist on or off the organization's premises.
You might think this sounds a lot like an in-house data center and that's because it is. The difference is that resources within the data center are provided and consumed through a cloud infrastructure.
A private cloud model has the advantage of more customizable storage and network components, tighter control over corporate information, and high levels of security and reliability. However, these do come at a price.
A community cloud model is similar to a private cloud – the only difference is the set of users. In a private cloud only one company provisions the infrastructure, but in a community cloud several organizations with a similar interest share the infrastructure and related resources.
It might be owned, managed, and operated by one or more of the organizations in the community, a third-party cloud provider, or a combination of them both and can exist on or off premises. Each organization in a community cloud has the same security, privacy and performance requirements, and the model is particularly suited for organizations that work on joint projects and can share the costs.
An example of a community cloud is the GovCloud regions offered by some of the public cloud providers. Access is restricted to customers working on US Government projects who require stringent security measures but need to share the same resources.
As the name suggests, a hybrid cloud makes use of two or more distinct cloud infrastructures – private, public or community. For example, an organization can balance its IT infrastructure by locating mission-critical resources on a secure private cloud and deploying less-sensitive resources on a public cloud. A hybrid cloud might also be used for seasonal burst traffic or disaster recovery.
A hybrid model is established when network links are configured between the clouds, essentially extending the logical internal networks of the clouds.
Daniel Ives has worked in the IT industry since leaving university in 1992, holding roles including support, analysis, development, project management and training. He has worked predominantly with Windows and uses a variety of programming languages and databases.
Daniel has been training full-time since 2001 and with QA since the beginning of 2006.
Daniel has been involved in the creation of numerous courses, the tailoring of courses and the design and delivery of graduate training programs for companies in the logistics, finance and public sectors.
Previous major projects with QA include delivering Visual Studio pre-release events around Europe on behalf of Microsoft, and providing input and advice to Microsoft during the beta stage of development of several of their .NET courses.
In industry, Daniel worked in the manufacturing and logistics sectors. He built a computer simulation of a £20 million manufacturing plant during its construction to assist in equipment purchasing decisions, and chaired a performance measurement and enhancement project that resulted in a 2% improvement in delivery performance (on time and in full).