Cloud Computing Architecture: An Overview

Cloud Computing Architecture: Front End and Back End

Cloud computing resources are delivered by server-based applications through digital networks or through the public Internet itself. The applications are made available for user access via mobile and desktop devices. This much is pretty obvious.

According to the National Institute of Standards and Technology (NIST), these are the five specific qualities that define cloud computing:

  • on-demand self-service
  • broad network access
  • resource pooling
  • rapid elasticity
  • measured service

That’s cloud computing. Here, however, we’re going to discuss the architecture that drives it all: the essential, loosely coupled components and sub-components that make the cloud work. Broadly speaking, we may divide cloud computing architecture into two sections:

  • Front End
  • Back End

These ends connect to each other via a network, generally the Internet.
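To make the split concrete, here is a minimal sketch in Python, assuming a plain HTTP service stands in for the back end and a simple HTTP client stands in for the front end. All names and the response text are illustrative, not part of any real cloud provider’s API.

```python
# Minimal front end / back end sketch: an HTTP service is the "cloud"
# side, and any client reaching it over the network is the front end.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class BackEnd(BaseHTTPRequestHandler):
    """The 'cloud' side: receives requests and serves a resource."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"service response")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), BackEnd)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "front end": a client device fetching the service over the network.
body = urlopen(f"http://127.0.0.1:{server.server_port}/").read().decode()
print(body)  # -> service response
server.shutdown()
```

In a real deployment the two ends would sit on different machines, with the Internet in between; here they share one process purely to keep the sketch self-contained.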

Front End

This is the visible interface that computer users or clients encounter through their web-enabled client devices. Note, though, that not all cloud computing systems use the same user interface.

Back End

On the other hand, the back end is the “cloud” part of a cloud computing architecture, comprising all the resources required to deliver cloud-computing services. A system’s back end can be made up of a number of bare metal servers, data storage facilities, virtual machines, a security mechanism, and services, all built in conformance with a deployment model, and all together responsible for providing a service.

Points to consider

  1. It is the primary authority and responsibility of the back end to provide a built-in security mechanism, traffic control, and protocols.
  2. The operating system on a bare metal server – known popularly as a hypervisor – uses well-defined protocols to allow multiple guest virtual machines to run concurrently. The hypervisor mediates communication between its guests and the connected world beyond.

A central server is responsible for managing and running the system, systematically reviewing the traffic and client requests to make certain that everything is running smoothly. Hypervisors come in various flavors:

  1. Native hypervisors: these run directly on a bare metal server without an intermediary operating system, and thus carry full responsibility for performance and reliability.
  2. Embedded hypervisors: these are integrated into a processor on a separate chip, improving server performance.
  3. Hosted hypervisors: these run as a distinct software layer above both the hardware and the OS. This sort of hypervisor can be used in both private and public clouds.
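Whatever the flavor, a hypervisor’s core job is admitting guest VMs against finite host capacity. The toy model below sketches that idea, assuming simple CPU-core accounting; the class and method names are invented for illustration, and real hypervisors (KVM, Xen, Hyper-V) expose far richer interfaces.

```python
# Toy model of a hypervisor multiplexing guest VMs on one physical host.
class Hypervisor:
    def __init__(self, total_cores):
        self.total_cores = total_cores
        self.guests = {}  # guest name -> cores assigned

    def used_cores(self):
        return sum(self.guests.values())

    def start_guest(self, name, cores):
        """Admit a guest VM only if unused capacity remains."""
        if self.used_cores() + cores > self.total_cores:
            raise RuntimeError(f"insufficient capacity for {name}")
        self.guests[name] = cores

hv = Hypervisor(total_cores=16)
hv.start_guest("vm-web", 4)
hv.start_guest("vm-db", 8)
print(hv.used_cores())                    # -> 12
print(hv.total_cores - hv.used_cores())   # -> 4 (otherwise idle capacity)
```

The four free cores are exactly the “extra, otherwise underutilized, capacity” that virtualization lets additional guests draw on.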

The server virtualization that hypervisors provide bypasses some of the physical limitations that stand-alone servers can face. Virtualization lets a single physical server present itself as multiple virtual servers, so workloads can draw on extra, otherwise underutilized, capacity.

As the number of services hosted by a cloud computing provider grows, the higher traffic and compute loads that grow with it must be anticipated and accommodated. Rapidly growing demands for storage space can’t be ignored, either.

To properly maintain and protect a client’s data, a cloud computing architecture requires greater redundancy than might be needed for locally hosted systems. The copies generated by this necessary redundancy allow the central server to fall back on backup images to quickly retrieve and restore needed data.
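The redundancy-and-restore idea can be sketched as follows, assuming a trivial in-memory store replicated to several backup copies. The class, its replica count, and the sample key are all illustrative assumptions, not any particular provider’s mechanism.

```python
# Sketch of redundant storage: every write is copied to backup images,
# so a lost primary can be rebuilt from any intact copy.
import copy

class ReplicatedStore:
    def __init__(self, replicas=3):
        self.primary = {}
        self.backups = [dict() for _ in range(replicas)]

    def write(self, key, value):
        """Replicate each write to all backup images."""
        self.primary[key] = value
        for backup in self.backups:
            backup[key] = value

    def restore(self):
        """If the primary is lost, rebuild it from a backup image."""
        self.primary = copy.deepcopy(self.backups[0])

store = ReplicatedStore()
store.write("invoice-42", "paid")
store.primary.clear()   # simulate loss of the primary copy
store.restore()         # the central server falls back to a backup
print(store.primary["invoice-42"])  # -> paid
```

A real system would replicate across machines and data centers rather than in-process dictionaries, but the recovery path is the same: copies made ahead of time make fast restoration possible.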

Cloud Computing Architecture: Conclusions

In a cloud computing architecture, all applications are controlled, managed, and served by a cloud server. Its data is replicated and preserved remotely as part of the cloud configuration. A well-integrated cloud system can create nearly limitless efficiencies and possibilities.

Cloud Academy