
Application Mindset and Practices


Course Description 

This module looks at the relationship between the cloud, DevOps and Agile ways of working, reflecting on the importance of the right application mindset and practices, before identifying the key considerations for migrating services to the cloud. 


Learning Objectives 

The objectives of this course are to provide you with an understanding of: 

  • What DevOps is and the importance of the right development mindset. 
  • The role DevOps and cloud computing play in business transformation and their link to agile ways of working. 
  • The primary technical security implications and controls for cloud computing. 
  • The key stages in migrating services to the cloud. 
  • The primary considerations for keeping cloud services up to date. 


Intended Audience 

The course is aimed at anybody who needs a basic understanding of what the cloud is, how it works and the important considerations for using it. 



Although not essential, before you complete this course it would be helpful if you have a basic understanding of server hardware components and what a data center is.  



We welcome all feedback and suggestions - please contact us at qa.elearningadmin@qa.com to let us know what you think. 


A mindset shift 

With a more traditional data center approach, networks are built, hardware is purchased and applications are deployed to that infrastructure. So, we think of applications in terms of servers and hardware.  

In the cloud, a network can be built using a script, which means it’s documented and can be reproduced. The resources deployed into that network can then be created using scripts. So, in the cloud, we need to stop thinking of resources as long-running hardware and see them as software resources.  
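The idea above can be sketched in a few lines of Python. This is a simulation only, not a real cloud SDK call; real infrastructure-as-code would use a tool such as Terraform or CloudFormation, and every name and value here is illustrative.

```python
# Illustrative sketch: because a network is defined as data plus a script,
# it is self-documenting and exactly reproducible. All names are hypothetical.

NETWORK_SPEC = {
    "name": "app-network",
    "cidr": "10.0.0.0/16",
    "subnets": ["10.0.1.0/24", "10.0.2.0/24"],
}

def build_network(spec):
    """'Create' a network from a spec. The spec is the documentation,
    and re-running the script always produces the same result."""
    return {
        "name": spec["name"],
        "cidr": spec["cidr"],
        "subnets": list(spec["subnets"]),
        "status": "available",
    }

first = build_network(NETWORK_SPEC)
second = build_network(NETWORK_SPEC)
assert first == second  # two runs, identical networks
```

The key design point is that the network exists as a reproducible definition, not as hand-configured hardware.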

In a traditional world, if a resource begins to fail, it needs to be taken out of service, analysed, repaired and then put back into service. In the cloud, the resource is simply terminated and the script that created it is re-run – assuming the reason for the failure wasn't the script itself! 
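The terminate-and-recreate workflow can be expressed as a tiny sketch. This is purely illustrative, assuming a hypothetical creation script; no real provider API is being called.

```python
# Illustrative sketch: a failing cloud resource is not repaired in place --
# it is terminated and recreated from the script that built it.
# All names are hypothetical.

def create_server():
    """The creation script: the single source of truth for the server."""
    return {"id": "web-1", "healthy": True}

def replace_if_failed(server):
    if not server["healthy"]:
        # Terminate the failed resource and simply re-run the script.
        server = create_server()
    return server

broken = {"id": "web-1", "healthy": False}
fixed = replace_if_failed(broken)
assert fixed["healthy"]
```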

For many individuals working in IT, this is a mindset shift.  


Planning for failure 

Service resilience is an important consideration.  

Cloud providers have hundreds of thousands of customers running multiple workloads on millions of pieces of equipment. At this scale, failures are guaranteed to happen. They might not happen to your organisation, but planning for failure is a critical part of cloud application design. 

Planning for failure means avoiding single points of failure and having at least two of everything. So, instead of having one powerful web server, have at least two less powerful ones. It'll cost the same per hour, but if one fails, your application will at least be partially available.  

With some cloud services, the provider takes care of this for you. For example, cloud object storage services will typically put multiple copies of your data into multiple data centers simultaneously. With others, you'll need to manage this yourself, using some of the provider’s other services. 

The cloud provider will also offer load balancing services that will efficiently distribute incoming network traffic across a group of servers.  

The load balancer acts as a ‘traffic cop’ sitting in front of the servers: it routes user requests across all servers capable of fulfilling them in a way that maximises speed and capacity utilisation, and ensures that no single server is overworked, which would degrade performance and slow down the service.  

If a single server goes down, the load balancer redirects traffic to the remaining online servers. When a new server is added to the server group, the load balancer automatically starts to send requests to it. 
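The behaviour just described can be sketched as a minimal round-robin load balancer. In practice this is a managed cloud service; the class and server names below are made up for illustration.

```python
# Illustrative sketch of round-robin load balancing with failover.
# All names are hypothetical; real load balancers are managed services.

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)   # all registered servers
        self.healthy = set(servers)    # servers passing health checks
        self._next = 0                 # round-robin position

    def mark_down(self, server):
        self.healthy.discard(server)   # stop routing to a failed server

    def add_server(self, server):
        self.servers.append(server)    # a new server joins the group...
        self.healthy.add(server)       # ...and starts receiving traffic

    def route(self):
        # Round-robin: try each server in turn, skipping unhealthy ones.
        for _ in range(len(self.servers)):
            server = self.servers[self._next % len(self.servers)]
            self._next += 1
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers")

lb = LoadBalancer(["web-1", "web-2"])
assert lb.route() == "web-1"   # requests alternate across the group
assert lb.route() == "web-2"

lb.mark_down("web-1")          # web-1 fails its health check
assert lb.route() == "web-2"   # traffic redirected to the survivor

lb.add_server("web-3")         # new server automatically joins the rotation
```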

Failover is facilitated by what are known as ‘loosely coupled systems’, in which no component needs to know too much about any other. If an application relies on the precise network address of another component, a failed server is much harder to replace without breaking the service. It’s better to use named services rather than hardcoded addresses, and to have servers and services register themselves with the naming service when they become available.  
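A minimal sketch of this registration pattern follows. The class, service names and addresses are all hypothetical; cloud providers offer managed equivalents, such as DNS-based service discovery.

```python
# Illustrative sketch of a service registry for loose coupling.
# Callers look up services by *name*, never by hardcoded address.

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # service name -> list of live addresses

    def register(self, name, address):
        # Servers register themselves when they become available.
        self._services.setdefault(name, []).append(address)

    def deregister(self, name, address):
        # A failed or retiring server is removed from the pool.
        self._services.get(name, []).remove(address)

    def lookup(self, name):
        addresses = self._services.get(name, [])
        if not addresses:
            raise LookupError(f"no instances of {name!r} available")
        return addresses[0]

registry = ServiceRegistry()
registry.register("orders-db", "10.0.1.12:5432")
registry.register("orders-db", "10.0.2.7:5432")

# One instance goes down; callers keep working via the name alone.
registry.deregister("orders-db", "10.0.1.12:5432")
assert registry.lookup("orders-db") == "10.0.2.7:5432"
```

Because consumers depend only on the name "orders-db", a replacement server can register itself and take over without any caller being reconfigured.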

The cloud provider offers services to help manage this approach. 


Version upgrades 

Cloud service providers typically offer only the more recent versions of software and operating systems. This sounds great, but organisations often rely on earlier versions, and moving from a very old database engine to a current one might require significant work. 

Before moving to the cloud, an organisation should check whether their chosen vendor actually supports the version of an operating system or application that it needs to run in the cloud. This might be as simple as upgrading to the latest version, but the devil is always in the detail.  

About the Author
Daniel Ives
Head of Learning - Cloud and Principal Technologist – Amazon Web Services
Learning Paths

Daniel Ives has worked in the IT industry since leaving university in 1992, holding roles including support, analysis, development, project management and training.  He has worked predominantly with Windows and uses a variety of programming languages and databases.

Daniel has been training full-time since 2001 and with QA since the beginning of 2006.

Daniel has been involved in the creation of numerous courses, the tailoring of courses and the design and delivery of graduate training programs for companies in the logistics, finance and public sectors.

Previous major projects with QA include Visual Studio pre-release events around Europe on behalf of Microsoft, providing input and advice to Microsoft at the beta stage of development of several of their .NET courses.

In industry, Daniel was involved in the manufacturing and logistics areas. He built a computer simulation of a £20 million manufacturing plant during construction to assist in equipment purchasing decisions, and chaired a performance measurement and enhancement project which resulted in a 2% improvement in delivery performance (on time and in full).