Module 6 - Software Deployment and Lifecycle

Testing, Audit and Review

Developed with QA

Overview

Difficulty: Beginner
Duration: 28m

Description

This course introduces the development lifecycle and describes how robust development practices, including testing and change control, can considerably reduce security related vulnerabilities in a production system. It then builds on this by looking further into different test strategies and approaches, including the role of auditing in reducing risk exposure.

 

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • The software development lifecycle
  • The role of testing and change control in reducing security related vulnerabilities in a production system
  • How the risks introduced by third-party and outsourced developments can be mitigated
  • Test strategies and test approaches, including vulnerability testing, penetration testing and code analysis
  • The importance of reporting, and how reports should be structured and presented to stakeholders
  • The principles of auditing and the role played by digital forensics

 

Intended Audience

This course is ideal for members of information security management teams, IT managers, security and systems managers, information asset owners and employees with legal compliance responsibilities. It acts as a foundation for more advanced managerial or technical qualifications.

 

Prerequisites

There are no specific prerequisites for studying this course; however, a basic knowledge of IT, an understanding of the general principles of information technology security, and awareness of the issues involved with security control activity would be advantageous.

 

Feedback

We welcome all feedback and suggestions - please contact us at support@cloudacademy.com if you are unsure about where to start or would like help getting started.

Transcript

Welcome to this video on testing, audit and review.

 

Rigorous testing before a system is released provides management with confidence that it won’t leave the organization exposed in any way.

 

This video covers different test strategies and test approaches, including vulnerability testing, penetration testing and code analysis. It also reflects on the importance of reporting and how reports should be structured and presented to stakeholders.

 

Finally, it looks at the principles of auditing, including the role played by digital forensics.

 

Consider a scenario where a software development team has created a new application for the finance team. They’ve been coding for four months and now have what they believe is a completed application that can be deployed to the whole organization.

 

However, as the Information Security Manager, you’re concerned that there may be problems with this code. You need assurance that the new software is secure and won’t cause an unacceptable security risk. Before any new software package is deployed, it needs to be thoroughly tested. This provides management with confidence that the new system won’t introduce vulnerabilities into the infrastructure or leave the organization exposed in any way.

 

There are various methods that can be used to test a system. The most common approach is a hybrid regime, which comprises a range of methodologies. It provides a holistic approach to evaluating the security posture of the application and ensures the most common coding errors and mistakes are eradicated before it’s deployed to the live environment.

 

Business testing by end users with realistic test cases provides an authentic method of discovering issues. Testers can also conduct ad-hoc testing using their business knowledge and initiative to attempt to break the system. Alongside user testing, vulnerability analysis, code reviews and targeted penetration testing can also be undertaken.

 

The test approach should be updated as the application evolves, and repeat testing based on knowledge of new vulnerabilities and methods should be performed.

The frequency of testing is determined by:

  • The frequency of updates to the code base;
  • The extent of the updates; and
  • Changes to supporting environments.

 

For example, even if the code base hasn’t changed for 12 months, the underlying Java Runtime Environment might have changed, forcing the IT support team to update the Windows operating system deployed on their PCs. As a result, the supporting infrastructure may have introduced an unknown vulnerability that repeat testing could find.

 

Testing frequency should be defined in the security policy and detailed in the operational security plan. This will cover all aspects of the system, including bespoke solutions, infrastructure components, configuration and access control systems. It can even include testing physical security and administrative processes.

 

Let’s move on now to look in more detail at the different types of testing that should be carried out.

Vulnerability testing is the process of identifying vulnerabilities within a software system, with actions to mitigate their potential exploitation identified and documented.

Automated tools, such as Nessus, can be used to test for vulnerabilities in applications and infrastructure components. These look for typical issues with components, such as the potential for buffer overruns, SQL injection attacks and cross-site scripting attacks.
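To make the SQL injection class of flaw concrete, here is a minimal, self-contained Python sketch (using the standard library’s sqlite3 and an in-memory database; the table and payload are illustrative, not from the course) contrasting a vulnerable string-concatenated query with a parameterised one:

```python
import sqlite3

def find_user_unsafe(cur, name):
    # Vulnerable: user input is concatenated straight into the SQL string.
    return cur.execute(
        "SELECT id FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(cur, name):
    # Safe: a parameterised query treats the input as data, never as SQL.
    return cur.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
cur = conn.cursor()

payload = "x' OR '1'='1"  # classic injection payload
print(len(find_user_unsafe(cur, payload)))  # returns every row: 2
print(len(find_user_safe(cur, payload)))    # matches nothing: 0
```

A vulnerability scanner flags patterns like the first function; the fix is the second.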

 

A penetration test is a simulated attack in which the penetration tester uses the tools a hacker would use to try to break into a system. Penetration testing should be conducted whenever new software is deployed or a new infrastructure system is created. Unlike vulnerability analysis, which identifies the problems in a system, a penetration test determines whether any vulnerabilities can actually be exploited.

 

The other benefit of penetration testing over vulnerability testing is that it can be conducted from a black box or white box perspective. The difference is based on the amount of prior knowledge the testing team is given by the system owners:

  • With black box testing, the penetration testing team is given no, or very little, information about the system they’re attempting to hack. This tests whether a criminal would have the necessary tools and skills to break into the system.
  • With white box testing, the test team gets full access to accounts, documentation and other resources. This helps them consider what might be possible should the system be attacked by someone with inside knowledge.

 

Both approaches can provide important risk assessment metrics for evaluation. However, most importantly, penetration testing can help determine which countermeasures would be best to implement.

 

The final testing approach we’ll look at is code analysis. This is a type of assurance review that checks the software application for security-related bugs. It requires expertise in the programming language used to code the application, and an understanding of the weaknesses and inherent vulnerabilities that could be introduced by the programmer.

 

Various software tools are available to assess the security of code developed in many different languages, including C++, C# and Java. These tools are often used as a first pass by expert code analysis teams, who will use the output to start further investigations.
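A static analysis tool of the kind described boils down to walking a program’s parse tree and flagging risky constructs. Below is a hypothetical mini-checker (not one of the commercial tools mentioned) using Python’s standard ast module to flag calls to eval and exec, a common first-pass finding:

```python
import ast

# Calls considered dangerous by this illustrative checker.
DANGEROUS = {"eval", "exec"}

def find_dangerous_calls(source: str):
    """Return (line_number, function_name) for each dangerous call."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
print(find_dangerous_calls(sample))  # [(1, 'eval')]
```

Real analysers add data-flow tracking and language-specific rules, but the walk-and-match structure is the same.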

 

Penetration testing companies can also offer code analysis as part of their service. Often accreditation authorities, especially for government projects, require that code analysis is undertaken as part of a wider security review.

 

The test and review process must be augmented with accurate and comprehensive reporting. Reports should be honest and concise and contain all the findings from the testing assessment. Attempts to hide or downplay the significance of vulnerabilities can lead to unnecessary exposure to attack.

 

The report must be balanced in terms of technical content and non-technical summaries. A traffic light system is often used to describe the impact and the likelihood of a vulnerability being exploited. This allows the system owner to make decisions on the priority of potential security fixes.
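One way to implement the traffic-light idea is to combine impact and likelihood scores into a single rating. The sketch below assumes each is scored 1 (low) to 3 (high); the thresholds are illustrative, not a standard:

```python
# Illustrative traffic-light rating: impact and likelihood each scored 1-3.
def traffic_light(impact: int, likelihood: int) -> str:
    score = impact * likelihood
    if score >= 6:
        return "red"    # fix immediately
    if score >= 3:
        return "amber"  # schedule a fix
    return "green"      # accept or monitor

print(traffic_light(3, 3))  # red
print(traffic_light(1, 3))  # amber
print(traffic_light(1, 2))  # green
```

The point of the scheme is that a non-technical system owner can prioritise fixes without reading the technical detail behind each finding.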

 

The final point to note is that the distribution of a test report should be protected, because it contains details of vulnerabilities in the system and the methods used to exploit those vulnerabilities. So, it’s the perfect blueprint for a hacker to plan an attack on the system!

 

Many organizations mark their test reports with the highest classification of data and treat them as strictly ‘need to know’. 


The verification stage is a critical element of the ‘Plan-Do-Check-Act’ cycle. This stage is used to verify that the original design specification has been met and that the processes for design and development have been properly followed by the development team.

 

In relation to software assurance and systems security testing, checks at this stage confirm that the security requirements imposed on the system have been met. These requirements might come from a specific risk assessment, or from secure coding guidelines identifying how to avoid security pitfalls in the code. 

 

If a process created for the developers to follow is almost always being circumvented, it should be re-evaluated to see if it’s flawed. The term ‘verification linkage’ refers to the need for rigorous testing of both the system itself and any proposed administration processes that support the application. This confirms the requirements built into the system can be achieved in live operation.

 

User instructions and system operating procedures are types of administrative processes that could support a new or updated system. Where a system has an impact on administration processes, the testing must be end-to-end and involve real users.

 

As IT systems run and perform their functions, each individual component of the infrastructure can generate information which can be recorded. For example, as the Windows operating system boots up, it records the start of processes and the logging in of users through the Windows Security Event Log.

 

The subsequent analysis of this information is known as system auditing, which provides detective security controls.

 

Accounting records are created by system components, typically containing:

  • A source and destination IP address;
  • A username;
  • The time the event occurred; and
  • The object the action was taken on.

 

Most event logs contain far more information than this, but without these core details - especially the subject, object and time - they’re of little use. Security Information and Event Management, or SIEM, systems are widely used to manage the vast quantity of event information that organizations need to capture and process.

 

Often these products will normalise the data into a standard format to make it easier to data mine and report on. The Common Event Format is a standard which provides a schema for normalising event data. 
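To illustrate what normalisation looks like in practice, here is a minimal sketch of parsing a Common Event Format record. A CEF line has a pipe-delimited header of seven fields followed by a key=value extension block; the parser below is deliberately simplified (it ignores escaped pipes and extension values containing spaces), and the sample event is invented:

```python
def parse_cef(line: str) -> dict:
    """Minimal CEF parser: header fields plus key=value extensions."""
    assert line.startswith("CEF:")
    parts = line[4:].split("|", 7)  # 7 header fields, then extensions
    fields = ["version", "device_vendor", "device_product",
              "device_version", "signature_id", "name", "severity"]
    event = dict(zip(fields, parts[:7]))
    # Extension block: space-separated key=value pairs (simplified).
    event["extensions"] = dict(
        kv.split("=", 1) for kv in parts[7].split() if "=" in kv
    )
    return event

raw = ("CEF:0|Acme|Firewall|1.0|100|Blocked connection|5|"
       "src=10.0.0.5 dst=10.0.0.9")
event = parse_cef(raw)
print(event["severity"])           # 5
print(event["extensions"]["src"])  # 10.0.0.5
```

Once every source is reduced to the same dictionary shape, searches and reports can span firewall, anti-virus and operating system logs alike.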

 

Collecting audit events can be an onerous task in large, complex systems and often requires specialised software to collect and analyse the data. Many hundreds of thousands of events can be generated over the course of a day and this can make locating attacks difficult and time-consuming.

 

The basic principles when creating an effective auditing solution are to:

  • Collect relevant security data from all relevant endpoints on the network;
  • Normalise the data so that searches can include event logs from different sources;
  • Ensure all security enforcing capabilities, such as anti-virus software, firewalls, content checkers and authentication and authorisation solutions, feed into the event management system;
  • Raise alerts when events which indicate a security incident are received;
  • Use specialised staff to analyse the data and conduct investigations;
  • Ensure that there are procedures in place to manage audit information as digital evidence; and
  • Ensure a system-wide accurate time source.
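The alert-raising principle can be sketched as a simple threshold rule over normalised events. The rule below (five or more failed logins from one source IP) and the event shape are illustrative assumptions; real SIEM rules add time windows and tuning to control false positives:

```python
from collections import Counter

FAILED_LOGIN = "failed_login"
THRESHOLD = 5  # illustrative: alert at 5+ failures per source IP

def alerts_for(events):
    """Return source IPs whose failed-login count meets the threshold."""
    failures = Counter(
        e["src"] for e in events if e["type"] == FAILED_LOGIN
    )
    return [ip for ip, n in failures.items() if n >= THRESHOLD]

events = (
    [{"type": FAILED_LOGIN, "src": "10.0.0.5"}] * 6
    + [{"type": FAILED_LOGIN, "src": "10.0.0.9"}] * 2
)
print(alerts_for(events))  # ['10.0.0.5']
```

Note how the rule only works because the events were first normalised into a common shape with a reliable `src` field - and only tells the right story because every source shares an accurate time source.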

 

An important thing to remember when creating an audit infrastructure is that it will become the basis of all future investigations. Keeping data for up to six months, or longer for some government requirements, is common because a security breach may not be discovered until long after it occurred. For example, access to data from before an employee was dismissed can be beneficial for internal and police investigations.

 

Analysis of audit logs is a highly skilled task often carried out by expert auditors. The auditors require a high degree of technical knowledge and keen investigative and forensic skills. Auditors need training in digital forensics practices, so they understand the evidential aspects of managing logs. They also need to know how to investigate an incident and present the evidence of a crime in a forensically accurate way, possibly in a court of law.

 

Integrity of the data is paramount, so the investigation should follow the stated legal processes. This is known as maintaining the chain of custody; poorly handled information or weak reporting can discredit an expert witness.
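A common technical control supporting the chain of custody is to record a cryptographic digest of each piece of evidence at collection time, then re-compute it at every hand-off. This minimal sketch uses SHA-256 from Python’s standard hashlib; the log line is invented:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest used as an integrity fingerprint for evidence."""
    return hashlib.sha256(data).hexdigest()

log = b"2024-01-01T09:00:00Z user=alice action=login result=ok\n"
digest_at_collection = sha256_of(log)

# ... evidence is stored, transferred and analysed ...
assert sha256_of(log) == digest_at_collection  # integrity intact

tampered = log.replace(b"alice", b"mallory")
print(sha256_of(tampered) == digest_at_collection)  # False: tampering detected
```

Any alteration, however small, changes the digest, so a matching hash at each step lets an expert witness demonstrate the log was not modified while in custody.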

 

As part of the analysis phase, auditors will look for patterns of behaviour that might indicate something suspicious. Events can be correlated across various system components to look for attacks that move from one system to another.

 

The organization’s SIEM system must be able to support the investigative process through triggers and thresholds which allow the auditors to look for anomalous behaviour.

 

That’s the end of this video on testing, audit and review.

 

About the Author

Fred is a trainer and consultant specializing in cyber security.  His educational background is in physics, having a BSc and a couple of master’s degrees, one in astrophysics and the other in nuclear and particle physics.  However, most of his professional life has been spent in IT, covering a broad range of activities including system management, programming (originally in C but more recently Python, Ruby et al), database design and management as well as networking.  From networking it was a natural progression to IT security and cyber security more generally.  As well as having many professional credentials reflecting the breadth of his experience (including CASP, CISM and CCISO), he is a Certified Ethical Hacker and a GCHQ Certified Trainer for a number of cybersecurity courses, including CISMP, CISSP and GDPR Practitioner.