Software Development and Support

Contents

Module 6 - Software Deployment and Lifecycle
Software Development and Support
Difficulty
Beginner
Duration
28m
Students
481
Ratings
5/5
Description

This Course introduces the development lifecycle and describes how robust development practices, including testing and change control, can considerably reduce security-related vulnerabilities in a production system. It then builds on this by looking further into different test strategies and approaches, including the role of auditing in reducing risk exposure.

Learning objectives

The objectives of this Course are to provide you with an understanding of:

  • The software development lifecycle
  • The role of testing and change control in reducing security-related vulnerabilities in a production system
  • How the risks introduced by third-party and outsourced developments can be mitigated
  • Test strategies and test approaches, including vulnerability testing, penetration testing, and code analysis
  • The importance of reporting, and how reports should be structured and presented to stakeholders
  • The principles of auditing and the role played by digital forensics

Intended audience

This Course is ideal for members of information security management teams, IT managers, security and systems managers, information asset owners and employees with legal compliance responsibilities. It acts as a foundation for more advanced managerial or technical qualifications.

Prerequisites

There are no specific prerequisites to study this Course; however, a basic knowledge of IT, an understanding of the general principles of information technology security, and an awareness of the issues involved with security control activity would be advantageous.

Feedback

We welcome all feedback and suggestions - please contact us at support@cloudacademy.com if you are unsure about where to start or if you would like help getting started.

Transcript

Welcome to this video on software development and support.

There are many risks associated with software development, which can be managed by following a rigorous development approach. This video will introduce the development lifecycle and describe how robust development practices, including testing and change control, can considerably reduce security-related vulnerabilities in a production system.

It will also look at how the risks introduced by third-party and outsourced developments can be mitigated.

Let’s start by looking at system design. The software development process should start by consulting with the user community to ensure the end product will meet the business need it’s designed for.

The design must also be tested to confirm it delivers against the stated requirements, which will typically include user requirements, system requirements and security requirements.

The security requirements originate from the development technologies being used and the environment the system will run in. They are a subset of the overall statement of requirements from which the product or system is created.

Like all requirements, the security-related ones are agreed with the stakeholders at the beginning of the project. This helps to reduce re-work by avoiding issues later in the process when it’s harder to redesign or re-code.

Changes to user requirements can lead to security concerns and need to be agreed through the change control process. Stakeholders must understand the implications of any changes, including those related to security, through the stakeholder management process.

Throughout the development phase, the Information Security Manager will help align the information assurance requirements with the product and ensure that the testing regime includes comprehensive security tests. They’re also responsible for:

  • Ensuring the developers and project managers don't trade away security requirements because of cost or complexity without making sure any related risks are fully understood and agreed by the stakeholders.
  • Monitoring the end-to-end development process and influencing developers to adopt a defensive approach to coding, which ensures appropriate methods for backup and restoration of data are considered.
  • Meeting the legal, auditing and accounting requirements, which all relate to the product being secure and compliant.

The Information Security Manager should be in close contact with the project and technical development teams to ensure that security policy is understood in every part of the delivery structure. 

Prior to deployment, new systems must go through a series of acceptance tests to prove they’re fit for purpose. Typically, this involves functionality testing, user acceptance testing and accreditation testing.

Irrespective of whether the new system is a commercial off-the-shelf solution – one that was bought from an external vendor, like Microsoft or Adobe – or developed in-house, the acceptance process is still required to assure the product is fit for purpose.

The underlying infrastructure and functionality should be validated against the CIA Triad criteria of confidentiality, integrity and availability. This is a fundamental risk management process and should be carried out by the information security team.

The amount of testing time and effort should be defined by the initial product risk assessment. For example, if the product has an initial software assurance rating of EAL4+ (Common Criteria Evaluation Assurance Level 4, augmented), it can be trusted more than an open-source application downloaded from a website.

Once the development is completed, the development release process controls how and when the application can move from the development environment to the test environment. Various documents accompany each released package, including:

  • Installation instructions
  • The application resource specification, and
  • The test plans

After the paperwork and media are validated by the testing team, the tester will install the application and the testing process can begin. Testing must ensure all the agreed requirements have been met. Defects are recorded and returned to the developer for fixing and re-testing. Once the test process is signed off, security testing can commence.

The test report that accompanies the software details all the tests that have been carried out and states whether the system passed or failed.

Once the application is deemed acceptable, it can move on to the user acceptance stage, where operational users run typical workflow tests to ensure they can perform their operational tasks. The system is then accepted, and the software is ready to be deployed. Once authorisation for rollout is achieved, the package is given to the administrators to deploy according to the approved process.

All new systems contain defects, and these are generally identified during the testing and validation stages of development.

By ensuring the development, test and live environments are physically separated, the system can be tested without exposing the live operational environment to malware or bugs during the development process. If the development environment is built on virtualised servers and workstations, it’s easy to roll back or rebuild it if things go wrong.

The test environment is where test users, penetration testers and system administrators validate that the development requirements have been met and the system is secure. As far as possible, it should be a replica of the production system, maintained as closely as possible to the production build level, software versions and data. If budget permits, it should also replicate the production system's network topology.

Some organizations also run an additional staging environment, which is completely representative of the user environment. This enables 100% accurate like-for-like testing before the system is deployed.

Clearly, more development and testing environments mean:

  • More management time
  • Increased IT and maintenance costs, and
  • More patching and update work

So, the decision on the types of testing environments must involve stakeholders and appropriate business managers.

A copy of the new code and documentation should be lodged in a secure place to support business continuity and disaster planning.

The most dangerous threat from commercial off-the-shelf, or CoTS, applications is from rogue code performing a malicious attack on the organization. However, if the vendor is a trusted and reputable source, this threat decreases. For example, if a new version of Microsoft Office contained rogue code, it would almost certainly be discovered by the security testing community that routinely looks for bugs. While the damage to the individual might be minimal, the damage to Microsoft's business would be great. Moreover, Microsoft's internal software quality and security assurance processes are rated very highly, so its development rigour can be trusted.

Regardless, CoTS applications can contain bugs. Understanding the vendor’s bug reporting system and installing security patches should be mandatory.

When purchasing CoTS products, checks should be made to ensure the source is reputable. Pirated software is unlicensed and can contain deliberately planted malicious code. Failure to use properly licensed software can leave an organization open to prosecution under local copyright and intellectual property protection laws.

Some businesses and government bodies, such as military and intelligence organizations, use Accreditors to ensure that their systems meet the requirements of their security policy. Through a pre-defined accreditation process, the Accreditor decides whether the system is fit for purpose. Accreditation is typically granted when systems have passed all aspects of penetration and vulnerability testing and have demonstrated that they meet all aspects of the security requirements.

The Accreditor judges whether any non-compliances are acceptable, and accreditation is then granted or denied.

Accreditation shouldn’t be performed by the senior risk owner, although they can override accreditation decisions. Accreditation is a dynamic process conducted regularly through the system lifecycle to monitor modifications and upgrades. New threats and new hacking approaches continually change the risk profile and affect the accreditation status of the system.

After a system is deployed, the change control process manages ongoing developments, changes and enhancements during its lifecycle. The process starts with an outline of the change request and the associated rationale being issued to the stakeholders. The change request, including the benefits, risks and costs, is considered by the change approval board, which generally includes information assurance, security, operational and technical representatives.

The role of the information assurance representative is to ensure:

  • Any new risks and vulnerabilities introduced by the change have appropriate mitigation actions identified
  • Related work is costed, and
  • Penetration and vulnerability testing plans are created

If the board approves a change, they can specify conditions to address risks.

Once the changes are completed, the system is subject to functionality and regression testing, which helps to ensure the confidentiality, integrity and availability of the live system are maintained.
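
To make regression testing concrete, here is a minimal sketch using Python's built-in unittest module and a hypothetical apply_discount business rule; the function and the figures are purely illustrative, but the principle holds: the tests record agreed behaviour, so a change that breaks it is caught before it reaches the live system.

import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: discounts are capped at 50%."""
    return round(price * (1 - min(percent, 50) / 100), 2)

class RegressionTests(unittest.TestCase):
    # These assertions record the behaviour the business has already agreed.
    # If a later change alters the results, the regression suite fails and
    # the change is returned to the developer before deployment.
    def test_standard_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_discount_is_capped(self):
        self.assertEqual(apply_discount(100.0, 80), 50.0)

if __name__ == "__main__":
    unittest.main()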

A copy of the new code and documentation should be lodged in a secure place to support business continuity and disaster planning.

Outsourcing development is a common approach which offers potential cost advantages and economies of scale. Outsourcing offshore is considered higher risk because it’s more difficult to manage the coding practices of the third-party developer who might also operate to different standards. Malicious code can also be inadvertently or purposely introduced.

Loss of intellectual property is also a major concern in countries where the organization's national law has no jurisdiction; recovery of intellectual property and enforcement of 'cease and desist' orders or lawsuits can be impossible. Contracts and specifications help reduce the risk and should include clear and unambiguous requirements.

A prototype stage should also be built into the development cycle to validate that the specification has been followed.

The software development methodology should include processes to inspect the software application for malware. Direct visual inspection of the underlying programming code is possible. Code analysis can also be carried out using automated scanning tools.
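
As a simplified illustration of what automated scanning tools do, the Python sketch below walks a source tree and flags two patterns that code-analysis tools commonly report. The 'src' directory and the small pattern list are illustrative assumptions rather than any particular product's rule set; real analysers apply far more sophisticated checks.

import re
from pathlib import Path

# Two patterns commonly flagged by automated code analysis:
# dynamic code execution and hard-coded credentials.
SUSPECT_PATTERNS = {
    "dynamic code execution": re.compile(r"\beval\s*\(|\bexec\s*\("),
    "hard-coded credential": re.compile(r"password\s*=\s*['\"]", re.IGNORECASE),
}

def scan_source(root: str) -> list[str]:
    """Walk a Python source tree and report lines matching suspect patterns."""
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            for label, pattern in SUSPECT_PATTERNS.items():
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: possible {label}")
    return findings

if __name__ == "__main__":
    # 'src' is a placeholder for the release package's source directory.
    for finding in scan_source("src"):
        print(finding)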

User testing can help identify covert channels inadvertently built into the software. A covert channel is a way of bypassing a security control.

If source code has been written or provided by a third-party supplier, the organization is totally dependent on that supplier for support, updates, patches and security fixes.

In extreme cases, where a supplier has gone out of business or has been acquired by a competitor, the commissioning organization can be forced to pay extra to fix problems that should have been resolved under the standard maintenance agreement. A mitigation for this risk is escrow.

The organization and supplier agree on a neutral third party, which is often a law firm, to hold a copy of the source code and development documentation. If the contract is breached, the source code is released to the commissioning organization.

In some cases, risk can be mitigated by holding an amount of money from the contract value in escrow, to be released to the vendor after a pre-defined period of time, perhaps three years. This means that, if the supplier ceases to trade or doesn't complete the development, the money is returned to the commissioning organization to help fund the redevelopment or replacement programme.

That’s the end of this video on software development and support.

About the Author
Students
1458
Courses
11
Learning Paths
2

Fred is a trainer and consultant specializing in cyber security.  His educational background is in physics, having a BSc and a couple of master’s degrees, one in astrophysics and the other in nuclear and particle physics.  However, most of his professional life has been spent in IT, covering a broad range of activities including system management, programming (originally in C but more recently Python, Ruby et al), database design and management as well as networking.  From networking it was a natural progression to IT security and cyber security more generally.  As well as having many professional credentials reflecting the breadth of his experience (including CASP, CISM and CCISO), he is a Certified Ethical Hacker and a GCHQ Certified Trainer for a number of cybersecurity courses, including CISMP, CISSP and GDPR Practitioner.