
Understanding and Applying Security in the Software Development Life Cycle

Overview
Difficulty: Intermediate
Duration: 45m
Students: 50

Description

This course is the first of three modules covering Domain 8 of the CISSP, Software Development Security.

Learning Objectives

The objectives of this course are to provide you with the ability to:

  • Understand and apply security in the Software Development Life Cycle (SDLC)
  • Enforce security controls in the development environment

Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

Prerequisites

Any experience relating to information security would be advantageous, but is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

So we're going to begin our discussion of software development security in section one, in which we will understand and apply security in the software development life cycle. Our module topics will first take us through the development life cycle methodologies and maturity models; then we'll delve into operation and maintenance, change management, and the integrated product team.

Now, the security of software environments, of course, must emphasize the CIA triad, in that the system and its resources must be available when and where needed by those authorized users who need them, that the integrity of the processing of the data and the data itself is ensured, and that, where necessary, the confidentiality of the data is protected.

Now, our current software environment is much more distributed than it has been at any time in the past. This is due in part to a substantial increase in open protocols, open interfaces, and the supply of source code. The result is increased sharing of resources, which in turn requires increased protection, both during the sharing relationships and because of the widespread nature of the teams involved, and thus it proves to be a much more complex and potentially much more difficult environment to manage.

Now, the need for architectural design has always been one focused on addressing problems of performance at the architectural level, that is to say, designed in rather than bolted on later. The idea was that having an architectural design, as you would for a car, a building, a plant, an airport, or anything else large and complex, and starting from a plan rather than working ad hoc, would save money.

The idea behind security in the architecture is that designing it in is much more cost-effective over the long term than adding it on piecemeal later. Many studies over the years have shown that a dollar spent in the design phase on a control that eventually gets built into the program can equate to as much as $150 in add-on break-fix by the time the system is operationally ready.
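As a back-of-the-envelope sketch of that economics argument (the 1:150 ratio comes from the studies mentioned above; the defect count is a made-up number):

```python
# Illustrative arithmetic only: the 1:150 design-vs-break-fix cost ratio
# is the figure cited above; the defect count is hypothetical.

COST_IN_DESIGN = 1        # relative cost to fix a flaw during design
COST_IN_PRODUCTION = 150  # relative cost to fix the same flaw after release

defects_found = 40  # hypothetical number of security flaws in a project

fixed_early = defects_found * COST_IN_DESIGN      # 40 cost units
fixed_late = defects_found * COST_IN_PRODUCTION   # 6000 cost units

print(f"Fix in design:     {fixed_early} units")
print(f"Fix after release: {fixed_late} units")
print(f"Savings factor:    {fixed_late // fixed_early}x")
```

The point isn't the exact numbers, which vary by study, but that the ratio multiplies across every flaw that escapes the design phase.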

Overall, this approach will produce a drastic improvement in quality and productivity. Re-use, of course, is very good because it saves time and money, and the architectural process is intended to be focused on the ability to re-use. We see there on our graphic that we begin at concept, and as we travel through the system lifecycle towards disposal, we should be able to save money, change the program, adapt it to future needs, and do so in a way that produces greater cost savings than if we approached it without the architectural design to guide the product roadmap.

So the relationship of security to the system lifecycle basically begins, as you would expect, at the beginning. Our infrastructure engineering starts off the project and runs through all the different phases. It's an iterative cycle, as we all know, with functional and participative overlap, so that an integrated and re-iterative process is involved at every step, all the way through to production operations. Information assurance needs to be involved from the beginning, whether it's informing the design process, the implementation process, or the operation; it needs to be involved every step of the way. The object is to integrate the information assurance and risk assessment processes with the security engineering functions in every phase of the system lifecycle and in any IT project intended for production operations, regardless of the development model being employed.

When we look at assurance, we have two basic types: operational and lifecycle. Now, operational assurance focuses on the features and architecture of a system; that is, we look at the system's integrity, its processes for trusted recovery, and its testing for covert channels, as some examples. In this, we look at the software development and functionality issues to ensure that they will address the operational needs. We also need this to be consistently performed and properly documented through change management and the management and maintenance processes.

A co-traveler with operational assurance is lifecycle assurance. Lifecycle assurance ensures that the trusted computing base, which we'll discuss later, is designed, developed, and maintained with formally controlled standards that enforce protection at each stage in the system's lifecycle. It therefore requires security testing and trusted distribution, and, very importantly, it relies heavily on configuration management to make sure that over its lifecycle the system continues to meet operational needs in a well-managed and controlled fashion.

Done properly, the architecture and the design, followed by their implementation, will provide these kinds of characteristics in the way the system works. It will make a system more resistant, so that it can better withstand attempts to subvert normal operations within design limits. It infuses robustness, such that the system has the strength to function and perform correctly under a range of conditions without complete failure. It helps make our systems more resilient, with flexibility of functionality so that operations can continue even after an attack or error impact. And it makes them more recoverable, with the structure and features that facilitate trusted recovery.

In many cases, features and performance functions have redundancy and that gives us compensating capabilities to ensure continued operation in the event of a component failure. Overall, this contributes to higher reliability, such that we can trust our system to perform in a manner that reflects the necessary qualities of the trust and assurance that we need over its lifecycle.
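A minimal sketch of that last point, redundancy as a compensating capability; the component names and failure behavior here are hypothetical:

```python
# Minimal sketch of redundancy as a compensating capability: if the
# primary component fails, a standby takes over so operation continues.
# Component names and failure behavior are hypothetical.

def call_with_failover(primary, standby):
    """Try the primary component; fall back to the standby on failure."""
    try:
        return primary()
    except Exception:
        # A resilient design notes the fault and continues on the standby
        return standby()

def primary():
    # Simulate a component failure in the primary
    raise RuntimeError("primary component failure")

def standby():
    return "served by standby"

# The request still succeeds despite the primary's failure
print(call_with_failover(primary, standby))
```

The design choice being illustrated: continuity comes from the structure of the system (a second path to the same result), not from hoping the single component never fails.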

Now, the system lifecycle phases, as distinct from the software development lifecycle phases, begin with a feasibility study. In this, we're looking for ways of determining the technical possibilities and cost-effectiveness of developing a solution to a particular problem, whether a system is in place and needs to be considered for replacement or we're dealing with a new problem with no system addressing it at present.

From there, we move on to concept analysis, in which we determine the nature of the problems with the existing system and situation that we plan to solve with the newly proposed system. From there, we move to architectural design, in which we theorize, propose, and then model to confirm the best alternatives that will ultimately go into the design of the system that replaces the existing one or solves the new problem. From there, we go into the build and programming process. Here we pick the selected components, perform the integration, and perform the testing, which leads to installation and implementation.

In the build and programming phase, we have the software development lifecycle as a sub-process that fits into this particular place in the system lifecycle approach. With implementation, we're going to put the system into effect, test it, make sure that it works, and then turn it over to operation and maintenance. During operation and maintenance, we'll be doing our normal sustaining engineering and change management, and enforcing operational control of the system.

Now moving on to the software development lifecycle. The system lifecycle defines the phases of a system's operation from concept through operations and ultimately to retirement and replacement. SDLC, as I mentioned, is a sub-process of the SLC, and it defines and governs the phases of software development from its concept through its retirement. We have a number of models worth considering based on the project we're going to be doing and what our anticipated deliverables and operations are going to be.

So our system lifecycle moves through these phases. The risk management process, as we know, must be performed throughout to reduce risk across the development cycle, and it begins in phase one. Here you see the seven traditional steps: project initiation and planning, functional requirements definition, system design specifications, development and implementation, final transition into production, operation and maintenance, and then retirement and replacement.
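The seven steps just listed can be captured as a simple ordered enumeration; this sketch is illustrative only and not part of any standard:

```python
# The seven traditional lifecycle steps from the list above,
# expressed as an ordered enumeration (illustrative only).
from enum import IntEnum

class LifecyclePhase(IntEnum):
    PROJECT_INITIATION_AND_PLANNING = 1
    FUNCTIONAL_REQUIREMENTS_DEFINITION = 2
    SYSTEM_DESIGN_SPECIFICATIONS = 3
    DEVELOPMENT_AND_IMPLEMENTATION = 4
    TRANSITION_INTO_PRODUCTION = 5
    OPERATION_AND_MAINTENANCE = 6
    RETIREMENT_AND_REPLACEMENT = 7

# Risk management runs through every phase, beginning in phase one.
for phase in LifecyclePhase:
    print(f"Phase {phase.value}: {phase.name.replace('_', ' ').title()}")
```

The ordering matters: each phase's outputs (and its verification and validation results) are the inputs to the next.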

Now, the next several slides are going to take us into greater detail on each one of these phases. With project initiation, this is where we set all of this up as a good project: goals, scope, resources, staffing, schedule, and so on all go in here, along with the risk assessment process. That process begins at a very general level, looking at the sensitivity of the information the system is anticipated to handle and defining the criticality of the system in terms of the overall operation and its place in it. We define our security risks, we define the level of protection needed, and we define the regulatory, legal, or privacy issues, so that we can be sure they'll be properly addressed through the development steps that follow.

At this particular point, as I say, we're doing a lot of this at a very general level, but we must make an assessment as to whether the proposed changes, additions, features, and operations we're going to make in this system will adequately address these requirements, such that we can anticipate the system will, in fact, be delivered as intended.

In step two, we go to a higher level of detail, but still somewhat general. Here we're doing functional design analysis and planning: we're going to start laying out the functional system requirements and begin to conceptualize in somewhat more detail, through what is known as progressive elaboration, each function that the program or system is anticipated to provide. We're going to set certain things, such as an acceptable level of risk, which may be characterized by a level of loss, a percentage of loss, or permissible variance in the kinds of outputs that a particular system module produces.

We'll need to look at system security requirements and the various controls in terms of their functional application. Part of that will be to determine exposure points in the processes and then to define controls to mitigate the exposure that may occur. We have to be sure, once again, that the requirements can be met by the application, and this will require the two steps we'll perform at the conclusion of each phase: verification and validation. Part of this process will be categorization under a privacy impact rating system, governed by regulations and organizational needs, to ensure that we can comply with those requirements as well.

From the functional design, we move into system design specifications, a much more detailed step in this process. Here we're going to design and define detailed functional components. We're going to look at the actual program controls in their mechanical operations. We're going to design security mechanisms and assess the attack surface. We must, of course, design a test plan, so that as we configure these controls we know we'll be able to test them to prove or disprove that they work and produce the proper results. And here is where we do design verification, because our next step is the one in which we build, to make sure that we can actually build what we have conceived.

In step four of this process, we get into the actual build of the product we have envisioned. We gain our authorization for development from management and start authorizing people to actually begin working on the system; instead of being in the design phases, we are now actually building the system. All steps along the way must be documented, and we need to think about training the support personnel and the users who will be interacting with the system once it achieves operational status. We're going to have different roles, and these need to be properly separated so that there is no overlap or other conflict of interest.

We have multiple techniques by which we do this. We have the group of developers, the people who will actually be building and coding; from them we separate a group of testers, and the two should not have any overlap. We're also going to have the production people take a look at the program as each piece is produced, which brings in the two philosophies gaining much more steam in the marketplace today: DevOps and its variant, DevSecOps.

When we move from the design and build phase, we move into installation and implementation. Now, evaluation and testing should not be conducted by the development staff. We have to certify that our security functionality that has been designed in does, in fact, do the job it was designed to do and produce the results it was designed to produce. We have to certify the processing to ensure that the integrity of the inputs and the outputs is assured through every phase, and we have to use different testing methods, different testing types, to be able to verify that these are, in fact, the results we've achieved.

We will do testing at the unit level, at the integration level, at the acceptance level, and a necessary form of testing, regression testing. We'll cover these in more detail later in this module. Ultimately, when all of this is complete and the customer has accepted the product, we will put it into operation and maintenance. Here we're going to install it, configure it, and insert it into the network and the working environment to do the job it was designed and intended to do. Once it's been put into O&M, we will do regular testing to confirm that it does, in fact, continue to operate only as intended, through vulnerability tests, continuous monitoring, and periodically a point-in-time check called auditing.

Every system goes end-of-life at some stage. At some point we decide it's time to replace it with something new, whether due to component failures that make it too costly to maintain or simply because it's time for a technology refresh. That decision then leads to deciding how the disposal will be done. This, like all of the phases, needs to be done in a proper, orderly, controlled way. Some of the considerations may be that we have to archive, back up, or retain the data, and perhaps even the code itself, for compliance and other reasons. The data could simply be discarded, but this should only be chosen if no security, privacy, or other proprietary issues arise. It can be overwritten or crypto-shredded. It can, of course, be physically destroyed by some assured method, and in the table you see here on the slide, there are methods for doing that and the references by which we can learn what they are.
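As an illustration of crypto-shredding, the idea is to keep data only in encrypted form and then destroy every copy of the key at disposal time. This toy sketch uses a SHA-256 counter-mode keystream purely for illustration; a real disposal process would rely on a vetted cipher such as AES:

```python
# Illustrative crypto-shredding sketch: encrypt data under a key, then
# destroy the key so the stored ciphertext is no longer recoverable.
# The SHA-256 keystream here is a teaching device, NOT production crypto.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from the key (counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, key: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

record = b"retained data subject to privacy rules"
key = secrets.token_bytes(32)

ciphertext = xor(record, key)           # this is the only form stored
assert xor(ciphertext, key) == record   # recoverable while the key exists

key = None  # "shredding": destroy every copy of the key; the ciphertext
            # left on disk is now computationally unrecoverable
```

The practical appeal at disposal time is that destroying a 32-byte key is far easier to do, and to verify, than overwriting every sector the data ever touched.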

Now, part of this process, of course, is the development and implementation phase. Here we're going to be generating source code. We also have to develop our test scenarios and test cases. Various forms of unit integration testing must be performed to ensure that all the pieces that we have designed and built will, in fact, go together properly, and we have to document what we've built for the maintenance purposes that will come later after its conversion into full operation.

One of the most important final phases will be acceptance testing. This will require an independent group with tests to ensure that the system will function within the organization's environment, or, if it's intended for sale to customers, that it will work in the environments the customers are anticipated to have. It therefore needs both functional and security requirements testing to ensure that the system will in fact perform as intended.

Now, some basic rules about testing and evaluation. Test data should include data at the ends of the acceptable ranges, at various points in between, and beyond the expected or allowable data points, to make sure we understand the ranges of operational resilience and robustness our programs contain. We should, of course, always test with known good data, but never test with live production data. If we are testing a system developed to operate on privacy-related or other highly secure information types, we have to be sure that the data used for testing properly represents those data types but has itself been sanitized, so that we can be sure there are no inadvertent exposures during the testing process.
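Those boundary rules can be sketched as a small test harness; `validate_age` and its 0-to-120 range are hypothetical examples, not from the course:

```python
# Sketch of the boundary-value rule above: test at the ends of the
# acceptable range, points in between, and beyond the allowed range.
# validate_age and its 0..120 range are hypothetical examples.

def validate_age(age: int) -> bool:
    """Accept ages in the inclusive range 0..120."""
    return 0 <= age <= 120

# At the boundaries and in between: should be accepted
for good in (0, 1, 60, 119, 120):
    assert validate_age(good)

# Beyond the expected range: should be rejected, not crash
for bad in (-1, 121, 10**9):
    assert not validate_age(bad)

print("boundary tests passed")
```

Note that all of the test values here are known, synthetic data; none of it is live production data, in keeping with the rule above.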

Now, the certification and accreditation process, described in NIST Special Publication 800-37, currently Revision 2 of 2018, elaborates two distinct phases: certification and accreditation. During the certification part of the process, the system or application is examined in its technical and non-technical aspects so that we can judge its mission readiness. This will include a documentation review, the development cycles and processes used to prepare it, the change control process, and any threat and vulnerability information gathered through vulnerability scanning or penetration testing. We will have to look at operations procedures, and at the incident response, business continuity, and disaster recovery processes surrounding the system to evaluate its recoverability. At the conclusion of the certification phase, a report will be generated and prepared for presentation to management, showing the findings and recommending any corrective actions or approval.

Moving on to the accreditation phase, this is where the report prepared at the conclusion of certification is presented to management, and the report and its findings are reviewed with the technical staff, management, and the examiner team. The examiner team will lay out their findings and give their opinions. They'll talk about risk issues: those they've discovered, those that were remediated, those that remain extant in the system, and the remediation options concerning them. Furthermore, they will get into the compliance posture of the system in its current state and make any recommendations or state any concerns they may have. At the conclusion of this discussion will come an opinion about whether management should accept the system as is or whether there should be corrective action: if all is well, the issuance of an authorization to operate, called an ATO; an interim authorization to operate if a corrective action plan needs to be pursued first; or, in the worst case, a recommendation of non-acceptance, requiring that the corrective action plan be completed, the system re-evaluated, and a new report prepared on the newly modified system with all of the issues fixed.

Following a recommendation to accept, the authorizing official for the organization receiving the system will issue an ATO, an authorization to operate, or an interim ATO, which will usually be accompanied by a corrective action plan. New user training will then ensue, and the system will be implemented, if it's a new one; a system already in operation that is going through its periodic re-evaluation via the C&A process will be left in operation, and any critical corrective actions will be taken.

One of the main standards used to judge quality performance, security engineering and systems qualities has been the Capability Maturity Model for systems and software. Developed originally by the Software Engineering Institute at Carnegie Mellon University, its primary focus is on quality and management processes to produce high-quality software with a minimum of confusion and much more rigor and organization to the process. Historically, it's well known that the CMM, as it's called, has five maturity levels.

Now, these processes are not discussed in great depth on the test, but they are discussed in great depth in ISO/IEC 21827 and 15288, both of recent vintage, and in NIST Special Publication 800-160, all of which address standards for system security engineering and software engineering. As originally conceived, the Capability Maturity Model defined five levels of maturity; it has since evolved into CMMI, the Capability Maturity Model Integration. CMM level one began by describing a process that was very informal, ad hoc, even chaotic. It's almost as though they were saying that getting it right was almost an accident, but it highlighted the fact that the management of software projects was very much a chaotic, ad hoc process. As organizations moved from CMM level one into level two, processes and rigor were adopted to bring about repeatability, such that processes were planned and tracked.

The typical duration to get from the beginning of CMM level one through the completion of CMM level two was anywhere from 18 to 24 months on average. This was followed by CMM level three, defined, where the rigor began to take the shape of well-defined, formal, and enforced processes. By the time they had achieved CMM level three, organizations were ready to move into CMM level four, managed, where we're talking about quantitatively managed and controlled processes. Here's where producing high-quality software became the standard rather than something accidental or coincidental. Finally, CMM level five focused on how the process itself could be improved. By maintaining the production of quality software and then examining the production process itself, an organization maintained a trend of continually getting better at it. This institutionalized quality of production: not that errors were totally eliminated, but that they were caught and disposed of much more quickly than at the very beginning.

To achieve the goals of the CMM, there needed to be a model of implementation: the IDEAL model. IDEAL, standing for initiating, diagnosing, establishing, acting, and learning, was employed at each maturity level. Each level's effort would begin with initiating and work its way through the five phases to learning, so that by the time the IDEAL model had done its job at a given CMM level, you would be able to move on to the next level of the CMM. This too was developed by the Software Engineering Institute at Carnegie Mellon, and as you see here in the diagram, it coincides very well with the PDCA, or plan-do-check-act, model.

Now, ISO itself, of course, has published many standards that have been adopted worldwide to achieve greater quality in systems engineering, security engineering, and software production. Its standard ISO/IEC 90003 of 2014 focuses on total quality management, largely in the software production arena. It takes the standards originally published in the ISO 9001 quality standard and applies them to software design and development. The application of this process is an approach to quality management: first by understanding and having consistency in meeting the requirements, so again the theme of process is established; it then considers the processes in terms of their added value and the achievement of effective process performance, so it focuses on the process and the quality the process has; and then it improves the processes by evaluating the output from each of them and using it to further tune and sharpen each process.

Like the Software Engineering Institute's IDEAL model, 90003 incorporates the plan-do-check-act cycle to guide its activities. Ultimately, we move into the operation and maintenance phase. Here, the software is doing its job, and so we set up monitoring of the performance of that job to make sure we are watching the quality being produced by the product that was designed, engineered, and built using the processes we've just spoken of. One aspect is ensuring continuity of operations, reflecting the recoverability designed into the software, by periodically running tests to ensure that what we've designed and built actually works as designed. We set up processes to detect defects or weaknesses. We manage and prevent system problems, and correct them as quickly as we can. We set up recovery processes, so that if a process is interrupted or fails in some way, we can recover in an appropriately short period of time and put the system back into operation. And then, of course, we have to have a proper process for managing change.

Now, the change management process is one of the ways we take production software and oversee its operation in terms of how it proceeds along the product roadmap developed during the design and build phases. Change management takes a program and looks at how the organization can benefit from the various upgrades and changes that will be made to it. One of its main functions is to ensure that the product follows the roadmap and stays within its performance parameters, evolving in the way the organization needs and constantly staying in alignment with the operation.

So there are many benefits to having a strong change management program. It has an effect on the entire workforce, because they know what to expect. Part of change management is effective communication: it sees to it that there is effective training, and that users have good education in how the program will work, how it will evolve, and what to expect. It serves to counter resistance, since human beings tend not to like change; and even where change is welcomed, we still need to control how the program evolves to ensure it remains in alignment with the organizational mission it serves. Finally, it provides controls so that we can monitor the implementation and operation of the program as it evolves.

About the Author
Students: 1,680
Courses: 31
Learning paths: 2

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.


Upon attaining his CISSP in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standing as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998 and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.