Assessing the Effectiveness of Software Security

This course is the final module of Domain 8 of the CISSP, covering Software Development Security.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • Considerations for secure software development
  • How to assess the effectiveness of software security
  • Assessing software acquisition security

Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.


Any experience relating to information security would be advantageous, but not essential.  All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.


If you have thoughts or suggestions for this course, please contact Cloud Academy.


We're going to move into section six, in which we're going to discuss assessing the effectiveness of software security. Thus, we're going to look at auditing and logging of changes, risk analysis and mitigation in the development process, testing and verification, and regression and acceptance testing. To do all of these things, we're going to employ the Risk Management Framework (RMF) that we see as part of the Certification and Accreditation process. This moves us from a static, procedural activity to a more dynamic approach. The framework is drawn from NIST Special Publication 800-37 Revision 2 of 2018.

Now, in the RMF methodology you see here in this graphic, step one is to prepare for the assessment, in which we frame the risk. This means we determine our overall strategy for addressing the risks that we may find. Then we move on to step two, where we actually conduct the assessment. We execute the steps: identify threat sources and events, identify vulnerabilities and predisposing conditions, determine the likelihood of occurrence, determine the impact magnitude of the occurrence, and from these derive the overall risk. We're going to use qualitative and quantitative methods in the process of conducting the assessment.
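The "conduct the assessment" step above can be sketched in code. This is a minimal illustration, not part of NIST SP 800-37 itself: the scales, thresholds, and names here are all hypothetical, chosen only to show how likelihood of occurrence and impact magnitude combine into an overall qualitative risk level.

```python
# Hypothetical qualitative scales mapped to numeric scores so that
# likelihood and impact can be combined. Real assessments define their
# own scales and thresholds.
LIKELIHOOD = {"low": 1, "moderate": 2, "high": 3}
IMPACT = {"low": 1, "moderate": 2, "high": 3}

def derive_risk(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact scores into an overall risk level."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

# A threat event that is likely and would cause severe harm:
print(derive_risk("high", "high"))      # high
# An unlikely event with limited consequences:
print(derive_risk("low", "moderate"))   # low
```

The same structure supports a quantitative method by replacing the ordinal scores with estimated probabilities and dollar impacts.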

Then we move on to step three, where we communicate the results. This enables informed decision-making by the decision-makers, and the consequence will be a strategy of mitigation, transference, or acceptance of these risks. In step four, we maintain the assessment. The environment in which we employ this methodology is, of course, a living one. No assessment of this type can be taken as the be-all and end-all; it must be repeated over the course of the lifespan of the system involved.

So we're going to perform our assessment on the system either as a routine matter each year, or each period, depending upon the system's sensitivity and its rate of change. And we're going to treat this as the way to build in continuous process improvement, rather than the historical strategy of repetitive remediation, so that over time we get better at this process and achieve much-improved results from our risk assessments.

Now, the specific characteristics of this RMF are these: the RMF encourages the use of automation where appropriate. Very importantly, it forces the integration of information security with operations and the business management processes of the organization. It emphasizes a process of selection, implementation, assessment, and monitoring of security controls, recognizing the living state of the systems involved. It links risk management processes at the information system level to risk management processes at the organizational level, thus ensuring alignment of the system and its risk management practices with what goes on elsewhere in the organization. And just as importantly, it establishes responsibility and accountability across various roles for those system controls. On this slide, you see a website where you can go to find full information on the RMF.

Now the question arises: the government does certification and accreditation for its systems, but could this benefit private organizations as well? Private organizations, whether they use this particular process or not, very likely have a process of their own for this. It is exceedingly rare that an organization would stake so much of its business integrity and livelihood on a system that has been incautiously built and delivered. They will therefore use some form of certification framework, the RMF being a very good example of one they might select. It provides a control framework, and having a framework allows for lower overhead and the implied use of standards, and it covers all aspects of a system's security, from technology through process and policy.

Now, every security program has to include proactive and reactive elements. From the reactive perspective, we use auditing and the logging of changes so that system and network device reporting can tell us how things have been working. These records capture all normal events and highlight any anomalous, suspect, or concerning events, so that we can see what has happened and initiate our corrective action plan.

Now, the logs themselves are, of course, records of action and events that have taken place. So this is an historical record. Even so, it's a very sensitive one because it provides a very clear view of who owns a process, what action was initiated, when it was initiated, where the action occurred and why the process ran. That can reveal a great deal about a process that is normal and wanted and a process that is neither of those things. As such, logs need to be protected so that the integrity that they bring to our review and decision-making processes can remain trustworthy.
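The five facts the passage lists — who owned the process, what action was initiated, when, where it occurred, and why it ran — can be sketched as a structured log record. The field names and values below are illustrative, not a prescribed log format.

```python
import json
from datetime import datetime, timezone

def make_log_entry(who: str, what: str, where: str, why: str) -> str:
    """Serialize one audit event as a JSON line capturing who/what/when/where/why."""
    entry = {
        "who": who,       # owner of the process or initiating user
        "what": what,     # action that was initiated
        "when": datetime.now(timezone.utc).isoformat(),  # time of the event
        "where": where,   # host or component where the action occurred
        "why": why,       # purpose for which the process ran
    }
    return json.dumps(entry)

line = make_log_entry("svc_backup", "file_copy", "db-host-01", "nightly backup")
print(line)
```

Because a record like this reveals so much about normal and abnormal activity alike, the log files holding these lines are exactly what must be integrity-protected, as the passage notes.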

Using logs, we perform the functions of auditing. Auditing is the process of reviewing and analyzing the evidence that the logs contain. As such, this process must be painstaking, because the results can be either a clean bill of health or an indication that something untoward, unwanted, or even dangerous may be going on, whether from an outside source or, just as importantly, from an inside source. So we should have policies in place that direct how this will be done, partly for operational reasons and partly for compliance reasons, so that we collect evidence efficiently and keep the auditing process at a high level of effectiveness while controlling its overhead.

Along with those, we need to have our testing and verification processes. When we do a risk assessment, we develop a strategy and put in place controls that are designed, implemented, and then operated. Before they're implemented, they must be tested to be sure that they will, in fact, address the concerns that prompted their creation in the first place. And once they're in place, they must be verified to function exactly as envisioned, or, if they don't, we must determine why and rework them until they do.

So in the end, when mitigations are implemented, they must be tested. And the development environments themselves must be supported with testing teams and quality assurance so that the testing and verification process itself will have integrity and produce trustworthy results. Thus, we need testing and verification roles that are distinct and separate from the designers, builders, and operators. Security findings should be addressed in the same way as any other change request when they generate some form of remedial action. The developer and system owner do not declare that a risk is mitigated without an independent verification and validation review. In short, we take nothing for granted; otherwise, we have unknown, unquantified, and thus unmitigated risk.

One way of ensuring integrity is code signing. This technique uses public-key encryption to ensure code integrity by literally attaching the name of whoever developed a piece of code, along with information about the purposes for which the developer intended that code to be used. Attaching digital certificates to code modules through digital signing helps lock the code, keeping its integrity intact so that it's used only for its intended purpose, and it provides assurance to users downloading these files that they have not been compromised.
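Code signing rests on a cryptographic digest of the code: the publisher signs the digest with a private key, and the user verifies the signature against the certificate's public key. The Python standard library has no public-key signing, so this sketch shows only the digest half of the scheme — the hash that the signature would protect — with illustrative module contents.

```python
import hashlib

def digest(code: bytes) -> str:
    """SHA-256 digest of a code module; this is the value a publisher would sign."""
    return hashlib.sha256(code).hexdigest()

published = b"print('hello')"
expected = digest(published)            # digest the publisher would sign and ship

# Later, the user re-hashes the downloaded file and compares:
downloaded = b"print('hello')"
assert digest(downloaded) == expected   # integrity intact

tampered = b"print('hacked')"
assert digest(tampered) != expected     # any modification breaks the digest
print("digests verified")
```

In a real signing scheme the comparison is done implicitly by verifying the publisher's signature over the digest, which also binds the publisher's identity to the code.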

Now, as the program or module progresses through the development cycle, various forms of testing will be used. One is regression testing; another is acceptance testing. Regression testing is used when developers change or modify their software. They have to be on the lookout for even small changes causing other kinds of consequences. So regression testing is performed anytime a code module or an application has been modified, to ensure that it continues to work only in the ways it's prescribed to work. Its goal is to catch bugs that may have been introduced in a new build or a new release candidate, because each change that is made is itself supposed to be, at worst, risk-neutral. It should fix whatever it intends to fix, or add whatever new feature or functionality it intends to add, and in the course of design, build, and testing, it should be reduced to being risk-neutral.

Another function of regression testing is to make sure that any previous bugs that existed in the program are not resurrected through the change activity. Now, a library of tests can be equated to a toolbox. By building a library of tests, we develop a standard set of cases to test the functionality of our programs. From this standard battery, test cases can be selected and run against a code module to determine that whatever the module is intended to do, it does in fact do that and nothing else. And the library can be reused each time modifications are made to the program module.
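A library of tests of the kind described above can be sketched as a stored battery of input/expected-output cases that is re-run on every build. The module under test here (a version-string parser) and the cases themselves are hypothetical, chosen only to show the reusable-battery pattern, including a case kept around so a past bug cannot be silently resurrected.

```python
def parse_version(s: str) -> tuple:
    """Module under test: parse a version string like '1.2.3' into (1, 2, 3)."""
    return tuple(int(part) for part in s.split("."))

# The standard battery: (input, expected output) pairs, kept in the
# library and reused every time the module is modified.
REGRESSION_SUITE = [
    ("1.2.3", (1, 2, 3)),
    ("0.0.1", (0, 0, 1)),
    ("10.0.0", (10, 0, 0)),  # retained after a past bug with multi-digit parts
]

def run_suite() -> bool:
    """Run every stored case; any failure means a regression was introduced."""
    return all(parse_version(given) == expected
               for given, expected in REGRESSION_SUITE)

print(run_suite())   # True while the module still behaves as prescribed
```

When a change is made, the whole battery runs again unchanged; a new case is added for each bug fixed, so the fix can never quietly regress.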

One of the things that we have to bear in mind about a library of tests is that, like tools in a toolbox, the tests themselves must on occasion be tested as well to ensure that the test actually does the job it is supposed to, so that the results that are derived from the test are trustworthy and properly representative of what we are testing for. Like any tool, tests need to be calibrated periodically.

Now, acceptance testing is the final test before the program is implemented in the customer environment. It is a formal examination conducted to ensure that the product meets all of the customer's acceptance criteria, and it should enable the customer to determine whether everything they agreed would be built into the system does in fact exist and works as intended. Done properly, acceptance testing should reveal no surprises: if the testing program has been properly designed and carried out thoroughly, there should be none by this point. However, that's often not the case, and that itself makes acceptance testing one of the most important tests a program must be put through, to ensure that nothing has slipped in at the last moment.


About the Author

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.


Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 (in a professional role) as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standing as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.