CSSLP Domain 2:3 Requirements


Functional Requirements

This course is the third installment of three courses covering Domain 2 of the CSSLP, and it addresses the topic of functional and operational security requirements.

Learning Objectives

  • Explore the functional and operational requirements for building secure software

Intended Audience

This course is designed for those preparing to take the Certified Secure Software Lifecycle Professional (CSSLP) certification, or for anyone interested in the topics it covers.


Any experience relating to information security would be advantageous, but it is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.


If you have thoughts or suggestions for this course, please contact Cloud Academy at


And here's where we begin: a more in-depth discussion of requirements, both functional and operational. Now, as we've been saying all along, requirements are the foundation and guide for the design, build, and proving of the software or system we're developing. Requirements set expectations for what the users of the final product will derive in the way of benefit or operational function: what they can accomplish and what they gain from its use. But as with any project or design, the course followed to the final result is greatly affected by how the requirements were initially acquired, how they were understood at every phase and at every checkpoint along the way, and most of all, how they were met.

So, in order to avoid issues, the conflicts that can arise, and potentially disruptive change to the intended product, requirements must be managed once they are acquired, and their rate of change must be controlled, to assure that quality is maintained and, ultimately, that expectations are met.

So let's begin by talking about functional requirements. As I mentioned earlier, these describe actions, capabilities, processing, algorithms, and other active processes: what the software or system will actually be doing when it comes into contact with the data objects contained within it. These reflect the processing needed to meet normal, everyday business needs. With them will come descriptions of how the DRP will work, who will do what, and so on, and the ways in which the database, if there is one, will operate to store, retrieve, and format its data.

Security, as we've been saying all along, must be an integrated element, and it must have its own set of requirements, separate from but integrated with those of the system or application. With these functional requirements come various other things, such as roles and, with those, the defined responsibilities for the roles. These of course contain definitions of the rights and permitted actions of users. We have subjects, the active entities in systems: users or processes that perform various actions, whether they are being driven by a user or running autonomously within the system.

We have our objects. These are the passive entities, or data containers, upon which the subjects will act, either directly or through a process. They include data, and the various files, directories, and other repositories that are acted upon by subjects. And so we have a 'Subject-Object-Activity Matrix'. This illustrates the range of actions possible on objects and those defined as permitted to subjects. So, as before, the capability tables represented in the two-dimensional access control matrix are associated with the subjects, and the access control list is associated with the object. The combination of the two says what a given subject can do, if anything, with a given object.
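The relationship just described can be sketched in code. This is a minimal illustration, not an implementation from the course: the class, method names, and the subjects and objects are all assumptions chosen for clarity. A row of the matrix is one subject's capability table; a column is one object's access control list.

```python
from collections import defaultdict

class AccessControlMatrix:
    """Two-dimensional matrix: subjects as rows, objects as columns."""

    def __init__(self):
        # matrix[subject][obj] -> set of permitted actions
        self.matrix = defaultdict(lambda: defaultdict(set))

    def grant(self, subject, obj, action):
        self.matrix[subject][obj].add(action)

    def is_permitted(self, subject, obj, action):
        return action in self.matrix[subject][obj]

    def capabilities(self, subject):
        """Capability table: everything one subject may do (one row)."""
        return {o: set(a) for o, a in self.matrix[subject].items()}

    def acl(self, obj):
        """Access control list: every subject permitted on one object (one column)."""
        return {s: set(objs[obj]) for s, objs in self.matrix.items() if obj in objs}

# Illustrative subjects and object (hypothetical names)
acm = AccessControlMatrix()
acm.grant("alice", "payroll.db", "read")
acm.grant("alice", "payroll.db", "write")
acm.grant("bob", "payroll.db", "read")

print(acm.is_permitted("alice", "payroll.db", "write"))  # True
print(acm.is_permitted("bob", "payroll.db", "write"))    # False
```

The combination of the two views answers the question posed above: what, if anything, a given subject can do with a given object.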

Now, continuing on about functional requirements, one of the first things we need to define is a set of use cases. These are 'Subject-Object-System' interaction scenarios that show how the system is intended to work, based on the known functional requirements. They illustrate how known requirements will perform and provide the opportunity to identify and capture previously unknown requirements that enable or correct planned performance.

We need to be very clear about one thing: requirements will be discovered at different phases throughout the development cycle. We need to be prepared to capture them, clarify them, and then evaluate them, deciding whether they should be included or discarded, and whether they sit at a 'Mandatory' level, a 'Highly Desirable' level, or a simple 'Nice-to-Have', but not really required, level. This is the sort of evaluation we will need to do throughout the entire process for every requirement that may be presented. And it's one of the aspects of controlling the rate of change, to ensure, as they say, that the rate of change never exceeds the rate of progress.
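The triage just described can be made concrete with a small sketch. Everything here is hypothetical: the tier names follow the levels mentioned above, and the requirement identifiers are invented for illustration. The point is simply that each discovered requirement gets an explicit, validated priority before it is allowed to change the plan.

```python
from dataclasses import dataclass

# Tiers from the discussion above, plus a bucket for discarded items
PRIORITIES = ("Mandatory", "Highly Desirable", "Nice-to-Have", "Discarded")

@dataclass
class Requirement:
    identifier: str
    description: str
    priority: str = "Nice-to-Have"  # default until evaluated

    def set_priority(self, priority):
        # Reject anything outside the agreed evaluation scale
        if priority not in PRIORITIES:
            raise ValueError(f"unknown priority: {priority}")
        self.priority = priority

def change_backlog(requirements):
    """Group requirement IDs by tier so reviewers can control the rate of change."""
    backlog = {p: [] for p in PRIORITIES}
    for r in requirements:
        backlog[r.priority].append(r.identifier)
    return backlog

# Hypothetical requirements discovered mid-cycle
r1 = Requirement("R-101", "Encrypt stored records")
r1.set_priority("Mandatory")
r2 = Requirement("R-102", "Configurable report colors")

print(change_backlog([r1, r2])["Mandatory"])  # ['R-101']
```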

Now, on the negative side, we need to examine and construct various types of abuse cases. These show 'Subject-Object' interaction scenarios that demonstrate prohibited or undesirable actions. They will also include attack scenarios to illustrate how an attack initiates and progresses. They can likewise be used to identify potential weaknesses and vulnerabilities in the system or application being developed, leading to decisions about how we can counteract them.
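An abuse case can be written down as an executable check: a scenario asserting that a prohibited subject-object action is in fact refused. This sketch assumes a toy policy table; the subjects, object, and function names are invented for illustration.

```python
# Hypothetical policy: which actions each (subject, object) pair may perform
POLICY = {
    ("guest", "payroll.db"): set(),               # guests get nothing
    ("admin", "payroll.db"): {"read", "write"},
}

def is_permitted(subject, obj, action):
    return action in POLICY.get((subject, obj), set())

def abuse_case_guest_cannot_write():
    """Abuse case: a guest attempts to write payroll data; it must be denied."""
    assert not is_permitted("guest", "payroll.db", "write")

abuse_case_guest_cannot_write()
print("abuse case upheld: guest write denied")
```

Running abuse cases alongside use cases during testing gives early evidence that prohibited interactions stay prohibited as the system evolves.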

Part of the requirements will include development and evaluation of sequence and timing, because within a system there is competition for resources. Two different kinds of threats and vulnerabilities can arise in this area, and what we must do is develop a set of requirements that help us cope effectively with concurrency issues. One form is out-of-order processing caused by variances in processing timing. One user shows up, grabs an object, and begins to process. Another user shows up and grabs another object, but there is a point of intersection between the two operations. The one that needs to go first ends up going second, and so what was done by the out-of-order first one is now undone, and made even more corrupt, by the also out-of-order second one.
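The scenario above is essentially a lost-update hazard: two parties interleave a read-modify-write on shared state and one undoes the other's work. A minimal sketch of the usual mitigation, with assumed names and a shared counter standing in for the contested object, is to serialize the read-modify-write with a lock:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(times):
    """Increment the shared counter; the lock makes read-modify-write atomic."""
    global counter
    for _ in range(times):
        with lock:        # without this, interleaved updates could be lost
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- every update survives, in order
```

Without the lock, `counter += 1` is a read, an add, and a write that another thread can slip between, which is exactly the out-of-order corruption described above.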

Then we have resource competition, sometimes manifested in the 'TOC/TOU', or 'Time of Check' to 'Time of Use', access of an object by a subject. These are called race conditions, and they present a problem of interdependency between two processes. Essentially, process 'A' cannot proceed until 'B' finishes, because both require the same resource and each is locked out by the other. Race windows are the time intervals in a program's execution sequence during which race conditions can exist. To put it simply, 'A' cannot proceed because it needs what 'B' has, but 'B' cannot proceed because 'A' has what 'B' needs, and neither can give up what it holds until it gets the resource the other possesses.
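The circular wait just described has a classic remedy that is worth seeing in miniature: impose a single global lock-acquisition order, so 'A' and 'B' can never each hold what the other needs. This is an illustrative sketch with invented names, not code from the course; ordering by object identity stands in for any agreed ranking of resources.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer(first, second, work):
    """Acquire both locks in one fixed global order, then do the work.

    If each caller acquired locks in its own order, two callers could
    deadlock; sorting forces every caller to take them the same way.
    """
    ordered = sorted((first, second), key=id)
    with ordered[0]:
        with ordered[1]:
            work()

results = []
# The two threads request the locks in opposite orders -- the classic setup
t1 = threading.Thread(target=transfer,
                      args=(lock_a, lock_b, lambda: results.append("t1")))
t2 = threading.Thread(target=transfer,
                      args=(lock_b, lock_a, lambda: results.append("t2")))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(results))  # ['t1', 't2'] -- both complete; no circular wait
```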

So you see, a race condition can produce a deadly embrace, or deadlock, between these processes. We have another condition, known as an infinite loop. These are sometimes caused by attacks, but what they represent is the possibility of an overly complex program whose logic involves multiple decisions and can result in looping, when the decisions being made by the program cannot be achieved and remain in work, or when the loop-breaking controls meant to prevent this condition fail. Ultimately, it leads to resource exhaustion and, eventually, a system crash.
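One simple loop-breaking control is a hard iteration cap, so a computation that never reaches its goal fails fast instead of spinning until resources are exhausted. This is a generic sketch under assumed names; the bound and the halving example are placeholders for whatever convergence logic a real program has.

```python
MAX_ITERATIONS = 1000  # illustrative bound; tune to the real workload

def converge(update, state, done):
    """Apply update() until done(state), but never loop forever."""
    for _ in range(MAX_ITERATIONS):
        if done(state):
            return state
        state = update(state)
    # Loop-breaking control: surface the failure instead of spinning
    raise RuntimeError("loop-breaking control tripped: no convergence")

# Terminates: halving 1024 reaches 0 well within the bound
print(converge(lambda x: x // 2, 1024, lambda x: x == 0))  # 0

# A computation that never progresses would raise instead of hanging:
# converge(lambda x: x, 1, lambda x: False)  -> RuntimeError
```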

Now, part of what we can do to ensure that we don't run into these problems without any means of coping or resolution is to use secure coding standards. These are generally language-specific rules and recommended best-practice processes. They illustrate optimal ways of writing program instructions that increase efficiency and reduce technical fragility. Sometimes we get into the mode of writing programs where, because we seem to be making progress, we continue to add various decisions. But there comes a point at which any more leads to a toppling over, and the program gets into a race condition, simply crashes, or becomes locked in an infinite loop.

Using these language-specific rules and best practices can reduce or even eliminate the possibility of getting into such unworkable situations. We also have prescribed forms that avoid such technical vulnerabilities and do not give rise to, and can even defeat, exploitable conditions. Secure coding standards emphasize repeatability and reliability. Basically, this is a way of saying that once we figure out the best way of doing something, we should do that thing in that way every time we need to, so that it becomes eminently repeatable. It avoids many of these kinds of conditions, such as infinite loops.

By following published industry guidelines to standardize, we bring about more of that sort of logic and that sort of efficient processing, and we reduce the possibility and risk of decreased efficiency and further trouble from technical fragility. It also ensures that we don't overlook things like proper error trapping and handling, which are sometimes lacking in poorly designed programs. Part of this process will be to log all activities and capture all errors, to ensure that we have this information and can analyze it later, following testing, to address the conditions should they arise: to analyze them, see what produced them, and see what may be done to mitigate or eliminate them.
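The error-trapping-and-logging practice just described can be sketched in a few lines. The operation, logger name, and inputs here are assumptions for illustration; the pattern is what matters: trap the specific error, record it with enough context to analyze after testing, and fail safely.

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("app")  # hypothetical logger name

def parse_quantity(raw):
    """Convert user input to an int, trapping and logging bad input."""
    try:
        value = int(raw)
        log.info("parsed quantity %d", value)   # log normal activity too
        return value
    except ValueError:
        # Capture the error and its traceback for later analysis,
        # then degrade gracefully instead of crashing.
        log.exception("could not parse quantity from %r", raw)
        return None

print(parse_quantity("42"))    # 42
print(parse_quantity("oops"))  # None (the failure is captured in the log)
```

In a real system, the log destination would be a durable store reviewed after test runs, so each captured condition can be traced back to what produced it.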

About the Author

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years.  He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant.  His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International.  A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center.  From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.


Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004.  During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide.  He has maintained his standards as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004.  Mr. Leo is an ISC2 Certified Instructor.
