Software Development Methodologies
This course is the fourth of four courses covering Domain 1 of the CSSLP certification, and addresses the topic of software development methodologies.
- Learn about the secure development lifecycle and the implications it has on your software
- Understand the various software development methods for keeping your environments secure
- Learn about the software development lifecycle
This course is designed for those looking to take the Certified Secure Software Lifecycle Professional (CSSLP) certification, or for anyone interested in the topics it covers.
Any experience relating to information security would be advantageous, but is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by anyone, regardless of their experience in the security field.
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Now, some aspects of the secure development lifecycle. Whether the team is partly outsourced, entirely outsourced, or all in-house, every member at every level, executive, management, supervisory, and the actual developers, needs to be aware. They need to be educated and they need to be prepared for what is going to happen. That is to say, they need to be prepared with knowledge and experience of the basic principles and of any advanced topics that are likely to pop up within the context of what's being built.
The team must be well-managed. It must operate in conformance with iron-triangle-type principles, and there must be teamwork and cohesiveness to ensure that communications are in place: clear, regular, and fully disclosing. The team needs to be aware of the priorities, but also aware of what trade-offs may have to be made and the basis on which those trade-offs will be decided.
There needs to be engagement and compromise. That is, everyone must be committed to pursuing the project through to its successful completion, but they must be aware that along the way change is inevitable and there may be a need to compromise. However, compromise must be made with all due care and forethought, so that compromises do not result in a failure to comply and do not unnecessarily impact secure-by-design and secure-by-default principles. In other words, they cannot be made lightly; they must be made in the full light of all considerations. There must be a continuous risk management process that begins at the start of the project and works its way through to full operation and then, ultimately, to the disposal of the product once it has reached the end of its life.
In the project management phases, we're going to need to track risks and defects as the project develops. For this purpose, we develop a risk register and a bug tracking mechanism. One of the ways we can consider doing this is by using DREAD, which is an acronym for damage, reproducibility, exploitability, affected users, and discoverability.
In the context where we're going to use DREAD, we need to be sure that we define the metrics and the measurement method for each of these five characteristics. Looking at the bugs this will be applied against, we need to measure them in terms of their probability and possible damage in the areas of damage, reproducibility, exploitability, affected users, and discoverability, so that we can put them in a proper priority. We can do it in a somewhat qualitative measure with high, medium, and low ratings, or we can do it with a numerical rating system. It depends on how precise you want it to be and what the risk appetite and measurement preferences of the builder or the buyer are.
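As a minimal sketch of what such a rating scheme could look like, the snippet below scores a bug on the five DREAD characteristics using an assumed 1-to-3 numeric scale per category and maps the average onto a qualitative high/medium/low priority. The scale, thresholds, and function names are illustrative choices, not part of any standard.

```python
# Hypothetical DREAD-style scoring: each category is rated 1 (low) to 3 (high).
DREAD_CATEGORIES = (
    "damage", "reproducibility", "exploitability",
    "affected_users", "discoverability",
)

def dread_score(ratings: dict) -> float:
    """Average the five category ratings into a single risk score."""
    return sum(ratings[c] for c in DREAD_CATEGORIES) / len(DREAD_CATEGORIES)

def dread_priority(score: float) -> str:
    """Map a numeric score onto a qualitative high/medium/low rating."""
    if score >= 2.5:
        return "high"
    if score >= 1.5:
        return "medium"
    return "low"

# Example bug entry from a risk register
bug = {"damage": 3, "reproducibility": 2, "exploitability": 3,
       "affected_users": 3, "discoverability": 2}

print(dread_score(bug), dread_priority(dread_score(bug)))  # 2.6 high
```

Whether the builder or buyer prefers this numeric form or a purely qualitative one is, as noted above, a question of precision and risk appetite.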
We will also use threat modeling and various forms of threat categorization. This is, of course, a process that will discover what the threats are and examine their various elements: how each might manifest, and how it would prosecute an attack against a program or a system. In doing this, we're using the old adage that offense informs defense. If we know how, where, and by what means we're going to be attacked, we can more effectively develop a strategy to defend against it.
As we're looking at this, it is a decomposition method that helps us understand the following: what is being built, so we can discover its susceptibilities; how it might be attacked to exploit them and what impact that would produce; which agents would be the most likely to appear and how they might succeed; and, from that, what mitigations or modifications are feasible against them. These things also help us understand the opposite of all of these, so that we will know what will happen if we don't come up with a solution, because we have to accept that a solution may not be possible, or may not be 100% effective in all cases.
As we're doing this (and this slide is one you've seen before), the aim is to categorize precisely and clarify what our threats are in terms of what will happen, what they look like, how they might manifest, and other attributes about them. The more clearly we understand what our threats and threat agents might be, what damage they can cause, and what sort of manifestation we can expect, the better job we can do of constructing whatever defenses may be possible.
Now, along with DREAD, we use STRIDE. This one characterizes the kinds of attacks and the net effect each attack type may produce. STRIDE, of course, stands for spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. Each produces its respective attack result: spoofing produces masquerading, tampering produces alteration, repudiation gives the ability to deny, and information disclosure results in data loss, leakage, or breach.
A denial of service produces some form and some duration of downtime, and elevation of privilege, where a normal user or non-user elevates to the level of superuser, enables any of the other effects and a host of others besides. One of the most helpful things we can do in this process is to determine what attacks may be possible and then diagram how they might come to pass.
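The category-to-effect mapping described above can be captured as a simple lookup table, which is a common starting point for tagging entries in a risk register. This is only an illustrative sketch; the wording of each effect is paraphrased from the text.

```python
# STRIDE categories mapped to the net effect each attack type produces.
STRIDE_EFFECTS = {
    "Spoofing": "masquerading as another identity",
    "Tampering": "alteration of data or code",
    "Repudiation": "ability to deny having performed an action",
    "Information disclosure": "data loss, leakage, or breach",
    "Denial of service": "downtime of some form and duration",
    "Elevation of privilege": "normal user or non-user acting as superuser",
}

def classify(threat: str) -> str:
    """Return the expected net effect for a STRIDE threat category."""
    return STRIDE_EFFECTS[threat]

print(classify("Tampering"))  # alteration of data or code
```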
We want to diagram the infrastructure itself so that we understand all of the functional pieces. Then we want to identify the data flow. We want to make sure that we find all of the privilege boundaries that will have to be crossed at some point, either with the same level of privilege or by escalating it in some way. Then we have to identify the attacks for each diagrammed element to see how the various pieces are going to come together. And here we have an example of how to diagram the threat so that we can reveal, visually, where those different threat elements may manifest and what their impacts might be.
We have here, at the left, a user in the process of transacting a login. A login request is submitted, and the login response is given by the web servlet; in doing so, we cross one of the authorization boundaries. The web servlet authenticates the user by coordinating the internal portion of the login process, and through it we get the authentication, which is fed back across that first perimeter to the user as a successful login. Then another authentication takes place when they issue a SQL query; this also crosses an authorization boundary. Each one of these, you'll note, is in a loop, and each of these data flows can be interrupted or intercepted by an attacker, or by a process acting as a proxy for the attacker, if it's positioned properly.
Then, having authorization to access the college library in this example, the college library database accesses, by proxy for the user, the actual database files that store the information the ultimate end user is after. All along this path, what we're looking for is the boundary layers that have to be crossed and what is required to cross them successfully. What transaction information is passing between the user, the web servlet, the login process, and the college library database? Knowing what is passing tells us whether there's content the attacker might be interested in. This informs us about where the program may be at risk of exposing sensitive information and being made subject to compromise.
From this, we perform reduction analysis by decomposing the program under threat as the attack progresses. We look at each trust boundary to see what the boundary is, which element in the system puts up the trust boundary, what crosses it, what is allowed to cross, and what would be denied.
Looking at the data flow paths, we're able to see how the data flows, so that we can be sure there are no covert channels by which this data might also flow, or over which it might be stolen and then passed in a way this system will not see. Where are the input points, and what do they go into and come back out of? Are there any privileged operations concerned with the data flow? If there are, where are they, and what do they require? What do you have to do so that we can see whether the process works, whether it's needed, and whether it can be intercepted or interrupted in some way? Then we look at the overall picture to determine what the security stance and approach will be to addressing this particular flow and all of the issues identified within it.
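One way to make this reduction analysis concrete is to record each data flow with the trust boundary it crosses and whether a privileged operation occurs on it, then filter for the flows that warrant the closest review. The sketch below is hypothetical; the element names are borrowed from the college library example above, and the boundary names are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataFlow:
    source: str
    destination: str
    crosses_boundary: Optional[str]  # name of the trust boundary crossed, if any
    privileged: bool                 # does a privileged operation occur here?

# Flows from the diagrammed example (boundary names are illustrative)
flows = [
    DataFlow("User", "Web servlet", "user/servlet boundary", False),
    DataFlow("Web servlet", "Login process", None, True),
    DataFlow("Login process", "Web servlet", None, False),
    DataFlow("Web servlet", "College library DB", "servlet/db boundary", True),
    DataFlow("College library DB", "Database files", None, True),
]

# Flows that cross a boundary or perform privileged work are the first
# candidates for interception and deserve the most scrutiny.
review = [f for f in flows if f.crosses_boundary or f.privileged]
for f in review:
    print(f"{f.source} -> {f.destination}")
```

A table like this makes it straightforward to check that every boundary crossing and every privileged operation has a corresponding security control or mitigation decision.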
Now, the other threat examination methods will include things like fuzzing. Fuzzing is the practice of feeding a variety of inputs through an interface to produce different outputs. One of the things we try to do during fuzzing is stress testing, to see just what boundaries and limits our program will accept before it either fails or continues to function. We want to discover the range of resilience and whether or not the program can and will properly handle any error conditions that may result. We can use this to test for injection attacks and susceptibility to them, and to check for input-specific flaws or cross-site scripting.
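A minimal fuzzing sketch might look like the following. Here `parse_login` is a hypothetical function under test, standing in for any input-handling interface; the harness feeds it random inputs of varying lengths and treats a clean rejection (`ValueError`) as graceful handling, while any other exception would propagate as a bug worth triaging.

```python
import random
import string

def parse_login(data: str) -> tuple:
    """Toy function under test: expects 'user:password', both non-empty."""
    user, sep, password = data.partition(":")
    if not sep or not user or not password:
        raise ValueError("malformed login")
    return user, password

def random_input(max_len: int = 64) -> str:
    """Generate a random printable string of random length (0..max_len)."""
    length = random.randint(0, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def fuzz(runs: int = 1000) -> int:
    """Return the count of inputs handled without an unexpected error."""
    handled = 0
    for _ in range(runs):
        data = random_input()
        try:
            parse_login(data)
            handled += 1
        except ValueError:
            handled += 1  # graceful rejection counts as handled
        # any other exception propagates: that is a resilience bug
    return handled

print(fuzz())
```

Real fuzzers (coverage-guided tools, grammar-based generators) are far more sophisticated, but the structure is the same: generate inputs, drive the interface, and watch for failures that escape the program's error handling.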
At various places along this flow, we will of course conduct security reviews, where we look at the various features and their functionality, run tests to confirm them, and examine the requirements and regulations from which they are derived. We want to be sure that they are in the proper place and that they're functioning correctly every time they're invoked. Then we want to consider what mitigations are in place and what mitigations may be required.
Now, our options will be to do something or, potentially, to do nothing. Doing something means we're actively opposing whatever conditions we find that produce adverse consequences. Doing nothing may be chosen because acting would add complication without any benefit. So we have to weigh doing something against doing nothing as the approach, but in doing nothing, as the slide says, we're not literally doing nothing; we're making an informed decision about how to approach the problem.
Added complexity does not necessarily increase security and, in fact, may contribute to the reverse. So we always need to consider the positive and negative aspects of doing something or doing nothing before we make our decision. We can remove the problem as one kind of solution, or we can fix the problem. What we have to consider is: which is actually best? Which is actually feasible? And do we introduce technological complexity, which also means introducing greater fragility overall? The more things we have in motion, of course, the more things can go wrong. These are the kinds of considerations we have to give to every mitigation option we're going to consider.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 (a professional role) as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, and has trained and certified nearly 8,500 CISSP candidates since 1998 and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.