This is the first course in Domain 3 of the CSSLP certification and covers the essential ideas, concepts, and principles that you need to take into account when building secure software.
- Understand the main security models used for developing software in a secure way
- Learn how to develop and implement security models
- Understand how to manage risk and threats
This course is intended for anyone looking to develop secure software as well as those studying for the CSSLP certification.
Any experience with information security is advantageous but not essential. All topics are thoroughly explained and presented so that the information can be absorbed by anyone, regardless of prior experience in the security field.
Regardless of the kind of environment you're in, threat modeling needs to be a collaborative approach, bringing together all the elements: security, the development team of course, and operations, as well as a representative of management's interests, so that we can develop a model that addresses the following things. We first need to define the security objectives for the application or system. As with any problem, the clearer and more precise we can make the problem statement, the easier it will be to address it equally clearly as we seek a solution.
Now, along with this, we need to look at the target, which is the system or application, and perform a breakdown analysis of it so that we can examine the various components at a functional level and see how they may be exploited. From this we identify the threat-agent impact points and the actual agent or agents that we believe are most likely to impact them, and then do a comparative analysis to determine an optimal mitigation strategy. Once this model prototype is constructed, we will need to perform various exercises and validation testing on it before we accept the model as truly valid and complete. And, as with every model, it is a representation of what we think, so the team should be prepared to modify it as time and conditions require.
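The workflow just described can be sketched as a simple data structure. This is a minimal illustration, not part of any standard; all class and field names here are invented for the example.

```python
# A minimal sketch of the threat-modeling workflow described above.
# All names are illustrative, not from any standard or tool.
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    objectives: list = field(default_factory=list)     # security/privacy objectives
    components: list = field(default_factory=list)     # breakdown of the target
    impact_points: list = field(default_factory=list)  # threat-agent impact points
    mitigations: dict = field(default_factory=dict)    # impact point -> mitigation
    validated: bool = False                            # validation exercises done?

    def is_complete(self) -> bool:
        # Accept the model only when objectives are defined, every identified
        # impact point has a mitigation, and validation testing has been run.
        return (bool(self.objectives)
                and all(p in self.mitigations for p in self.impact_points)
                and self.validated)

model = ThreatModel(
    objectives=["protect patient records"],
    components=["web front end", "records database"],
    impact_points=["login form"],
)
model.mitigations["login form"] = "multi-factor authentication"
model.validated = True
print(model.is_complete())  # True once all impact points are mitigated and validated
```

The point of the sketch is simply that the model stays "incomplete" until every identified impact point has a chosen mitigation and the validation step has actually been performed.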
So the first step in this process is to define the security objectives for the product being designed and developed. We begin with the operational objectives, of course, and with them we include both the security and privacy objectives for the application or system. These will draw on information from many different sources, ranging from contractual commitments to regulatory and compliance requirements, and will also involve corporate standards as well as industry standards. The more complete and accurate a definition of these aspects we can produce, the better the insight and comment from team members will be, ensuring that all requirements, both functional and non-functional, are captured and that the set is reasonably complete.
Now, bear in mind that we are not trying for 100% completeness; this product will evolve all the way through the process until it is delivered, possibly right up until the last day before it is offered for implementation and delivery to the customer. But the more attention we pay to this, the better our product will be in the end. Then we go into target deconstruction. We perform a breakdown analysis of the product as we've conceived it, and we will probably do some exercises creating data flow diagrams, which provide a visual expression of the data as it flows between system elements, such as stores, processes, and movement between the various services and other modules.
We need to capture information about the interfaces and the trust boundaries, because this is essential: it defines a great many of the system's characteristics as well as the possible intrusion points that an exploitation attack can use. These include users, file systems, processes, and any other point within or between these elements where privilege levels change. We have to look at network traffic coming and going between the application elements and within the application, and at any function calls or remote procedure calls. We also need to clarify user activities, and this will involve normal use, non-normal use, and abuse cases, along with the cause and effect each produces.
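The trust-boundary idea can be shown concretely: reduce the data flow diagram to a list of flows, each tagged with the privilege level at either end, and flag the flows whose endpoints differ. The element names and trust levels below are invented for illustration.

```python
# Illustrative only: a data flow diagram reduced to flows between elements,
# flagging flows that cross a trust boundary (likely intrusion points).
flows = [
    # (source, destination, source_trust_level, destination_trust_level)
    ("browser",    "web server", "untrusted", "dmz"),
    ("web server", "app server", "dmz",       "internal"),
    ("app server", "database",   "internal",  "internal"),
]

def crossings(flows):
    """Return the flows whose endpoints sit at different privilege levels."""
    return [(src, dst) for src, dst, a, b in flows if a != b]

print(crossings(flows))  # [('browser', 'web server'), ('web server', 'app server')]
```

Each flow returned by `crossings` is a place where privilege levels change, and so, per the discussion above, a candidate intrusion point deserving closer analysis.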
Finally, we need to identify any external system or interface that will serve as a point of ingress or egress for any normal, authorized usage or abnormal, unauthorized usage. Fundamental to this threat modeling will be threat source analysis. Here we identify the threat-agent impact points and the respective agent type elements that need to be defined. This includes sources, which are either human or natural.
Now, by human, we mean human-made or human-motivated. This would of course include any technology made by people that can be misused, or that simply has the propensity to fail at some point. We have to look at the character, because not all threats are technological; some are non-technological, social engineering for example. Looking at the threat agent and its source, we have to determine: is this something intentional or unintentional? It would be naive to think that harm could never be unintentional, so we need to be very clear and objective about how we evaluate what we believe the threat source behind the threat itself to be.
Origin and geography have become far more important in recent years: the trusted insider, internal threats of various types including failures, and also the outsider. We look at the scope and extent, which could be isolated and well contained, or pervasive and expansive, as a distributed denial of service attack might be. We look at extrinsic and intrinsic attributes: things designed into, or omitted from, the target itself that may serve to retard or bar the attacking agent altogether. And threats can be systematic or non-systematic, and hostile or non-hostile, obviously.
An important attribute is foreseeability versus non-foreseeability. Some of the threats we're going to look at will be very defensible; others will not be at all, for example a zero day: you have to consider how to defend against something when you don't know what it is or how it will work. We have to include accidents and failures in addition to the intentional sorts of actions from hostile parties. And we consider whether something is intense, high and fast, or gradual, low and slow. What we're developing here is a profile of threat agents.
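Pulling the attributes above together, a threat-agent profile can be captured as a small record. The field names and the two sample profiles are illustrative assumptions, not a standard taxonomy.

```python
# Illustrative record of the threat-agent attributes discussed above.
from dataclasses import dataclass

@dataclass
class ThreatAgentProfile:
    source: str        # "human" (human-made/motivated) or "natural"
    intentional: bool  # intentional vs. unintentional (accident or failure)
    insider: bool      # trusted insider vs. outsider
    scope: str         # "isolated" or "pervasive"
    foreseeable: bool  # known/defensible threat vs. e.g. a zero day
    tempo: str         # "high and fast" or "low and slow"

# Example profiles (invented for illustration):
ddos = ThreatAgentProfile(
    source="human", intentional=True, insider=False,
    scope="pervasive", foreseeable=True, tempo="high and fast",
)
disk_failure = ThreatAgentProfile(
    source="natural", intentional=False, insider=True,
    scope="isolated", foreseeable=True, tempo="high and fast",
)
```

Comparing the two profiles illustrates the balance point made below: an unintentional internal failure and an intentional external attack differ on nearly every attribute, yet both belong in the model.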
We have to be sure that we keep things in perspective; for example, we can't emphasize external threats to the exclusion of anything internal. Internal threats, as we know all too well and painfully, can be far more damaging than external ones. External threats also grab a lot of headlines, and this tends to produce a lack of focus and balance in what we're looking at. Keeping perspective, and achieving balance in terms of where threats come from and what they intend to do, is entirely too important to ignore.
The STRIDE model is used here as part of how we determine what we can expect the threat agent to produce in the way of an effect. Using STRIDE, we look at the actions in order of the letters in the acronym itself: spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege, along with all the variants of each one, because as we know there are many. From that, we produce a model of what the attack result would look like if we encountered it.
Spoofing produces masquerading in various forms. Tampering is any kind of unauthorized alteration, possibly even an accidental data entry error; once again, we need to include failures and accidents as part of our threat modeling. Repudiation produces some form of deniability on the part of the attacking agency, so that we can't actually determine whether it was a legitimate source doing something wrong unintentionally or an attacker doing something wrong intentionally. Information disclosure is any form of data loss or leakage produced by unauthorized access. Denial of service results in downtime, planned or unplanned, extensive or short. And elevation of privilege brings the attacker's access to a very high level, such as root or system, which of course can open the door to all the other outcomes. So we apply this model, again keeping perspective and keeping things in balance as we do so. Once we've determined these factors, we then move to comparative mitigation.
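The category-to-effect mapping just walked through can be written down as a simple table, which is often how STRIDE is applied in practice when triaging each element of the data flow diagram:

```python
# The six STRIDE categories mapped to the attack results described above.
STRIDE = {
    "Spoofing":               "masquerading as a legitimate identity",
    "Tampering":              "unauthorized (or accidental) alteration of data",
    "Repudiation":            "deniability of an action by its source",
    "Information disclosure": "data loss or leakage",
    "Denial of service":      "downtime, planned or unplanned",
    "Elevation of privilege": "access raised to root or system level",
}

# Walking the table produces the effect model for each threat category.
for category, effect in STRIDE.items():
    print(f"{category}: {effect}")
```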
Now, there are several different approaches we can take in our mitigation strategy. While each threat may require a specific sort of mitigation, these need to be part of an overall mitigation strategy so that we can avoid gaps, redundancy, invalid prioritization, and confused operations. Standard mitigation treatments, those that are known to be effective, should be our first choices, as these reduce the risk presented. Exotic or untried options may need to be considered at some point for unique threats, but let's start with the basics first: things that we know will work.
One method is to employ attack trees that diagram how an attack would proceed from its initial impact point to its ultimate effect, and then use this to identify possible options that had not previously been considered. This approach assists us in prioritizing one threat scenario over another through the visualization of the attack, threat, and asset involved, and should highlight any options that may be available to us now that we've looked at it from this angle. The objective, of course, is to put together a program of mitigations aligned with the system objectives, meaning the business priorities and the various requirements. And as I've said all along, we need to be sure that the priority of these is kept correctly aligned, well orchestrated, and in balance.
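As a sketch of how an attack tree supports prioritization, the toy example below computes the minimum attacker cost to reach a goal: OR nodes let the attacker pick the easiest branch, while AND nodes require every child. The tree shape and the cost figures are invented purely for illustration.

```python
# A toy attack tree: OR nodes pick the cheapest child, AND nodes require all.
# Node structure and costs are invented for illustration only.
def tree_cost(node):
    """Minimum attacker cost to achieve the goal at this node."""
    kind, payload = node                 # ("leaf", cost) or ("or"/"and", children)
    if kind == "leaf":
        return payload
    costs = [tree_cost(child) for child in payload]
    return min(costs) if kind == "or" else sum(costs)

steal_records = (
    "or", [
        ("leaf", 90),                    # break the database encryption directly
        ("and", [("leaf", 10),           # phish an employee credential, AND
                 ("leaf", 30)]),         # escalate to database access
    ],
)
print(tree_cost(steal_records))  # 40: the phishing path is cheaper than cracking
```

The cheapest path through the tree is the scenario to prioritize for mitigation first, which is exactly the comparative judgment the visualization is meant to support.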
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standing as a professional educator, training and certifying nearly 8500 CISSP candidates since 1998 and nearly 2500 candidates in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.