This is the second course in Domain 3 of the CSSLP certification and covers the essential ideas, concepts, and principles that you need to take into account when building secure software.
Learning Objectives
- Understand how security design principles differ from general software design principles
- Understand the relationship between interconnectivity and security management interfaces
- Learn how to balance potentially competing or seemingly conflicting requirements to obtain the right security level
Intended Audience
This course is intended for anyone looking to develop secure software as well as those studying for the CSSLP certification.
Prerequisites
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.
So let's move from that into the more foundational security design principles that will help this project acquire the characteristics I've been referring to. Now, there are several common yet insecure design elements that we typically find in software that has not been conceived by the process we're discussing. These include improper implementation of least privilege, something that can go wrong both in the mechanisms designed to enable it and, of course, in how it's operated after the software is converted to production use.
They also include software that fails insecurely; any authentication mechanism that can in any way be bypassed; employing security through obscurity as a primary system design objective rather than, perhaps, a contributing factor in certain cases; improper error handling, not often seen as a security-related matter but one that can certainly contribute to it; and weak input validation, something that we know produces a variety of very serious system problems.
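To make that last point concrete, here's a minimal sketch of allow-list input validation in Python. The field name and pattern are hypothetical, purely for illustration, not something prescribed by the course.

```python
import re

# Allow-list validation: accept only input matching an explicit,
# narrow pattern, and reject everything else by default.
USERNAME_PATTERN = re.compile(r"[a-z][a-z0-9_]{2,31}")  # hypothetical rule

def validate_username(raw: str) -> str:
    """Return the username if well formed; otherwise refuse it."""
    if not USERNAME_PATTERN.fullmatch(raw):
        raise ValueError("invalid username")  # reject by default
    return raw
```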
We have to be sure that we take these foundational principles and, to a contextually determined degree, use them to overcome the built-in vulnerabilities that might otherwise exist. We always have to strive to balance the secure design principles with the functional utility of the program we're working on, because we have to realize that it may not always be possible for each of these security principles to be implemented and operated in its fullest state of function.
As I mentioned earlier, we're going to have to consider compromise decisions. The trade-offs that will face us from time to time need to be very carefully considered, so that we know what we're trading off, what we're getting in return, and how well we're achieving the necessary balance between those two elements. So let's explore the functional design principles, which need first to be understood in the context of our project; then we need to make those carefully considered compromise decisions about how we will employ them in an optimal fashion.
As I mentioned before, we have to aim for good enough security. Now, good enough simply means that the security we're seeking to establish in its operational form is matched to the type of data and the critical or non-critical nature of the application that will be processing it. It means that it's going to work at a level that achieves the goals without, you might say, overdoing it unnecessarily. Aiming for good enough also means that we're pursuing simplicity of design and functionality and avoiding complexity, because, as is well known, excessive complexity is the enemy of good security and operational reliability.
We're going to have to establish the means to build in least privilege, first by knowing what that would mean in the context of the program we're developing. This, of course, means that only the necessary and minimum level of access rights or privileges is explicitly assigned to a given subject or process. Since we are in the design considerations phase, we need to consider how we will do that, to make sure that we've got controls that will be appropriate and adequate to the need.
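To make least privilege concrete at design time, one approach is to model permissions explicitly and grant each subject only the minimal set it needs. Here's a small hypothetical sketch; the role and permission names are invented for illustration.

```python
# Hypothetical least-privilege model: each role carries only the
# minimum permissions it needs; anything not granted is denied.
ROLE_PERMISSIONS = {
    "report_viewer": {"report:read"},
    "report_editor": {"report:read", "report:write"},
}

def is_allowed(role: str, permission: str) -> bool:
    # Default deny: unknown roles and ungranted permissions are refused.
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("report_viewer", "report:read")
assert not is_allowed("report_viewer", "report:write")
```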
We also have to be sure, later on, that the information we capture about how we got to this point can inform how least privilege will be operationalized, so that a breach cannot occur other than by intentional act or, hopefully, by a vanishingly unlikely accident. This includes separation of duties, because that is something that needs to be embodied in various ways in both our design processes and the target program.
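One simple way separation of duties can show up in code is as a dual-control rule: a sensitive operation completes only when at least two distinct people, neither of them the requester, have approved it. A minimal sketch, with hypothetical names:

```python
# Hypothetical dual-control check: a sensitive operation requires
# approval from at least two people, and the requester may not
# approve their own request.
def can_execute(requester: str, approvers: set) -> bool:
    independent = approvers - {requester}  # strip any self-approval
    return len(independent) >= 2           # two-person rule

assert not can_execute("alice", {"alice", "bob"})  # only one other approver
assert can_execute("alice", {"bob", "carol"})      # two independent approvers
```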
We need to look at compartmentalization, in which two or more conditions must all be satisfied before an operation can complete, thereby enforcing separation of duties, much as the sketch above suggests. We also need to consider the possibility of segmentation of duties as part of the question of separation. Of course, as you should expect, we need to discuss defense in depth. This is a classic and well-proven design philosophy, reflecting the idea that we build in multiple overlapping and, hopefully, mutually reinforcing layers of defense.
The opposite of this would be to design what is in effect a software fortress. That, of course, puts all of the defenses at one level, making it possible that if that one level should be penetrated, the program and all of the data it processes might be left wide open. By defending in depth instead, we defend in one layer followed by another and another, each employing different kinds of mitigating techniques, making sure that they reinforce one another and do not produce the opportunity for a cascading, catastrophic, penetrate-one-penetrate-all failure.
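One way to picture defense in depth in code is as a pipeline of independent checks, any one of which can deny a request on its own. The layer names and rules below are illustrative assumptions, not a prescribed design:

```python
# Illustrative defense-in-depth pipeline: each layer is an independent
# check, so bypassing one layer does not defeat the others.
def network_filter(req: dict) -> bool:
    return req.get("source_ip", "").startswith("10.")  # hypothetical rule

def authenticated(req: dict) -> bool:
    return req.get("session_valid", False)

def authorized(req: dict) -> bool:
    return req.get("permission_granted", False)

LAYERS = [network_filter, authenticated, authorized]

def handle(req: dict) -> bool:
    # Any single layer can reject the request (default deny).
    return all(layer(req) for layer in LAYERS)
```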
So, in looking at defense in depth, we're going to look at other features that can be produced or augmented by it. These include fail-safe, a principle which ensures that the software still functions reliably when it's attacked and that it is rapidly recoverable into a normal, secure business state to ensure resiliency. It also means that when the software fails, as all software inevitably will at some stage under some sort of threat action, it fails into a state that closes down around the data, so to speak, to make sure that the data is not in any way exposed by the failure. It's a well-known strategy of hackers to attempt to break these controls in an effort to expose the data through a control's failure, and by designing fail-safe, we seek to counteract that.
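A minimal sketch of failing safe: if anything goes wrong while evaluating an access decision, the code denies access rather than exposing anything, and reports nothing about its internals. The policy store and names here are hypothetical:

```python
POLICY = {("alice", "payroll"): "allow"}  # hypothetical policy store

def lookup_policy(subject: str, resource: str) -> str:
    return POLICY[(subject, resource)]  # raises KeyError if unknown

def check_access(subject: str, resource: str) -> bool:
    """Fail-safe check: any internal error results in denial."""
    try:
        return lookup_policy(subject, resource) == "allow"
    except Exception:
        # Fail closed: deny on any failure, exposing neither the data
        # nor any internal detail of what went wrong.
        return False

assert check_access("alice", "payroll")
assert not check_access("mallory", "payroll")  # unknown subject: denied, no crash
```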
As I mentioned before, we want to minimize or even eliminate unnecessary complexity. Complexity is the enemy of good security because it introduces technological fragility. How we do that will vary from one project to the next, but by employing the philosophy of economy of mechanism, we seek to keep things as simple as we practically can. Complexity is not always avoidable, but avoiding needless complexity is something that we should always strive to do.
Then we have complete mediation: the principle that states that access requests should be mediated each and every time, so that authority is not circumvented in subsequent requests. Now, to put this another way, complete mediation performed by an access control system, in the form of a reference monitor, means that there is no unmediated access to anything by anything. If unmediated access were possible, the access control system would have a tremendous gap, because in any system where we cannot control all interactions (and this is one of those rare cases when we say all, we mean all, 100 percent of all), there is neither a reliable control mechanism nor a reliable logging mechanism. So, complete mediation is absolutely a necessity.
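Here's a minimal reference-monitor sketch of complete mediation: every access passes through a single mediation point, the decision is re-made on every request rather than cached, and every attempt is logged. The policy shape and names are assumptions for illustration:

```python
class ReferenceMonitor:
    """Single mediation point: every access is checked here, every time."""

    def __init__(self, policy):
        self._policy = policy  # maps (subject, object) to allowed actions

    def access(self, subject: str, obj: str, action: str) -> None:
        # Re-check authority on each and every request; never reuse an
        # earlier decision, so a revocation takes effect immediately.
        allowed = action in self._policy.get((subject, obj), set())
        print(f"audit: {subject} {action} {obj} -> {allowed}")  # log every attempt
        if not allowed:
            raise PermissionError(f"{subject} may not {action} {obj}")

monitor = ReferenceMonitor({("alice", "ledger"): {"read"}})
monitor.access("alice", "ledger", "read")  # mediated and logged
```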
Along with the notion of keeping things simple is the notion of open design. To put it simply, this means there are no black boxes. This idea traces back to Auguste Kerckhoffs, the nineteenth-century linguist and cryptographer, who stated in a well-known paper that the security of a system should not depend on the secrecy of its design. That is a way of saying that knowing how the safeguards were designed should not give an attacker the means to defeat them: even though we can see what they do, that alone gives us no way to intervene, exploit them in some way, or stop them from functioning. It also means that we have counteracted the black box by ensuring that everyone involved, all authorized persons on our staff, can know how the thing works, so that it can be seen how it is supposed to function. There are no secrets in the design; the only secret that matters is the key.
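In practice, open design means relying on a published, well-reviewed algorithm and keeping only the key secret. A small sketch using the widely used Python cryptography package (the library choice is an assumption; any vetted implementation serves the same point):

```python
from cryptography.fernet import Fernet  # published, peer-reviewed design

# The algorithm is completely public; only this key is secret.
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"the design can be open; the key cannot")
assert f.decrypt(token) == b"the design can be open; the key cannot"
```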
Least common mechanism refers to minimizing, to a reasonable point, shared programs and other shared elements. We do this to instantiate appropriate isolation, reducing the possibility that these various shared services combine to form a covert channel: a hidden channel that we may not be able to discern right off and that the system may not even be able to tell us about. Least common mechanism helps us eliminate covert channels through which a successful hacker would be able first to infiltrate and then to exfiltrate data, and it also helps prevent the formation of catastrophic cascading failures.
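One loose illustration of least common mechanism: give each session its own isolated resource rather than having all sessions share one, so shared state cannot become a signaling path between them. A hypothetical sketch:

```python
import tempfile

# Least common mechanism: each session gets its own scratch area
# instead of all sessions sharing one directory, so one session's
# activity cannot leak to, or signal, another through shared state.
def scratch_dir_for(session_id: str) -> str:
    return tempfile.mkdtemp(prefix=f"session-{session_id}-")

assert scratch_dir_for("a") != scratch_dir_for("b")  # isolated, not shared
```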
One thing that may be a little unobvious is the notion of psychological acceptability. This deals a great deal with how security and the user will interact. In our design phases, we have to take the approach that security must be recognized as intrusive to a degree; it would be very hard for it not to be. But we need to examine ways and means of keeping that degree, and the complexities it may introduce, to a user-acceptable minimum. And there's a very important reason that we must include this: when software of any kind, doing any sort of function, gets to be too difficult, too painful, or too complex for users, it prompts them to consider shortcuts, and that is something that must be avoided. By employing psychological acceptability and encouraging users not to seek shortcuts, we make the software more secure, because the user, even an unauthorized one, is now taken out of the problem statement, or at least reduced in the impact they might have.
As always, we need to look at weakest links and single points of failure. The weakest link may prove to be the way in which a hacker impacts a system or a software application and causes it to fail or to perform in the way that they want and not in the way that we want. A single point of failure is certainly a weakest link, but through a single point of failure the failures can be of a far greater magnitude, of the catastrophic cascading type.
One of the prime attributes that we want to pursue is the leveraging of existing components. By leveraging what exists, we are examining what has already been proven valid and trustworthy. In doing this, we can make prudent and balanced use of the elements already present rather than reinventing the wheel or creating outright new, untested, and unproven things. This allows us to build on what we have, so that we build on a stronger, trusted foundation rather than something that may be riskier simply for being designed new for whatever we wish.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standing as a professional educator, and since 1998 has trained and certified nearly 8,500 CISSP candidates, along with nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.