This is the first course in Domain 3 of the CSSLP certification and covers the essential ideas, concepts, and principles that you need to take into account when building secure software.
Learning Objectives
- Understand the main security models used for developing software in a secure way
- Learn how to develop and implement security models
- Understand how to manage risk and threats
Intended Audience
This course is intended for anyone looking to develop secure software as well as those studying for the CSSLP certification.
Prerequisites
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of their experience in the security field.
It is well demonstrated that bugs in software, and the effort you have to go through to find them and fix them, to shoot them dead as we say, cost an awful lot more money than it would to simply write around them. There is much research demonstrating this: fixing the flawed code itself can cost as much as 100 to 150 times the dollar value of writing a new routine to bypass it. And the later in the process this occurs, the more expensive it gets: as much as 100 times more once the application is in production than if it had been corrected in the design phase or, better still, never introduced at all.
It is equally well proven that a bolt-on approach is not more cost-effective than a built-in approach, due to poorer overall software quality and the higher maintenance costs that, according to research, very likely follow. So we want to use a process we know well from medicine. It goes like this: early detection means early intervention, which leads to early cure. With that logic in mind, we want to practice early detection and correction, and the benefits it yields will be these: minimal redesign, and improved consistency and performance, both well proven in the research. Earlier correction of business-rule and logic flaws in the software is also far less expensive, and even more so, again, if they are never introduced.
The resulting product will be more resilient and more recoverable, leading to overall higher quality and better maintainability. So we begin with an assessment of the attack surface we estimate the software we're planning to produce will have. This, of course, is the observable function and code presented to unauthorized parties. It includes various aspects: the interfaces, the protocols, input fields, resource files, services, and a variety of other things, the combination of which yields the attack surface we're evaluating.
Now, the size of the attack surface is going to be a function of the sheer number of these elements and their potential severity if, or perhaps we should say when, they're exploited. These are, of course, the first things that hackers will look for when they encounter a software target, and unfortunately, they're not disappointed nearly as often as we would like. So we must assess the attack surface, track it, and manage it throughout the development process. One of the central questions in evaluating and measuring the attack surface is the ways and means of access through which we would expect a hostile party to penetrate and exploit this software.
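To make that idea concrete, here is a minimal sketch of one way to treat attack surface size as a function of the number of exposed elements and their potential severity. The element names and the 1-to-5 severity scale are assumptions made for illustration, not a metric prescribed by the course.

```python
from dataclasses import dataclass

@dataclass
class SurfaceElement:
    name: str        # an interface, protocol, input field, resource file, or service
    exposed: bool    # is it observable or reachable by an unauthorized party?
    severity: int    # estimated impact if exploited, on an assumed 1 (low) to 5 (high) scale

def attack_surface_score(elements: list[SurfaceElement]) -> int:
    """Crude size metric: sum the severity of every exposed element."""
    return sum(e.severity for e in elements if e.exposed)

surface = [
    SurfaceElement("public REST endpoint", exposed=True,  severity=4),
    SurfaceElement("admin console",        exposed=True,  severity=5),
    SurfaceElement("internal batch job",   exposed=False, severity=3),
]

print(attack_surface_score(surface))  # 9; re-compute as the design changes
```

Tracking a number like this at each design review is one simple way to satisfy the "assess, track, and manage" requirement just described.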
Now, the answer includes an analysis of common mechanisms that are used or shared across the various applications and services within the system, such as access control lists. Included in this will be the things that are running: services, open sockets, and pipes; the types of accounts that exist but shouldn't, such as guest; and active accounts with high privilege levels that shouldn't be there, such as anything at an admin level that really should never have been created, and certainly shouldn't be used at this point by anyone not authorized to do so.
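As a simple illustration of that kind of inventory, the Unix-only sketch below uses Python's standard pwd module to flag accounts that arguably shouldn't exist and accounts holding root-level privilege. The list of suspect account names is an assumption for the example; enumerating running services, open sockets, and pipes would typically rely on platform tools such as ss or netstat, which are not shown here.

```python
import pwd  # Unix-only: reads the local account database

# Assumed examples of account names that should not normally exist.
SUSPECT_NAMES = {"guest", "test", "temp"}

for entry in pwd.getpwall():
    if entry.pw_name in SUSPECT_NAMES:
        print(f"review: account '{entry.pw_name}' probably should not exist")
    if entry.pw_uid == 0 and entry.pw_name != "root":
        print(f"review: account '{entry.pw_name}' has UID 0 (root-level privilege)")
```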
Now, many of these elements may be seen as generic to all similar platforms, and therefore discounted as unimportant, assumed to be inaccessible, or otherwise disregarded. But it is the context in which this analysis is conducted that matters, because that context is the foundation for determining the size and depth of the attack surface.
First, what we need to do is establish a baseline that will be the determining factor of conformance in the design analysis. Second, we need to look at specific elements and decide which are essential, which are preferred, and which are optional, because these decisions largely determine the smallest set of functions that must be present for the product to work, and which functions can be eliminated or turned off. Third, we have to examine the privilege levels required by users or processes to run anything within the program, or within the system in which it resides, and where possible reduce them to the lowest workable levels, or remove them if they turn out not to be truly necessary. And finally, we need to periodically re-examine these factors throughout the development cycle, to ensure that any changes are captured and any changed conditions are accounted for, so that the decisions we made in the past to eliminate or turn things off still stand, or, if they are reversed, that the reversal is justified.
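The classification and privilege-reduction steps just described can be captured in a very small data model. The sketch below is one assumed way to record each element's necessity and required privilege and then derive the minimal set to keep, the optional items to turn off, and the high-privilege items to reduce or justify; the feature names and privilege labels are illustrative only.

```python
from dataclasses import dataclass
from enum import Enum

class Need(Enum):
    ESSENTIAL = "essential"   # must be present for the product to function
    PREFERRED = "preferred"   # valuable, but negotiable
    OPTIONAL  = "optional"    # candidate for elimination or disabling

@dataclass
class Feature:
    name: str
    need: Need
    privilege: str            # e.g. "user", "service", "admin"

baseline = [
    Feature("login endpoint",    Need.ESSENTIAL, "user"),
    Feature("report export",     Need.PREFERRED, "user"),
    Feature("remote debug port", Need.OPTIONAL,  "admin"),
]

keep    = [f.name for f in baseline if f.need is Need.ESSENTIAL]
disable = [f.name for f in baseline if f.need is Need.OPTIONAL]
review  = [f.name for f in baseline if f.privilege == "admin"]

print("minimal set to keep:", keep)
print("turn off or remove:", disable)
print("reduce privilege or justify:", review)
```

Re-running a check like this at each milestone is one way to implement the periodic re-examination called for in the final step.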
So what we're talking about is minimization of the attack surface, and this analysis should be inherent in the design and build process and revisited at various points throughout development. These efforts must be validated and captured in documentation so that they can be used in the implementation processes as well, not just in design; we need them for testing and for operationalization, to best inform those processes and reduce configuration errors.
The thing to do is to conceive well, design better, and implement perfectly, and that's the point being made here. Whatever good we have done can be undone by poor implementation, and so this information will serve a valid and vital purpose during implementation.
Now, contributions to security made in this way may be difficult to quantify precisely, because what we're doing isn't always something that can be measured directly. However, there can be no question that producing a sounder, more secure foundation for the application will reap benefits in better performance and reduced maintenance costs. One of the things we need to do is threat modeling. This is a practice that has developed and matured greatly in recent years, but it varies from one type of application to the next. What it does is develop a profile that identifies and characterizes the threats relevant to the particular application that is the subject of our development project. It is therefore contextually specific, even though the processes used to do it are rather general and widely applicable.
So what we look at first is how to identify the threat elements particular to this application: understanding what they are, how they work, the attack process itself, and which attributes will be affected, in order to define a mitigation strategy. Once again, bear in mind that we are in the design process, the earliest phase where we can actually do something positive and effective about these threats, so there's no better place to do this. The activity of threat modeling should begin, like all the other processes discussed so far, at the start of architecture and design and, as we've said, continually evolve in sync as the project progresses.
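The threat profile described here, covering what the threat is, how the attack works, which attribute is affected, and what the mitigation will be, can be recorded in a simple structure like the sketch below. The example entries and field names are assumptions for illustration and do not represent any particular threat-modeling methodology.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    element: str       # the part of the application that is exposed
    attack: str        # how the attack process works
    affected: str      # attribute affected: confidentiality, integrity, or availability
    mitigation: str    # the design-time mitigation strategy

model = [
    Threat("login form", "credential stuffing with leaked password lists",
           "confidentiality", "rate limiting plus multi-factor authentication"),
    Threat("file upload", "malicious file content triggers code execution",
           "integrity", "content validation and sandboxed processing"),
]

for t in model:
    print(f"{t.element}: {t.attack} -> {t.affected} at risk; mitigate with {t.mitigation}")
```

Starting a registry like this during architecture and design, and updating it as the project progresses, keeps the model evolving in sync with the work, as described above.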
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer, for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in that role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has continued to work as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998 and nearly 2,500 candidates in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.