
Design Principles

Overview
Difficulty: Intermediate
Duration: 56m
Students: 6
Description

This course is the first of four courses covering Domain 1 of the CSSLP, discussing general security concepts.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • The CIA Triad

  • Authorization, Authentication, and Accounting

  • Design Principles

  • Security Models

  • Access Controls

  • Threat/Adversary Analysis

Intended Audience

This course is designed for those looking to take the Certified Secure Software Lifecycle Professional (CSSLP) certification.

Prerequisites

Any experience relating to information security would be advantageous but is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by anyone, regardless of experience within the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Accompanying those attributes are some design principles. What we're looking for is not perfection. What we're looking for, as the slide indicates, is security that is good enough. Now, this requires an understanding of the intended uses and the sensitivity of the data and system in question, so that we can balance the impact value of potential compromises, in both type and order of magnitude, against the protective value of the measures needed to offset such impacts.

We also want to apply the principle of least privilege, one that has been with us for many decades. This, of course, reflects the understanding that each role should have exactly the level of access authority needed to perform its tasks and to access the data that falls within the purview of that particular role. It's done to achieve an alignment between need and level of access. We want to be sure that the role is fully enabled to succeed in the performance of its assigned duties, but we also want to be sure that we're not overexposing either the data or the system it resides in.
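Least privilege can be sketched in a few lines of code: grant nothing by default, and allow only what a role explicitly needs. The role names and permission strings here are purely illustrative, not from the course material.

```python
# Illustrative role-to-permission map: each role gets only what its duties require.
ROLE_PERMISSIONS = {
    "auditor": {"read:reports"},
    "clerk": {"read:invoices", "write:invoices"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default; grant only permissions explicitly assigned to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

An auditor can read reports (`is_allowed("auditor", "read:reports")` is true), but cannot write invoices, because that authority was never aligned with the auditor's need.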

Another common design principle is separation of duties. Now, this is a practice, as we all know, put in place to avoid conflicts of interest, but it also includes the idea of preventing excessive concentration of authority in too few hands, or even a single pair of hands. It likewise includes the segmentation of duties to avoid the conflicts of interest created by improperly combining certain kinds of duties, such as having a single person perform accounts payable processing as well as accounts receivable processing, a clear conflict of interest.

Now, a secure design principle that I'm sure you're well familiar with is what we've always called defense-in-depth. This, of course, dates back many centuries to military practice, but the notion of building your defenses in layers, with offsetting, complementary, and mutually reinforcing types of controls, is something we still employ with, it should be said, a fair amount of success. When we put this together, we're designing security measures to reside at different levels within a system to reinforce or offset each other, so that ideally no single path and no single point of failure should exist.

Now, these will include functions throughout that are for prevention, detection and correction. Each layer acts to protect itself, and as mentioned, to reinforce others on both sides, both more externally and more internally. As I said, we're trying to eliminate single points of failure as we find them, and where we can't eliminate them, try to compensate at one level for weaknesses that we find at any other. Now, this can sometimes be called by other names, such as defense diversity or layered security.

In all cases, this idea of defense-in-depth is very much a philosophy that all of our applications should employ to the extent possible, balanced, again, against their utility and usability. So let's take a look at some specifics when it comes to defense-in-depth.

Now, in using the layered approach we've been talking about, we're trying to push an attacker through a particular pathway, a particular sequence of layers and the controls found in each. In doing so, we reduce their overall chance of success by having them, in an iterative fashion, exhaust their resources: time, energy, tools, et cetera. By finding different ways to stop them, no one approach will work, and ideally no combination of approaches will ultimately succeed. These layers also raise the possibility that we can detect what they're doing before they get any deeper than we truly want them to.

So starting at the innermost layer, we have data, around which we have the application, and then host, internal network, the perimeter, and physical security; and, in a sense surrounding them all, the governance documentation: the policies, procedures, and awareness programs. On the right-hand side of the slide, you see the various types of control specifics that would be applied at each of these layers, each one there to do something it is specifically intended to do, but also adding strength to the layers and controls on either side of it, sometimes adding, sometimes compensating.

Now, some other design principles need to be incorporated, or at the very least strongly considered. We want to be sure we consider the notion of fail safe, or fail secure. Systems can fail in a variety of ways, but the idea of fail safe or fail secure means that when a failure occurs, the system should fail into a state that is no less secure than the state it was in before the failure. And when it fails, it should not threaten the system or other components, and it should certainly prevent any threat from arising that could cause harm to humans.
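In code, failing secure usually means failing closed: any error during a security decision results in denial rather than access. The badge-check function below is a hypothetical sketch of that pattern.

```python
def check_badge(badge_id: str, records) -> bool:
    """Fail secure: any error while checking a badge denies entry, never grants it."""
    try:
        return records[badge_id] == "active"
    except Exception:
        # A missing record, corrupt database, or any other failure leaves the
        # system no less secure than before: the door stays locked.
        return False
```

Note the deliberate asymmetry: a bug here can inconvenience a legitimate user, but it cannot silently open the door to everyone.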

We need to employ economy of mechanism. To put it another way, this means: keep it simple. This is the practice of minimization, reducing complexity and the corresponding technological fragility that accompanies it. Now, as I said, this is similar to the idea of the KISS principle, keeping it simple. What we're trying to do is reduce the attack surface by keeping the design simple. This is somewhat of a fluid characteristic, but as a general practice, looking at the particular context of a control or an operation, we should consider all the different pieces required to get it done and see if we can't reduce that in some way to a simpler, yet still effective and efficient, way of accomplishing the task.

We, of course, need something called complete mediation. Every system has an access control system that not only admits subjects so they can manipulate the programs, the routines, the screens, and the data, but also contains the objects to which that access is granted. Complete mediation means that every interaction between subjects and objects is verified at every occurrence. The mechanism that enforces this, called the reference monitor, implements the access control policy over all objects for all subjects within the system. It cannot be bypassed in any way; any attempt to do so is denied and logged. And should the reference monitor itself fail, it will by design cause the system to crash, to ensure that access control is never lost.
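A toy reference monitor can make these properties concrete. This is a minimal sketch, not a real implementation: every subject-object access goes through one `access` method, every attempt is logged (including denials), and there is no other path to the objects.

```python
class ReferenceMonitor:
    """Toy reference monitor: mediates and logs every subject-object access."""

    def __init__(self, acl):
        # acl maps (subject, object) pairs to the set of permitted operations.
        self.acl = acl
        self.log = []

    def access(self, subject: str, obj: str, op: str) -> bool:
        allowed = op in self.acl.get((subject, obj), set())
        # Complete mediation: every attempt is checked and logged, even denials.
        self.log.append((subject, obj, op, allowed))
        if not allowed:
            raise PermissionError(f"{subject} may not {op} {obj}")
        return True
```

In a real system the monitor must also be tamper-proof and non-bypassable, properties a Python class cannot provide on its own; the sketch only shows the check-everything, log-everything behavior.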

We have open design. Now, the idea behind open design is this: if there is a mechanism that works, that we come to depend on, and its manner of functioning is in any way hidden or unknown, it effectively becomes a black box. Open design means that the security of the system does not depend on keeping its design secret; the system simply performs its function. For example, in the case of encryption, when we test for strength, we find that the secrecy lies in the key, not in the mechanism of how that key is employed.

We have, of course, procedures that protect how things are handled and the human interaction with the technology. But knowing everything about the mechanism of encryption, if it is designed and implemented properly, does not reveal anything about how it actually does the magic inside that black box. By keeping the key secret, someone could know everything about the algorithm and still not be able to break through it. This does not endorse, but in fact opposes, the idea of security by obscurity: in other words, hiding something and depending on that hiding of its mechanism to prevent exploitation.
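Python's standard library illustrates this directly. HMAC-SHA256 is a fully public, openly specified algorithm; all secrecy lives in the key. An attacker who knows the algorithm but not the key cannot produce a valid authentication tag.

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)          # the only secret in the whole scheme
msg = b"transfer $100 to account 42"

# HMAC-SHA256 is an open, published design; knowing it helps an attacker not at all.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# An attacker guessing at the key produces a tag that will not verify.
forged = hmac.new(b"not the real key", msg, hashlib.sha256).hexdigest()
```

Here `hmac.compare_digest(tag, forged)` is false, while recomputing the tag with the real key verifies, which is Kerckhoffs's principle in miniature: open mechanism, secret key.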

We want to use least common mechanism. Now, this is a design approach that reduces potential exposures by reducing the covert channels that may exist, or other unwanted sharing pathways that may inadvertently cause data sharing beyond control and against policy. We also have to contend with psychological acceptability. In other words, the systems we design and use have to be acceptable and usable to the humans whose job it will be to operate them and manipulate data with them.

Greater ease of use appeals to users, but it can subvert better security. This is the trade-off we commonly hear: the more usable it is, the more user-friendly it is, the less secure it tends to be. We would like security to in no way interfere with what people do. Unfortunately, that is in direct conflict with the purpose of security.

Inevitably, security will in some way be inconvenient. But the idea that security should not obstruct or complicate unnecessarily means that security can coexist with the system it's a part of, placing a minimal burden on the user without spoiling the usability or utility of the program. We also want to be sure we identify weak links in the design before we commit them to any form of code or to silicon. In finding them, it's not necessarily that we want to remove them; they may be inherently weak but still required for the functions they perform. What we need to do is find ways to protect and, where possible, compensate for them. Since these are probably the primary points of impact that attackers will seek out, we're going to make them as hard to exploit as we reasonably can, while not unnecessarily impacting the functionality of the program.

We want to make use of what we have. We want to leverage our existing components, building up their security and evolving them to a higher state where we can, and as we can, instead of starting with new concepts and new materials every single time. There is a trade-off here: too much reuse can enlarge the attack surface, because a single vulnerability in a shared component spreads to everywhere that component is used. But with fewer new components, we may be able to shape the functionality more to our desires and our design without introducing any additional vulnerability or enlarging our attack surface in any meaningful way.

And then, of course, there is that bugaboo of software development: making sure that we design out every single point of failure that we can. As is the case with every other design feature we're going to include in our system or program, there are undoubtedly going to be some residual single points of failure. For those that have to be left in, much as we would prefer otherwise, we need to find them so that we can compensate for them, add additional controls, and surround them as best we can with protective measures.

Lectures

Security Basics: CIA Triad and Functional Requirements (AAA Services) - Security Models - Security Models: Access Controls - Security Models: State & Mode - Threat/Adversary Analysis

About the Author
Avatar
Ross Leo
Instructor
Students: 3576
Courses: 47
Learning Paths: 6

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.


Upon attaining his CISSP in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, training and certifying nearly 8500 CISSP candidates since 1998, and nearly 2500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.