
Domain 3:2 - Design Considerations

The CIA Triad
This is the second course in Domain 3 of the CSSLP certification and covers the essential ideas, concepts, and principles that you need to take into account when building secure software.

Learning Objectives

  • Understand how security design principles differ from general software design principles
  • Understand the relationship between interconnectivity and security management interfaces
  • Learn how to balance potentially competing or seemingly conflicting requirements to achieve the right level of security

Intended Audience

This course is intended for anyone looking to develop secure software as well as those studying for the CSSLP certification.


Any experience relating to information security would be advantageous, but not essential. All topics are thoroughly explained and presented so that the material can be absorbed by anyone, regardless of experience in the security field.


Now, the CIA triad, which we've discussed many times and will again, reflects our main priorities regarding data: confidentiality, keeping it from exposure to unauthorized eyes; integrity, protecting it from any unauthorized or unwanted modification or tampering; and availability, making sure that access to services, systems, and data by authorized users is not interfered with or disrupted in any way.

Now, one point I want to emphasize: keeping bugs and flaws from being introduced into a project of this type is something we want to do at the earliest possible phases. To that point, I want to bring your attention to a research project performed by the IBM System Sciences Institute. It is commonly understood that a bug fixed early in the cycle, before the software is operational, is much less expensive to fix than one found after the software goes into production. Their research showed that fixing a security bug once the software is in production costs at least 100 times more than fixing it during design.

So, as we've been saying, the time needed to fix identified issues is shorter while the software is still in its design phase, and thus the fix is a lot less costly. The cost savings come because there is minimal to no disruption of business operations, which can otherwise contribute greatly to the cost of a fix. But there are other benefits of designing security in early in the software development life cycle: improved resiliency and recoverability of the software; minimal redesign and greater consistency, because the work is done at a much earlier phase; early attention to business logic flaws, the rules that determine whether the software functions as intended and conceived; and higher-quality, more maintainable software that is less prone to errors.

Now, secure design not only increases resiliency, recoverability, and the other traits; the software is also less prone to errors, whether accidental or intentional. In this regard, secure design relates directly to the reliability of the software, and keeping it in a more maintainable configuration also reduces its life-cycle cost, because bugs and flaws can be designed out at these early phases. So this is the desired end state: resistance, robustness, resilience, recoverability, redundancy, and overall improved reliability.

So let's dive into some examples of what we mean by confidentiality. The obvious definition is that it defines the universe in terms of who or what is authorized and, by definition, who or what is not, covering both access and use. We employ some basic criteria: the designated sensitivity level of the information being processed; for the subject, a clearly defined need to know; and a process to establish explicitly authorized access to the information. All of this must be committed to a policy that describes these things and ensures that disclosure does not occur to parties with no established need to know, or to parties from whom some form of damage or exploitation is anticipated should the data become known to them.
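These criteria can be sketched as a minimal access-check routine. This is an illustrative sketch only, not a production access-control system; the level names, `Subject`/`Asset` classes, and the `may_access` function are all hypothetical, chosen to show how sensitivity level and need to know combine in a single decision.

```python
from dataclasses import dataclass

# Hypothetical sensitivity levels, ordered lowest to highest.
LEVELS = ["public", "internal", "confidential", "secret"]

@dataclass
class Subject:
    clearance: str          # highest level the subject may read
    need_to_know: set       # compartments explicitly granted to the subject

@dataclass
class Asset:
    sensitivity: str        # designated sensitivity level of the data
    compartment: str        # topic area the data belongs to

def may_access(subject: Subject, asset: Asset) -> bool:
    """Grant access only when the subject's clearance meets or exceeds the
    asset's sensitivity AND an explicit need to know has been established."""
    cleared = LEVELS.index(subject.clearance) >= LEVELS.index(asset.sensitivity)
    return cleared and asset.compartment in subject.need_to_know

analyst = Subject(clearance="confidential", need_to_know={"payroll"})
print(may_access(analyst, Asset("confidential", "payroll")))  # True
print(may_access(analyst, Asset("secret", "payroll")))        # False: clearance too low
print(may_access(analyst, Asset("internal", "legal")))        # False: no need to know
```

Note that both conditions must hold: a high clearance alone is never sufficient without the explicit need to know, which is exactly the policy point made above.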

Now, the kinds of data we deal with this way include individually identifiable data, which is covered by a privacy form of confidentiality; cryptographic keys; national defense or security information; and, not to ignore the private sector, trade secrets and other forms of intellectual property. Integrity is one of the most important characteristics, but it must be seen in balance and in context with confidentiality and availability as well.

In the end, integrity is the attribute that reflects the ultimate trustworthiness of the information in question. It encompasses terms such as authenticity, accuracy, timeliness, and quality, and probably a dozen more, all essentially synonymous with or contributing to trustworthiness. It means that the asset or program performs and produces expected results from known inputs. There are a number of contributing factors, including protective elements such as error-free creation or data entry, proof of origin, and compliance with standards.

Another contributing factor to trustworthiness is a detective element such as hashing. In development, integrity is a quality assured through the various design processes we are discussing in this module. These include reviews, testing, and the verification and validation work, along with their results, which highlight just how things are going. To put it briefly, verification confirms that we are doing things the right way and to the desired level, while validation confirms that we are doing the right things in accordance with the agreed plan.
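The hashing idea above can be shown in a few lines: record a cryptographic digest of the data at creation time, then recompute and compare it later to detect any modification. This is a minimal sketch using Python's standard `hashlib`; the file names and contents are invented for illustration.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest, used as an integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

original = b"quarterly-report-v1"
baseline = sha256_digest(original)          # recorded when the data is created

# Later: recompute the digest and compare against the baseline.
tampered = b"quarterly-report-v1 (edited)"
print(sha256_digest(original) == baseline)  # True  - data unchanged
print(sha256_digest(tampered) == baseline)  # False - integrity violation detected
```

This is detective, not protective: the hash cannot prevent tampering, but any change to even a single byte produces a different digest, so the mismatch reveals that the data can no longer be trusted.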

And finally, our third characteristic, availability. In simplest terms, this means the resource or data is available to the authorized user, in the expected form, when and where needed. Examples of availability failures include the Morris Internet Worm, which in the late '80s very swiftly brought down several thousand Unix-based computers; the IBM Christmas Tree virus, still the only known example of a mainframe virus, which infected over 205,000 users worldwide in under four hours; and, in other environments, misconfigured backbone routers, failures of self-healing networks and other fault-tolerant systems to do their jobs, and failures of redundancy in critical areas and functions.

Now obviously, the countermeasure is to ensure that the self-healing networks, fault-tolerant systems, and redundancy measures are, in fact, working and doing their jobs. To put this in proper context: for some organizations, a loss of resource availability may mean potential losses of thousands to millions of dollars, as might occur with banks and brokerages; in other environments, it could mean the loss of life, as might happen in healthcare or at our space agency, NASA. So we always have to understand CIA in the context of the operation in which the software will function and the impacts it may have.
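The redundancy countermeasure can be sketched as a health-checked failover: probe each replica in priority order and route to the first one that responds. This is a toy illustration under stated assumptions; the endpoint names are invented, and `is_healthy` simulates a failed primary rather than making a real network call.

```python
# Hypothetical replica pool; in practice these would be real service endpoints.
REPLICAS = ["primary.db.example", "replica-1.db.example", "replica-2.db.example"]

def is_healthy(endpoint: str) -> bool:
    """Stand-in health probe; a real check would attempt a connection
    with a timeout. Here we simulate the primary being down."""
    return endpoint != "primary.db.example"

def pick_available(replicas: list) -> str:
    """Fail over to the first replica that passes its health check."""
    for endpoint in replicas:
        if is_healthy(endpoint):
            return endpoint
    raise RuntimeError("total availability loss: no healthy replica")

print(pick_available(REPLICAS))  # replica-1.db.example
```

The key design point is the final `raise`: when every layer of redundancy fails, the system should report a total availability loss loudly rather than silently returning a dead endpoint, since that is exactly the failure mode the examples above describe.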

About the Author
Ross Leo

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.


Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998 and nearly 2,500 candidates in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.
