Trusted Computing Principles

This course is one of four courses covering Domain 1 of the CSSLP. This course explores the topic of security policies and regulations.

Learning Objectives

  • Obtain a general understanding of security policies, regulations, and compliance
  • Understand the legal and privacy issues that these regulations aim to address
  • Learn about a variety of security frameworks and standards
  • Learn about trusted computing principles and how they underpin security frameworks
  • Understand the security implications of acquiring software

Intended Audience

This course is designed for those looking to take the Certified Secure Software Lifecycle Professional (CSSLP)​ certification, or for anyone interested in the topics it covers.


Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.


If you have thoughts or suggestions for this course, please contact Cloud Academy at



Now, throughout all of these standards and frameworks, each one embodies different kinds of trusted computing principles. The ones that you see on the screen now, controllability, security, privacy, interoperability, portability, and ease of use, run through all of them in various, potentially unique ways. But each framework standardizes how it talks about each of these areas, so that it has a common definition throughout the structure of that particular framework, and usually one that it shares with almost every other standard and framework. And here you see the trusted computing principles: the results of our adopting all of these standards and their common criteria across all different kinds of systems and software development.

We have resistance, meaning the product is built to withstand attempts to subvert normal operations within predetermined design limits; robustness, essentially the ability to withstand different kinds of impacts without a complete failure; resilience, meaning it has the flexibility of functionality to continue operating even after an attack or an error; recoverability, which is of course critical, meaning it has the features and structure necessary to facilitate trusted recovery; a certain level of redundancy within it; and reliability, meaning it will perform in a predictable manner, protecting the qualities of trust and assurance. One way a system achieves this is through the protection ring model, which has come to be part of many systems' designs.

Now, the ring model itself was conceptualized in the Multics operating system under Project Guardian at MIT. In it, they developed four rings, beginning with ring zero surrounding the OS kernel. In rings one, two, and three, the various portions of the system, the assets within it, the command structure, program execution, features, and user rights, are contained at increasingly low levels of control as you move further out from ring zero, which encompasses the innermost kernel. This four-ring model separated functions and implemented defense in depth through what each ring included. The innermost ring could not, for example, be attacked or accessed directly: anyone having the privilege to do so would have to go through rings three, two, and one, in that order, before getting to ring zero. So various interfaces, described pathways, and gateway mechanisms between the layers were absolute requirements for any access to anything beyond the ring an entity occupied at the moment, putting it through multiple levels of control before it could reach its destination.
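The inward-crossing rule described above can be sketched in a few lines. This is an illustrative model only, assuming hypothetical gateway checks at each ring boundary; the function and names are invented for this sketch, not taken from Multics.

```python
class GatewayError(Exception):
    """Raised when a ring crossing has no authorized gateway."""
    pass

def call_inward(current_ring: int, target_ring: int, gateways: set) -> int:
    """Move a caller inward one ring at a time. Every crossing must use an
    authorized gateway, so reaching ring 0 from ring 3 means passing the
    checks at the 3->2, 2->1, and 1->0 boundaries in order."""
    ring = current_ring
    while ring > target_ring:
        crossing = (ring, ring - 1)
        if crossing not in gateways:
            raise GatewayError(f"no gateway from ring {ring} to ring {ring - 1}")
        ring -= 1
    return ring

# A caller in ring 3 with gateways at every boundary reaches the kernel ring:
print(call_inward(3, 0, {(3, 2), (2, 1), (1, 0)}))  # prints 0
```

Removing any one gateway from the set makes the same call raise `GatewayError`, which mirrors the point that the innermost ring cannot be reached directly.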

Now, what we have in current use in most of our systems today is a two-ring version of the ring model, also showing defense in depth, with rings three and zero adapted from the four rings that you saw on the previous slide. The functions and assets that were spread across four rings have been consolidated into these two. Ring zero, and all of the elements included within it, is for the highest privilege mode, the administrator, sysadmin, or sudo level, where all privileged and non-privileged instructions are possible to someone with the proper authorization. Ring three, notice, is the application level, the lowest level of privilege or the user level, where only non-privileged instructions are possible.
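The privileged/non-privileged split can be sketched as a simple mode check: a privileged instruction executes only in ring 0, while the same attempt from ring 3 traps. The instruction names here are invented for illustration, not drawn from any real instruction set.

```python
# Hypothetical set of privileged instructions (illustrative names only).
PRIVILEGED = {"halt", "load_page_table", "disable_interrupts"}

def execute(instruction: str, ring: int) -> str:
    """Run an instruction in the given ring. Privileged instructions are
    only allowed in ring 0 (kernel mode); ring 3 (user mode) traps instead."""
    if instruction in PRIVILEGED and ring != 0:
        return "trap: privileged instruction in user mode"
    return f"executed {instruction} in ring {ring}"

print(execute("add", 3))                  # allowed: non-privileged
print(execute("disable_interrupts", 3))   # trapped: privileged, user mode
print(execute("disable_interrupts", 0))   # allowed: kernel mode
```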

A concept that was first discussed in the Orange Book, the idea of a trusted computing base captures the various features that you see here. Now, the trusted computing base is a term that is coming back into regular usage. It combines all the elements of hardware, software, and firmware, and the controls within them, to ensure that the elements within the TCB are the most carefully managed and controlled assets within a system environment. In it, we have protected objects, which may be known but cannot be accessed directly; they can only be accessed through a trusted process, governed by policy, by an authorized subject. Part of this was the trusted platform module, a specific module designed to store cryptographic keys and authentication materials for various portions of a system.

Now, the TCB is surrounded by a security perimeter, which separates it from all non-TCB components. And across the perimeter, there are gateways and interfaces through which only authorized and cleared elements can pass. That doesn't mean that they're classified going in and unclassified coming out. It means that there are rules, permissions, and authorizations granted to any subject crossing that perimeter into the TCB from outside. And that is to make sure that the TCB itself is kept secure and that its integrity remains intact. Within the trusted computing base, there is a mechanism that ensures the enforcement of the security policy in the specific system, and this is known as the reference monitor. Now, in all systems and in all access control modes, the reference monitor must be found to be working under all operational conditions. In fact, it is a requirement of the system in which it works that if the reference monitor is not running, the system is not running, because the reference monitor is at the very heart of the security policy, in terms of both its rules and its enforcement.

The reference monitor is one of the very few programs for which exhaustive testing, in all possible modes and access methods, is required. It has to mediate 100% of the subject-object interactions that are possible within its system context. And, as I said, it must be running 100% of the time that the system itself is in operation. As you see in the diagram, we have the reference monitor, essentially an abstract or virtual machine that does this particular job of policy enforcement. It mediates the interactions between subjects and objects. It has a security kernel in which the rules governing those interactions are housed, and it has an audit file that it uses to record every single activity that it enforces. So when a subject attempts access to an object, the reference monitor refers to the rules database, allows or denies the access attempt, and writes the transaction and its record to the audit file, so that any security officer, privacy officer, or auditor can review the records to establish accountability and trace it back to the subject that made the attempt.

About the Author

Mr. Leo has been in Information System for 38 years, and an Information Security professional for over 36 years.  He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant.  His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International.  A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center.  From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.


Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, and has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.