
Understand the security capabilities of information systems

Overview
Difficulty: Advanced
Duration: 44m
Students: 97
Rating: 5/5

Description

This course is the 2nd of 6 modules within Domain 3 of the CISSP, covering security architecture and engineering.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • How to capture and assess business requirements
  • How to select controls and countermeasures based upon information systems security standards
  • The security capabilities of information systems

Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

Prerequisites

Any experience relating to information security would be advantageous, but is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

We're going to move on to section five, where, taking these requirement sources and standards, we're going to understand the security capabilities of information systems and establish a context in which we can implement and achieve our security goals. As you can see, we have a fairly long list of module topics: access control, secure memory management, processor states, process isolation, and so on down the list.

So we're going to look at how we can apply the standards and requirements we've just discussed by thinking about how we're actually going to implement these things. Access control mechanisms are present in virtually every computer system we're likely to encounter. Virtually every operating system has some sort of access control system in which we build identity controls, password controls, and other mechanisms to both enable and prevent access to information by subjects or users. Before we go about the business of implementing the actual control, we have to make decisions in our policy to be sure we understand the scenario of access, the nature of the subject's need for access to a particular object in our system, and how we can enable or prevent that access. To do this, as in all things computer, we have to assign identifiers, because everything in a computer system needs to be uniquely and distinctly identified if we expect to control access to it or allow access to it. That means we need a process to identify who the subject is, then authenticate them, and then assign the levels of access they may have to these objects.
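The identify-authenticate-authorize sequence described here can be sketched in a few lines of Python. This is a hypothetical toy, not a production authentication system; the user names and iteration count are illustrative assumptions:

```python
import hashlib
import hmac
import os

# Toy user registry: each subject gets a unique identifier plus a
# salted password hash used for authentication (illustrative only).
users = {}

def register(user_id: str, password: str) -> None:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    users[user_id] = (salt, digest)

def authenticate(user_id: str, password: str) -> bool:
    if user_id not in users:          # identification: is this a known subject?
        return False
    salt, digest = users[user_id]     # authentication: prove the claimed identity
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, candidate)

register("alice", "correct horse")
print(authenticate("alice", "correct horse"))  # True
print(authenticate("alice", "wrong"))          # False
```

Only after authentication succeeds would levels of access to individual objects be assigned, which is the mediation step discussed next.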

One of the principles at work here in the access control world is complete mediation. Complete mediation means that no subject can access an object without authorization: there has to be a mechanism in place to enforce this that cannot be circumvented, gotten around, turned off, or subverted in any way. When it says complete mediation, it means literally complete. Normally this is the responsibility of a security kernel, which contains the rules that are implemented and employed by the reference monitor. The reference monitor, sometimes called an abstract machine or a virtual machine, is a program that implements complete mediation. It examines every attempt by any subject to access any object and decides whether that access, whatever it might be, should be allowed. This is established as a kernel-critical process that must be running 100% of the time the machine and its OS are online and working; there should never be conditions under which that is not happening. That means the reference monitor and the security kernel must be online through all processor states. These processor states outline the very first layers of the defense-in-depth process inside the computer system.
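Conceptually, a reference monitor is a single choke point that every access request must pass through. A minimal sketch (the subjects, objects, and rules are hypothetical, and a real kernel enforces this below the reach of user code):

```python
# The security kernel's rules, here as a simple access-control matrix.
RULES = {
    ("alice", "payroll.db"): {"read"},
    ("bob",   "payroll.db"): {"read", "write"},
}

def mediate(subject: str, obj: str, action: str) -> bool:
    """The one non-bypassable path by which any subject touches any
    object: allow only what the kernel's rules explicitly authorize."""
    return action in RULES.get((subject, obj), set())

print(mediate("alice", "payroll.db", "read"))   # True
print(mediate("alice", "payroll.db", "write"))  # False
```

The essential property is not the data structure but the placement: every access attempt, without exception, goes through `mediate`.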

Sometimes, systems provide specialized processors for handling security functions, such as cryptographic coprocessors, which can be software, virtual machines, or actual hardware chips on the motherboard. Processors have to have states that can distinguish between users and the more or less privileged instructions those users may execute. They typically have multiple states, certainly more than two; common states are ready, running, wait, stopped or halted, and then end or abend (abnormal end). Together, these states describe every phase the processor goes through in its cycle of executing instructions. A processor also has modes, one privileged, called supervisor or kernel mode, and one non-privileged, called problem or user mode. All of these states and modes must apply to all conditions that exist within the computer, because they are fundamental parts, key elements, of the mediation process that the reference monitor performs.
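The states listed above can be modelled as a small state machine. The state names follow the transcript; the transition table is an illustrative assumption, not any particular CPU's dispatcher:

```python
from enum import Enum, auto

class State(Enum):
    READY = auto()      # dispatchable, waiting for the CPU
    RUNNING = auto()    # instructions executing
    WAIT = auto()       # blocked, e.g. on I/O
    STOPPED = auto()    # normal end / halted
    ABEND = auto()      # abnormal end

# Which state changes the dispatcher permits (illustrative).
TRANSITIONS = {
    State.READY:   {State.RUNNING},
    State.RUNNING: {State.READY, State.WAIT, State.STOPPED, State.ABEND},
    State.WAIT:    {State.READY},
    State.STOPPED: set(),   # terminal
    State.ABEND:   set(),   # terminal
}

def step(current: State, target: State) -> State:
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

s = step(State.READY, State.RUNNING)   # process is dispatched
s = step(s, State.WAIT)                # blocks on I/O
s = step(s, State.READY)               # I/O completes
```

The kernel/user mode distinction sits alongside this: only code running in the privileged mode may change these tables at all.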

One of the mechanisms used for process isolation and control implementation is layering. Layering organizes the programming into separate functional components that interact in a sequential or hierarchical way to ensure the proper protections are implemented and enforced. It helps make sure that the more volatile or sensitive areas of the system are protected from any form of unauthorized access or change by any subject. By layering, we are able to enforce our defense-in-depth process. Process isolation has a very real security function, but it is also something every computer operating system must be able to do; otherwise the system will either not function at all or not function in any predictable or trustworthy manner. In general, it is used to prevent individual processes from interacting with each other except through controlled methods and interfaces. It can be done by providing distinct address spaces, preventing processes from reaching address spaces not assigned to them, and by controlling processes through constraints and boundaries. In the process of putting in layering and process isolation, we are also able to do data hiding. We frequently talk about computers as though they were people; we anthropomorphize them as he or she. So, to keep things simple: if we are hiding data from the computer, it means technically that we have not built a trusted pathway or provided an interface through which a particular data object can be accessed, regardless of who the subject might be or what level of privilege they might have. By hiding it, we make sure the rest of the system doesn't know the object exists or is available. This assists in preventing data at one level from being seen or acted upon by processes at different levels.
Again, data hiding, as a complementary function to layering, serves to further implement our defense-in-depth philosophy.
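In software terms, data hiding comes down to exposing data only through a controlled interface, so that no pathway to the raw object exists outside it. A minimal sketch, with hypothetical class and field names (note that Python's name mangling is advisory; in a trusted system the hiding is enforced by hardware and the kernel):

```python
class MedicalRecord:
    """The raw record lives in a 'hidden' attribute; the only pathway
    to it is the mediated view() method, which checks the caller's level."""

    def __init__(self, data: str):
        self.__data = data            # name-mangled: no public pathway

    def view(self, clearance: str) -> str:
        if clearance != "privileged":
            return "<redacted>"       # lower levels never see the object
        return self.__data

rec = MedicalRecord("blood type: O-")
print(rec.view("user"))        # <redacted>
print(rec.view("privileged"))  # blood type: O-
```

Layering appears here too: callers interact with `view()`, never with the storage underneath it.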

Abstraction is yet one more way that we remove the need to know exactly what's going on at a very low level, at the hardware level, so to speak, of the computer system. It frees the subject to interact with the software to accomplish their particular processing objectives. Abstraction involves removing detailed characteristics so that an object is represented by its essential properties, at more of an idea level than the nuts-and-bolts level of the hardware. It also removes the need to know the particulars of how the object functions at that very low level. In the days of DOS, when the IBM PC and other similar operating system platforms came out, you had to know something about the very lowest level of the machine, because frequently you would interact with the machine directly.

The further we've gotten away from that, in the era of Windows, the Mac OS, and Linux, and the various GUI interfaces we have there, the more abstraction there has been, removing us further and further from having to interact with the actual hardware. This enables greater productivity, but it also raises specific security issues that have to be dealt with in unique ways. Cryptographic protections are going to have to be put into virtually every system in use today. As we all know, we have cryptography in place to protect the confidentiality of whatever is processed through this method. So we identify what the sensitive information is and what state it will be in, whether in motion, in use, or in storage, and then we apply cryptography in whatever way is appropriate to ensure it is protected from any unauthorized access or use. By encrypting this sensitive information, and through this mechanism limiting the availability of keying material, the data can be hidden from less privileged parts of the system, or from non-privileged or less privileged users, to protect it in use, in motion, and in storage.
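The idea of protecting data at rest by encrypting it and then controlling only the key can be sketched as follows. This is a deliberately toy stream cipher built from a hash, purely to illustrate the key-management point; it is not secure, and real systems use vetted algorithms such as AES-GCM:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

toy_decrypt = toy_encrypt   # XOR with the same keystream is its own inverse

key = b"held only by privileged code"
ct = toy_encrypt(key, b"SSN 123-45-6789")
# Without the key, ct is opaque; with it, the plaintext is recoverable.
```

Whoever holds `key` holds the data: limiting the availability of keying material is what hides the information from less privileged parts of the system.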

Other architectural ways we can protect our systems include various forms of firewalls and intrusion prevention or detection systems. Normally these would be used in association with various forms of network partitioning and the enforcement of logical security zones of control within that environment. They're frequently used to protect individual hosts from attack. But in the cloud environment, for example, there may be a web application firewall, or a firewall that stands between the instances of different customers to keep one from influencing, or even knowing of the existence of, its neighbor. And we have to close the loop on all of this by establishing auditing and monitoring controls. Secure systems, in fact all systems, must have the ability to provide administrators with evidence of their correct or anomalous operation. Anything that is correct, we need to verify is correct; anything that is anomalous or in error, we need records of, so that we can analyze what created the anomaly or error condition and take corrective action.
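A packet filter is conceptually the reference monitor idea applied at the network boundary: every packet is checked against a rule base with a default-deny fallback. A first-match sketch (the addresses, ports, and rules are hypothetical):

```python
# Each rule is (source-prefix, destination-port, verdict); port 0 means "any".
RULES = [
    ("10.0.0.", 443, "allow"),   # internal hosts may reach HTTPS
    ("10.0.0.",  22, "allow"),   # ...and SSH
    ("",          0, "deny"),    # default deny: matches everything else
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return the verdict of the first rule that matches the packet."""
    for prefix, port, verdict in RULES:
        if src_ip.startswith(prefix) and port in (0, dst_port):
            return verdict
    return "deny"   # fail closed even if the rule base is empty

print(filter_packet("10.0.0.7", 443))     # allow
print(filter_packet("203.0.113.9", 443))  # deny
```

The explicit catch-all deny rule is the design choice worth noting: traffic that matches nothing is refused, never silently admitted.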

There are logging subsystems in virtually every computer in existence today that let us look at normal system functions, security functions, and various application messages, so that we keep an eye on all of this and get timely notification, with good information about what constituted the particular condition we have found, so that we can take appropriate corrective action as needed. More secure systems typically go further and provide greater detail, and they also protect the logs themselves so that they cannot be tampered with. One of the things that has really matured in the past decade or so is the notion of virtualization and containers. Virtual machines, in one form or another, have been around for quite a long time at this stage. Virtual machines create a logically separate environment in which programs can run, isolated from other components in the system; they can be isolated within browsers, or isolated within sandboxing by an operating system. This enables us to start them up, run, test, observe, and shut them down without bringing down the entire machine, and they can very quickly be replaced by another virtual machine, perhaps a different version of the same thing running different code. Virtualization provides a great deal of flexibility while also providing a great deal of protection for the host environment.
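One common way to protect logs from tampering, as more secure systems do, is to chain each entry to the hash of the one before it, so altering any earlier record breaks every later link. A minimal sketch of the idea:

```python
import hashlib

def append(log: list, message: str) -> None:
    """Append an entry whose digest covers the previous entry's digest."""
    prev = log[-1][1] if log else "0" * 64   # genesis value for the first entry
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append((message, digest))

def verify(log: list) -> bool:
    """Recompute the chain; any edited or deleted entry fails the check."""
    prev = "0" * 64
    for message, digest in log:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
append(log, "user alice logged in")
append(log, "alice read payroll.db")
print(verify(log))                               # True
log[0] = ("user mallory logged in", log[0][1])   # tamper with history
print(verify(log))                               # False
```

This makes tampering evident rather than impossible; in practice the chain head is also anchored somewhere the attacker cannot reach.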

Containers provide a standard way to package application code, configurations, and dependencies into a single object. Containers share the operating system installed on the server and run as resource-isolated processes. In earlier days, the concept of a container was known as a runtime environment contained within an isolating program, put in place by the operating system to protect the entire system from whatever was in that runtime memory space. That brings us to the end of this particular module. Please join us again for the very next module as we continue our discussion of Domain 3, Security Architecture and Engineering, of the CISSP. Thanks, and we'll see you next time.

About the Author
Students: 1669
Courses: 30
Learning paths: 2

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer, for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.

 

Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in that role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standing as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.
