CISSP: Domain 3, Module 1
This course is the 1st of 6 modules within Domain 3 of the CISSP, covering security architecture and engineering.
The objectives of this course are to provide you with an understanding of:
- How to implement and manage an engineering life cycle using security design principles
- The fundamental concepts of different security models
- An awareness of the different security frameworks available and what they are designed to do
This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
Now we're going to transition into a discussion of some specific frameworks that have been developed over the decades.
The first one is the Zachman Framework. Its full name, A Framework for Information Systems Architecture, by John Zachman, was published in 1987, quite some time ago as you see. In looking at the diagram, what we see is multiple layers, multiple roles, multiple locations. What Zachman did was design a business-oriented architecture examining the various issues, interfaces, and interactions that each of these areas and each of these roles had. So that in taking this, what appears to be a very large, somewhat scary architecture, it will integrate with the system, and it will integrate in such a way that it not only provides proper security for the assets it protects, it also integrates correctly with the business itself. And it's built in a modular fashion, as indicated by the grid structure it has, so that, being modular, it can be changed in a reduced break-fix sort of fashion. This is one of the first frameworks that came about.
Another one that followed is the SABSA, or Sherwood Applied Business Security Architecture. Developed through the work of John Sherwood and others, it was published in a large book in 2005. It takes much the same philosophy but provides a great deal more definition, and it integrates with other architectural frameworks already out there, such as The Open Group Architecture Framework (TOGAF). As you see, it provides, from top to bottom, contextual, conceptual, logical, physical, and component architecture, and then service management. What the Zachman Framework lacked, simply because they did not exist at the time, and what the SABSA does contain, is a close alignment with the ISO 27000 series of security standards and the ITIL service management processes. But again, this highlights the same sort of modularity that the Zachman Framework had as well.

The Open Group, which has integrated its architecture framework with the SABSA in recent years, puts together its own method for implementing an architectural framework. So you see, we start with the preliminary phase and then, moving clockwise, the architecture vision and the business architecture. So again, we have the theme of aligning your IT and security architecture with what the business needs. Then we look at the lower layer, the information systems architecture, and the technology architecture that makes up the systems. We look at the various opportunities and solutions that may present themselves along the way. Then we have to plan for actually putting the thing into production use: at about the eight o'clock position we have migration planning, at nine o'clock implementation governance, and then, as is inevitable in all information systems, architecture change management. This ensures that the philosophy, once again, of taking our system, aligning it with the business and its needs, and then setting it up to evolve with the business, keeping that alignment, is taken care of.
The ITIL defines a structure that is service oriented in nature. The notion is that it takes a strategy, designs to meet that strategy as conceived, and then moves into an operational environment to start delivering the capability described in the strategy to the users. We migrate fully into operation, and then we continue to improve over time. By dealing with capabilities and services, instead of talking about what brand of laptop you want, how much memory, or how large a hard drive, it talks more about what you need to do: How do you need to acquire your information? How do you need to manipulate that information? What product do you need to produce? It speaks in terms of the capability to utilize information and deliver your products and services, rather than the actual hardware and software you need. Your capability is defined by what you produce, and the ingredients that go into it become the services that help you deliver that end product.
Now, all of these take advantage, again, of the very basic kinds of things that a computer does. In looking at what the computer does for you, it is a physical representation of a state machine. The state machine is based on a concept that arose in the 1930s, in work associated with mathematicians and logicians such as Alan Turing and John von Neumann. The State Machine Model describes the behavior of a system as it stands and how it changes from one state to another. The mathematics these people conceived produced the model that we know as the State Machine, and defined the way that it moves from one state to another in a controlled fashion.
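As a quick illustration of the State Machine Model just described, here is a minimal sketch (the class, states, and transitions are all hypothetical, invented for this example): the system is always in exactly one defined state, and it may only move between states along explicitly permitted transitions.

```python
# A minimal finite state machine: the system is always in exactly one
# state, and transitions happen only if listed in the transition table.

class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def handle(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            # An undefined transition is refused, keeping the system
            # in a known, secure state.
            raise ValueError(f"illegal transition {event!r} from {self.state!r}")
        self.state = self.transitions[key]
        return self.state

# Example: a door that can only move between well-defined states.
door = StateMachine("locked", {
    ("locked", "unlock"): "closed",
    ("closed", "open"):   "open",
    ("open", "close"):    "closed",
    ("closed", "lock"):   "locked",
})
door.handle("unlock")
door.handle("open")
print(door.state)  # open
```

The point of the sketch is the controlled transition: every state change is either explicitly permitted or rejected, which is the property the security models below build on.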
One of the byproducts of this was what we call the Noninterference Model, as defined by Goguen and Meseguer. Another byproduct was the Information Flow Model, which focuses on how information moves between various users and applications, and takes on or releases various attributes as it moves through a system. This was defined by Bell and LaPadula in their model and by Sutherland in his. Now, those three models underlie virtually every other computer model that we're going to discuss; that is how basic they are. But there have to be other models that complement them, because now that we have system models for information flow, we have to have models that control the interaction of users, or subjects.
We have the Matrix-based Model, which is an access control model based on a two-dimensional matrix, much like a spreadsheet. The matrix is very direct, because it defines explicitly who the subject is, what they have access to, and what sort of access they have to that object. We also have one that is more implicit in nature, called the Multilevel Lattice. It is more of a hierarchical operating model in that it may define a range of access that a subject has to various objects. Then, as the subject attempts to operate on those objects, the Multilevel Lattice Model determines what level the subject needs for the particular operation they're attempting and gives them that level. So it automates least-privilege assignment within the range of capabilities the subject has, depending upon what is in that range and what operation the subject intends to do at that particular moment. These are the typical models that the CISSP domain structure has spoken of over the years, and these are the models we're going to discuss.
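The two-dimensional matrix can be sketched in a few lines (the subjects, objects, and rights here are invented for illustration): each cell holds exactly the rights a subject has to an object, and anything not listed is denied.

```python
# Hypothetical access control matrix: rows are subjects, columns are
# objects, and each cell is the set of rights explicitly granted.

matrix = {
    "alice": {"payroll.db": {"read", "write"}, "policy.doc": {"read"}},
    "bob":   {"policy.doc": {"read"}},
}

def check_access(subject, obj, right):
    """Allow only what the matrix explicitly grants; default deny."""
    return right in matrix.get(subject, {}).get(obj, set())

print(check_access("alice", "payroll.db", "write"))  # True
print(check_access("bob", "payroll.db", "read"))     # False
```

Note the default-deny behavior: an unknown subject or object simply yields an empty set of rights, which is the directness the matrix model is valued for.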
The Biba Integrity Model, first conceived in 1975 by K.J. Biba, looked at how to protect, in an automated fashion, the integrity of the information contained in a system. It was based, as all the others were, on a formal state transition model: a security policy defines the parameters surrounding integrity for the data, and a multilevel lattice safeguards how subjects are given rights of access to the various objects. Then, through an automated definition function, depending upon what the subject intends to do and what rights they have in that range, it enables them to operate at the lowest level possible, thus automating the least-privilege process. The compromise here is contamination or corruption of a particular data object. In the blue bar, where we are, we have the read operation, where the risk is contamination by subject action. The simple integrity property allows us to read at and above our level, but we cannot read below. The reason is that a read operation means the subject must grab the information they're trying to read and bring it into their own memory space. Bringing lower-integrity data into our higher-integrity data space runs the risk of corruption, should any fault occur in the system and cause those elements to be combined. Reading information from above down into our space does not run that risk, because it cannot be written back up in the event of a system malfunction. The write operation allows us to write at and below our particular level, because our higher-integrity data cannot contaminate the lower level it is written to. But it does not allow the write-up operation, because that runs the risk of our lower level contaminating the higher level we would be writing to.
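The Biba read and write rules above reduce to two simple comparisons. Here is a sketch (the level names and numbers are hypothetical; a higher number means higher integrity):

```python
# Biba integrity rules, sketched as level comparisons.
# Simple integrity property: no read down.
# Star (*) integrity property: no write up.

LEVELS = {"untrusted": 1, "user": 2, "system": 3}

def biba_can_read(subject_level, object_level):
    # Reading is allowed at or above the subject's level: lower-integrity
    # data must never be pulled into a higher-integrity space.
    return LEVELS[object_level] >= LEVELS[subject_level]

def biba_can_write(subject_level, object_level):
    # Writing is allowed at or below the subject's level: the subject
    # must never contaminate higher-integrity data.
    return LEVELS[object_level] <= LEVELS[subject_level]

print(biba_can_read("user", "system"))     # True  (reading up is fine)
print(biba_can_read("user", "untrusted"))  # False (no read down)
print(biba_can_write("user", "system"))    # False (no write up)
```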
Now, the Bell-LaPadula Confidentiality Model followed the Biba Model. In 1976, Bell and LaPadula, working for MITRE at the time, wrote a paper describing how a model very similar to Biba's could be used to protect against losses of confidentiality. It operates in much the same way from a mechanical perspective, and the rights allowed to users are very much the same: read, write, full control, et cetera. The risk here is a loss of confidentiality, so the attack would produce disclosure in an uncontrolled form. The read operation, defined by the simple security property, covers information that is disclosed to the subject by the system or others. In being able to read, we can read at and below our level. That should tell you this is a hierarchical model: if we can read at or below, it means that whatever the clearance or classification level of the subject or object, it is within our ability to read. But it doesn't allow us to read upwards, simply because we're not cleared for whatever is up there. Moving to the write operation and the Star Security Property, as it's called, it allows us to do precisely the opposite. We cannot write down, because that would allow us, the subject, to disclose information to the system or to others who are not cleared for that information. But we can write at our own level, and we can also write up. The reason for that involves what is called, in intelligence circles, the puzzle concept: gathering pieces of information, each piece of which you are cleared for, but not the entire picture those pieces may comprise.
It means that if you combine those pieces and produce a data product for which, as a whole, you are not cleared, the operation to save it moves it across the upward boundary and removes it from your visibility, even though you may be the author, because keeping it would violate the rules of confidentiality. Finally, the strong star property prevents us from either reading or writing up or down, restricting us only to our own level.
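The Bell-LaPadula read and write rules can be sketched the same way as level comparisons (the clearance names and numbers are hypothetical; a higher number means more sensitive):

```python
# Bell-LaPadula confidentiality rules, sketched as level comparisons.
# Simple security property: no read up.
# Star (*) security property: no write down.

LEVELS = {"public": 1, "secret": 2, "top_secret": 3}

def blp_can_read(subject_clearance, object_class):
    # Reading is allowed at or below the subject's clearance:
    # you may never see what you are not cleared for.
    return LEVELS[object_class] <= LEVELS[subject_clearance]

def blp_can_write(subject_clearance, object_class):
    # Writing is allowed at or above the subject's clearance, so
    # information can never be disclosed downward.
    return LEVELS[object_class] >= LEVELS[subject_clearance]

print(blp_can_read("secret", "top_secret"))   # False (no read up)
print(blp_can_write("secret", "public"))      # False (no write down)
print(blp_can_write("secret", "top_secret"))  # True  (write up permitted)
```

Put side by side with the Biba sketch, the comparisons are exact mirrors of each other, which is why the two models are usually taught together.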
So, to summarize and compare the Bell-LaPadula Model and the Biba Model: we have the simple property, which is the simple confidentiality property for Bell-LaPadula and the simple integrity property for Biba. Notice the highlighted text: Bell-LaPadula says no read up; Biba says no read down. In preparing for the exam, remember that reading up violates the hierarchical nature of Bell-LaPadula, and reading down violates the integrity rule of Biba. In the Star property, which governs the write operation, Bell-LaPadula says no write down, again because writing down would violate the hierarchical nature and expose data to those not cleared for it, whereas in Biba, no writing up prevents us from contaminating higher-level data with our lower-level data. Then there is the invocation property, the ability to invoke a process and, through that process, gain access to resources you are not directly authorized for. Bell-LaPadula does not use it. Biba uses it to prevent you from invoking a process you are not authorized for in order to gain access to resources you're not authorized for; this would be considered contamination by proxy. So, again, Bell-LaPadula does not use the invocation property, and Biba does.
Now, the next most significant model to follow Biba and Bell-LaPadula is the Clark-Wilson Integrity Model, which focused on the process of a subject accessing an object. It defined three rules of integrity. The first is preventing unauthorized users from making any modifications of any kind; this protects against the external threat. The second is preventing authorized users from making any improper changes; this protects against the internal threat. The third, achieved through the first two, is maintaining internal and external consistency, so that what is in the database is represented accurately. So you see the flow. On the left you see the subject, with the green arrow indicating that the subject is authorized to the process. The green arrow on the right indicates that the process has access to the object, which means that, by definition, the subject also has access to the object, so that particular way of gaining access is permitted. The green arrows indicate trusted pathways, meaning that we have control, we know all the different things that the trusted pathway will pass through, and we can control them. The red line across the bottom is an example of an uncontrolled, unsecured interface, such as a command-line interface. Direct access through it might bypass any of the rules we would use to govern this access. Because of that, the subject is debarred from accessing the object directly; it has to follow the well-formed transaction illustrated by the green arrows, and direct access using the command-line interface is prevented. Doing this makes for a fairly complex system of defining and labeling, but this model is so effective at what it does that it is built into virtually every eCommerce website on the web today.
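The well-formed transaction idea can be sketched as follows: subjects are bound to certified procedures, procedures are bound to the data items they may touch, and no direct path to the data is exposed. All the names here (the bindings, the `post_payment` procedure, the `ledger` item) are hypothetical, invented for illustration.

```python
# Clark-Wilson sketch: access to a constrained data item (CDI) is only
# possible through a certified transformation procedure (TP).

AUTHORIZED = {("clerk", "post_payment")}   # subject -> TP certifications
TP_SCOPE = {"post_payment": {"ledger"}}    # TP -> CDIs it may modify

def run_tp(subject, tp, cdi, amount, store):
    # Both bindings must hold before anything is touched.
    if (subject, tp) not in AUTHORIZED:
        raise PermissionError("subject not certified for this TP")
    if cdi not in TP_SCOPE.get(tp, set()):
        raise PermissionError("TP not certified for this CDI")
    # The TP itself performs the well-formed change and commits it.
    store[cdi] = store.get(cdi, 0) + amount
    return store[cdi]

store = {}
print(run_tp("clerk", "post_payment", "ledger", 100, store))  # 100
# Note there is deliberately no function that writes to `store`
# directly: the "command line" path simply is not offered.
```

The design choice to model is the mediation: the subject never holds a reference to the object, only to the certified procedure.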
The Brewer-Nash Model is sometimes called the Chinese wall, though it is more appropriately referred to as a screen. This is a mathematical model used to implement a very dynamic set of rules that handles rapidly changing access permissions. As you see in the picture, we have Bank A, Bank B, and a firewall in between. Our two workers at the bottom may have normal, routine access to both. But when certain conditions arise and cause certain rules to become active, a conflict of interest is created, and access, in this particular example to Bank B, is cut off. To learn what is happening to a corporation in Bank B while also knowing what is going on within a corporation in Bank A may provide one of those persons insider information, and that, of course, is what creates the conflict of interest. So the rules have to be developed to reflect what would create those conflicts of interest, and when they do, those rules activate and cut off the access. The model separates competitors' data within the same integrated database, and by cutting off access it makes sure there is no way for any user to make fraudulent modifications to any objects. In setting up this screen, it prevents the conflict of interest and the insider information that would otherwise be available. So, as you might imagine, this appears quite often in financial organizations, brokerage houses, and so forth.

The Graham-Denning Model, another model whose principles are very widely implemented, deals primarily with secure creation and inheritance. It deals with how various resources, subjects, and objects are created within a system, how they are assigned rights and privileges, and how the ownership of objects is managed. In creating this model, Graham and Denning designed how this could be done by building a set of objects, your resources, your datasets, and so forth.
They paired this with a set of subjects and a set of rights, to define the range of interactions that the subjects and objects would have, how these things could be created, deleted, and modified, and how one would inherit rights, or not, from another through these actions.
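The Brewer-Nash screen described a moment ago can be sketched as one dynamic rule: once a subject reads data belonging to one company in a conflict-of-interest class, access to that company's competitors is cut off. The class definitions and names below are hypothetical, invented for illustration.

```python
# Brewer-Nash (Chinese wall / screen) sketch: permissions change
# dynamically based on the subject's own access history.

CONFLICT_CLASSES = [{"bank_a", "bank_b"}]  # competing datasets
history = {}                               # subject -> datasets already read

def can_access(subject, dataset):
    for coi in CONFLICT_CLASSES:
        if dataset in coi:
            # Deny if the subject already touched a competitor
            # inside the same conflict-of-interest class.
            if any(d in coi and d != dataset
                   for d in history.get(subject, set())):
                return False
    return True

def access(subject, dataset):
    if not can_access(subject, dataset):
        raise PermissionError("conflict of interest")
    history.setdefault(subject, set()).add(dataset)

access("analyst", "bank_a")
print(can_access("analyst", "bank_a"))  # True  (same dataset is still fine)
print(can_access("analyst", "bank_b"))  # False (competitor now walled off)
```

Notice that the rule is history-dependent, which is what makes Brewer-Nash dynamic where the matrix, Biba, and Bell-LaPadula checks are static.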
The Harrison-Ruzzo-Ullman Model takes the Graham-Denning Model and implements much of the same thinking to become a similar type of model. It looks more at generic rights and a finite set of commands, and it is more concerned with situations in which a subject should be restricted from certain privileges; that is, it looks at how a conflict of interest could be created if such rights were accorded to that particular subject. Now we've come to the end of our first section of Domain 3, Security Architecture and Engineering. We're going to stop here. But as a final word, the thing to look for on the exam about the models is being sure that you're clear on what the point of each model is. You won't see dozens and dozens of questions on each one, but you should know the principle at work in each model. Being very clear on that should enable you to do well on this particular section. In the subsequent sections of this domain, we're going to talk about further principles, further software and system principles, as well as physical ones. So we're going to end here, and I'll see you next time.
About the Author
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.