
Understand fundamental concepts of security models

Difficulty: Advanced
Duration: 1h 11m


Course Description

This course is the 1st of 6 modules within Domain 3 of the CISSP, covering security architecture and engineering.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • How to implement and manage an engineering life cycle using security design principles
  • The fundamental concepts of different security models
  • An awareness of the different security frameworks available and what they are designed to do

Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

Prerequisites

Any experience relating to information security would be advantageous, but not essential.  All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

About the Author


Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.

 

Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has continued as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.

Transcript

Now, having laid that as a foundation, we're going to move on to section two to understand the fundamental concepts of the security models that many of these frameworks embed.

So you see here, we have a list of some very basic ideas: system components, processors, memory, I/O devices, operating systems. These different pieces are going to be looked at, at a very general level. A lot of this may be very similar to things that you have done at many points in your career, very familiar territory, but it is fundamental to understand the basics before we can move to the more advanced topics. We all know them, and that is why this information is here, to make sure that we're all on the same page with the basics.

So the processors perform four basic tasks. As complicated as they seem to be, and in fact are, and for all the wondrous things that they do, it's all based on some very basic types of activities that they perform. They do fetching. They do decoding. They do executing, and they do storing, and the logic that drives them has to keep track of all of these steps to make sure that they're performed exactly the same way each time and in exactly the right order.

Some of the key features that have developed over time go far beyond the ability of a processor simply to count from zero to one at the speed of light. We have had to develop additional features that we incorporate into processors, including tamper detection sensors and crypto acceleration, which can be hardware- or software-based. We have battery-backed logic with a physical mesh. We have secure boot capabilities to ensure that the boot process can proceed without being interrupted. We have to have the ability for a system to do on-the-fly encrypt and decrypt operations, partly to protect against failures of privacy and partly to ensure that performance is maintained. We have static and differential power analysis countermeasures to try to defeat various forms of cryptographic attacks that rely on power analysis, and we have our smart card UART controllers.
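To make the fetch, decode, execute, and store steps concrete, here is a minimal, purely illustrative Python sketch of a toy processor loop; the instruction names (LOAD, ADD, STORE, HALT) and the memory layout are invented for the example and do not correspond to any real instruction set.

```python
# A toy illustration of the four basic processor tasks described above:
# fetch, decode, execute, and store.

memory = {0: ("LOAD", 5), 1: ("ADD", 3), 2: ("STORE", 10), 3: ("HALT", None)}
data = {10: None}   # a data location the STORE instruction writes to
accumulator = 0
program_counter = 0

while True:
    # Fetch: read the next instruction from memory.
    instruction = memory[program_counter]
    program_counter += 1

    # Decode: separate the operation from its operand.
    opcode, operand = instruction

    # Execute: perform the operation.
    if opcode == "LOAD":
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "STORE":
        # Store: write the result back to memory.
        data[operand] = accumulator
    elif opcode == "HALT":
        break

print(data[10])  # prints 8, the stored result of 5 + 3
```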

Now, increasing performance has been built around the idea of bringing more and more resource to bear on a particular problem. The original mainframes, all the way up through the mid-'70s, were largely single-user systems. They would take one problem, loaded from either a magnetic tape or a card deck, and process it through to completion, and once that one particular problem was completed, the machine would go back into a ready state and wait for you to give it another problem. Well, this can be a very time-consuming, very slow, and very expensive way of working in terms of the resource and its utilization. So going to multi-user operating systems that could control the machine and handle two to 2,000 users simultaneously was one of the first steps that had to be made. Then, we brought in more processors. More processors meant more support for more people solving more problems.

Then, we have multiprogramming, so that multiple programs can be loaded and more people can do more things at the same time. We have the ability to do multitasking, to make sure that all of these things are done in the proper sequence, and multi-threading, by which the program code can be broken down into multiple sets so that several different processors can be running at different points in the execution stream. All of these multi, multi, multi features created a computer that was able to service perhaps a few thousand people doing a few tens of thousands of tasks simultaneously, across perhaps as many as a few million processors, all of which served to greatly enhance the cost effectiveness of the computer and get far more work out of it in a far shorter period of time.
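As a rough illustration of the multi-threading idea, the following Python sketch splits one program's work across a pool of threads; the count_words function and the sample data are invented for the example.

```python
# A brief sketch of multi-threading: one program's work is broken into
# pieces that can run concurrently on a pool of worker threads.

from concurrent.futures import ThreadPoolExecutor

def count_words(chunk):
    # Each thread works on its own slice of the data.
    return len(chunk.split())

document = ["the quick brown fox", "jumps over", "the lazy dog", "again and again"]

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_counts = list(pool.map(count_words, document))

print(sum(partial_counts))  # 12 words in total
```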

All of them still use primary storage, which is chip-based. This is the kind that exists at multiple levels. We have random-access memory, or RAM, and that comes in various forms such as SDRAM, or VRAM for video, and to call it random-access is really not accurate because computer systems, as you're well aware, are anything but random in the way that they operate. They have to be very, very rigorous so that the computer doesn't get lost in what it's doing. To make the RAM move faster, though, the computer uses various techniques to store information in whatever open address space pops up first, based on a particular scheme. Again, not as random as it might first appear. RAM, of course, is volatile. It receives information, allows that information to be processed, and then it clears that information space and allows itself to be reused. In conjunction with RAM, we have ROM, or read-only, nonvolatile memory, that has a certain set of programming in it that doesn't move or leave and is used as a fixed resource within the computer.

With all of this movement of programming and data in and out of memory, we have to employ a number of techniques to keep different pieces of memory from being used by multiple things at the same time. It is much the same problem as two things trying to occupy the same space at the same time; that never works, and it will not work here. One part of the data or program will be wiped out, overwritten in favor of the other. So these forms of process isolation are fundamental to the way every operating system in existence works.
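The following toy Python sketch, with a made-up flat memory model, illustrates why that isolation matters: two programs allowed to write the same address overwrite each other, while separate address spaces keep both values intact.

```python
# A toy model of why process isolation matters: two programs that may write
# to the same memory address will overwrite each other, as described above.

shared_memory = [0] * 8  # one flat memory space, no isolation

def run_program(name, address, value):
    shared_memory[address] = value
    return f"{name} wrote {value} to address {address}"

print(run_program("Program A", 3, 111))
print(run_program("Program B", 3, 222))   # same address: A's data is wiped out
print(shared_memory[3])                   # 222 -- whoever wrote last wins

# With isolation, each program gets its own address space instead:
isolated_memory = {"Program A": [0] * 8, "Program B": [0] * 8}
isolated_memory["Program A"][3] = 111
isolated_memory["Program B"][3] = 222
print(isolated_memory["Program A"][3])    # 111 -- both values survive
```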

Memory has to be segmented so that certain pieces can be used for high priority work and other pieces for lower priority work. Paging allows the memory space to be divided, paged, and swapped in pieces so that the system can work more efficiently on fixed-size blocks, or pages, of data, but again, segmentation and process isolation techniques make sure that the pages that hold these data are not compromised and overwritten by other programs accessing other pages of data. And then there is protection keying, which divides physical memory up into blocks of a particular size, each of which is keyed with a distinct and unique numerical value.
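Here is a simplified Python sketch of the protection keying idea; the block size, key values, and write function are invented for illustration and greatly simplify what real hardware does.

```python
# A simplified sketch of protection keying: physical memory is divided into
# fixed-size blocks, each tagged with a key, and a process may only touch
# blocks whose key matches its own.

BLOCK_SIZE = 4
physical_memory = [0] * 16
block_keys = {0: 7, 1: 7, 2: 9, 3: 9}   # block number -> protection key

def write(process_key, address, value):
    block = address // BLOCK_SIZE
    if block_keys[block] != process_key:
        raise PermissionError(f"key {process_key} may not write block {block}")
    physical_memory[address] = value

write(7, 2, 42)        # allowed: address 2 is in block 0, which is keyed 7
print(physical_memory[2])
try:
    write(7, 10, 99)   # blocked: address 10 is in block 2, which is keyed 9
except PermissionError as err:
    print(err)
```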

Again, this is another process isolation technique to prevent any loss, overwriting, or corruption of data. The primary technique that an operating system will use when it allocates a piece of memory and stores data in it is ASLR, address space layout randomization. This randomly arranges the various chunks of memory so that a certain distance of address space lies between one use of a memory space and another use of the same memory space elsewhere within the entire range of RAM addresses available. It's an attempt to make sure that what is being stored is accessible to the programs without any one particular location interfering with, or being wrongly accessed by, any program.
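The following toy Python model illustrates the ASLR idea; the region names, address space size, and alignment are assumptions made for the example, and real ASLR also guarantees non-overlapping regions and minimum gaps, which this sketch omits.

```python
# A toy model of address space layout randomization (ASLR): each time a
# process starts, the base address of each memory region is chosen at random,
# so an attacker cannot predict where code or data will sit.

import random

ADDRESS_SPACE = 2**32        # pretend 32-bit address space
ALIGNMENT = 0x1000           # bases land on page boundaries

def randomize_layout(regions):
    layout = {}
    for name in regions:
        base = random.randrange(0, ADDRESS_SPACE, ALIGNMENT)
        layout[name] = hex(base)
    return layout

# Two "runs" of the same program get different layouts.
print(randomize_layout(["stack", "heap", "libraries"]))
print(randomize_layout(["stack", "heap", "libraries"]))
```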

Now, all of that was the primary storage, based in the RAM chips that sit alongside the CPU on the motherboard. We also have secondary storage, which is effectively unlimited for as long as you can go out and purchase tape drives or removable hard drives or USB keys or other forms of interchangeable, removable storage devices. The secondary storage is the repository that holds the data that will be moved from there into RAM before it can be processed, so it is not directly accessible by the CPU until that operation happens, but it does have a very high capacity, and it is, by its very nature, nonvolatile. Now, most operating systems can simulate having more than the fixed amount of main memory that is physically installed, and this is done through virtualization, a scheme of paging, swapping, aliasing, and other techniques, ultimately to make the computer think that it has more physical RAM than it actually has. Part of this is done by storing data in page form in RAM, and part of it is done by storing another portion in a swap space on the disk, whether it's a rotating hard drive or an SSD.

The paging and swapping is managed by keying, and once again, this makes it appear as though the computer has much more RAM than it does, in fact, have. As an early technique, it was a way of speeding up the computer's performance because at the time, RAM space was extremely limited, measured in kilobytes as opposed to the gigabytes we have these days, and being extremely expensive, it wasn't likely to come in very large quantities. So IBM, the holder of many of the patents and the source of many of the system advances that we have in computing in general, introduced virtual memory on its systems. Now firmware, which is program code built into a chip, like ROM, stores instructions that will be used in a persistent, readily accessible way. These embedded chips with program instructions encoded onto them are integrated onto the motherboard, and they handle persistent functions.
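As a rough sketch of paging and swapping, the following Python example, with an invented page table and a stand-in load_page_from_swap helper, shows a virtual address being translated and a missing page being brought in from swap before use.

```python
# A heavily simplified sketch of how paging lets a program address more
# memory than physically exists: pages not present in RAM are fetched from
# swap space on disk before the access can complete.

PAGE_SIZE = 4096
page_table = {0: ("ram", 3), 1: ("swap", 17), 2: ("ram", 8)}  # virtual page -> location

def load_page_from_swap(slot):
    # Stand-in for reading the page back from disk and placing it in RAM.
    print(f"page fault: loading swap slot {slot} into RAM")
    return 5  # the RAM frame it was loaded into (made up for the example)

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    location, number = page_table[page]
    if location == "swap":
        number = load_page_from_swap(number)
        page_table[page] = ("ram", number)
    return number * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # virtual page 1 is in swap, so it is paged in first
```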

Now, as nonvolatile memory, different methods have to be used if those chips need to be reprogrammed with new instructions. One way would be through EEPROM, where the program is originally charged onto the chip electrically, then flashed with a burst of electricity to clear it, and then a new program is written over it, which is what we now more often call flash memory. Now the operating system, as we all know, is the main program that controls how the computer system works. Basically, it's a control program, chasing blocks of addressing and making sure that everything happens the way it's supposed to in the proper order. It does housekeeping, such as regulating voltage and spinning the fan faster to keep the system cool within certain limits, and it does these functions in the process of running programs so that in the world of multi, multi, multi, multi-user, multiprocessor, multiprogramming, everything happens in a proper order, usually first in, first out, or FIFO. It enforces security. It enforces memory management and file management, and it schedules resources depending upon who gets there first, what kind of resource is needed, and what priority the request might have. The kernel of the operating system is where the vast majority of these control functions exist.
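A minimal Python sketch of first-in, first-out scheduling follows; the task names are invented, and a real scheduler also weighs priority and resource type as described above.

```python
# A minimal sketch of first-in, first-out (FIFO) scheduling: work is run in
# exactly the order it was submitted.

from collections import deque

ready_queue = deque()

def submit(task):
    ready_queue.append(task)       # new work joins the back of the queue

def run_next():
    task = ready_queue.popleft()   # the oldest waiting task runs first
    print(f"running {task}")

submit("print job")
submit("backup job")
submit("report job")

while ready_queue:
    run_next()   # prints the jobs in the order they were submitted
```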

The system kernel causes the loading and running of binary programs, that is to say, programs already in zero-and-one form. It handles scheduling and task-swapping. It allocates memory as we described, and it tracks the physical location of files on the hard drives so that it's always looking in the right place for the required information, and it acts as the go-between for the actual physical resources of CPU, memory, and input/output devices on one side and the applications on the other. Now, as I said, in this particular course and this particular module we're talking about some very basic things. So here it says that a program is the set of instructions brought together to accomplish a given task. When it executes in the CPU from the registers, which hold the immediately executable forms, it spawns a process, an instance of the program, to operate and perform the required function. When the process begins, it starts off by requesting various resources: memory space, various kinds of drivers, various kinds of devices. The operating system kernel allocates the resources, such as memory, that the program requires, and the process passes through its various phases from initial entry until it completes or, for any other reason, exits.
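To illustrate the program-versus-process distinction, the following Python sketch uses the standard subprocess module to spawn a child process and observe it run to completion and exit; the child's one-line task is invented for the example.

```python
# A small sketch of program versus process: the Python interpreter on disk is
# the program; running it spawns a process, which requests resources from the
# kernel, does its work, and exits.

import subprocess
import sys

# Spawn a new process (an instance of the Python program) to do one task.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from the child process')"],
    capture_output=True,
    text=True,
)

print(result.stdout.strip())   # output produced by the child process
print(result.returncode)       # 0: the process completed and exited normally
```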

Now, all of that basic material still forms the foundation for the enterprise security architecture, but at its very lowest level. When we look at enterprise security architecture, what we need to look at first is the enterprise itself. How is it put together? What does it do? What is its business? What are its business's information needs? As we break those things down, we come up with an overall conceptual architecture, and breaking that down into an enterprise security architecture helps us define what the building blocks will be to achieve those various information management and information security objectives. One of the things we should be looking at at the enterprise security architecture level is a strategic set of goals. The business is not going to stay static; it will evolve and adapt over time. As we think about the enterprise security architecture, we should be thinking about how we can build it so that it, too, will be flexible and adaptable and will evolve over time in alignment with how the enterprise itself does. That doesn't mean we want to make something so flexible that it may not even be useful at all, but we want to build it in such a way that modification becomes the norm, so that instead of having very expensive break/fix, where the architecture itself is very brittle and technically difficult to work with, we build it more along the lines of a modular approach. That way, as the organization evolves, the security architecture can evolve in alignment with the way the business itself moves.

Whatever the business's priorities are, they should be reflected in the security architecture itself, and that, too, will need to evolve and adapt over time. The better a job you do of that, the more the security architecture will integrate with the business itself and become a fundamental piece of it, so that protection becomes inherent in everything that we do and operate. So we want to look at the long-term strategy and break it down so that we get a simple, long-term view of the controls that will be necessary to integrate with and help accomplish that strategy. Having a unified vision for the security controls doesn't mean we're making everything dependent upon everything else. It means that, by having a vision for how it all works, the integration and the alignment become fundamental to how the controls will integrate with the system and ultimately with the enterprise security architecture.

We always want to start from the perspective of using what we have, or at least starting with what we have, and then, as needs arise and changes occur, flexing, adapting, and evolving as the business dictates, so that the enterprise security architecture is made to evolve as the business itself evolves. We need to be flexible and adapt in alignment with the business needs, but also bear in mind that threats are going to adapt and evolve as well as we proceed into the future, and we need to be able to do the same to remain effective against them. So the security architecture needs to be designed in alignment with the business architecture, so that it aligns and integrates with it in a way that the two become fundamentally a part of each other and each becomes an enabler of the other, rather than something that distorts or disrupts the positive impact of the other. We want the security program to be economically efficient, but we also want it to be operationally effective in the way that it deals with the protection needs of the assets we're protecting, in alignment with the business, so that as the business adapts, so does the architecture.

As we go through this forward-facing evolution, it should be enabling decision makers to make better security-related investment and design decisions as we keep that alignment and keep that evolution going. Looking at future-state technology, we should be looking some time ahead with a forecast of where we are going to be, or where we plan to be, at some stage in the future. By having a forecast that we're working towards, again, we're building in flexibility and adaptability, because things may arise that change that forecast, so we need to build it with enough flexibility that we can adapt however reality comes to us and forces us to. We have to be able to support, enable, and extend our security policies in much the same way, so that our guidance documents reflect the kind of adaptability that we know we're going to have to have, even if we don't know exactly what that will be, and for that, we want this architecture to describe general security strategies. Now, bear in mind that a strategy is a sequence of goals, each one, in some way, dependent upon what came before it. This would be no different. We need it to guide the security-related decisions so that as we learn, we are able to build, keep the alignment, and enable the evolution, so that as the enterprise moves forward, so do we, and the security remains, and ultimately this is the objective, aligned with the business and continues to achieve the appropriate goals of protection that the business, in its evolving state, will need.

We always want to make use of industry standards. We always want to make use of the various models, assuming that they apply to us, so that we are adapting and adopting the best practices that reflect the best knowledge and the best approach to particular security problems. Presenting and documenting the various elements of this architecture means that it is elaborated and presented in such a way that all the major stakeholders of the business, as well as the security engineering folks, have a common understanding, so that nothing is a black box, which runs counter to what we want to achieve. We want to define the technology and the security architecture in relation to other technological domains, because it is going to have to integrate with, adapt to, and operate in harmony with those other areas, and this will provide an understanding of the security impact and posture of development and implementation within those other domains. One of the things we are trying to achieve is consistency.

Now, managing IT solution risk consistently across the project doesn't mean we're always doing the same thing regardless of what we're facing. It means that when we encounter a certain kind of risk, we take the same sort of approach, so that in each case we come across, we perceive the risk correctly, we weigh the pluses and minuses of the given situation correctly, and we make the same kinds of decisions reflecting what our risk appetites are, what our risk tolerances are, and what our regulatory restrictions or enablements might be. That is more in line with what managing risk consistently means. We're always driving towards the same kind of solution, even if the actual steps in the process deviate from one instance to another. Security can cost whatever we want it to cost, but we want to be sure that we are controlling our costs, even while we improve flexibility, by implementing reusable, common security features and increased modularity. We have to plan not only for what we're going to do but for how we are going to take this through end-of-life and decommissioning. Things will inevitably become candidates for retirement and replacement. As threats evolve, old tools stop working and have to be replaced by new ones. That's a part of all of these processes, and it must be planned for as a fundamental aspect of this plan. In virtually every one of these architectural constructs, we will have certain common security services included. The boundary is that layer where most of the things we need to protect ourselves against, or enable, are going to occur, so we are going to have to have control services that work at the boundary layers.

Access control is invariably one of the most important aspects of any security architecture, and we need to define what services we're going to use for those purposes. We will have to protect our information to ensure that its integrity is always maintained at the proper level. Ultimately, what we do to protect integrity keeps the information trustworthy, so that it properly informs all of our business and technological decision processes. Part of that will include the cryptographic services we're going to need for data at rest and data in motion, and inevitably, we're going to have to measure what we do so that we understand whether it is effective or not and are able to adjust it so that it becomes effective again; that's our auditing and monitoring services. One of the approaches we could take involves establishing security zones of control. These are typically areas or groupings within which a defined set of security policies and measures are applied to achieve a specific level of protection. This kind of zone of control could be for, say, classified information processing, or it could be for handling a certain kind of business information or a certain class of patient information. A variety of reasons exist for establishing a zone of control. What we also need to do, as I described with the boundary services, is establish the manner and the form in which we're going to do separation between these zones. Areas of lower security or higher security are likely to be within some proximity of each other. Perhaps information flows between them, and perhaps it doesn't. Whatever the case, we need to plan for separation of these zones so that only appropriate movement between them, if any, is allowed, and so that it's controlled, measured, and can be audited.
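As a closing illustration, here is a small Python sketch of zones of control with controlled, audited flows between them; the zone names, the allowed-flow rules, and the file name are all invented for the example.

```python
# A small sketch of zones of control with separation between them: data may
# only flow between zones when an explicit rule allows it, and every decision
# is recorded so it can be audited.

ALLOWED_FLOWS = {("internal", "restricted"), ("restricted", "internal")}
audit_log = []

def transfer(source_zone, destination_zone, item):
    allowed = (source_zone, destination_zone) in ALLOWED_FLOWS
    audit_log.append((source_zone, destination_zone, item, allowed))
    if not allowed:
        raise PermissionError(f"flow {source_zone} -> {destination_zone} is not permitted")
    return f"{item} moved from {source_zone} to {destination_zone}"

print(transfer("internal", "restricted", "patient-summary.pdf"))
try:
    transfer("restricted", "public", "patient-summary.pdf")
except PermissionError as err:
    print(err)

print(audit_log)   # both attempts, allowed or not, are available for auditing
```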