CISSP: Domain 7, Module 1
This course is the first of 4 modules of Domain 7 of the CISSP, covering Security Operations.
The objectives of this course are to provide you with the ability to:
- Understand and support investigations
- Understand requirements for investigation types
- Conduct logging and monitoring activities
- Secure the provisioning of resources through configuration management
- Understand and apply foundational security operations concepts
This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.
Any experience relating to information security would be advantageous, but is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Moving on to section five, we're going to look at how to understand and apply foundational security operations concepts. So in this section, we're going to look at secure operational principles, separation of duties and responsibilities, the information life cycle, and service level agreements. Now, within this, there are things that we have to examine, such as trusted paths and fail-secure mechanisms. Trusted paths are known-good communication pathways over which information flows and over which we have control and visibility. They provide trustworthy interfaces into privileged user functions and are intended to ensure that communications over the pathway cannot be intercepted or corrupted. Our fail-safe mechanisms focus on failing with a minimum of harm to personnel or systems. Our fail-secure systems, in contrast, focus on failing in a controlled manner that blocks access while the system is in an inconsistent state.
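The fail-secure idea can be illustrated in a few lines of code. This is a minimal sketch with hypothetical names (`check_policy`, `authorize_fail_secure` are not from the course material): when the authorization check itself fails, the system denies access rather than granting it while in an unknown state.

```python
def check_policy(user: str) -> bool:
    # Placeholder for a real policy lookup; here it simulates a
    # backend failure that leaves the system in an inconsistent state.
    raise ConnectionError("policy store unreachable")

def authorize_fail_secure(user: str) -> bool:
    try:
        return check_policy(user)
    except Exception:
        # Fail secure: when we cannot verify the policy, block access.
        return False

print(authorize_fail_secure("alice"))  # False: denied because the check failed
```

A fail-safe design would instead prioritize minimizing harm (for example, unlocking doors during a fire alarm), even at the cost of access control.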
Now, part of any security program must be the way in which we define and implement who is going to have access to what. These decisions will typically fall under the categories of need-to-know and least privilege. These are fundamental principles that have to be incorporated into all of the processes, all of the procedures, and all of the allowed usage of our computing resources. Every individual must first have a need-to-know before they can be granted any level of access to any resource. Along with this must come a least-privilege definition of how broad that access level will be. We grant the need-to-know based usually on the person's role or operational function in the organization. And then, for whatever access that role requires, we use least privilege to define it: fully enabling them to accomplish and be successful in their role at the lowest possible level of authority and exposure to the data.
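These two principles can be sketched as a simple role-based check. This is a hypothetical illustration (the roles, resources, and actions are invented for the example): need-to-know is modeled as whether the role maps to the resource at all, and least privilege as whether the requested action falls inside the minimal set defined for that role.

```python
# Hypothetical role-to-permission mapping for illustration only.
ROLE_PERMISSIONS = {
    "payroll_clerk": {"payroll_db": {"read", "update"}},
    "auditor":       {"payroll_db": {"read"}},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    resources = ROLE_PERMISSIONS.get(role, {})
    if resource not in resources:
        return False  # no need-to-know: the role never touches this resource
    return action in resources[resource]  # least privilege: only listed actions

print(is_allowed("auditor", "payroll_db", "read"))    # True
print(is_allowed("auditor", "payroll_db", "update"))  # False
```

Note that both tests must pass: an auditor has a need-to-know for the payroll database, but least privilege still denies the update action.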
There will, of course, be a part of our user population that will have special privileges, because that will be part of their role. We're going to have to set up additional monitoring techniques and schedules so that we can monitor the usage of those special privileges. We have to be sure, as was just said, that the need-to-know and least privilege principles are applied here, as everywhere else. We're going to review these on a regular basis, possibly on a more frequent schedule than the routine level of privilege assigned to other users. It should go without saying that only authorized users should be granted this kind of access. If it's a role they're fulfilling temporarily, we should monitor the access for the life of that assignment and revisit it periodically to ensure that the level of privilege is still needed. If it's their regular role, we should review it periodically to determine what activities have been taken, making sure everything is authorized, and then, again, periodically review whether that level of access is still needed, checking that the role itself and its necessities haven't changed. Before someone is given this kind of access, we need to be sure that their trustworthiness has been validated, usually through some form of background check or management endorsement. And as we were just saying, we will have to periodically revalidate these privileges.
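The idea of reviewing privileged accounts on a tighter cycle than ordinary ones can be sketched as follows. The account data and review intervals are hypothetical, chosen only to show the mechanic of flagging accounts overdue for revalidation.

```python
from datetime import date, timedelta

# Assumed review cycles: privileged accounts revalidated more often.
REVIEW_CYCLE = {"privileged": timedelta(days=90), "standard": timedelta(days=365)}

accounts = [
    {"user": "alice", "type": "privileged", "last_review": date(2023, 1, 10)},
    {"user": "bob",   "type": "standard",   "last_review": date(2023, 6, 1)},
]

def overdue(accounts, today):
    # Flag any account whose time since last review exceeds its cycle.
    return [a["user"] for a in accounts
            if today - a["last_review"] > REVIEW_CYCLE[a["type"]]]

print(overdue(accounts, date(2023, 7, 1)))  # ['alice']
```

Here alice's privileged account was last reviewed over 90 days ago, so it is flagged, while bob's standard account remains within its annual cycle.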
Now, the monitoring techniques will be things that can be either manual or technological or physical. Before they're given, we'll look at things like background checks, clearances, suitability, background investigations, and so on to establish the basic trust level that we'll have in the selected employee. Then management endorsement or account validation will have to take place so that we know that the role does indeed require the level of special access being requested. Another form could be job rotation, where a person moves from one set of tasks to another to another to another, either because it's a progression and their roles change, the activities change, or because we're building depth in our staff.
Now, all of this needs to take account of the data lifecycle phases, so that the access individuals are granted will be appropriate for each portion of the data lifecycle where they will encounter the information asset. We begin with create or receive, and this of course is where data is either created in the form that will then be committed to a file system or is received from an outside source. Part of that will be validating that the data being received is indeed accurate and properly representative. Storage comes next, because we have to hold the data in an organized system or file management program so that we can determine who will need it, what the rights and uses will be, and put together our classification and categorization program for the newly arrived data. Following that comes use, and this is, of course, our standard editing, manipulating, creating, or otherwise modifying the information in accordance with the rights and privileges accorded to us in our roles. At some point, the data will be shared, put into controlled distribution, regardless of form and through whatever means, so that as much use and value as possible can be derived from the information. Then, as the data diminishes in its current need, it will go into archive, possibly even into long-term storage for regulatory purposes to support our compliance program. But it's rare for information to keep its value forever, so there will come a point where it will need to be disposed of or outright destroyed. When it reaches this phase, it needs to be kept secure until the destroy date comes around. When that date arrives, we need to follow a process that uses assured methods to render the data into a form from which it cannot be restored to a human-usable, human-readable form by unauthorized parties.
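The phases just described form an ordered progression, which can be sketched as a small state sequence. This is a hypothetical illustration of the lifecycle's one-way flow, not an implementation from the course.

```python
# The lifecycle phases in order, as described above.
PHASES = ["create/receive", "store", "use", "share", "archive", "destroy"]

def next_phase(current: str) -> str:
    # Data only moves forward through the lifecycle; nothing follows destroy.
    if current == "destroy":
        raise ValueError("destroyed data has no further phase")
    return PHASES[PHASES.index(current) + 1]

print(next_phase("share"))    # archive
print(next_phase("archive"))  # destroy
```

The point of the guard on "destroy" is that destruction is terminal: once data is rendered unrecoverable, there is no later phase to transition into.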
Now, as I mentioned in step two of the data lifecycle, we have to do classification and categorization. Classification ensures that the information is marked in such a way that only those with an appropriate level of clearance have access to it. So here we align the classification of the information to the clearance of the subject, or the group or role of subjects, who will be accessing it. Along with that comes categorization. This is, at least in part, the process of determining the impact on the organization resulting from a loss of the information's confidentiality, integrity, or availability.
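The alignment of clearance to classification reduces to a simple comparison over an ordered set of levels. The level names and ordering below are assumed for illustration; the rule shown is the standard "clearance must be at least the classification" check.

```python
# Assumed ordering of sensitivity levels, lowest to highest.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(clearance: str, classification: str) -> bool:
    # A subject may access information only if their clearance
    # meets or exceeds the information's classification.
    return LEVELS[clearance] >= LEVELS[classification]

print(can_read("secret", "confidential"))  # True
print(can_read("confidential", "secret"))  # False
```

Categorization would then add a second, orthogonal test: even with sufficient clearance, the subject's functional role must match the information's functional category.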
Now, categorization also has to do with the functional application of the information in question. And this of course goes to the role of the individual accessing it and how the information will be used by them. These kinds of systems come from a variety of sources, usually promulgated by a government agency. Some examples are Canada's Security of Information Act, China's Law on Guarding State Secrets, the UK's Official Secrets Act, and, in the US, NIST's Federal Information Processing Standard (FIPS) 199 along with a companion volume from the Special Publication 800 series, SP 800-60 Volume 1 Revision 1, a Guide for Mapping Types of Information and Information Systems to Security Categories. These various classification and categorization systems may come from government sources, but they reflect the same principles we discussed a moment ago: classification to determine what level of clearance a person must have to get access, and then, through categorization, a comparison between the functional usage of the information and the functional role of the individual acting upon it.
Another fundamental aspect of the data lifecycle will be a retention schedule. Certain information must be retained for a period of time beyond its normal usage lifespan. We will develop document retention schedules for these various types of information, often specified by regulations. And at the end of the retention period, there will be some form of mandated destruction after a set date, a set length of time, or when certain criteria are met, at which point the information, as mentioned, must be rendered into a state from which it cannot be restored to a human-readable form.
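A retention schedule can be sketched as a mapping from record type to retention period, from which a mandated destruction date is computed. The record types and periods below are hypothetical examples; real retention periods come from the applicable regulations.

```python
from datetime import date, timedelta

# Assumed retention periods for illustration; actual periods are
# dictated by regulation and organizational policy.
RETENTION = {"financial": timedelta(days=7 * 365), "hr": timedelta(days=3 * 365)}

def destroy_after(record_type: str, archived_on: date) -> date:
    # The earliest date on which mandated destruction may occur.
    return archived_on + RETENTION[record_type]

print(destroy_after("hr", date(2020, 1, 1)))
```

Until that computed date arrives, the archived record must remain secured; after it, assured destruction methods apply.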
Well, that brings us to the end of our first module of Domain 7, Security Operations. Please be sure to rejoin us for our next module, beginning on slide 53 of Domain 7, Security Operations. Thank you.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. As a professional educator, he has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.