CISSP: Domain 2, Module 1
This course is the first of two modules of Domain 2 of the CISSP, covering asset security.
The objectives of this course are to provide you with an understanding of:
- How to classify information and supporting assets using policies and categorization systems
- How to determine and maintain ownership using data management practices
- Protecting privacy, through regulations, standards, and technology
This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
Welcome back, we're going to continue now with the Cloud Academy's presentation of the CISSP Review Seminar, and we're going to be discussing CISSP Domain 2, as the current structure stands, entitled Asset Security.
So in our domain agenda, we're going to discuss how to classify information and supporting assets, and the determination and maintenance of ownership. We're going to discuss the manner and the drivers behind protection of privacy. We're going to delve into how to ensure appropriate retention, how to determine the data security controls, and how to establish handling requirements.

Our objectives are to look at how we can obtain a comprehensive and rigorous form to create this program. In it, there must be methods for describing current and/or future structures. In other words, how to compare how we are today versus what state we need to be in, based on regulatory requirements and business requirements. That's going to encompass examination of organizational behavior: how the workforce's management of this program meets its objectives. It's going to encompass the processes and methodology. It's going to have to look at the IT systems, of course, along with personnel behavior, policies, procedures, and so forth. And it's going to have to reach down into all the different organizational elements.

We're going to have to look at various frameworks, because from those we're going to extract the various ingredients that will make up this kind of a program. That, of course, will involve the creation of policies and procedures that embody the various concepts, principles, and standards that have to be used, so that we can establish the criteria our organization is going to need to make sure that the protection of the information assets, and all of the supporting infrastructures surrounding them, is appropriate to the level of protection needed, based on some measure of the value of the asset in question.
So let's talk in section one about how we're going to classify information and supporting assets. Now we all know what the process of classification is going to be. This is a process by which we need to establish what assets we have, and the various attributes and characteristics that they have, so that we can assign a value. Now the term value implies that there's more than simply price or cost associated with this. We have to look at the primary and secondary characteristics of all of the assets under consideration, so that we get a real sense of what their value to the organization is, because this represents a combination of tangible and intangible characteristics that may, in fact, make the asset worth a lot more than price or cost alone might indicate.

We also need to look at the confidentiality, integrity, and availability qualities, so that we know how this particular asset is used in the organization and what it contributes. In the case of confidentiality, this brings in the question of what happens if the asset gets exposed to the public or some other unauthorized entity. For integrity, what happens if the information is in some way corrupted, or used when its state is uncertain? And for availability, what is the essence of its availability? Is it something that we absolutely must have, but only once in a while? Or is there a level of availability that we can live without, such that 90% is okay, versus 99.5% where we have to have it all the time? These contribute to our calculation of the value of the given asset.
The idea of the classification process is that it creates a taxonomy by which we can classify these things into individual units, groups, or some other arrangement. And it helps us establish the nomenclature for these things: what names we're going to give them, what groups, so that these names can be used consistently throughout the organization. When you say "widget," for example, everyone will understand what widget means, and everybody will be clear on the particular asset you're talking about. As a companion activity to classification, we have categorization, because what we have to do now is look at the functional use of the assets in question, and how they're employed, so that we can do a better job of grouping them based on like usages. What we're trying to determine with categorization is how the loss of any of the main characteristics, confidentiality, integrity, and availability, is going to have an impact, so that we can qualify, that is to say, verify that these are genuine qualities and have a sense of what they actually are, and quantify what that impact might be in dollar terms to our organization.
Now there are a number of systems that we can use. For example, we have Canada's Security of Information Act. We have China's Law on Guarding State Secrets, the United Kingdom's Official Secrets Acts, and our own NIST FIPS 199, and the companion volume typically used with FIPS 199, the NIST SP 800-60 Guide for Mapping Types of Information and Information Systems to Security Categories. Now as I've said before in past modules, the examination for the CISSP is not going to ask the candidate deep details of any one of these particular standards. Being familiar with the general concepts that we're going to cover in this module is much more likely to be important to answering questions on the exam. These standards are listed here, of course, because this is an international exam, but deep dives into any of them would very likely not appear on the exam.
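To make the FIPS 199 approach concrete, here is a minimal sketch (not official NIST code) of how a security category is expressed: an impact level of LOW, MODERATE, or HIGH for each of confidentiality, integrity, and availability, with the overall impact taken as the "high-water mark," the highest level across the three. The function names are illustrative only.

```python
# FIPS 199-style security categorization sketch.
# SC = {(confidentiality, impact), (integrity, impact), (availability, impact)}
IMPACT_ORDER = {"LOW": 0, "MODERATE": 1, "HIGH": 2}

def security_category(confidentiality, integrity, availability):
    """Return a security category: one impact level per CIA attribute."""
    return {"confidentiality": confidentiality,
            "integrity": integrity,
            "availability": availability}

def high_water_mark(category):
    """Overall impact is the highest level across the three attributes."""
    return max(category.values(), key=lambda level: IMPACT_ORDER[level])

# Example: public web content -- disclosure is harmless (LOW), but
# tampering or an outage would cause moderate harm.
sc_public_web = security_category("LOW", "MODERATE", "MODERATE")
print(high_water_mark(sc_public_web))  # MODERATE
```

This is the same reasoning described above: the confidentiality, integrity, and availability impacts are assessed separately, then combined to drive the protection level for the asset as a whole.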
So, classification entails the analysis of the data that the organization creates, uses, discloses, and retains, to determine its importance and value, and then assigning it to a category in accordance with the various attributes that we discover. So in the classification policy that guides this process, these are some questions we have to answer. Who will have access to the data? Obviously we're talking about authorized users, and we're talking about their roles and the kinds of uses they're going to put that information to. How is the data secured? This of course relates to these very questions: who it is, what level of access is going to be required, what use they're going to make of it, what rights and privileges they should have to it. And so we capture that in accordance with the role of the individual. We have to answer the question of how long the data is to be retained. Many different kinds of data are covered by regulations requiring retention for a specified period. For example, healthcare information: the actual medical record itself is typically required to be retained for a period of seven years following the last operative usage of that information, or the last time it was modified. And so we need to retain this information, and this must be part of the classification policy. As should the question: what method should be used to dispose of the data?
Typically, these methods require that we identify how the data should be disposed of to ensure that whatever form it's in, be it paper or electronic, the method is an assured one. That is to say, the data cannot be returned to a human-readable form, in other words, a form exploitable by an unauthorized person. Whether the data needs to be encrypted is, of course, a common question in today's world, and it needs to address data at rest and data in motion, two different states of data that we'll address later in this very module. And as part and parcel of this whole process, we need to be sure that we have a very clear understanding of what the appropriate use is, so that we are able to put constraints on how the data can be accessed, so that we can define the kinds of roles that should be accessing it and the kinds of uses they should be making of it. The appropriate use of the data also helps us define more clearly what confidentiality, integrity, and availability settings we're going to have to make for this particular type of data. So, what sort of classifications could be used? Now this slide is presenting the idea that we have titles, which of course should reflect accurately what kind of data goes into each particular category. So here are some examples: private, company restricted, company confidential, or public.
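The policy questions above can be captured as a simple record per classification level. This is a hypothetical sketch, not from any standard; every field name is illustrative, but each one maps to a question from the policy: who may access the data, how long it is retained, how it is disposed of, and whether encryption is required at rest and in motion.

```python
# Hypothetical data-classification policy record; field names are illustrative.
from dataclasses import dataclass

@dataclass
class DataClassification:
    label: str                 # classification title, e.g. "Company Restricted"
    authorized_roles: list     # who will have access to the data
    retention_years: int       # how long the data is to be retained
    disposal_method: str       # assured destruction method at end of life
    encrypt_at_rest: bool      # encryption required for stored data
    encrypt_in_motion: bool    # encryption required in transit

# Example: a medical record under the seven-year retention rule
# mentioned above (roles and methods here are invented for illustration).
medical_record = DataClassification(
    label="Company Restricted",
    authorized_roles=["clinician", "records-officer"],
    retention_years=7,
    disposal_method="cryptographic erase plus physical destruction",
    encrypt_at_rest=True,
    encrypt_in_motion=True,
)
print(medical_record.label)  # Company Restricted
```

A record like this gives every category a consistent, machine-checkable answer to the policy questions, rather than leaving them to individual interpretation.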
If you're working for the government, and you work with classified information, you have a very different set. Whatever the case, the titles that are given should clearly reflect, by name, the essential characteristics of the data that falls within that category. Using confusing titles does not enhance the security value of any of this process, so they should be reasonably clear and straightforward in terms of what they indicate. Of course, once the classification scheme is decided, we have to decide who puts data into it. Well, this is traditionally assigned to the person or the role that we call the data owner. And the data owner is typically the person who is going to enforce the organization's policy on data classification, management, and protection of the given data asset.
Now the owners have a number of responsibilities, one of which is to periodically review the classification, how it's defined, and what is included in it, to make sure that it continues to be relevant and accurate, as well as to make sure the data that goes under that particular classification is appropriately assigned to it. Part of what they have to look at also includes what is to be done whenever a deviation occurs: why the deviation occurred, under what circumstances, and for what reason. This is not to say that we're looking at a violation. This is to say we're looking strictly at a deviation. And a deviation could be that something has been misclassified. It could be innocent, it could be intentional. It could simply be done because an individual misunderstands what is actually meant by any of the terms under the given classification. And this will happen, because different people will have different interpretations of the terms that we use. That being so, we need to manage this by exception, looking for deviations and making sure that ultimately they get into the proper category. We have to look at whose authority was in place when the original classification or reclassification was carried out, so that we can make sure that they understood exactly what they were approving in the first place. If there's any documentation, which of course there should be, we need to check that to see if there's been anything inappropriate or simply misunderstood contained in it. And then, look at the process and see if it needs to be modified to improve it and make it more accurate.
As with any asset management process, we need to be sure that we have an accurate inventory of our assets. Now some of these will change over time, some of them will not. And so we need to look at the configuration management question for two basic reasons. One, because we need to manage the configuration of the inventory control system that keeps track of our assets: our directory system, the implemented form of this that exists logically in the system. Two, we also need to manage the configuration of the assets contained within it. Now the program given the name IT Asset Management, or ITAM, expands this to be much broader than the traditional discipline, where we're just managing data: files, folders, servers, volumes. We need to look at the different dimensions brought into the question of managing the information in our program.
To do all of this the right way, we are of course going to need some form of configuration management database. Now this is the way that we can enter and control what information we put under this kind of management system. And it needs to be, ideally, the one authoritative source that we turn to for the kind of guidance and information that we need. And it needs to reflect all the things that we've determined through our classification and categorization process, because this serves as the heart of this program, to support and enable the processes in service delivery and support, overall management control, and all of the other IT disciplines that we have in our organization. So as I say, this needs to be thought of as a single, centralized, and, ideally, for purposes of looking at things from different perspectives, a relational repository. Undoubtedly, it needs to be aligned to the organization, the processes, the uses, and so forth, that the users in our organization reflect. And equally ideally, it should be based on scalable technologies, because without question, our asset base and the actual detailed content of this system is going to have to change over time, scale up and down, and adapt to the changes in our environment. One of the assets that we do need to be very careful in managing is the software that we use, and that includes the software that we use to manage this process. Because software is the thing that we rely on most heavily to do any of this, we need to be sure that we have got all of our licensing, by whatever scheme is contained in the license, properly in place, and that all of the controls over how software is acquired and implemented, which includes licensing, have been properly taken care of. Copyright infringement can result in very costly legal challenges in the event that a case for it is made.
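The CMDB described above, a single authoritative repository of configuration items with relationships between them, can be sketched in miniature. This is an illustrative toy, not a real CMDB product; all class and method names are invented.

```python
# Toy CMDB sketch: configuration items (CIs) stored once, with typed
# relationships, so assets can be viewed from different perspectives.
class CMDB:
    def __init__(self):
        self.items = {}        # ci_id -> attribute dictionary
        self.relations = []    # (source_ci, relation, target_ci) triples

    def register(self, ci_id, **attrs):
        """Record a configuration item and its classification attributes."""
        self.items[ci_id] = attrs

    def relate(self, source, relation, target):
        """Record a typed relationship between two CIs."""
        self.relations.append((source, relation, target))

    def dependents_of(self, target):
        """One perspective: which CIs depend on this asset?"""
        return [s for s, rel, t in self.relations
                if t == target and rel == "depends_on"]

cmdb = CMDB()
cmdb.register("payroll-app", owner="finance", classification="confidential")
cmdb.register("db-server-01", owner="it-ops", classification="restricted")
cmdb.relate("payroll-app", "depends_on", "db-server-01")
print(cmdb.dependents_of("db-server-01"))  # ['payroll-app']
```

The relational aspect is what makes the repository useful beyond a flat inventory: the same records answer ownership, classification, and dependency questions without duplicating data.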
We should also be controlling the library that we have of all the software that we use in the business, so that as part of our CMDB, controlling the assets, we have an aspect of it that controls what software is used, controls versioning, and participates in the overall change control process. We have a life cycle, and the life cycle begins with defining requirements, which leads to acquisition and then implementation. When assets are provisioned, they're put into operation and maintenance mode, and then ultimately they will go into the disposal process, where we decommission them. And the processes that you see here are, of course, a lot more detailed than these simple boxes indicate.
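The life cycle just described can be modeled as a simple state machine. This is a sketch under the assumption that the phases are requirements, acquisition, implementation, operation and maintenance, then disposal; enforcing allowed transitions keeps an asset from, say, being disposed of before it was ever provisioned.

```python
# Asset life cycle as a state machine; state names are illustrative.
ALLOWED = {
    "requirements": {"acquisition"},
    "acquisition": {"implementation"},
    "implementation": {"operation"},
    "operation": {"operation", "disposal"},  # maintenance loops in operation
    "disposal": set(),                       # terminal: verify, then done
}

def advance(current, nxt):
    """Move an asset to the next phase, rejecting illegal transitions."""
    if nxt not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

state = "requirements"
for step in ["acquisition", "implementation", "operation", "disposal"]:
    state = advance(state, step)
print(state)  # disposal
```

Each transition is also a natural checkpoint for the change control process: the move into "disposal," for instance, is where the formal sanitization and verification steps described next would be attached.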
Take, for example, disposal and decommissioning. If the equipment being considered for disposal has contained sensitive or protected information of any kind, individually identifiable information for example, then the process of disposal and decommissioning is not as simple as "well, we'll just erase it." There is a formal process that may have to be put to use to ensure that we have a very high degree of assurance that that very sensitive information will not be inappropriately disclosed, regardless of whose hands it falls into following our decommissioning. So it's a process that needs to be properly defined, properly performed, and verified as the final step, and then we can say that we've actually addressed it. And each one of these phases has similarly detailed processes that must be followed.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.