This course within the CISM Domains learning path looks at risk management and the resources that can be used to avoid and tackle risk in an organization. We'll start by looking at risk identification and risk analysis, which is the quantification and comparison of risks. Then we look at a variety of risk management frameworks in use by companies today.
Finally, we look at the constraints that can hamper your efforts to manage risk, focusing on working with third parties and the technical and human aspects to take into consideration when doing so.
Learning Objectives
- Understand how an organization can identify and analyze risk
- Learn about the constraints on risk management
Intended Audience
This course is intended for anyone preparing for the Certified Information Security Manager (CISM) exam or anyone who is simply interested in improving their knowledge of information security governance.
Prerequisites
Before taking this course, we recommend taking the CISM Foundations learning path first.
Now what you see here on this slide are the various approaches that have been developed over the years to perform risk assessment and come up with a qualitative, quantitative, or hybrid evaluation of priority, sensitivity, and criticality. Among the example models is COBIT, which comes from ISACA itself. We have OCTAVE, which is a complex but adaptable tactical risk management framework. We have the recently published NIST SP 800-37, which describes the NIST Risk Management Framework in its newest form. We have a standard from Australia that is based on the ISO 31000 risk management guidelines, and ISO 31000 itself, which is an international standard widely used for risk management processes.
ITIL includes risk management within its service-oriented framework, but it is not exclusively or even primarily focused on risk management. We have CRAMM, the CCTA Risk Analysis and Management Method. We have FAIR, the factor-analysis-based quantitative approach to risk management. And there is Value at Risk, a quantitative formulation of risk management. Finally, we have aggregated risk, which arises when a threat affects a large number of minor vulnerabilities, producing a cascading effect.
This could also be many threats affecting many minor vulnerabilities in close proximity or in relative simultaneity. Individually acceptable risk items may therefore become, collectively, unacceptable. Cascading risk, strictly speaking, is where one failure leads to a chain reaction of other failures. For example, the failure of one power utility causes an outage across an entire section of the power grid. In IT, this would be one system causing dependent systems to go down or otherwise malfunction.
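To make the cascading idea concrete, here is a minimal Python sketch, using an entirely hypothetical dependency map, that traces how one initial failure propagates to every dependent system:

```python
from collections import deque

# Hypothetical dependency map: each system lists the systems that depend on it.
# A failure in one entry may therefore propagate to everything reachable from it.
DEPENDENTS = {
    "power-feed": ["core-switch"],
    "core-switch": ["auth-server", "db-cluster"],
    "auth-server": ["web-portal"],
    "db-cluster": ["web-portal", "reporting"],
    "web-portal": [],
    "reporting": [],
}

def cascade(initial_failure: str) -> set[str]:
    """Breadth-first walk of the dependency map to find every system
    that fails, directly or indirectly, after one initial failure."""
    failed = {initial_failure}
    queue = deque([initial_failure])
    while queue:
        system = queue.popleft()
        for dependent in DEPENDENTS.get(system, []):
            if dependent not in failed:
                failed.add(dependent)
                queue.append(dependent)
    return failed

print(cascade("power-feed"))
# Every system in this small map ends up in the failed set.
```

The point of the sketch is that each individual dependency looks like a minor risk on its own; it is the reachable set as a whole that tells you whether the aggregated risk is acceptable.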
Factor Analysis of Information Risk, called FAIR, comes from the FAIR Institute. It decomposes risk into its underlying components, or factors, to determine a given factor's marginal contribution to the risk being examined. It includes four elements: a taxonomy of the factors making up the risk, which include frequency, probability of action, probability of success, and the type and severity of the impact; a method of measuring these factors, which typically should reflect the measurement methods used in the given enterprise; a computational engine to derive the risk mathematically; and a simulation model to analyze the various risk scenarios.
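As a rough illustration of the computational-engine and simulation elements, the following sketch (not the FAIR Institute's actual engine, and with invented calibration ranges) samples loss event frequency and per-event magnitude to produce a distribution of annualized loss:

```python
import random

random.seed(42)  # reproducible illustration

# Invented calibration ranges for one risk scenario; in practice these
# come from the taxonomy's measured factors (frequency, probability of
# action, probability of success, impact type and severity).
FREQ_RANGE = (0.5, 4.0)              # loss events per year (min, max)
MAGNITUDE_RANGE = (10_000, 250_000)  # loss per event in dollars (min, max)

def simulate_annual_loss(trials: int = 10_000) -> list[float]:
    """Monte Carlo: each trial draws an event frequency and a per-event
    magnitude, yielding one plausible annualized loss figure."""
    losses = []
    for _ in range(trials):
        frequency = random.uniform(*FREQ_RANGE)
        magnitude = random.uniform(*MAGNITUDE_RANGE)
        losses.append(frequency * magnitude)
    return losses

losses = sorted(simulate_annual_loss())
print(f"median annual loss: ${losses[len(losses) // 2]:,.0f}")
print(f"95th percentile:    ${losses[int(len(losses) * 0.95)]:,.0f}")
```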
We have the probabilistic risk assessment (PRA) method. This looks at complex life cycles from concept to retirement. Having been created by NASA, it would of course seem very much applicable to that rather unique environment. As such, it does tend to be very time-consuming, but it works very well in a high-security type of environment.
It asks the general question: what could possibly go wrong? This is followed by: how likely is it to occur? And what would be the consequences? Now, behind this overly simplistic presentation of PRA there are, of course, a lot of complex calculations that take place to answer those three questions. Being very time-consuming and fairly complex in its process, it may not be applicable to everyone, but it is another valued and valid approach to risk assessment.
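In its simplest possible form, the three PRA questions reduce to an expected-consequence ranking. The sketch below uses made-up scenarios and figures purely to show the shape of the calculation; a real PRA involves far more elaborate modeling:

```python
# Each tuple answers PRA's three questions for one made-up scenario:
# (what could go wrong, how likely per year, consequence in dollars).
scenarios = [
    ("cooling failure in data center", 0.10, 500_000),
    ("operator error during deploy",   0.80,  40_000),
    ("regional power outage",          0.05, 900_000),
]

# Expected consequence = likelihood x impact, ranked worst-first.
ranked = sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True)
for name, likelihood, consequence in ranked:
    print(f"{name}: expected loss ${likelihood * consequence:,.0f}/year")
```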
Here we have the process for the identification of risk. When we identify risk, what we have to do is identify the type and nature of all viable threats and the vulnerabilities they seek to exploit. We will, of course, construct a variety of scenarios to depict this interaction. A viable threat means one that exists or could appear; viability also implies that its occurrence or its impact can be controlled.
Now, we have to accept the fact that total identification may not be possible, or even desirable, and this in itself is a vulnerability. Some vulnerabilities can't be tied to a specific threat; still, they should be listed and analyzed regardless.
Now when developing these scenarios, we need to bear certain things in mind. We need to make sure that our data on the risk is current. We might begin our process starting with very generic scenarios and go through progressive elaboration to refine them further, adapting them to the real-world environment that we seek to analyze. The number of scenarios should reflect the business complexity, meaning that they should all be realistically possible within that business environment.
The taxonomy itself should reflect the business complexity of the particular enterprise under examination. Using a generic reporting structure means that we identify the specifics without getting hung up on the structure and making it a limiting factor.
People, of course, have to be included, with the right skills and knowledge, at the appropriate places in the analysis. Buy-in from all the departments concerned will have to be obtained at some point, and it will involve staff as the first line of defense.
Now, in the identification of risk, we need to bear in mind that what we're talking about is the probable, not merely the possible, because there needs to be a credible probability that the event we're analyzing will actually occur. So when we develop the scenarios, several points need to be kept at the forefront of our minds. We cannot focus on rare scenarios.
Six-sigma outliers have a vanishingly small probability of occurrence and will therefore either distract us or skew our findings. We can combine simple scenarios rather than let them multiply into too many scenarios to deal with. We should always consider systemic risk, which is risk that can affect a large part of the industry.
We can consider contagious risk: multiple failures that happen in a short time, up and down the supply chain or through a dependency that exists in a sequence of tasks or operations. And we need to build awareness around risk detection as we do this. With humans being the primary form of first-line defense, the better we make them at recognizing what constitutes a risk, the better our controls are likely to work.
So when we look at the risk itself, we should examine and determine its origin; the actual threat that will exploit it; the most likely impact, and a range, if that's applicable; the reason for its cause; how exposed it is and what controls would positively affect and reduce this exposure; and, finally, time and place. Historical information can, of course, be very valuable in making these determinations.
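One way to keep those determinations consistent is to capture them in a uniform record. The following is a hypothetical risk-register entry (the field names are illustrative, not taken from the CISM material) reflecting the attributes just listed:

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegisterEntry:
    """One identified risk, capturing the attributes discussed above."""
    origin: str                 # where the risk arises
    threat: str                 # the actual threat that would exploit it
    likely_impact: str          # the most likely impact
    impact_range: str = ""      # a range, where applicable
    cause: str = ""             # the reason for its cause
    exposure: str = ""          # how exposed the asset is
    mitigating_controls: list[str] = field(default_factory=list)
    time_and_place: str = ""    # when and where it could occur

entry = RiskRegisterEntry(
    origin="third-party hosting provider",
    threat="extended outage at provider's primary site",
    likely_impact="customer portal unavailable",
    impact_range="2 to 48 hours of downtime",
    cause="single-region deployment",
    exposure="high: no failover contract in place",
    mitigating_controls=["secondary region", "contractual SLA review"],
    time_and_place="any time; provider's primary facility",
)
print(entry.threat)
```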
Here we have a graphic that shows how to identify risk. The techniques for risk identification can be various forms of collaboration, such as team-based brainstorming, flowcharting and modeling, various kinds of what-if scenario examinations and constructions, and then mapping threats to identified or suspected vulnerabilities. These scenarios should describe various risk impact events and the assets that will be affected, should they occur.
Some sample events that we might examine include system failures, loss of key personnel, theft, network outages, power failures, and various forms of natural disasters. As I mentioned, we need to focus on realistic scenarios: what is probable versus what is possible. And in the end, we should come up with some way of quantitatively representing each one for an appropriate level of comparison.
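One simple way to arrive at that quantitative representation, sketched here with invented ratings, is a likelihood-times-impact score that puts every scenario on the same scale:

```python
# Invented 1-5 ratings for the sample events above; the product gives a
# single score so that unlike scenarios can be compared on one scale.
events = {
    "system failure":        (4, 3),  # (likelihood, impact), each 1-5
    "loss of key personnel": (3, 4),
    "theft":                 (2, 3),
    "network outage":        (4, 4),
    "power failure":         (3, 5),
    "natural disaster":      (1, 5),
}

for name, (likelihood, impact) in sorted(
        events.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{name}: score {likelihood * impact}")
```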
We will, of course, need to seek out support from other areas of the organization. We may even need to go outside of our organization to find support for the method and the factors that we are using. For example, we can find good and well-accepted practices from organizations like ISACA, SANS, ISC2, and others. There may be various forms of networking round tables amongst senior executives in IT and security.
We can look at news organizations for current events that reflect the kinds of scenarios we're examining. Security-related studies published by PricewaterhouseCoopers, Symantec, and a host of others, along with white papers and other kinds of studies, can be very informative.
We can look to security training organizations such as ISACA, ISC2, and CompTIA, and then to vulnerability alerting services such as MITRE, the Department of Homeland Security, and others. One method of a general nature that we will use is the Plan-Do-Check-Act (PDCA) model, a form of total quality management. Here you see the four steps.
To Plan, we design and create our information security management system (ISMS). Do means we implement and operate the ISMS. As with all things, we must Check it periodically through monitoring and point-in-time examinations such as audits. And then, to Act, based on the findings, we will periodically have to upgrade and, as things change and evolve in our business world, modify the ISMS to reflect those changes. As with all of these processes, it is iterative. The inputs will come from many different sources, and the outputs will be frameworks, managed information assurance programs, and internal and external reporting requirements being met.
Now, the TQM components, using PDCA and others, will require various things to start them off: setting the goals, the scope, and the various targets that we're trying to achieve. We have to begin with a vision, which should be a very clear statement describing the purpose of the organization or of the study itself. We have to set strategic goals as directed by that vision, and these should be the objectives necessary to reach the vision. We will need key performance indicators, key goal indicators, and critical success factors, plus other kinds of key actions, to show how we're doing along the way toward meeting the objectives we've set for ourselves.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, and has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.