This is the final module of domain two and looks at metrics, monitoring, and reporting within the sphere of risk management. We look at how to measure risk through key risk indicators (KRIs) and how monitoring of these can help us to avoid risks in the first place.
We also look at the controls that can be put in place to reduce risk overall in our organizations.
Learning Objectives
- Understand how to monitor and measure risk
- Mitigate risk through controls
Intended Audience
This course is intended for anyone preparing for the Certified Information Security Manager (CISM) exam or anyone who is simply interested in improving their knowledge of information security governance.
Prerequisites
Before taking this course, we recommend taking the CISM Foundations learning path first.
So the technical control components and the architecture need to be communicated, again targeted to the audience. Now, a common pitfall is an over-reliance on technology, when we already know that the majority of security events and breaches are caused by human error. We need to be sure that we have administrative and physical controls playing their part in the total security program.
When we design the architecture, we need to look at specific questions that need to be answered by it. Some are questions of placement: for example, where are controls located? Are we using defense in depth, such that the controls are layered? Do we need control redundancy, basically a plan A and a plan B, or a primary and a failover, in every case? And is there any uncontrolled access channel that we haven't accounted for?
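As a rough illustration of layered placement, the sketch below chains several independent checks so that a request must pass every layer, and a failure at any one of them denies access. This is only a minimal sketch; the networks, sessions, and roles used here are hypothetical and not prescribed by the course.

```python
# Illustrative sketch of layered (defense-in-depth) control placement.
# All names and data here are hypothetical.
from ipaddress import ip_address, ip_network

ALLOWED_NETWORKS = [ip_network("10.0.0.0/8")]    # network-layer control
ACTIVE_SESSIONS = {"sess-123": "alice"}          # authentication control
ROLE_PERMISSIONS = {"alice": {"read_report"}}    # authorization control

def network_check(source_ip: str) -> bool:
    return any(ip_address(source_ip) in net for net in ALLOWED_NETWORKS)

def authentication_check(session_id: str):
    return ACTIVE_SESSIONS.get(session_id)

def authorization_check(user: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(user, set())

def request_allowed(source_ip: str, session_id: str, action: str) -> bool:
    # Each layer is an independent control; any single failure denies the
    # request, so compromising one layer alone does not bypass the stack.
    if not network_check(source_ip):
        return False
    user = authentication_check(session_id)
    if user is None:
        return False
    return authorization_check(user, action)

print(request_allowed("10.1.2.3", "sess-123", "read_report"))   # True
print(request_allowed("192.0.2.1", "sess-123", "read_report"))  # False: stopped at the network layer
```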
Other questions are about effectiveness. Are the controls reliable? Do they perform properly under all normal circumstances? In other words, are they meeting minimum security requirements, assuming of course we have those properly specified? Do they inhibit productivity? This is a very important question, because controls that inhibit productivity unduly may be candidates for being withdrawn or replaced.
We should try very hard to align the controls and their functionality with productivity requirements to keep them from doing that. Are they manual or are they automated? Once again the question arises: how dependent are we on automated technology, and how reliable is it? Are they monitored? It makes little sense to have a control in place and not monitor it. Are they monitoring in real time or after the fact? The truth is, some controls will monitor in real time while others only detect and report after something has occurred. But we need to be clear on which ones do which job, so that we know exactly what the results each one reports will mean. And then, how easily could they be circumvented?
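To make the real-time versus after-the-fact distinction concrete, here is a minimal, hypothetical sketch: the same failed-login condition triggers an immediate alert in one path, but is only discovered later by a batch log review in the other. The threshold and event structure are assumptions made for illustration.

```python
# Hypothetical sketch contrasting real-time monitoring with after-the-fact review.
from datetime import datetime, timezone

FAILED_LOGIN_THRESHOLD = 5
event_log = []

def record_failed_login(user: str, recent_failures: int) -> None:
    """Real-time control: evaluates the condition as the event arrives."""
    event_log.append({"time": datetime.now(timezone.utc), "user": user,
                      "failures": recent_failures})
    if recent_failures >= FAILED_LOGIN_THRESHOLD:
        print(f"ALERT (real time): possible brute force against {user}")

def nightly_log_review():
    """Detective control: finds the same condition only after the fact."""
    return [e for e in event_log if e["failures"] >= FAILED_LOGIN_THRESHOLD]

record_failed_login("bob", 3)   # below threshold, no alert
record_failed_login("bob", 6)   # immediate alert
print(nightly_log_review())     # the same event surfaces again in the batch review
```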
An intelligent, well-trained hacker could reasonably work out how to get around almost any control. As part of our vulnerability assessments, this needs to be something that can be determined with reasonable certainty, so that we can either plan for it to occur and decide what our response will be, or prevent it in the first place.
We have questions of efficiency. How broadly do the controls protect the environment? Are controls specific to one resource or asset? Are they fully used? Is a control a single point of application failure, which of course would make it a candidate for change or removal? Is a control a single point of security failure? Critical controls that are a single point of failure should be examined for backups or alternatives. Is there unnecessary redundancy? This needs to be examined, because unnecessary redundancy complicates our environment.
We have to address questions of policy. Do they fail secure or open? Are controls restrictive or permissive? Is the principle of least privilege enforced? And does the control configuration align with policy, meaning that the way it performs accomplishes a policy requirement?
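As a simple, hypothetical illustration of fail-secure, restrictive behaviour and least privilege, the sketch below defaults to deny: an unknown role, an unlisted action, or an error inside the control itself all result in access being refused. The roles and actions are invented for the example.

```python
# Hypothetical sketch of a fail-secure, least-privilege access decision.
LEAST_PRIVILEGE_ROLES = {
    "analyst": {"read_report"},                    # only what the job requires
    "admin": {"read_report", "change_control"},
}

def is_permitted(role: str, action: str) -> bool:
    try:
        # Restrictive / default-deny: anything not explicitly granted is refused.
        return action in LEAST_PRIVILEGE_ROLES.get(role, set())
    except Exception:
        # Fail secure: if the control itself breaks, deny rather than allow.
        return False

print(is_permitted("analyst", "read_report"))     # True
print(is_permitted("analyst", "change_control"))  # False: not granted to this role
print(is_permitted("intruder", "read_report"))    # False: unknown role, default deny
```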
Then we have to answer questions of implementation. Is the implementation in line with policies and standards? Are controls self-protecting? In other words, do they reveal tampering or any other misapplication or misconfiguration? Will the controls alert personnel? Any control that does not alert on an attempt to circumvent or deactivate it without authorization will do little to be effective in its job. Have the controls been tested? Every control put in place needs to be tested to ensure that we know exactly how it will work, exactly how it will fail, and how we can tell the difference in the results it provides.
Are control activities logged? Typically, anyone who manipulates any of the controls must have a high level of privilege associated with their access. These actions should be logged, and there should be an audit trail of authorization to show that they are performing an authorized task. Do controls meet goals? This needs to be examined every step of the way when a control is under consideration, and it must be demonstrated clearly that it does or that it doesn't; in the latter case, an alternative needs to be found.
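To make the logging point concrete, here is a minimal, hypothetical sketch in which every change to a control is recorded along with who made it and under which approval, producing an audit trail that can later be matched against authorizations. The control name and change-reference format are assumptions for illustration only.

```python
# Hypothetical sketch of an audit trail for privileged control changes.
from datetime import datetime, timezone

audit_trail = []

def change_control_setting(user: str, control: str, new_value: str, approval_ref: str) -> None:
    """Records who changed which control, when, and under which approval."""
    audit_trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "control": control,
        "new_value": new_value,
        "approval": approval_ref,   # ties the change back to an authorized task
    })

change_control_setting("alice", "remote_access_rule", "deny inbound telnet", "CHG-1001")
for entry in audit_trail:
    print(entry)
```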
Are control goals mapped to the organizational goals? This of course would be a primary consideration. As changes are made, controls can become less effective, which means that the controls themselves need to be re-examined as well. Controls, like any other aspect of our systems or our policies and procedures, need to be periodically tested to ensure that they continue to be as effective as they were when they were first selected and implemented. And if these tests reveal that they are not, again they become candidates for evaluation and replacement.
Changes to controls must, of course, go through a formal change management and approval process requiring management review and sign-off. And if any training is required, the controls cannot be rolled out and made effective until that training has been completed. There should also be an effort to establish a set of baseline controls. These are the minimums that go into effect and provide the baseline, the point of departure for any specialized controls needed above and beyond it.
Examples would include authentication logging, role-based access, and data transmission confidentiality. As baseline controls, these would all be things that happen every time under normal circumstances.
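One way such a baseline might be expressed is as a simple configuration profile that every new system is checked against before anything specialized is layered on top. This is only a sketch; the control names and the shape of the profile are assumptions, not something the course prescribes.

```python
# Hypothetical baseline-control profile: the minimum applied to every system.
BASELINE_CONTROLS = {
    "authentication_logging": True,   # every logon attempt is logged
    "role_based_access": True,        # access decisions driven by role, not individual
    "tls_in_transit": True,           # data transmission confidentiality
}

def missing_baseline_controls(system_config: dict) -> list:
    """Returns the baseline controls a proposed system configuration does not satisfy."""
    return [name for name, required in BASELINE_CONTROLS.items()
            if required and not system_config.get(name, False)]

new_system = {"authentication_logging": True, "role_based_access": True}
print(missing_baseline_controls(new_system))  # ['tls_in_transit']
```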
As always, before deploying a new system or control set, these should be tested to ensure that they align with and meet the security requirements. We have to be sure that they integrate well with the system's interfaces and that they actively counter all vulnerabilities within their scope.
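One lightweight way to exercise that kind of pre-deployment check is a small automated test that confirms a control passes the normal case and rejects the insecure one. This is a hedged sketch; the transmission-confidentiality control and its configuration keys are hypothetical.

```python
# Hypothetical pre-deployment test: verify a control meets its stated security
# requirement before the system it protects is rolled out.
def transmission_is_encrypted(channel_config: dict) -> bool:
    return channel_config.get("protocol") == "https"

def test_control_before_rollout() -> None:
    assert transmission_is_encrypted({"protocol": "https"}), "must pass the normal case"
    assert not transmission_is_encrypted({"protocol": "http"}), "must reject the insecure case"
    print("control behaves as specified; safe to include in rollout")

test_control_before_rollout()
```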
We have to be sure that the system provides security administration and feedback, so that the administration can be performed without undue complexity and the feedback indicates that it is accomplishing its mission. If flaws are detected, the project team needs to work together to resolve them and take whatever action is appropriate. And if an issue can't be addressed before rollout, we have to estimate the risk that we'll carry into the production environment and compensate for it appropriately.
If a system is deployed with issues, as may often be the case, we have to document them and agree on a timeframe for fixing them. And if no resolution is available, we have to track and assess the issues periodically, so that should conditions change and require a response, we can act in a timely and effective manner.
So let's examine, in section 42, what success of this program would look like. Briefly, evidence of effective risk management shows that we can handle the required capacities and features, and that we can anticipate any sort of routine performance demand and any sort of routine disruption.
We have monitoring in place; we operate within the risk appetite and tolerance set by senior management; and we have the means to measure, to follow up, to ensure that our program is functioning in all proper ways, and to handle any exceptions.
The information security manager also has the authority, along with the responsibility, to carry out this function. We're able to look at all of our assets and identify them clearly. They've been classified and have an owner assigned to them, to ensure that all of the program elements relevant to each particular asset are being performed and reported on, and that all of these assets are prioritized as required by the business.
We've come to the end of our section, and with it we've reached the conclusion of domain two, information risk management. We'll begin our next section on domain three, information security program development and management.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standing as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998, and nearly 2,500 candidates in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.