CISM: Domain 3 - Module 1
In Domain 3 of the CISM learning path, we'll cover the development and management of an information security program. We'll cover security program frameworks, scopes, and charters, as well as program alignment with business processes and objectives.
You'll learn about various security management frameworks, administrative activities, operations, and the performance of internal and external audits. We're going to further our discussion on metrics, and we're going to talk about specific controls.
- Understand how to set goals and strategies for managing information security
- Learn about the metrics used to measure security performance
This course is intended for anyone preparing for the Certified Information Security Manager (CISM) exam, or anyone who is simply interested in improving their knowledge of information security governance.
Before taking this course, we recommend taking the CISM Foundations learning path first.
There is no question that we have to meet our compliance objectives. In many contexts, 100% compliance is mandatory. In other contexts it may be possible to do with less, but we must weigh the required compliance level against the potential impact of attempting to achieve it, so that we understand whether compliance will be an avoidable impediment or an enabler that needs to be strengthened.
We have to measure technical compliance, and because it is technical, it's generally easy to measure and can often be automated. Measuring compliance with process standards is more difficult: it requires more constant monitoring, and it may also require review of logs and the performance of checklist activities. We also have to be clear on which category of compliance requirement is at work: statutory compliance, meaning external regulations that demand compliance with no option; contractual compliance, similarly required but contractual rather than regulatory in nature; and internal compliance, the constraints we impose on ourselves.
We always need to measure operational productivity. And once again, we need to adopt the business management methods that are used elsewhere, so that ours will be consistent with theirs. We have to find a way to calculate the work that is produced on a per-resource basis for a given time period. For example, we can look at the productivity of log analysis: measuring the number of entries analyzed per resource each hour might be one form of measurement. But we need to establish a baseline first, and then compare against it to judge how efficient we are. The baseline itself will have to be reviewed periodically and altered if conditions change.
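The log-analysis example above can be sketched as a simple calculation. This is a hypothetical illustration; the function name, the entry counts, and the staffing figures are all invented for the example.

```python
# Hypothetical sketch: log-analysis productivity measured against a baseline.
# All counts and staffing figures below are invented for illustration.

def entries_per_resource_hour(entries_analyzed: int, analysts: int, hours: float) -> float:
    """Work produced per resource per hour for a given period."""
    return entries_analyzed / (analysts * hours)

# Establish a baseline from a known-good period, then compare the current period to it.
baseline = entries_per_resource_hour(entries_analyzed=12_000, analysts=4, hours=8)  # 375.0/hr
current = entries_per_resource_hour(entries_analyzed=10_500, analysts=4, hours=8)

variance = (current - baseline) / baseline
print(f"baseline={baseline:.1f}/hr, current={current:.1f}/hr, variance={variance:+.1%}")
```

The variance against the baseline, rather than the raw count, is the figure worth reporting, since the baseline itself is what gets reviewed and adjusted when conditions change.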
As with every other business area, we need to be sure that we're doing cost-effectiveness analysis to ensure that we are getting the most value out of every dollar we have to spend. We need to do budget forecasts and compare them to final expenditures, so that we can show that we are being efficient and getting the results that we need. If we underestimate our costs, we may have to adjust how we arrive at these estimates: our methodology may be flawed, or there may be a gap.
We will always have to track our costs and results, so that we can measure our cost effectiveness for specific components in much the same manner as other business units. Cost needs to be total cost of ownership, including maintenance, operational and administrative costs, and any hidden costs that may be hard to quantify. And any time we are considering replacing one control with another, we need to compare the old and the new on a total cost of ownership basis.
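The old-versus-new control comparison might look like the following sketch. The dollar figures, cost categories, and three-year evaluation window are assumptions made up for the example.

```python
# Hypothetical sketch: comparing an existing control and a candidate replacement
# on a total-cost-of-ownership basis. All dollar figures are invented.

def total_cost_of_ownership(acquisition: int, annual_maintenance: int,
                            annual_operations: int, hidden_annual: int,
                            years: int) -> int:
    """TCO = up-front cost plus all recurring costs over the evaluation period."""
    return acquisition + years * (annual_maintenance + annual_operations + hidden_annual)

# The old control's acquisition cost is sunk, so it enters at zero.
old_control = total_cost_of_ownership(0, 8_000, 15_000, 2_000, years=3)
new_control = total_cost_of_ownership(20_000, 5_000, 9_000, 1_000, years=3)

print(f"old control 3-year TCO: ${old_control:,}")
print(f"new control 3-year TCO: ${new_control:,}")
```

Even a replacement with a significant acquisition cost can come out ahead once recurring and hidden costs are counted over the full period.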
When we look at measuring organizational awareness, we have to consider, for example, that vigilant employees are often more important than any technical control. Success is typically measured at the employee level, and, if necessary, we enlist human resources to help in this process.
Some valuable metrics might include records of initial training, so that we can establish a common level of initial awareness on the part of all employees. Then we can look at acceptance or acknowledgement of policies and user agreements and determine how many compliance issues there may be with those, again as a way of measuring how valuable, how effective the training was.
One way of testing knowledge retention would be to test after training using various forms of examinations or quizzes. These also can be considered audit elements for regulatory examinations and investigations.
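The awareness metrics described above reduce to a few simple rates. The headcounts below are invented for illustration.

```python
# Hypothetical sketch of awareness metrics: initial-training completion,
# policy/user-agreement acknowledgement, and post-training quiz pass rates.
# All headcounts are invented for illustration.

employees = 500
completed_training = 480
acknowledged_policy = 465
passed_quiz = 430

completion_rate = completed_training / employees
acknowledgement_rate = acknowledged_policy / employees
# Retention measured only among those who actually took the training.
retention_rate = passed_quiz / completed_training

print(f"training completion:    {completion_rate:.0%}")
print(f"policy acknowledgement: {acknowledgement_rate:.0%}")
print(f"knowledge retention:    {retention_rate:.0%}")
```

Kept period over period, these same rates also serve as the audit evidence mentioned above for regulatory examinations.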
Measuring the effectiveness of technical security architecture may in itself be relatively straightforward. We have an abundance of quantitative metrics, such as the number of probe and attack attempts that are repelled, attacks detected by our IDS or IPS capabilities, the number of compromises that nonetheless result, statistics on malware identified and how much of it is neutralized, downtime attributable to security flaws, and the number of messages examined by an IDS or IPS.
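A couple of the quantitative metrics listed above can be rolled up into rates, as in this sketch. The raw counts are invented, and which ratios matter most would depend on the program.

```python
# Hypothetical sketch: turning raw technical-architecture counts into rates.
# All counts below are invented for illustration.

attacks_detected = 310       # attacks detected by IDS/IPS
compromises = 4              # detections that still led to compromise
malware_identified = 95
malware_neutralized = 92

compromise_rate = compromises / attacks_detected
neutralization_rate = malware_neutralized / malware_identified

print(f"compromise rate among detected attacks: {compromise_rate:.2%}")
print(f"malware neutralization rate:            {neutralization_rate:.1%}")
```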
Along with these, there will be qualitative metrics: whether a technical mechanism has been tested and what the test results were; if security controls are applied in a layered fashion, how we test each layer and the relationships between layers; whether the mechanisms are properly configured and monitored in real time, tested by simulating various kinds of violating activities; and whether all critical systems stream events to personnel in real time, so that we can measure whether alerts are timely and sent to the right parties.
In the end, we'll have to take these metrics, both quantitative and qualitative, and summarize them in a dashboard-type presentation with drill-down capability for senior management. We also have to measure the effectiveness of management frameworks and their resources. We do this through issues management: how often issues occur, how well operational knowledge is documented and distributed, how many processes refer to standards, how many standards might therefore be missing, incomplete, or out of date, how well security roles and responsibilities are documented, whether every project plan incorporates security requirements, and how well security goals align with organizational goals are all examples of qualitative rather than quantitative measures. But in each case, we need to know what the baseline for determining effectiveness is, and what a gap would look like, so that we know where we may be missing or failing.
On a more quantitative basis, we can measure operational performance by looking at things such as time to detect, escalate, isolate, and contain incidents. Time between vulnerability detection and resolution can give us an indicator of process efficiency, as can the quantity, frequency, and severity of incidents that do occur. And as always, we have to have ways of measuring these and comparing them on an apples-to-apples basis.
Average time between a vendor's release of a patch and its rollout is another. A measure of control effectiveness is our process of auditing systems: how many have been audited, what the action plans were, and the time scale over which those actions were performed. And the number of changes released without change control approval shows process efficiency and management oversight.
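Time-based metrics like these fall out of incident and patch records directly. The following sketch computes mean time to detect, mean time to contain, and average patch-rollout lag; all timestamps are invented for the example.

```python
# Hypothetical sketch: mean time-to-detect (MTTD), mean time-to-contain (MTTC),
# and average patch-rollout lag from record timestamps. All dates are invented.

from datetime import datetime
from statistics import mean

incidents = [
    {"occurred": datetime(2023, 5, 1, 9, 0), "detected": datetime(2023, 5, 1, 9, 45),
     "contained": datetime(2023, 5, 1, 13, 0)},
    {"occurred": datetime(2023, 5, 8, 14, 0), "detected": datetime(2023, 5, 8, 16, 0),
     "contained": datetime(2023, 5, 9, 10, 0)},
]

mttd_hours = mean((i["detected"] - i["occurred"]).total_seconds() / 3600 for i in incidents)
mttc_hours = mean((i["contained"] - i["detected"]).total_seconds() / 3600 for i in incidents)

# (vendor release date, rollout completion date) pairs
patches = [("2023-04-03", "2023-04-10"), ("2023-04-17", "2023-04-28")]
lag_days = mean((datetime.fromisoformat(done) - datetime.fromisoformat(released)).days
                for released, done in patches)

print(f"MTTD: {mttd_hours:.2f} h, MTTC: {mttc_hours:.2f} h, patch lag: {lag_days:.1f} days")
```

Tracking these as trends against a baseline, rather than as one-off numbers, is what makes the apples-to-apples comparison mentioned above possible.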
Throughout all of this, the monitoring can be done and the metrics captured, but we have to have a sound and consistently performing communications method. This may be helped by implementing a centralized monitoring environment that improves visibility into information resources and tracks a key subset of monitored events that are kept in scope.
Commonly monitored events need to reflect the priorities of the enterprise, and as such need to be the priorities of our security program as well. Some events that we commonly monitor: failed access attempts, various forms of processing faults, and outages and faults due to design issues. There should be a direct relationship and correlation between design and these faults.
Changes to system configurations and justification for them. Privileged system access and activities performed. Technical security component faults, which may relate back to procurement processes. We will always need to have a well-thought-out process for responding to these events, whatever kind of an event it is. Analysts should therefore be properly trained in the various scenarios that we use.
As before, we need a tested escalation pathway, so that if results show failure potentials, flaws, or incidents, we can escalate them rapidly and through the right steps. Bear in mind that focusing on real-time events may take our attention away from other areas that require equal attention.
We should be looking at the most frequently targeted resources based on our research and our histories. The logs can reflect what has already happened and forecast what we might expect in the future. So let's examine our information security program development and see what success might look like. We should have specified what the outcomes of the information security program should look like. And we should have ways of capturing and measuring them to demonstrate where we have been strong and where we have fallen a little short.
Sometimes this can be difficult to translate directly into design concepts or into technologies and processes. Developing a security architecture, or using an existing one, can help us do a better job of that. Successful security program outcomes would include: how well the program is aligned with the business organization, how efficient and effective our risk management process is, what value is delivered (depending on how we've defined what value delivery consists of), and normal business efficiencies such as resource management, performance management, and assurance process integration.
A key element of this of course is strategic alignment. And as we've been saying through this course, we need to have security aligned with business goals, which will define critical elements and priorities for us. It will require frequent interaction with business owners.
The various topics that we'll have to cover are organizational information risk, risk tolerance and acceptance levels and how much variance will be allowed, the selection of control goals and standards to go along with these, getting agreement on these various elements, and defining what constraints in the environment could inhibit our achieving them.
If there is a security steering committee, these topics should typically be run through it for the oversight and opinion that can help define and guide the program. We've covered the risk management process in great detail; at this point in the seminar, any questions about it will concern material we have already covered.
Let's reemphasize, though, that risk management is a core function for any security program and drives a great deal of what we do. It is one of the more critical aspects, but not the only one. Value delivery centers on delivering what we promise: the program should create the desired level of security effectively and efficiently.
Without value delivery defined as an objective, the program may get off track and start down other pathways that diffuse our achievement of its objectives. Delivering value means that the program is being adopted and seen as a normal business activity rather than the outside process security is often perceived as being. And as we've said before, the program should be in a constant state of improvement and evolution, rather than disruptive change or repetitive remediation.
Like every other business effort, we need to be efficient in the way that we manage our resources. These come in the form of human, financial, technical, and knowledge resources, and they need to be applied as a cohesive unit. Knowledge is, of course, particularly important. Good resource management captures knowledge and puts what we have learned in the course of operations into practice. It also makes sure that knowledge is available to all parties who require it, so that the organization itself learns, as well as the security program management. It means that we will evolve our security practices and processes, and that we will improve their documentation, their training, and their application.
As we've said before, we have to align with business practices. We will, of course, need performance measurement to show that we are managing our performance correctly. The security strategy that we've developed will have to identify and make clear how it will be monitored and measured. There is an abundance of metrics, and our selection of which ones to use may need to be revisited and changed from time to time.
We need to measure how well we're doing in terms of selection, design, implementation, follow-up, remediation, and all the other activities involved in performance management. We have to be sure that we measure what we have built as well as what we are building and how. It may be that we need an independent auditor to give us an objective look, so that we are not blind to our own failures and remain aware of our own progress.
We need to be sure that our assurance providers are integrated with the rest of our program. These assurance providers can help us identify and manage risk and monitor control effectiveness, sometimes with greater visibility and objectivity. This can be based on expertise in a given area that may not be generally available. Certain examples might include business managers, IT managers, financial directors, and so on.
In the end it involves anyone who can contribute to security effectiveness. Without doubt, we need to cultivate good relationships with assurance providers and integrate their activities with our program activities to get the full benefit. Now we've come to the end of our section. We're going to pause here briefly because we're going to change into our final domain, information security incident management.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. As a professional educator, he has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.