CISM: Domain 3 - Module 1

Development and Management - The Action Plan: Part 1

Overview
Difficulty
Beginner
Duration
35m
Description

In Domain 3 of the CISM learning path, we'll cover the development and management of information security. We'll cover security program frameworks, scopes, and charters, as well as program alignment with business processes and objectives.

You'll learn about various security management frameworks, administrative activities, operations, and the performance of internal and external audits. We're going to further our discussion on metrics, and we're going to talk about specific controls.

Learning Objectives

  • Understand how to set goals and strategies for managing information security
  • Learn about the metrics used to measure security performance

Intended Audience

This course is intended for anyone preparing for the Certified Information Security Management exam or anyone who is simply interested in improving their knowledge of information security governance.

Prerequisites

Before taking this course, we recommend taking the CISM Foundations learning path first.

Transcript

As you see here in this graphic, there are pre-incident developments that we need to undertake. And then there are post-incident actions that we have to take once an incident has occurred. We start with preparation to ensure that we design a plan and test it to make sure that we have the ability to implement a plan that will be effective in dealing with incidents that occur.

We have to protect. This is our proactive prevention of incidents from occurring in the first place. Here we need to be sure that we understand and establish protection levels. Any time that the environment or any other significant aspect changes, we need to implement changes or consider implementing changes in this process. We always want to evolve towards a more appropriate level of protection. Simply more may not solve the problem. And we need to look at input for future detection as the environment of attacks and incidents changes.

Then we need to, of course, have detection mechanisms. We should have proactive detection in place to give us warning as early as possible, and then reactive detection so that we can find out what is actually happening, not just that something is happening, but what. At this point we'll need to be able to do triage. Not every incident is of the same severity or the same type. Some will be critical. Some will not be, but they will be incidents nonetheless.

So using the Carnegie Mellon University Software Engineering Institute incident management process, we'll be able to do triage: to place problems into categories reflecting their severity and their type. We'll be able to correlate data with problems, prioritize these problems, and then assign the appropriate personnel to them. And then comes our response, which will be based on the type of incident: there will be a tactical response, a management response, and very likely a legal response. And all of these elements need to be included in our incident response planning.
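The triage flow just described, categorize by severity and type, prioritize, then assign the appropriate personnel, can be sketched in code. This is a minimal illustration, not the CMU process itself; the severity categories, incident fields, and responder names are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical severity ranking -- the real categories would come from
# your own incident management process documentation.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Incident:
    ident: str
    category: str      # e.g. "malware", "phishing" (illustrative types)
    severity: str      # a key of SEVERITY_RANK
    assignee: str = ""

def triage(incidents, on_call):
    """Order incidents by severity, then assign the appropriate personnel.

    `on_call` maps an incident category to a responder; anything
    uncategorised falls back to a general queue.
    """
    ordered = sorted(incidents, key=lambda i: SEVERITY_RANK[i.severity])
    for inc in ordered:
        inc.assignee = on_call.get(inc.category, "general-queue")
    return ordered

queue = triage(
    [Incident("INC-2", "phishing", "medium"),
     Incident("INC-1", "malware", "critical")],
    on_call={"malware": "ir-team", "phishing": "soc-tier1"},
)
# The critical malware incident comes first and goes to the IR team.
```

The point of the sketch is the ordering of steps, severity drives priority, category drives assignment, not the specific data model.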

So moving on, we now come back to the subject of metrics and monitoring. It is a fallacy to assume that technology can solve all the problems. It is equally fallacious to think that metrics will tell us everything we need to know. The selection process for metrics is thus very important because, as we're about to explore, certain metrics, in fact all metrics, will in some way fall short.

Technical metrics are great for measuring how specific controls are working, but they can only tell us so much. They're not so great at telling us how well the program aligns with organizational and programmatic goals. So we need to address the following questions. A very general one is: how secure is the organization? And this can vary from one moment or one day or one week or one month to the next.

A very nebulous question, but one that will always be asked, is: how much security is enough? What are the most effective solutions? These of course are contextually specific, because one control may not be effective at all in one situation where it may be completely effective in a different one. How do we determine the degree of risk? What do we base that determination on? What are our metrics? What's the model? Are we moving in the right direction?

It's easy to assume that moving from less secure, however that's defined, to more secure, again however that's defined, is moving in the right direction but more is a little bit too broad. We need to have security that is appropriate to the context always. And so moving in the right direction means we're recognizing what takes place in the environment and in the enterprise and moving in the direction of ensuring that the enterprise is better prepared, better secured and better able to respond to change and to incidents.

Without question, we always have to ask: what impact would be a catastrophic one, and what about a breach? Technical metrics will address only the quantitative parts of these questions, but they don't answer the complete question. We will have to have a different type of metric, and these can only come once we know what our goals for them will be.

So in order to determine program effectiveness, the metrics that we need to examine will include things such as CMMI levels, key goal indicators, key performance indicators, key risk indicators, possibly balanced scorecards, Six Sigma for quality, ISO 9000 quality indicators, or COBIT 5. Whatever indicators we're going to use, whatever metrics we're going to extract from the information sources, these need to be consistent so that we can compare them to others and give each an equal measure of emphasis where it is appropriate.
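One way to make heterogeneous indicators consistent, as described above, is to rescale each onto a common range before comparing them. This is a minimal sketch of that idea, not something prescribed by CISM, CMMI, or COBIT; the ranges and example readings are assumptions for illustration.

```python
def normalize(value, low, high):
    """Rescale a raw metric reading onto a common 0-1 scale so that
    indicators from different sources (KPIs, KRIs, maturity levels, ...)
    can be compared with equal emphasis. Readings outside the declared
    range are clamped."""
    if high == low:
        raise ValueError("degenerate range")
    return max(0.0, min(1.0, (value - low) / (high - low)))

# Hypothetical readings: a CMMI maturity level (scale 1-5) and a
# patch-latency KPI measured in days (0 is best, 30 is worst, so we
# invert it to make "higher is better" hold for both).
cmmi_score = normalize(3, 1, 5)           # midpoint of the 1-5 scale
patch_score = 1 - normalize(12, 0, 30)    # 12-day latency, inverted
```

With both indicators on the same 0-1, higher-is-better scale, they can be averaged or weighted without one unit dominating the other.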

Some example approaches might include the regular conduct of risk assessments to track how these elements change over time. We might use tactical methods such as vulnerability scanning and penetration testing. We might have reporting from change management activities and how they affect our risk profiles. And then of course, we have to adapt our training techniques to ensure that help desk personnel and all others know how to deal with various types of attacks, such as social engineering.

When we're going to measure security management performance, we have to rely on the way that we've laid out our plan in the first place. It's not just about metrics. It's showing progress towards evolving into a more secure state appropriate to the enterprise and its risk environment, rather than continual repetitive remediation.

The three basic steps for measuring success in this way will include how we've defined and how we measure the goals, tracking the most appropriate metrics which means we've been through a proper vetting process to determine which are appropriate metrics, and then periodically analyzing results to know where to make changes, because this is a live process and will require reexamination from time to time.

Now we've already said that our security risk management program is a multi-faceted program. We need to come up with ways that allow us to examine and measure risk and potential loss. We will, however, need to be sure that we've established how to maintain a balance between addressing risk and loss and maintaining usability in the working environment.

Techniques that we need to employ will include technical vulnerability management, looking at technical vulnerabilities and how effective our program is in identifying and resolving them. General risk management, looking at risks and how each was addressed to determine appropriateness and effectiveness. And then ultimately, loss prevention, taking a look at losses that have been incurred over time and seeing how many were preventable and what actions were taken subsequent to their first occurrence.

Now these of course will result in quantitative measurements. They need to be included, but they need to be joined with qualitative approaches in which we will ask questions such as: do risk management activities occur as scheduled? Are there executive oversight and review activities, and do these occur on a regular basis? What about incident response and business continuity plans being tested, and what were the results? Have asset inventories and risk analyses been brought up to date? And is there stakeholder consensus on the amount of acceptable risk?
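The qualitative questions above lend themselves to a repeatable checklist, so that each review cycle asks the same things and gaps stand out. A minimal sketch; the wording of the checks follows the transcript, and the answers supplied are purely illustrative.

```python
# Checklist of the qualitative review questions from the discussion above.
QUALITATIVE_CHECKS = [
    "risk management activities occur as scheduled",
    "executive oversight and review occur regularly",
    "incident response and continuity plans tested",
    "asset inventories and risk analyses up to date",
    "stakeholder consensus on acceptable risk",
]

def review(answers):
    """Return the checks that failed, given a mapping of check -> bool.
    An unanswered check counts as a gap rather than a pass."""
    return [check for check in QUALITATIVE_CHECKS
            if not answers.get(check, False)]

gaps = review({
    "risk management activities occur as scheduled": True,
    "executive oversight and review occur regularly": True,
    "incident response and continuity plans tested": False,
    "asset inventories and risk analyses up to date": True,
    "stakeholder consensus on acceptable risk": True,
})
# gaps -> ["incident response and continuity plans tested"]
```

Treating an unanswered question as a failure is a deliberate choice here: a qualitative review that silently skips a question is itself a gap.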

Part of determining the effectiveness of any program is the support of organizational objectives. In this case, the support of organizational objectives by the security program. We have to ensure that there is a very positive correlation between the business milestones and the security goals.

Once again, we are trying to make security more of an enabler and less of an impediment. We have to determine the number of security goals successfully completed. As you see, this is just like any other of the business areas: they will have goals, and they have to show progress towards, or accomplishment of, each of them, and in security we must do the same.

We have to determine if any business goals were not met, and whether or not security played a role in keeping that business goal from being met. In such a case, the consequence could be that we lose support, or that there's criticism, a loss of budget, a loss of staffing. But in the end, what we have to do is realign the security goals to ensure that this doesn't happen again at any point where it may be avoidable. We have to continue to measure the strength of consensus amongst stakeholders, so that they believe and support that our security goals are proper and appropriate.
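The alignment measures discussed in the last few paragraphs, the share of security goals completed and the business goals that security kept from being met, can be sketched as a simple report. All goal names and fields here are hypothetical.

```python
def alignment_report(security_goals, blocked_business_goals):
    """Summarize security/business alignment: the completion rate of
    security goals, and the blocked business goals where security was
    a factor (these are the candidates for goal realignment)."""
    done = sum(1 for g in security_goals if g["completed"])
    return {
        "completion_rate": done / len(security_goals),
        "realign": [g for g in blocked_business_goals
                    if g["security_was_factor"]],
    }

report = alignment_report(
    security_goals=[{"name": "mfa rollout", "completed": True},
                    {"name": "dlp pilot", "completed": False}],
    blocked_business_goals=[{"name": "partner portal launch",
                             "security_was_factor": True}],
)
# report["completion_rate"] -> 0.5, with one goal flagged for realignment
```

The completion rate answers "how did security do against its own goals," while the realignment list answers the harder question of where security impeded the business.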

About the Author
Ross Leo
Instructor

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.


Upon attaining his CISSP in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, training and certifying nearly 8500 CISSP candidates since 1998, and nearly 2500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.