In this course, we will discuss various vitally important metrics used to determine how well we have mitigated risk and how closely we have matched the requirements of our enterprise. These metrics include Annualized Loss Expectancy (ALE), Recovery Time Objective (RTO), Recovery Point Objective (RPO), Service Delivery Objectives (SDO), Maximum Tolerable Outage/Downtime (MTO/MTD), and Allowable Interruption Window (AIW).
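As a quick refresher on the first of these metrics, ALE is conventionally computed from Single Loss Expectancy (SLE) and the Annualized Rate of Occurrence (ARO). The sketch below uses illustrative numbers, not figures from the course material:

```python
# Annualized Loss Expectancy (ALE) -- standard formulas with
# illustrative numbers (not taken from the course material).

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = asset value x exposure factor (fraction of value lost per incident)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO (expected number of incidents per year)."""
    return sle * aro

sle = single_loss_expectancy(asset_value=200_000, exposure_factor=0.25)
ale = annualized_loss_expectancy(sle, aro=0.5)  # one incident every two years
print(sle, ale)  # 50000.0 25000.0
```

An ALE of 25,000 here would suggest spending up to that amount per year on controls against this particular risk.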
We then move on to look at how these metrics can be applied to business continuity (BC) and disaster recovery (DR) planning and we'll also have a look at BC and DR in general, how it works, and the associated processes and techniques. Finally, we move on to testing BC/DR planning and the types of tests we can use.
If you have any feedback relating to this course, please reach out to us at firstname.lastname@example.org.
- Learn about the metrics for measuring performance in managing risk
- Get a solid understanding of business continuity and disaster recovery
- Understand how to test business continuity and disaster recovery practices
This course is intended for those looking to take the CISM (Certified Information Security Manager) exam or anyone who wants to improve their understanding of information security.
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.
So now we're going to move into section 11, where we are going to explore the final phases of disaster recovery and business continuity planning. As I mentioned earlier, testing of the plan must be of a material nature so that we know we have actually exercised the plan and can be assured that recovery will occur.
When tested, the plans should focus on identifying gaps, testing timelines, confirming strategy effectiveness, evaluating the performance of personnel, and determining how appropriate and accurate the plan actually is when put to use. Most regulations that address business continuity and disaster recovery planning in regulated industries require that the test be performed at least annually, and in some cases twice annually.
It is always sound practice to retest the plan after a major revision has been undertaken. This would include any changes derived from changes in key personnel, technologies, facilities and locations, or lines of business. It is important, and often overlooked, that when each test type is conducted, the potential for disruption caused simply by conducting the test is minimized and that the affected business areas are notified that the test is being conducted.
Surprise tests are rarely a good idea and can in fact cause a disaster to occur. The test itself should also be recorded from start to finish, to capture the information that will facilitate a proper update. Testing the plan requires proper planning in and of itself. We have to remember that the plan is not a plan until it's tested and proven sound and functional. Otherwise, a false sense of security will result, and the company may in fact be at far greater risk than if it had no plan at all.
So we have the exercise of planning the test of the plan, so to speak. As with any plan, we must develop the test objectives that will help us measure whether the test was a success or not, develop the methodology by which the plan itself will be tested, decide on the evaluation criteria and the manner in which the after-action review will be performed, and determine how to capture recommendations for later implementation. It can be a regulatory requirement that third parties monitor, or in fact conduct, the test to ensure objective evaluation of the test and its results.
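The elements just listed, objectives, methodology, evaluation criteria, third-party oversight, and captured recommendations, can be recorded in a simple structure. This is a minimal sketch with hypothetical field names, not a format prescribed by any regulation:

```python
from dataclasses import dataclass, field

# A minimal sketch of a BC/DR test-plan record; field names are
# illustrative, not mandated by any standard or regulator.
@dataclass
class TestPlan:
    test_type: str                      # e.g. "checklist", "structured walkthrough"
    objectives: list[str]               # what success looks like
    methodology: str                    # how the plan will be exercised
    evaluation_criteria: list[str]      # how the results will be judged
    third_party_observer: bool = False  # may itself be a compliance requirement
    recommendations: list[str] = field(default_factory=list)  # filled in after the test

plan = TestPlan(
    test_type="structured walkthrough",
    objectives=["Confirm recovery timelines", "Identify training gaps"],
    methodology="Group discussion of each recovery step on paper",
    evaluation_criteria=["All steps reviewed", "Gaps logged with owners"],
    third_party_observer=True,
)
```

Keeping the recommendations field empty until the after-action review mirrors the sequence described above: objectives and criteria are fixed before the test, lessons learned are captured after it.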
In some industries, this may be a compliance requirement. If so, it must be included in the planning of the test itself. It is a curious observation that if your plan appears to succeed completely, it may be that something has not been accounted for or something has been missed.
On the more positive side, it may indicate that your planning efforts and your teams are ready to go to the next phase and conduct more deeply complex plan testing exercises. As we described in the previous section, multiple types of tests are available to conduct.
As we move through the mature process of developing the plan, here you see a further explanation of those test types. The best course of action is to recognize that if this is the first serious planning that has been done in this particular area, it is often best to start with a simple test type and progress towards the more complex or more mature types of tests.
In every kind of test, a key aspect of the overall planning process should be ensuring that the possibility of disruption is kept to an absolute minimum, and that alternative methods of reaction are planned for in the event that a disruption still occurs.
Here you see a table showing what each of these test types represents. As we can see, the purpose of each test and the mechanism by which it is carried out range from paper-based reviews to progressively deeper exercises of how fully the plan is put to work.
The checklist is a very good preliminary tool to ensure that, as the plan has been prepared, no step and no feature has been missed. The structured walkthrough, also a paper-based test involving group discussion, is a way to make sure that everyone's skill level has been properly attained and that any training gaps that may still exist can be addressed.
Both the simulation and the parallel are active preparedness tests since they both exercise the systems that are going to be used in place of the normal operational ones. A key element of these two test types is that the operational systems themselves are not shut down during the course of the test but are instead put into an idle mode while the simulation or the parallel is run on what will be the system that acts as the failover.
Typically, full-scale tests and full interruption tests are not required by regulation, as these tend to be too costly and have a marked potential to be too disruptive if anything goes wrong with the test. Nevertheless, the other test types must be conducted if the plan itself is ever to be proven and relied upon.
In each type of test, the test phases include a pretest phase used to set the stage for the actual test to prepare the workforce and the participants to conduct the test. Then comes the test phase itself. A simulation is often employed to give the sense of a real emergency and to better exercise more aspects of the plan more fully.
The post-test phase covers the after-action review and the evaluation of the conduct of the plan, the recording of all events, and the effort to evaluate and capture lessons learned from the exercise. During the conduct of any test, all steps at all levels, and each team, should be recorded for later review. This of course means that quantitative and qualitative metrics should have been determined in advance as evaluators of how the test went.
Examples would include elapsed time for each test component, actual work performed versus what was assumed to be required, any numerical value or percentage reflected in any critical aspect of the plan and its test, and an evaluation of the accuracy of the recorded data that reflects how the test was performed and how it turned out.
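The quantitative side of this evaluation can be checked mechanically. This sketch compares elapsed recovery time per test component against its target objective (for example, the RTO for that system); the component names and times are invented for illustration:

```python
# Compare elapsed recovery time per test component against its target
# objective (e.g. the RTO for that system). Names and numbers are invented.

def evaluate_components(results: dict[str, float],
                        targets: dict[str, float]) -> dict[str, bool]:
    """Return, per component, whether the elapsed time met the target."""
    return {name: results[name] <= targets[name] for name in targets}

targets = {"restore database": 4.0, "failover network": 1.0, "notify staff": 0.5}  # hours
results = {"restore database": 3.5, "failover network": 1.2, "notify staff": 0.4}

outcome = evaluate_components(results, targets)
print(outcome)  # {'restore database': True, 'failover network': False, 'notify staff': True}
```

A failed component such as the network failover above is exactly the kind of gap the after-action review should capture as a recommendation for the next plan revision.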
Each test type should have a facilitator or some form of director to oversee the conduct of the test. Even the checklist requires a co-leader to gather the checklists and extract the important elements from each submission. These tests should be managed from a central location within the organization to ensure proper oversight of the overall effort and capture of all the vital elements, ensuring a consistent and uniform management effort over the plan and its maintenance. And we've come to the end of our section; we're going to pause here for a short break and then continue in a few moments.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. As a professional educator, he has trained and certified nearly 8500 CISSP candidates since 1998, and nearly 2500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.