CISSP: Domain 6, Module 2
The course is part of this learning path
This course is the 2nd of 3 modules of Domain 6 of the CISSP, covering Security Testing and Assessment.
The objectives of this course are to provide you with an understanding of:
- System operation and maintenance
- Software testing limitations
- Common structural coverage
- Definition-based testing
- Types of functional testing
- Levels of development testing
- Negative/misuse case testing
- Interface testing
- The role of the moderator
- Information security continuous monitoring (ISCM)
- Implementing and understanding metrics
This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
So we're going to look at the information security continuous monitoring function, or ISCM. We're going to discuss metrics, and we're going to review and modify our strategy. Continuous monitoring maintains an ongoing awareness of information security, vulnerabilities, and threats to support organizational risk-management decisions. Continuous monitoring also serves as a compensating control for risks that may be of a serious nature, but that we have no operationally or cost-effective way to address directly.
Now, working with the ISCM program, we have to meet certain criteria in order to get the proper value out of it. It must be grounded in a clear understanding of the organization's risk tolerance. This helps officials set priorities, manage risk consistently throughout the organization, and properly define, set, and enforce policy. We must, of course, include metrics that provide meaningful indications of security status at all organizational tiers. And we have to ensure the continued effectiveness of all controls, or identify problem areas where the controls in place must be revisited, with consideration given to replacing or otherwise adjusting a control to make it more effective.
An ISCM can help verify that we have achieved compliance with the information security requirements. The ISCM is informed by all organizational IT assets, and helps to maintain visibility into the security of those assets. It ensures that with this visibility, we have knowledge and control of changes to organizational systems and environments of operation. And through it, we are able to maintain awareness of threats and vulnerabilities, and take prompt, corrective action.
Now, this program typically requires organization-wide involvement, or at least organization-wide input. The program is established to collect information in accordance with pre-established metrics, and those metrics themselves have been examined to ensure that they are relevant and provide actionable intelligence. Organization-wide risk monitoring, therefore, cannot be achieved efficiently through manual processes alone.
So looking at our ISCM development process, we need to discuss and define an ISCM strategy. The program needs to be established so that we know what it is supposed to accomplish and the workflow through which we will accomplish it. Then, of course, we have to implement it, and implement it at least as well as we have defined and designed it, so that it collects the security-related information that is of value to the organization. Through analysis of the data collected, we will be able to create actionable intelligence, report findings, and respond to those findings with corrective actions. And like all of our programs, it will require periodic review and update of both the program and the strategy.
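The six-step cycle just described can be sketched as a simple loop. This is an illustrative sketch only, not from the source material; the step names and handler structure are hypothetical labels for the process described above.

```python
# Hypothetical sketch of the ISCM development cycle described above:
# define, establish, implement, analyze/report, respond, then review
# and update -- and then the cycle repeats. All names are illustrative.

ISCM_STEPS = [
    "define_strategy",      # ground the program in organizational risk tolerance
    "establish_program",    # decide what it must accomplish and the workflow
    "implement_program",    # collect security-related information of value
    "analyze_and_report",   # turn raw data into actionable intelligence
    "respond_to_findings",  # take prompt, corrective action
    "review_and_update",    # periodically revisit the program and strategy
]

def run_iscm_cycle(handlers: dict, context: dict) -> dict:
    """Run one pass of the ISCM cycle, threading context through each step."""
    for step in ISCM_STEPS:
        context = handlers[step](context)
    return context
```

In practice each handler would be a real organizational activity rather than a function, but the point of the sketch is the ordering and the fact that the cycle never terminates: review feeds back into the next definition pass.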
Now, metrics have long been a topic of great discussion and debate. Metrics can, of course, tell us a great deal about what's going on, but they can also be distracting and meaningless. So we first need to decide which metrics we need by deciding what it is we need to know and be watchful for. Metrics should typically include all the security-related information needed to fully inform our decision-making process, and the raw data must be organized into meaningful information to support our decision-making and reporting requirements. Some example metrics are these:
- The number and severity of vulnerabilities revealed and remediated
- The number of unauthorized access attempts
- Configuration baseline information
- Contingency plan testing dates and results
- The number of employees current on awareness training requirements
- Risk tolerance thresholds for the organization
- The overall risk score for a given system configuration
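One way to picture turning raw metric data into meaningful, decision-ready information is to pair each measurement with the organization's tolerance threshold for it. The following is a minimal sketch under that assumption; the metric names, values, and thresholds are all hypothetical examples, not prescribed by any standard.

```python
# Hypothetical sketch: organizing raw metric data against risk-tolerance
# thresholds so that breaches surface as actionable intelligence.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float
    threshold: float  # the organization's risk-tolerance threshold

    def within_tolerance(self) -> bool:
        return self.value <= self.threshold

# Example measurements (illustrative values only)
metrics = [
    Metric("critical_vulns_open", 4, threshold=5),
    Metric("unauthorized_access_attempts", 12, threshold=10),
    Metric("pct_staff_awareness_training_overdue", 3, threshold=5),
]

# Which metrics breach tolerance and therefore demand corrective action?
breaches = [m.name for m in metrics if not m.within_tolerance()]
# breaches -> ["unauthorized_access_attempts"]
```

The design choice worth noting is that the threshold lives with the metric: a raw count of access attempts means nothing on its own, but paired with the organization's stated tolerance it becomes a direct input to the decision-making process the lecture describes.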
Now, the program, as I said, requires deciding on what metrics are needed and how they are acquired, but ultimately it boils down to the interpretation of those metrics, so that we know what they are telling us and can derive the actionable intelligence we need. This interpretation presumes that the controls directly and indirectly used in the metric calculation are implemented and working as anticipated. And as we have said many times in this module, we have to be sure that our tools are properly calibrated so that we can trust what they tell us; that is especially true of the tools from which we derive metrics for this actionable intelligence.
So the metrics criteria would include attributes like these: security control volatility (does the control change a lot? is there a great range of outputs we might get from it, and what do they mean?); system categorizations and impact levels, so that we can assess a level of criticality based on what the metric is and where it comes from; security controls or specific assessment objectives that provide critical functions; security controls with identified weaknesses, and the compensations we may have to put in place for them; and organizational risk tolerance, which is always going to be a main driver behind a program element like this, so that we know where we stand in relation to that tolerance.
We, of course, need threat information from threat intelligence services or from our own research. Likewise, we will need vulnerability information regarding our systems and anything that comes from the vendors. We need our risk assessment results, which means we need to periodically redo our risk assessments to make sure that we're keeping an active eye on each element. And there will be reporting requirements, both generated by us internally and from outside regulatory sources that will have to be met also.
Like any strategy, which is a series of goals that must be met to achieve an ultimate objective, the ISCM strategy must be reviewed from time to time, because it operates in the real world and will need to change. No strategy should continue indefinitely without review to ensure that it remains relevant to our environment; thus we need to establish a procedure for reviewing and modifying all aspects of the ISCM strategy, including the relevance of the overall strategy itself. Perhaps our environment has changed to the point that the strategy as originally envisioned is no longer relevant to us. The strategy must be reviewed to reflect any change in our organization or its risk tolerance, and to confirm that the measurements we use to determine whether we have met it remain accurate and trustworthy. And our metrics, reporting requirements, and monitoring and assessment frequencies all need to be judged as the mechanical components through which we drive toward the strategy's ultimate objective.
Now, anything that can change the strategy must itself be examined to ensure that it has relevance to the strategy. Some examples of factors that need to be examined include: changes to core business missions or processes, significant changes in the enterprise architecture, changes in organizational risk tolerance, new threat information, new laws or regulations, trend analysis of status-reporting output, changes within the information systems themselves, changes in vulnerability information, and changes to our internal or external reporting requirements. Any of these factors should be treated as a trigger telling us it is time to review the strategy yet again.
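The point that any single one of these factors is enough to precipitate a review can be sketched as a simple trigger check. This is an illustrative sketch only; the trigger labels are hypothetical shorthand for the factors just listed.

```python
# Hypothetical sketch: any one observed trigger is sufficient to
# warrant a review of the ISCM strategy. Labels are illustrative.
REVIEW_TRIGGERS = {
    "core_mission_or_process_change",
    "enterprise_architecture_change",
    "risk_tolerance_change",
    "new_threat_information",
    "new_laws_or_regulations",
    "status_report_trend_shift",
    "information_system_change",
    "new_vulnerability_information",
    "reporting_requirement_change",
}

def strategy_review_needed(observed_events: set) -> bool:
    """Return True if any observed event is a known review trigger."""
    return bool(observed_events & REVIEW_TRIGGERS)
```

The set intersection captures the lecture's point exactly: the triggers are evaluated individually, not collectively, so one match is enough.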
Now, risk management generally has one of two overall driving approaches: a compliance-driven approach or a data-driven approach. Compliance-driven, of course, is based on external regulatory requirements that must be met to satisfy a certain law or regulation. Data-driven is based on what our findings during a risk assessment tell us about an increase, a decrease, or a change in the risk profile of some aspect of our operation. Typically, what we find in the real world is a combination of these two elements forcing us to re-examine our strategy and adjust our approaches to address the risks our findings indicate.
So we've reached the end of our current module. Please join us again when we continue with Domain 6, Security Testing and Assessment, beginning with section four on slide 88. Thank you.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, and since 1998 has trained and certified nearly 8500 CISSP candidates, and since 2004 nearly 2500 in HIPAA compliance certification. Mr. Leo is an ISC2 Certified Instructor.