CISSP: Domain 6 - Security Testing and Assessment - Module 1

Contents

CISSP: Domain 6, Module 1

The course is part of this learning path

Security Assessment and Testing
Difficulty
Intermediate
Duration
40m
Students
259
Ratings
3.4/5
Description

This course is the first of three modules covering Domain 6 of the CISSP: Security Assessment and Testing.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • Assurance - Operational vs life cycle
  • Test and evaluation
  • Access control principles
  • Strategies for assessment and testing
  • The role of the systems engineer, security professional, and the working group
  • Insecure interactions between components
  • Porous defenses
  • SANS critical security controls
  • Log management
  • Code review and testing
  • Testing techniques

Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

Prerequisites

Any experience relating to information security would be advantageous, but not essential.  All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Welcome back to the Cloud Academy presentation of the CISSP Exam Preparation Review Seminar. This next module is going to be Domain Six, Security Assessment and Testing. Here we have our Domain Agenda: Design and Validate Assessment and Testing Strategies, Conduct Security Control Testing, Collect Security Process Data, and Conduct or Facilitate Internal and Third-Party Audits. These will be the topics we'll explore as we delve into security assessment and testing.

So the question must be asked, "Why test?" Today there is virtually no business of any size that can operate without a computer. We all pretty much take this for granted. All computers operate using software in the form of operating systems, applications, and of course, the ever-present internet. For every business, a computerized function tracks sales and the entire revenue cycle. For every business, a computerized function tracks costs, payments, and taxes. Thus it should be abundantly clear that a computer is critical to the entire supply chain.

On the more negative side, it is well-known that nearly all software contains flaws of some type, on some level. It is also well-known that hostile actors expend great effort looking for these flaws in order to exploit them. Therefore testing should be conducted to find and fix those flaws before the adversaries find and exploit them. What we're looking for is increased assurance, both for the operations and for the life cycle of the system.

Operational Assurance focuses on the features and the architecture of the system in question. In the architectural sense, we're looking at architectural and processing integrity, trusted recovery, covert channels, and a host of other architectural features that add to or subtract from the reliability and performance of the system. This requires periodic feature and functionality testing to ensure that correct operations continue to be correct.

Software development and functionality issues will arise at almost every juncture, so we must test continually to ensure that software quality remains high and that flaws are kept to a relative minimum. And we have to have consistently performed and documented change management and maintenance processes, to ensure that we're watching for these issues and maintaining the system to the highest level we can attain.

For life cycle assurance, we're looking to ensure that the system is designed, developed, and maintained with formally controlled standards that enforce protection at each stage of the system's life cycle. This requires periodic security testing and trusted distribution, to ensure that what we build gets to its destination and is installed in a way that prevents corruption in transit. It means that we have to do configuration management, to ensure that the features are what they are supposed to be and continue to function as they are supposed to function. And it means that we have to have change control, the more evolutionary process that we use to manage the system over its product roadmap and its life cycle.
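
To make the trusted-distribution point concrete, here is a minimal sketch in Python, assuming the publisher supplies a SHA-256 digest alongside the build artifact (the file name used below is purely illustrative): the receiving site recomputes the hash and refuses to install anything that does not match.

import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path, published_digest):
    """Return True only if the received file matches the publisher's digest."""
    return sha256_of(path) == published_digest.lower()

if __name__ == "__main__":
    # Self-contained demonstration: write a small file, treat its digest as the
    # one the publisher would have announced, then verify it as a receiver would.
    with open("artifact.bin", "wb") as f:
        f.write(b"example build output")
    published = sha256_of("artifact.bin")
    print(verify_artifact("artifact.bin", published))   # True; any tampering would print False

The same idea applies whether the check is a script like this, a signed package manifest, or a package manager's built-in signature verification.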

So we're going to dig into Module One, where we design and validate assessment and test strategies. Security assessment and testing covers a broad range of ongoing and point-in-time testing methods used to determine vulnerabilities and the associated risk. Mature system development life cycles include security testing and assessment as part of the development, operations, and disposition phases of a system's life. The fundamental purpose of test and evaluation is to provide knowledge to assist in managing the risks involved in developing, producing, operating, and sustaining systems and their capabilities.

Testing and evaluation measures progress in both system and capability development. It also provides knowledge of a system's capabilities and limitations, for use in improving system performance and optimizing its use in operations. Thus, expertise in these areas must be brought to bear at the beginning of the system life cycle, to provide earlier learning about the strengths and weaknesses of the system under development. The goal is early identification of technical, operational, and system deficiencies, so that appropriate and timely corrective actions can be developed prior to fielding the system.

The creation of test and evaluation strategies involves planning for technology development, and this of course includes risk, evaluating the system design against mission requirements, and identifying where competitive prototyping and other evaluation techniques fit in this process. The content of a test and evaluation strategy is a function of where it is applied in the acquisition or development process, the requirements for the capability to be provided, and the technologies that drive the required capability. A test and evaluation strategy should, therefore, lead to the knowledge required to manage risk, the empirical data required to validate models and simulations, the evaluation of technical performance and system maturity, and a determination of operational effectiveness, suitability, readiness, and survivability.

In the end, the goal of the strategy is to identify, manage, and mitigate risk, which requires identifying the strengths and weaknesses of the system or service being provided to meet the goal of the acquisition or development program. Ideally, the strategy should drive a process that confirms compliance with the Initial Capabilities Document, instead of discovering later that functional performance or non-functional goals are not being met. The discovery of problems late in the test and evaluation phase can have very significant cost impacts as well as substantial operational repercussions.

Now, historically, test and evaluation consisted of testing a single system, an element of that system, or a component, and it was carried out in a serial or sequential manner. Tests would be performed, data would be obtained, and then the system would move to the next test event, often at a new location with a different test environment. Similarly, the evaluations themselves were typically performed in a sequential manner, with determinations of how well the system met its required capabilities established from a combination of test results obtained from multiple sites with differing environments. Confusing to say the very least. The process was time-consuming and very inefficient, and with the advent of centralized collaboration strategies, it became insufficient to the need. In large measure, this was due to an approach to acquisition and development that did not easily accommodate the incremental addition of capabilities. Creating and maintaining an effective test and evaluation strategy under those conditions would have been difficult to say the very least.

A test and evaluation strategy is an absolute necessity today because of the addition of capabilities via incremental upgrades, which is now very much the norm, and the shift to a network-centric construct where data is separated from the applications. Data is posted and made available before it is processed, collaboration is employed to make data understandable, and there is a rich set of network nodes and pathways that provide the required supporting infrastructure. Thus, a properly planned and executed test and evaluation strategy can provide information about risk and risk mitigation, empirical data to validate the models and simulations, an evaluation of technical performance and system maturity, and a determination of whether systems are operationally effective, suitable, and survivable.

So, software development is part of system design. Software, as we know, is what makes all of these things work, and software is only as good as the requirements, the design, the build, and the overall execution of the project. Software requirements are typically derived from the overall system requirements and are developed for those aspects of the system that are to be implemented in software. They need to be documented requirements and specifications that represent the user's needs and the intended uses for which the system itself is being developed.

So how is software different from hardware? A lot of this would seem pretty obvious. Hardware by itself gives us predictable behavior and is fairly simple, at least as machines go. It does wear out, and, as we know, computing hardware in particular becomes obsolete rather quickly; it is superseded on a regular cycle by faster, cheaper versions of itself, and it requires complex manufacturing arrangements. Compared to that, software is not a physical entity and does not wear out, and its effects and its products change with the speed and ease of the software's development.

Malfunctions, of course, are traceable to errors made during the design and development of the software. Unlike machines, software branches, and seemingly insignificant changes can create unexpected and very significant problems elsewhere.

Now, when we look at our design for our system, both hardware and software have specific things that they have to meet by way of performance objectives. In designing our software, we want to imbue it with various properties. We want to be sure that these particular ones are designed and built into it from the very beginning. We want it to have increased resistance, which means it's been built to withstand attempts to subvert normal operations within pre-determined design limits. We want it to be robust, that is, that it has the strength to function and perform correctly under a range of conditions without complete failure. We want it to have resilience, which means it has the flexibility of functionality such that operations can continue even after an attack or an error's impact. It needs to be recoverable, that is, it has the structure and features that facilitate trusted recovery. It should have the quality of redundancy, with compensating capabilities to ensure continued operation in the event of component failure. And ultimately, building up to this, it needs to have reliability, that is, it will perform in a manner that reflects the necessary qualities of trust and assurance.
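
To illustrate the redundancy and resilience properties just described, here is a minimal sketch (the provider functions and their failure mode are hypothetical): when the primary component fails, a compensating component keeps the operation going, so the system degrades gracefully instead of failing outright.

from typing import Callable, Sequence

def resilient_call(providers: Sequence[Callable[[], str]]) -> str:
    """Try each redundant provider in order; raise only if every one fails."""
    last_error = None
    for provider in providers:
        try:
            return provider()
        except Exception as exc:       # in real code, catch the specific errors you expect
            last_error = exc           # record the failure and fall through to the next provider
    raise RuntimeError("all redundant providers failed") from last_error

def primary() -> str:
    raise ConnectionError("primary node unreachable")   # simulated component failure

def secondary() -> str:
    return "response from standby node"

if __name__ == "__main__":
    print(resilient_call([primary, secondary]))          # prints the standby response

The same pattern appears at larger scales as clustered services, standby sites, and redundant components, all supporting the recoverability and reliability goals listed above.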

So with all of these points in mind, we're going to develop strategies for assessment and testing, so that we know exactly what to expect and what the limitations are of our software. A proper strategy, correctly executed, should provide us with valuable insight regarding the risk and possible steps to mitigate it. It should provide us empirical data that will serve to validate the assumptions, the models, and the simulations. It should provide us with evidence of technical performance and operational readiness, and give us indicators of the operational effectiveness, suitability, and survivability.

Now, a proper strategy will verify the degree of trust. Trust is defined as all protection mechanisms working together to process sensitive data for all types of users while maintaining the appropriate level of protection. It means that there will be consistent enforcement of policy under all normal operating conditions.

Along with that will be assurance. Assurance is defined as the level of confidence that the system will act in a correct and predictable manner in all normal computing situations, such that known inputs will always produce expected outputs under all normal operating conditions.

Now, the role of the systems engineer and the security professional. Working together, these two roles should create or assess the test and evaluation strategies in support of the acquisition and development programs. They should recommend test and evaluation approaches based on their knowledge of what is to be acquired or built. When these plans are put together, they need to be able to evaluate the test plans and procedures, so that they have high confidence the tests will elucidate the characteristics being tested for and will produce results showing that the system and its software meet the design specifications. They have to understand the rationale behind the requirements of the acquisition and development programs, so that they can prove the validity of the testing strategy and show that the system will meet the requirements.

Now normally, a working group is put together, based on the concept that the more brains and eyes you have on the problem, the better and more balanced the result will be. The working group should evaluate as a group how to update the test and evaluation strategy if one exists, or how to create one if it doesn't. As a group they will ensure that the test and evaluation processes are consistent with the acquisition strategy, having a much broader and more complete understanding of that strategy. They will ensure that the user's capability-based operational requirements are being met by the system, and the working group, rather than a single individual, will provide greater visibility, a broader understanding, and less bias when it comes to the tests and the results.

Within the strategy there are two complementary activities. The first is verification. Through software testing, whether static or dynamic analysis, we're going to be able to verify that the software performs as designed and as intended. To do this, we will have software testing as part of the strategy, and that testing will involve static and dynamic analysis. We will look at the code and the documentation being produced, and we will walk through them at a functional level, and then, where necessary, at the code level, to ensure that everything that should be present is, or to identify the gaps that might be present and organize ways to fill those gaps.
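
As a small illustration of what static analysis means in practice, the sketch below (the list of flagged call names is illustrative, not an authoritative rule set) inspects source code without executing it and reports calls a reviewer might want to examine more closely.

import ast

RISKY_CALLS = {"eval", "exec"}   # illustrative checklist; a real tool uses a much richer rule set

def find_risky_calls(source: str):
    """Return (line, name) pairs for calls to functions on the risky list."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "user_value = input()\nresult = eval(user_value)\n"
print(find_risky_calls(sample))   # [(2, 'eval')] -- flagged without ever running the sample

Dynamic analysis is the complementary activity: exercising the running code with chosen inputs, as in the unit tests most development teams already maintain.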

Alongside verification is validation. Validation is not the same as verification; it is a complement to it. Through validation methods we develop a level of confidence that the software or system meets all the requirements and user expectations as intended and, hopefully, as documented.
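
To keep the two ideas distinct, here is a minimal sketch built around a hypothetical requirement: the first check is verification (the code does what its specification says), the second is validation (the observable behavior satisfies the user's stated need).

def round_price(amount: float) -> float:
    """Specification: round a price to two decimal places."""
    return round(amount, 2)

def test_verification_matches_spec():
    # Verification: known inputs produce the outputs the specification demands.
    assert round_price(5.0) == 5.0
    assert round_price(12.3456) == 12.35

def test_validation_meets_user_need():
    # Validation: the hypothetical user requirement is "invoices never show more
    # than two decimal places"; we check the observable behavior, not the internals.
    shown = f"{round_price(12.3456):.2f}"
    assert shown == "12.35"

if __name__ == "__main__":
    test_verification_matches_spec()
    test_validation_meets_user_need()
    print("verification and validation checks passed")

A common shorthand is that verification asks "did we build the thing right?" while validation asks "did we build the right thing?"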

About the Author
Students
8613
Courses
76
Learning Paths
23

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.

 

Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998 and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.