Testing Methodology
Difficulty: Beginner
Duration: 32m
Students: 137
Ratings: 5/5
Description

This course covers section one of CSSLP Domain Five and looks at security quality assurance testing. You'll learn about important and foundational concepts on the process and execution of testing, topics regarding quality and product integrity, and various other considerations.

Learning Objectives

Obtain a solid understanding of the following topics:

  • Security testing use cases
  • Software quality assurance standards
  • Testing methodologies and documentation
  • Problem management
  • The impact of environmental factors on security

Intended Audience

This course is intended for anyone looking to develop secure software as well as those studying for the CSSLP certification.

Prerequisites

Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.

Transcript

So let us examine what testing methodology is actually about. For testing to be effective, of course, we must begin with the strategy, a methodology, and it must be crystallized to the point that we commit to it and it becomes the way we work. Now, committing to it and making it the strategy does not mean that it's set in stone. Like all strategies, it is a sequence of things that we're going to do until circumstances and events prove that it must be reexamined and perhaps altered in some way.

So it isn't set in stone but something that will evolve as we evolve and grow more mature in this process. But for any strategy to be successful in this particular arena, we need to look at what is to be tested, how it is to be tested, where in the process under scrutiny the test is to be conducted, and what result is desired. And all of this must be done in an objective way, so that the results are objective and reveal the actual state of the subject or article being tested without any prejudice or skewing of the results.

Now, the execution, of course, is going to be in a series of steps and yes, as you would expect, there is a correct order for these to be performed in. And it seeks to answer two questions: what do we define as a successful test, and how do I measure what success means? So let's examine these questions for a moment. Typically, testing in this vein is considered to be successful if it doesn't find any bugs. And yet, that's actually counterintuitive. If we are testing to find bugs, the test is considered successful if it finds one.

So a negative result, though a good thing, doesn't mean that there are no bugs to be found. It simply means that our test was designed in such a way that what it was looking for, it didn't find. But if we were to take that test, alter it in some manner so that it's looking for something different, and it then revealed something, that would be considered a successful test. Keep in mind that what we are looking for is flaws in logic, or bugs, or coding errors, or some other kind of condition we refer to as a flaw. But we need to be sure that we know exactly what we mean when we're talking about what a successful test is, and how do I measure that? How do I quantify that? So looking at the testing methodology, we of course need to examine what sorts of tests, what groupings of tests, we need to apply.

First, usually, comes unit testing. Now, this generally is the first type of testing and the first instance in which we are going to perform testing of any type. It is the earliest point in the development cycle at which we can expect to find and resolve issues. It's also the point, if we find anything, where it would be least expensive to do so. Now, this is often performed by the programmers themselves at or near the completion of whatever they happen to be working on. And it's intended to demonstrate that operational and security requirements have been met in the module that's under work.
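
As a purely illustrative sketch of what such a unit test might look like, the example below exercises a hypothetical sanitize_username function against one operational requirement and one security requirement; the function, its rules, and the pytest style are assumptions for illustration, not material from the course.

```python
# Minimal unit-test sketch (pytest style). The module and its rules are
# hypothetical examples, not part of the course material.
import re
import pytest

def sanitize_username(raw: str) -> str:
    """Illustrative unit under test: accept short alphanumeric names only."""
    if not re.fullmatch(r"[A-Za-z0-9_]{1,32}", raw):
        raise ValueError("invalid username")
    return raw.lower()

def test_operational_requirement_valid_name_is_normalized():
    # Operational requirement: valid input is accepted and normalized.
    assert sanitize_username("Alice_01") == "alice_01"

def test_security_requirement_hostile_input_rejected():
    # Security requirement: hostile input is rejected, not passed through.
    with pytest.raises(ValueError):
        sanitize_username("alice\x00; DROP TABLE users")
```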

Following this will come a form of integration testing, either one module with another or a string of units that come together to form a thread, a process, or even a full system. Now, nominally, this is the next stage, specifically at a process level, because it moves from a single execution sequence all the way up to the entire system, increasing in complexity and variation. And as we combine units in one way or another, we're advancing the operational complexity and variability of the entire construct that we're testing.

Now, first we have to establish that each unit works as it is supposed to: operates as it should, produces the results that it should. Then this test answers the question of how they all work together: interfaces, variable passing, interprocess communication, security, and all the other elements that we're trying to make sure work and operate as they should. As we move on in our test methodology, we're going to go to the next level. And the next level is going to be performance testing, which is every bit as important as the security testing that we're working on.
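
As an illustrative sketch of testing how units work together, the example below wires two hypothetical units, a parser and an authorizer, through their shared interface so that the hand-off between them is what gets exercised; the names and behavior are assumptions, not from the course.

```python
# Integration-test sketch: two hypothetical units exercised together through
# their real interface, so the variable passing between them is what is tested.

def parse_request(raw: str) -> dict:
    """Unit A: turn 'key=value&key=value' text into a dict."""
    return dict(pair.split("=", 1) for pair in raw.split("&") if "=" in pair)

def authorize(request: dict) -> bool:
    """Unit B: only requests carrying a non-empty token are authorized."""
    return bool(request.get("token"))

def test_units_cooperate_across_the_interface():
    # The value produced by the parser is handed directly to the authorizer,
    # which is exactly the hand-off an integration test should cover.
    assert authorize(parse_request("user=alice&token=abc123")) is True
    assert authorize(parse_request("user=alice")) is False
```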

Now for performance testing, we typically are going to have a standard testing suite to evaluate attainment of those specifications that we have put in the service level agreement. We're going to look at load, different types of stress and, of course, different types of tests to determine how the program will perform in as close to real-world conditions as we can manufacture.
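
A minimal sketch of what such a performance check might look like is shown below; the timed operation, the 200-iteration load loop, and the 50-millisecond budget are all illustrative assumptions rather than figures from any service level agreement discussed here.

```python
# Performance-test sketch: time a hypothetical operation under repeated load
# and compare against an assumed service-level target.
import time

def handle_request() -> None:
    """Stand-in for the operation named in the service level agreement."""
    sum(i * i for i in range(10_000))

def test_p95_latency_stays_within_sla_budget():
    budget_seconds = 0.050           # assumed SLA target: 50 ms at the 95th percentile
    samples = []
    for _ in range(200):             # crude load loop; real suites use dedicated tools
        start = time.perf_counter()
        handle_request()
        samples.append(time.perf_counter() - start)
    samples.sort()
    p95 = samples[int(len(samples) * 0.95)]
    assert p95 <= budget_seconds
```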

Another form of test which we're going to mention here, but explore in more depth later, is regression testing. And regression testing has some real pluses, but it can have some fairly serious drawbacks too. Now, when code is altered, regression testing is performed to validate that the changes that have been made did not break any previous functionality or security, causing the software to regress to a non-functional or insecure state.

Now, when problems or bugs are encountered, or when the accepted code master has been changed, this form of testing is used to ensure that what was fixed or what has changed has not altered other aspects, in both functional and non-functional respects. We want to be sure that what was fixed is indeed fixed, but that it didn't cause any problems elsewhere in the code. We also want to make sure that whatever bugs we've killed along the way have remained dead.
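
As an illustrative sketch of keeping a killed bug dead, the example below pins a test to a hypothetical, previously fixed defect and also confirms the ordinary behavior was not disturbed; the function and the defect are invented for illustration only.

```python
# Regression-test sketch: a test pinned to a previously fixed defect so the
# bug stays dead. The function and the defect are hypothetical.

def apply_discount(price: float, percent: float) -> float:
    """Unit that once mishandled an oversized discount (illustrative)."""
    percent = max(0.0, min(percent, 100.0))   # the fix: clamp the percentage
    return round(price * (1 - percent / 100.0), 2)

def test_regression_oversized_discount_never_goes_negative():
    # Guards the original defect: a 150% discount used to return a negative price.
    assert apply_discount(20.00, 150.0) == 0.00

def test_regression_existing_behaviour_unchanged():
    # Also confirms the fix did not break the ordinary case.
    assert apply_discount(20.00, 25.0) == 15.00
```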

Continuing with our testing methodology and test types discussion, we're going to look at non-functional testing. In addition to testing for the functional or reliability aspects of the software, software testing must be performed to ensure that the non-functional aspects of the software are in proper shape. Now, these of course test for things like conditions that must exist, as well as for the recoverability and the environmental aspects of the software that will be present when it's in operation.

Now, these are conducted to check whether the software will be available when it's required and that it has the appropriate replication, load balancing, interoperability and disaster recovery mechanisms, and that they function properly. Now, examples of the testing that validates these include first setting the metrics for maximum tolerable downtime, or MTD; recovery time objective, or RTO; and recovery point objective, or RPO. And these are metrics measured in time units.

So in testing for these things, we need to first set the metrics and then put the software through various conditions and operations, so that we can make sure we are meeting them. One example is the recovery time objective: that we can meet the RTO, and that doing so keeps us from ever reaching the MTD, and that the RTO and the RPO, the recovery point objective, are in close alignment, as they would be in an operational environment.
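
A small worked sketch of checking these metrics against assumed targets might look like the following; the four-hour MTD, two-hour RTO, fifteen-minute RPO, and the measured drill figures are all illustrative values, not numbers from the course.

```python
# Recovery-metric sketch: compare measured recovery figures against assumed
# targets. All numbers here are illustrative.
from datetime import timedelta

# Assumed targets, expressed in time units as described above.
MTD = timedelta(hours=4)      # maximum tolerable downtime
RTO = timedelta(hours=2)      # recovery time objective
RPO = timedelta(minutes=15)   # recovery point objective

def test_rto_is_set_inside_the_mtd():
    # The RTO must leave headroom before the MTD would be breached.
    assert RTO < MTD

def test_recovery_drill_met_the_objectives():
    # Figures a recovery exercise might record (illustrative).
    measured_downtime = timedelta(hours=1, minutes=40)
    measured_data_loss = timedelta(minutes=10)
    assert measured_downtime <= RTO
    assert measured_data_loss <= RPO
```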

Now, security testing looks at various forms of testing targeting specific security functionality to prove that it does or does not work. Now, these are fairly well known. First is a white box type of test, in which we have full knowledge of how the module being examined works, and we have access to the operational logic and source code. Now, white box testing is often done at early stages of development, such as when we have full access to the source code in the build stages.

We have black box testing, which comes a good deal later. Now, black box is always a zero-knowledge sort of test, although it is not necessarily the same thing as a true blind test. Now, the tester has no knowledge of the inner workings of the test artifact. Basically, what they're looking for is the set of conditions that the user themselves would face, but the tester at least has the experience to apply it in various ways to examine how the unit under test will respond. And between these will be a gray box type of test, which is a form of partial knowledge that includes some measure of the inner workings but is substantially less than having full access to the source code that creates the test artifact.
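
To make the black box idea concrete, here is an illustrative sketch in which the tests drive a hypothetical unit purely through its inputs and outputs, with no reference to its internals; the password-policy rules are assumptions for illustration only.

```python
# Black-box test sketch: the tester drives the unit purely through its public
# input and output, with no reliance on how it works inside.

def password_is_acceptable(candidate: str) -> bool:
    """Unit under test, treated as opaque by the tests below."""
    return (len(candidate) >= 12
            and any(c.isdigit() for c in candidate)
            and any(c.isupper() for c in candidate))

def test_black_box_known_good_input_is_accepted():
    assert password_is_acceptable("Correct-Horse-9-Battery") is True

def test_black_box_known_bad_inputs_are_rejected():
    # Only cause and effect is observed; why each input fails is not visible.
    assert password_is_acceptable("short1A") is False
    assert password_is_acceptable("alllowercasebutlong1") is False
```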

We also have interoperability testing, and this looks at verifying the resilience of interfaces between disparate environments. We're also going to be looking at how to verify the software's upstream and downstream dependencies when we're doing interoperability testing, to make sure, in conjunction with regression testing, that things have not changed that should not have been changed, and that they remain functional.
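
As a rough sketch of an interoperability check, the example below verifies that what a hypothetical upstream component emits can be consumed by a downstream one across their shared interface; the JSON contract and field names are assumptions, not from the course.

```python
# Interoperability-test sketch: verify that what an upstream component emits
# can be consumed by a downstream one across the shared interface.
import json

def upstream_export(record: dict) -> str:
    """Upstream side: serialize a record to the agreed JSON wire format."""
    return json.dumps({"id": record["id"], "status": record["status"]})

def downstream_import(payload: str) -> dict:
    """Downstream side: parse the wire format and require the agreed fields."""
    data = json.loads(payload)
    assert {"id", "status"} <= data.keys(), "interface contract violated"
    return data

def test_upstream_output_is_consumable_downstream():
    wire = upstream_export({"id": 42, "status": "active", "extra": "ignored"})
    assert downstream_import(wire) == {"id": 42, "status": "active"}
```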

Now, when we compare, let's look at some specific features. So for the white box test, we have full knowledge, with source code available. We have access to both the design and the logic of the flow. What this typically produces is a low degree of false positives. It also will be able to show logical flaws and make them detectable and intelligible, compared to a black box, where we have absolutely no knowledge of the internals and the functionality, and no access to the source code of the test artifact.

Now, what this enables us to do is assess cause and effect, and thus the behavior only: giving an input, letting it do its job, getting an output, and comparing what we do have with what we think we should have, to judge whether or not the functionality is what it should be, because again, we don't have any access to the actual logic or the source code. Now, the possibility is that we're going to have a high level of false positives. A logical flaw may be represented by a message, but that doesn't give us any knowledge or visibility into intelligible information about what may have produced it, since it derives directly from functionality we cannot see.

About the Author
Students: 8670
Courses: 76
Learning Paths: 24

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.


Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 (a professional role) as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator and has trained and certified nearly 8500 CISSP candidates since 1998, and nearly 2500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.