This course looks at the second section of Domain 6 of the CSSLP certification and covers pre-release activities, including implementing the testing process, conducting the actual tests, and the variations of those tests that will be employed at this stage.
- Understand the pre-release activities to carry out before launching software
- Understand the pre-release testing process
This course is intended for anyone looking to develop secure software as well as those studying for the CSSLP certification.
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.
One of the most critical aspects of this is to define and test for failure modes. Due to high levels of interaction and complexity, software failure can result from widely variable and seemingly unrelated causes; by contrast, a physical system tends to fail in ways that are fairly common and predictable. For the same reason, exhaustively testing all possible types of failure is infeasible for software: there may be an effectively infinite number of variable inputs, each processed through highly variable computational pathways. When failures occur during development, changes can correct them, but we again run into the problem of high variability in both inputs and computational pathways, even during the development period.
When changes of any kind are made, the internal dynamics of the software will necessarily also change, and those changes will create further variations in the computational pathways. This calls for reiterative testing, which, taken to its extreme, can become economically prohibitive. Case-scenario sampling must therefore be done carefully, sufficient to meet the need without overlooking critical, or simply important, aspects of the product.
So in test program implementation, question number one: what do we need to test? Testing must determine the degree to which the program meets the design requirements and performs up to expectations, but it is not limited strictly to that. Testing must also examine and verify the soundness and strength of the program's or system's higher-risk elements, such as the more complex and potentially technologically fragile components.
Testing must likewise examine the security, compliance, and privacy functions, since these will likely be targets of many attack forms once the software reaches real-world operational execution. Following that, question number two: how, and where in the program's flow, is testing to be done? Subsidiary questions include: should this cover hardware as well as the target software? Should it be restricted to just the target software, or should it also include interfaces and interactions with other software? How will communications be tested to ensure that the target software interacts with these components correctly?
Now, part of addressing these facets will be constructing the actual test procedures and pinpointing the locations in the programmatic flow where the tests should be conducted to get representative results. So let's examine test program implementation and the test cases. Clearly the point of any test case, whether it exercises use, misuse, or abuse, is to determine that the test object's response is predictable.
Test cases are typically arranged to confirm correct functioning and reliability, as gauged by production of expected output; to test specific portions or functions to discover errors and malfunctions; and to determine the technical fragility of a program element and whether it can be taken advantage of by a given threat. These tests fall into two primary categories: black box and white box. Let's examine both of these areas a little more closely.
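As a minimal sketch of the first purpose above, a test case confirms correct functioning by comparing actual output against expected output for known inputs. The function under test (`parse_amount`) and its expected values are hypothetical examples, not from the course:

```python
# Sketch of an expected-output test case; parse_amount is a
# hypothetical function under test.

def parse_amount(text: str) -> int:
    """Parse a whole-dollar amount such as '$1,250' into cents."""
    dollars = int(text.replace("$", "").replace(",", ""))
    return dollars * 100

def test_parse_amount():
    # Confirm correct functioning: known input -> expected output.
    assert parse_amount("$1,250") == 125000
    # Probe a specific portion for errors: input with no dollar sign.
    assert parse_amount("42") == 4200

test_parse_amount()
print("all test cases passed")
```

Each assertion pairs one concrete input with the output the specification says it must produce, which is exactly how "production of expected output" is gauged.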
First, let's take a closer look at black box testing. Here we see some examples of definitions of the terms and what characterizes each item. So what is a black box test? It is a testing approach used to test the software without knowing how the software works internally, how it is constructed, or what sort of flow it takes. It is based on external expectations: when you give the application an input, its internal behavior is unknown, but the output produced should be what is expected.
Black box testing is used for testing higher levels of functionality, as in system testing or acceptance testing. As for the programming knowledge required, black box testing typically requires little or none. Implementation knowledge is not required because we are not looking at the internals; we are looking at how the software works from the outside, judging by what goes in and what comes out. That, of course, is the main objective of this type of testing: what the functionality is, how it works, and what it will produce.
Testing can start after the requirement specifications are prepared. The granularity is rather low with black box testing, basically pass/fail. There is no internal code access in black box testing. And the skill level required is simply that of the anticipated user in the normal usage environment, so that, having no internal knowledge, the tester manipulates the software just as a user would.
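The black box idea can be sketched as a test that knows only the specification, never the implementation. Here, `sort_records` stands in for the software under test (a hypothetical name), and the test judges it purely by what goes in and what comes out:

```python
# Black-box sketch: the tester knows only the specification
# ("sort_records returns its input in ascending order"),
# not the implementation. sort_records is a hypothetical stand-in.

def sort_records(records):
    # Internals are opaque to the black-box tester.
    return sorted(records)

def black_box_test(inputs):
    for data in inputs:
        out = sort_records(list(data))
        # Judge only by input and output, pass/fail granularity:
        assert out == sorted(data), f"unexpected output for {data}"
        assert sorted(out) == sorted(data)  # same elements, reordered
    return "pass"

print(black_box_test([[3, 1, 2], [], [5, 5, 1]]))
```

Note that the test would be unchanged if `sort_records` were rewritten internally, which is the defining property of the black box approach.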
So let's go on to white box testing, which is quite different from black box. Where black box testing does not require, and in fact demands, that the tester not know the internal workings, in a white box test the tester knows very well how the internal workings are put together and how they produce their results. The internal workings are known, and they are part of what the tester will examine. White box testing is used earlier in the process, such as during unit testing or integration testing. There is very likely going to be a programming knowledge requirement in order to do this properly.
The implementation knowledge will likewise need to be rather extensive. The objective is to determine the quality of the code and its functionality. Testing can start after the detailed design document is prepared, which comes rather early in the cycle, as we've seen. The granularity is quite high, and white box testing requires access to the code. Because of that access, the code could be stolen if testing is outsourced, and this is one of the risks that must be considered.
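In contrast to the black box approach, a white box tester reads the code and designs one test per internal branch. The `discount` function and its branch logic below are hypothetical examples used only to illustrate branch-driven test design:

```python
# White-box sketch: tests are derived from the code's internal
# structure. The discount function and its branches are
# hypothetical; amounts are in integer cents to avoid float error.

def discount(total_cents: int, is_member: bool) -> int:
    if total_cents >= 10000 and is_member:   # branch 1: big order, member
        return total_cents * 90 // 100
    elif total_cents >= 10000:               # branch 2: big order only
        return total_cents * 95 // 100
    return total_cents                       # branch 3: no discount

# One test per internal branch, chosen by reading the code:
assert discount(20000, True) == 18000    # exercises branch 1
assert discount(20000, False) == 19000   # exercises branch 2
assert discount(5000, True) == 5000      # exercises branch 3
print("all branches covered")
```

This is why the granularity is high and why code access, and an expert tester who can read it, are required.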
Now, when it comes to this type of testing, as compared to black box, an expert tester will be required, someone with enough experience in this area to be competent in producing the results needed. As we continue with test program implementation, we need to look at other sorts of tests. One is load testing, an iterative process that applies incremental increases in the task load or in the number of concurrent users. It subjects the software to increasing levels of tasks or users until it reaches a failure threshold.
The goal is to identify the maximum operating capacity of the software. Load testing may also be referred to as longevity, endurance, or volume testing. In the context of software quality assurance, it is the process of subjecting the software to growing volumes of operating tasks or users until it simply cannot handle any more, with the goal of identifying that maximum operating capacity.
It is important to understand that load testing is an iterative process. The software is not subjected to the maximum load the very first time a load test is performed; rather, it is subjected to incremental increases in load until the failure threshold is reached. Generally, the normal load is known, and in some cases the peak or presumed maximum level is known as well.
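The iterative ramp-up just described can be sketched as a loop that raises the load step by step until the failure threshold appears. `handle_load` is a hypothetical stand-in for driving real concurrent users at the software, with an assumed degradation point of 350 users:

```python
# Load-testing sketch: increase the load incrementally until the
# failure threshold is reached. handle_load is a hypothetical
# stand-in for a real load driver.

def handle_load(concurrent_users: int) -> bool:
    # Assumption for the sketch: the system degrades past 350 users.
    return concurrent_users <= 350

def find_max_capacity(start: int = 50, step: int = 50) -> int:
    users = start
    last_ok = 0
    while handle_load(users):    # iterate: raise the load each pass
        last_ok = users
        users += step
    return last_ok               # highest load handled successfully

print("maximum operating capacity:", find_max_capacity())
```

The loop never starts at the presumed maximum; it approaches the threshold in increments, which is exactly the iterative character of a load test.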
When the peak load is known, load testing can be used to validate it or to identify areas for improvement. When the peak load is not known, load testing can be used to find the threshold at which the software no longer meets the business service level agreement. In addition to this, there is stress testing, which is different from load testing. It aims to determine the breaking point of the software: the point at which the software will no longer function.
Typically, the software is subjected to extreme conditions such as maximum concurrency, limited computational resources, or heavy loads. If load testing determines the maximum capacity at which the software can operate, stress testing takes that at least one step further: it is mainly aimed at determining the breaking point, as we mentioned.
In stress testing, the software is subjected to extreme conditions, which can vary from maximum concurrency to limited computing resources to simply heavy loading. It is performed to determine the ability of the software to handle loads going well beyond its maximum capacities, and it is primarily performed with two objectives in mind. The first is to find out whether the software can recover gracefully upon failure, when the software breaks.
The second is normally to assure that the software operates according to the design principle of failing securely. For example, if the maximum number of allowed authentication attempts has been exceeded, the user must be notified of the invalid login attempts with a specific, non-verbose error message, while at the same time that user's account is locked out, as opposed to automatically granting the user access, even if it is only low-privileged guest access. Stress testing can be used to find timing and synchronization issues like the one just described, as well as race conditions, resource-exhaustion triggers, and deadlocks.
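The lockout scenario above can be sketched as follows. All names here (`MAX_ATTEMPTS`, `login`, the sample credentials) are hypothetical illustrations of the fail-secure principle, not a production design:

```python
# Fail-secure sketch for the lockout scenario: repeated bad
# attempts must end in lockout, never in any granted access.
# Names and credentials are hypothetical.

MAX_ATTEMPTS = 3
failed = {}     # username -> count of failed attempts
locked = set()  # usernames that are locked out

def login(user: str, password: str, real_password: str) -> str:
    if user in locked:
        return "account locked"          # fail securely: deny, don't grant
    if password == real_password:
        failed.pop(user, None)           # success resets the counter
        return "ok"
    failed[user] = failed.get(user, 0) + 1
    if failed[user] >= MAX_ATTEMPTS:
        locked.add(user)
        # Non-verbose message: no hint about which factor failed.
        return "account locked"
    return "invalid login attempt"

# Stress the control with repeated bad attempts:
for _ in range(MAX_ATTEMPTS):
    result = login("alice", "wrong", "s3cret")
print(result)  # final bad attempt triggers the lockout

# Even the correct password is refused once the account is locked:
assert login("alice", "s3cret", "s3cret") == "account locked"
```

A stress test would hammer this path with extreme concurrency to confirm the counter and lockout hold up, since a race between two simultaneous attempts is precisely the kind of timing issue such testing is meant to expose.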
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. A professional educator, he has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.