CISSP: Domain 6 - Security Testing and Assessment - Module 2

Maintenance Tasks

Overview
Difficulty: Intermediate
Duration: 45m

Description

This course is the second of three modules covering Domain 6 of the CISSP: Security Testing and Assessment.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • System operation and maintenance
  • Software testing limitations
  • Common structural coverage
  • Definition-based testing
  • Types of functional testing
  • Levels of development testing
  • Negative/misuse case testing
  • Interface testing
  • The role of the moderator
  • Information security continuous monitoring (ISCM)
  • Implementing and understanding metrics


Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

Prerequisites

Any experience relating to information security would be advantageous but is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

There are, of course, maintenance tasks that have to be performed. We have to look at our software validation plan, and on occasion it must be revisited and revised. These things are plans; they don't stay static forever, and we shouldn't expect them to. We have to look at anomalies and examine them to make sure that we can explain them. We have to evaluate each one for its seriousness, its extent, and the cause-and-effect relationships it may have with other aspects of the system, and we have to have a way of prioritizing them so that we can tell the seriousness and the level of effort necessary to resolve them.

We also have to have a problem identification and resolution tracking system so that we can track the history of the product through its development and then, again, through its operational lifecycle. Any time change is needed, the change must be examined before it is simply, blindly accepted. So there must be an assessment process that looks at the various aspects of each proposed change.
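As a minimal sketch of what such a tracking system might record (the record structure and the priority rule here are hypothetical, invented for illustration rather than taken from the CISSP material), each anomaly could carry its severity, its extent across the system, and a derived priority:

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class AnomalyRecord:
    """One entry in a hypothetical problem identification and
    resolution tracking system."""
    identifier: str                  # tracking ID, e.g. "ANOM-0042"
    description: str                 # what was observed
    severity: Severity               # how serious the anomaly is
    affected_components: list[str] = field(default_factory=list)
    resolved: bool = False

    def priority(self) -> int:
        # Illustrative rule only: severity weighted by how widely
        # the anomaly spreads across the system.
        return self.severity.value * max(1, len(self.affected_components))

record = AnomalyRecord("ANOM-0042", "Login accepts oversized input",
                       Severity.HIGH, ["auth", "web-ui"])
print(record.priority())  # 3 (HIGH) * 2 affected components = 6
```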

We're going to look at task iteration. How often does it need to be done? How extensive does it need to be? Does it need to be different on each iteration? And throughout all of these maintenance tasks, we need to look at the documentation and make sure that it is properly updated to reflect the current, as-is state of the product.

We have, of course, our negative and misuse case testing as well, and this cannot be overlooked in any complete testing program. Positive testing determines that your application works as expected: the test is done to confirm that the program element being tested performs as it's supposed to. So if an application error is encountered during positive testing, the test itself has failed, because the program has an error and does not perform as expected.

Now, negative testing. This ensures that the application will gracefully handle invalid input or any other form of unexpected user behavior. Typical negative testing scenarios look at various aspects, and again, this is to see how the program will handle non-normal situations. Some involve required fields and whether the system copes when they're left unpopulated; others involve the correspondence between data and field types. The wrong type of data put in the wrong type of field should produce an error, and 'How does the system handle that?' is the question being explored.

Others concern the allowed number of characters, allowed data bounds, and limits. These checks make sure that if there is a hard limit on a field, the field actually enforces it: if data is not supposed to go beyond certain limits, the input should be rejected when it does. And then there is checking the web sessions to make sure they behave: that they start up the right way, and that their authentication cannot be bypassed (or testing to see whether it can be), to make sure the web sessions do not fail.
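To make the distinction concrete, here is a minimal pytest sketch (the validate_username function and its rules are hypothetical, invented for this example): the positive test confirms expected behavior, and the negative tests feed in a missing required value, input exceeding the allowed length, and a mismatched data type, confirming the validator rejects each one gracefully rather than crashing.

```python
import pytest

# Hypothetical validator, used only for this example: the username is
# a required string field with a hard limit of 3-20 characters.
def validate_username(value):
    if value is None:
        raise ValueError("username is required")
    if not isinstance(value, str):
        raise TypeError("username must be a string")
    if not 3 <= len(value) <= 20:
        raise ValueError("username must be 3-20 characters")
    return value

# Positive test: valid input behaves as expected; an error here means
# the test has failed because the program does not perform as designed.
def test_valid_username_accepted():
    assert validate_username("alice") == "alice"

# Negative tests: invalid input must produce a controlled error,
# not an unhandled crash.
def test_missing_required_field_rejected():
    with pytest.raises(ValueError):
        validate_username(None)          # required field not populated

def test_hard_limit_enforced():
    with pytest.raises(ValueError):
        validate_username("x" * 10_000)  # data beyond the allowed bounds

def test_field_type_mismatch_rejected():
    with pytest.raises(TypeError):
        validate_username(12345)         # wrong data type for the field
```

The same scenarios reappear below as examples of misuse cases.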

Now, the use and misuse cases are these. Use cases are abstract episodes of interaction between a system and its environment. This testing is intended to explore the normal interaction scenarios to ensure that performance expectations are being fulfilled as designed. It validates structural-to-functional relationships, and finding errors means that the test itself has actually failed, as I said earlier. Misuse case testing, by contrast, is intended to validate that the component will properly handle error conditions, unexpected use, or unexpected behaviors without crashing. Examples of this include failing to populate required fields, exceeding bounds and limits, and mismatching a field type with a data type.

We go on to interface testing and integration testing. Now, integration testing is intended to validate whether the components of a system work together according to the design and its specifications. By comparison, interface testing is intended to verify that the components being tested pass data and control correctly across the various transaction points found throughout the program.

Now, specifically for interface testing, we want to verify that all interactions between the application and the server are executed properly, check that errors are being handled correctly, see what happens if a user interrupts a transaction while it's being processed so that we understand what the effect of that will be, and check what happens if a connection to the web server is reset.
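As a sketch of that last point (the submit_transaction client and its pluggable transport are hypothetical, kept dependency-free so the example is self-contained), a test can inject a transport that simulates a reset connection and assert that the interface reports a controlled failure instead of letting the exception escape:

```python
import unittest

# Hypothetical client: sends a payload through a pluggable transport
# and reports success or a controlled failure to the caller.
def submit_transaction(payload, transport):
    try:
        transport(payload)
        return {"status": "ok"}
    except ConnectionResetError:
        # Graceful handling: the interface surfaces the failure
        # rather than crashing mid-transaction.
        return {"status": "error", "reason": "connection reset"}

class InterfaceErrorHandlingTest(unittest.TestCase):
    def test_connection_reset_is_handled(self):
        def broken_transport(_payload):
            raise ConnectionResetError   # simulate the server resetting

        result = submit_transaction({"amount": 10}, broken_transport)
        self.assertEqual(result["status"], "error")

    def test_successful_interaction(self):
        result = submit_transaction({"amount": 10}, lambda p: None)
        self.assertEqual(result["status"], "ok")

if __name__ == "__main__":
    unittest.main()
```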

We have to look at the internal and external interfaces, and with regard to these, the questions we have to ask are these: Have all browsers that could possibly be used with this application been tested? Have all the error conditions related to external interfaces been tested, such as when the external application is unavailable or the server is inaccessible? If the interface uses plug-ins, does it function without them? What occurs if a transaction is exited before completion? Likewise, what occurs if the connection is reset? And have all unnecessary features, code, and libraries been removed?

The internal interfaces likewise need to be tested, and these are the questions we typically want to be able to answer. Again, if the application uses plug-ins, will it function if those plug-ins are unavailable or non-functional? Can all linked documents be supported or opened on all platforms? Are failures handled if there are errors during download? Can users use copy-and-paste functionality? Are you able to submit encrypted form data? If the system crashes, are restart and recovery mechanisms in place, are they efficient, and can they be relied upon to do the job? If you lose the internet connection, does the transaction cancel? How is a browser crash handled? Has the development team implemented intelligent error handling?

Now, in the testing process we have, on one side, our development group and, on the other side, our operations group, and in the middle should be the moderator. The moderator stands between the development and operations groups and oversees the testing function, acting as the recorder and overseer of the testing group, bringing the independence and the proper insight needed to determine whether the testing process is being handled properly and is succeeding at its particular job.

So we're going to move into section three where we're going to collect security process data and discuss the topics related to that.

About the Author

Mr. Leo has been in Information Systems for 38 years and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.


Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, training and certifying nearly 8500 CISSP candidates since 1998 and nearly 2500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.