CISSP: Domain 6, Module 1
This course is the first of three modules covering Domain 6 of the CISSP, Security Assessment and Testing.
The objectives of this course are to provide you with an understanding of:
- Assurance - Operational vs life cycle
- Test and evaluation
- Access control principles
- Strategies for assessment and testing
- The role of the systems engineer, security professional, and the working group
- Insecure interactions between components
- Porous defenses
- SANS critical security controls
- Log management
- Code review and testing
- Testing techniques
This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
So we're going to talk now about security control testing as we move into section two. Our section topics will be these: first, an introduction to control testing; then the various sources that inform our testing process, that is, how we develop tests, how we perform them, and how we judge the results. This includes a category view of the Top 25, the SANS critical security controls, the use and meaning of logs and how we protect them, synthetic transactions, code review and testing, security throughout the development life cycle, maintenance, negative testing and misuse case testing, and interface testing.
And here, of course, is our point of beginning: we don't know what we don't know. Now, as simplistic as this may sound, it means that we don't know those things that we should learn. It means that we're not even aware of everything that we should know, and we might not even know where to begin. All of our efforts will be directed towards answering this point, so that we learn all that we can reasonably learn and make better-informed decisions about where to go from here, and in doing so, eliminate the "we don't know what we don't know" condition.
We're going to take an approach that starts and looks at knowledge and awareness. We have four different categories of knowledge. We have our known knowns. Things that we know and are aware, conscious that we know them. We have known unknowns, by another name called assumptions. We have unknown knowns which are typically uncaptured lessons learned. And we have the final category of the unknown unknowns. Things that we don't know and we don't even know what they should be.
So we're going to take these categories. The known knowns, things that we know and are conscious of knowing, are things that we can actualize. With our known unknowns being assumptions, we have to validate them. When we validate them, we either prove that they are what they should be, that the scenarios and other things we are making assumptions about are indeed probable as well as possible, or we invalidate them as false assumptions and discard them, possibly replacing them with others.
Our unknown knowns need to be turned into captured lessons learned so that we can absorb and naturalize the knowledge that they represent. And our unknown unknowns, through these various processes that we're going to use, we need to discover what they are because the overall goals are going to be to validate what we know and reduce the unknowns, by either defining them or eliminating them.
So, our category-based view of the Top 25. These represent the most widespread and critical errors that can lead to serious vulnerabilities in software. They are derived from the Common Weakness Enumeration (CWE), gathered and managed by the MITRE Corporation on behalf of the federal government. And they boil down to three high-level categories: insecure interaction between components, risky resource management, and porous defenses.
Insecure interaction between components refers to weaknesses that are related to insecure ways in which data is sent and received between separate components, modules, programs, processes, threads or systems. And like all of these lists, it will change in its order.
Here you see a listing of six of them. As you see, they retain their rank numbers from the full list, running from 1 to 22. The list, of course, will change from time to time; this one is from the year 2014. At number one, improper neutralization of special elements, otherwise known as a SQL injection attack, followed at number two by a similar one, OS command injection. At number four we have cross-site scripting, a perennial appearance on this list. Number nine, unrestricted upload of a file with a dangerous type. Number 12, cross-site request forgery. And then number 22, URL redirection to an untrusted site, also known as an open redirect.
Now, these attacks are on this list because they're quite prevalent and very often exploited. They are frequently the ones that attackers seeking to exploit webpages will look for first, and they'll go down the list until they successfully find one to exploit.
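To make the number one item concrete, here is a minimal sketch of a SQL injection and its standard prevention, parameterized queries. The table, column names, and input string are invented for illustration; the point is the difference between concatenating user input into SQL text and passing it as a bound parameter.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: user input is concatenated directly into the SQL string,
    # so input like "x' OR '1'='1" changes the meaning of the query.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # SAFE: the ? placeholder sends the value separately from the SQL text,
    # so the driver never interprets the input as SQL (proper neutralization).
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

malicious = "x' OR '1'='1"
print(find_user_unsafe(conn, malicious))  # returns every row: [(1,), (2,)]
print(find_user_safe(conn, malicious))    # returns no rows: []
```

The same principle, keeping data out of the command channel, applies to OS command injection as well.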
Next we have risky resource management. The weaknesses in this category are related to ways in which software does not properly manage the creation, usage, transfer or destruction of important system resources. And like the prior list, these are included because they occur frequently, and their order will change over time. The CISSP exam, however, is not going to ask where a particular item appears on which list; being familiar with what the attacks are in general, their definitions, and the general ideas of how they're carried out is what's appropriate for the exam.
Number 3, a buffer overflow. Number 13, path traversal. Number 14, download of code without an integrity check, unfortunately quite common. Number 16, inclusion of functionality from an untrusted control sphere. Number 18, use of potentially dangerous functions. Number 20, incorrect calculation of buffer size, which can of course lead to a buffer overflow. Number 23, uncontrolled format string. And number 24, integer overflow or wraparound.
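The last item, integer overflow or wraparound, is worth a quick illustration because it often leads directly to the buffer size errors on the same list. Python integers do not overflow, so the sketch below emulates unsigned 32-bit arithmetic with a mask; the field names and values are invented for illustration.

```python
# Emulate unsigned 32-bit addition to show the silent wraparound behavior
# that C-family languages exhibit (CWE-190, integer overflow or wraparound).
MASK32 = 0xFFFFFFFF

def add_u32(a, b):
    # Model of unsigned 32-bit addition: the result silently wraps around.
    return (a + b) & MASK32

# An attacker-supplied length near the 32-bit maximum makes the size
# calculation wrap to a tiny number...
header_len = 16
payload_len = 0xFFFFFFF8            # hypothetical attacker-controlled value
total = add_u32(header_len, payload_len)
print(total)                        # 8, far smaller than intended

# ...so a buffer sized from `total` is much too small for the real payload,
# and a later copy of payload_len bytes overflows it (incorrect calculation
# of buffer size leading to buffer overflow).
```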
In failing to manage resources properly, it's possible for buffer overflows to be forced to occur, and those give an attacker a great deal of potential control over the system. And here we have a category of equal concern: porous defenses. These are related to defensive techniques that are misused, abused or just plain ignored. Going down this list, these are things that typically can be written into policy to make our defenses more robust. To name just a few: use of hard-coded credentials, at number seven on this list (I'm surprised it's not higher); missing encryption of sensitive data; number 10, reliance on untrusted inputs in a security decision; or number 11, execution with unnecessary privileges.
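As one concrete example of closing a porous defense, here is a minimal sketch of avoiding hard-coded credentials by reading a secret from the runtime environment instead of the source code. The variable name `DB_PASSWORD` is an assumption for illustration; in practice a dedicated secrets manager is often the better home for such values.

```python
import os

# ANTI-PATTERN (CWE-798): a credential baked into source code ends up in
# version control and every deployed copy, and cannot be rotated without
# shipping a new release. For example, never do this:
#   DB_PASSWORD = "s3cr3t-hunter2"

def get_db_password():
    # Better: pull the secret from the runtime environment (or a secrets
    # manager), and fail loudly if it is missing rather than falling back
    # to a built-in default.
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password
```

Failing fast when the secret is absent is deliberate: a silent default would itself be a porous defense.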
The things on this particular list are things that we probably have a great deal of control over and simply do not do. Here we have the SANS Critical Security Controls from 2014, a list of measures that can be put in place to prevent a lot of the occurrences from those three prior lists.
As you can see from these lists, you have many of the steps that we know that we should be taking to prevent these adverse events from happening. For example, inventory of authorized and unauthorized devices. That puts a perimeter around what we should and shouldn't have. Likewise, number two, inventory of authorized and unauthorized software. One of the areas that is a perennial problem is security misconfiguration.
Number three suggests that we should have secure configurations for all of our devices and that we should have these committed to the corporate memory so that we have a standard that can be enforced.
Number four, continuous vulnerability assessment and remediation. This represents the ongoing security management program. The rest of these controls illustrate various things that, as I say, we have a great deal of control over, and that we should be managing, monitoring and correcting as necessary, as promptly as they're identified.
One of the sources of information that tell us whether those things are working as planned, or at all, are logs. Now, logs, of course, are records of events that are occurring within the systems and networks of the organization. These entries (typically time-stamped events) will identify who, what, when and, perhaps, how a particular event was conducted. Each entry contains information related to that specific event and identifies when and where it took place. Containing such information, logs are themselves quite sensitive. We need to protect them to make sure the information they hold can provide us actionable intelligence about what is happening in our system environment. Organizations typically generate, transmit, store, analyze and dispose of log data using various tools. But, as I say, logs themselves are quite sensitive because of what they contain, and, if corrupted, could lead us to make false decisions and take wrong steps, possibly exposing our systems and networks to even greater hazards.
So part of our testing is going to include looking at the various controls and the security software that we have in place. And here we have listings of the various kinds of controls that are most common: antimalware, intrusion detection and prevention systems, remote access software, web proxies, of course, our favorites, firewalls, properly configured and operated routers, authentication servers, vulnerability management software, and our network access control and our network protection servers.
All of those systems will generate logs, and those logs need to be reviewed to determine what's actually going on in our networks and systems. They typically fall into a couple of categories. We have system events, which track performance and trends for operations; these look for anomalies and record tracking problems, irregularities and, of course, unknowns. And they track violations: failures to comply with access rules on the part of users or possibly processes.
Another form of log data is for audit purposes. We have the operational type: evidence of proper performance and of corrective actions that have taken place. We also have to have audit records to support our compliance efforts: evidence of adherence to compliance requirements, or of failures of adherence, and evidence supporting potential legal or regulatory proceedings.
Now some examples of commonly logged information: client requests and server responses to those requests; account information such as successful and failed authentication attempts, where both the true positives and the false positives really need to be reviewed; usage information such as the number of transactions occurring in a certain period, telling us things about our operational load and capacity; and significant operational actions such as application startup and shutdown, application failures and major configuration changes. Of course, logs will also inform us about breaches that have occurred.
Now normally, investigation of a breach starts with evidence that the thing has happened. And we many times have to work our way backwards, deconstructing, so that we can get back to the original point where the breach may have occurred. Logs contain the evidence that will support this deconstruction and the logs, therefore, need to be protected from confidentiality or integrity breaches. It means that they need to be secured to prevent any sort of manipulation or scrubbing. Logs improperly secured in storage or transit are susceptible to tampering and this is exactly what we need to prevent. Logs, of course, must be preserved so that we can examine them. So their availability is critical. Logs need to be protected to ensure this availability.
Now, testing requires review after a run concludes in order to judge the results, and a log, of course, will record them. It also supports recovering from halts, where a quick review can ascertain the cause and the remediation that needs to be put in place. One thing to take account of is that many logs have a maximum size. Now, the size limitation is meant to conserve system resources, because some logs can balloon up to very, very large sizes. But on a more practical level, the size limitation makes analysis, search, and review practical things that we can do. And one thing that we need to be very careful of is that when a log reaches its maximum capacity, one of two things typically happens: either the system shuts down, or log overwrite begins, and records that we may need as critical indicators of events that have happened start to be overwritten and obliterated.
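The usual middle ground between halting and overwriting is log rotation: when a file hits its cap, it is rolled over to a numbered backup rather than destroyed. A minimal sketch using Python's standard library, with the tiny size cap and logger name chosen purely for demonstration:

```python
import logging
import logging.handlers
import os
import tempfile

# When a log reaches its size cap we want controlled rotation, not a halt
# and not a silent overwrite of the oldest (possibly critical) records.
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "app.log")

handler = logging.handlers.RotatingFileHandler(
    log_path,
    maxBytes=1024,   # cap each file at 1 KiB, tiny, for this demonstration
    backupCount=3,   # keep app.log.1 .. app.log.3 before discarding anything
)
logger = logging.getLogger("rotation-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

for i in range(200):
    logger.info("event %d: user login check", i)

# The history now spans several files instead of one truncated log.
print(sorted(os.listdir(log_dir)))
```

Note that `backupCount` still bounds total retention; records older than the oldest backup are discarded, so the cap must be sized against your retention requirements.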
Now, logs themselves are, of course, a record of things that have happened. Log analysis is thus reactive to system events: it records what has happened; it cannot tell the future. We use it for anomaly detection of events that have occurred, system problems, user behaviors, and other unknowns to be investigated. It means that we have to have well-put-together processes for analyzing the logs, because without them the logs are just so many bits and bytes, giving us no actionable intelligence whatever. The value of the logs is then fully compromised, not because they aren't recording, but simply because we're not looking. So we must implement proper practices to have the logs run, have them properly stored and protected, and then analysis processes to ensure that we do extract the informational value from them. This needs to be prioritized properly: we have to establish policies and procedures for log management, and create a sound infrastructure to support it.
Logs require support because, without it, they become so much unused space, unused data from which value cannot be gathered. One of the things logs can support is the analysis of what users are doing. Using the log reports of real user monitoring, we are able to analyze what is actually happening in production systems. By capturing and analyzing every transaction, we're able to see resource consumption, system behavior, program execution, and authorized usage. It is a form of passive monitoring, because all we're doing is capturing and then reviewing what occurs. And we rely on web monitoring services that continuously observe a system, tracking its availability, functionality, resource consumption, responsiveness, and other characteristics.
To complement real user monitoring we have synthetic performance monitoring. What this does is allow us to put together scenarios to test various theories. This involves having external agents run scripted transactions against a web application as a simulation. They're meant to follow the steps that a typical user might follow when processing a particular transaction scenario. Obviously they don't track real user sessions, but they serve as a test of a theory or a potential transaction scenario, demonstrating consequences, resource consumption, and other variables.
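The idea of a scripted transaction can be sketched very simply: an agent replays a fixed sequence of user steps and records the outcome and timing of each. Everything here is hypothetical; `fake_app` stands in for the application under test, where a real agent would issue actual HTTP requests or drive a browser.

```python
import time

def fake_app(step, session):
    # Stand-in for the web application under test; a real synthetic agent
    # would make network calls here instead.
    session.setdefault("history", []).append(step)
    return {"status": 200, "step": step}

def run_synthetic_transaction(app, script):
    """Replay a scripted sequence of user steps, recording result and timing."""
    session, results = {}, []
    for step in script:
        start = time.perf_counter()
        response = app(step, session)
        elapsed = time.perf_counter() - start
        results.append({
            "step": step,
            "ok": response["status"] == 200,
            "seconds": elapsed,
        })
    return results

# Script mirroring what a typical user would do in this scenario.
script = ["load_login_page", "submit_credentials", "open_dashboard", "logout"]
results = run_synthetic_transaction(fake_app, script)
print(all(r["ok"] for r in results))  # True when every step succeeded
```

Because the script is fixed and repeatable, the same transaction can be replayed around the clock, which is exactly what makes it useful for availability and performance baselines.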
So the types of monitoring that we can do, as either real user monitoring or as synthetic, can be website, database or TCP port monitoring. The benefits of synthetic monitoring include theory validation of potential operational conditions or outage scenarios. It can focus on what monitoring application availability 24 by 7 might reveal, simulating what might have been the causative factors when real scenarios have presented difficulties. Synthetic monitoring can tell us, as a test, whether or not a remote site is reachable. It helps us understand the performance impact of third-party services through simulation. It allows us to monitor the performance and availability of software-as-a-service applications and their supporting cloud infrastructure. It allows us to test and examine various scenarios for business-to-business web services that use SOAP, REST or other web service technologies. It allows us to test theories for monitoring critical databases and queries for availability. It allows us to objectively measure service level agreements and the various service parameters, testing reality against theory. It allows us to baseline and analyze performance trends across various geographies. And it complements real user monitoring by validating various use scenarios, so that when we come across them in the real world we're able to react properly to them.
Now, code review and testing is a part of every testing process. Within this, security must be a priority in all phases of software development. What we're trying to do is identify possible vulnerabilities so that we can prevent them from being committed to the final versions. And preferably detect them as early in the process as possible.
The causes of security vulnerabilities are many. Most often, though, they're the result of bad programming patterns or misconfiguration of security infrastructures. They can, of course, also be functional or logical bugs in the security infrastructures themselves, or logical flaws in the implemented processes.
Now, the techniques commonly used are black box or white box testing. Black box testing, where we judge functionality based on the results rather than knowing exactly how it's done; and white box testing, where we try to relate functionality to the actual construction, knowing exactly how everything is processed. We have various dynamic testing forms as well as static testing forms. And we have manual testing, where we sit down and do it ourselves, or we run tools like Fortify or Veracode against our program modules.
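A black box test in miniature: the checks below exercise a function purely through its inputs and outputs, with the cases derived from a stated specification rather than the source code. The `is_strong_password` function and its rules (at least 12 characters, a digit, an uppercase letter) are hypothetical, invented here to illustrate the technique.

```python
def is_strong_password(pw):
    # Hypothetical function under test. The black-box tester does not need
    # to see this body; only the specification matters.
    return (len(pw) >= 12
            and any(c.isdigit() for c in pw)
            and any(c.isupper() for c in pw))

# Black-box checks: chosen from the specification, including boundary and
# negative cases, with no knowledge of the internal construction.
assert is_strong_password("Tr0ub4dor&horse")       # meets all stated rules
assert not is_strong_password("short1A")           # too short
assert not is_strong_password("alllowercase1234")  # no uppercase letter
print("black-box checks passed")
```

A white box approach to the same function would instead aim to cover each of the three conditions in the `and` chain explicitly, because the tester can see the construction.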
Considerations that have to be included in the overall strategy of testing include assessing our attack surface, size, and depth. What kind of an application is it? What kind of inherent vulnerabilities may exist? We have to be able to judge the quality of results and the usability of them. We have to test for supported technologies to ensure that they themselves are risk-neutral. And we have to look at performance and resource utilization to ensure that the programs themselves are efficient as well as effective.
This ends our first module in Domain 6, Security Assessment and Testing. Please join us for the next module, which will take up the subject of security control testing. Be sure to join us again for that.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in that role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. As a professional educator, he has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.