CISSP: Domain 3, Module 2
This course is the 2nd of 6 modules within Domain 3 of the CISSP, covering security architecture and engineering.
The objectives of this course are to provide you with an understanding of:
- How to capture and assess business requirements
- How to select controls and countermeasures based upon information systems security standards
- The security capabilities of information systems
This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Welcome back to the Cloud Academy presentation of the CISSP examination preparation review seminar. We're going to continue now with our discussion on domain three, and the topic area is capturing and assessing requirements.
Now, in the capturing and analyzing of requirements, these, of course, are the things that specify what the system must do and what sort of security the system must have. We go to all the key stakeholders for this particular system, this particular enterprise, and walk through a discussion with them: what does this thing need to do? What sort of information will it be handling? What sort of protective measures do we need to develop? Equally important: what sort of things could occur that would disrupt your operation? What might we do in the area of information security that could interfere with what you're doing, knowing full well that a compromise on certain points is going to have to be reached? We're going to look at some frameworks for deriving requirements and directing how we can implement them to reach an optimal solution: one where the business is not unduly impeded, where information, precious as it is, is not compromised, and where everything that can reasonably, and preferably a little better than reasonably, protect the system and the information gets implemented. And people on all sides of the issue, the business folks, the security folks, the auditors, the regulators, can come to an agreement that it is doing the job it needs to do, achieving the operational objectives of the enterprise and those of external compliance sources.
Now, requirements generally fall into two broad categories. Functional requirements are the things that the system, or whatever is under investigation and ultimately to be implemented, needs to do: the functions it must perform, operational and security-related. The other broad category is the non-functional requirements. Contrary to what the name seems to indicate, these are not requirements that do nothing; they describe the qualities, attributes, or conditions that must be created within the environment, conditions that produce evidence that the controls are working and that allow operational performance and security to function correctly. To produce the desired end state, the functional and non-functional requirements should enhance these qualities: resistance, the ability to withstand attempts to subvert normal operations; robustness, the strength to perform correctly under a range of conditions without undergoing complete failure; resilience, the flexibility of functionality such that operations can continue even after some form of assault, attack, or error has happened; recoverability, so that the system can be returned to a trusted state and then corrected to continue its operational performance; redundancy, through conditions and operations built into the system that enhance it; and, overall, increased reliability, so that the system performs in a way that reflects the necessary qualities of trust and assurance we're trying to achieve.
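The distinction between the two categories can be made concrete with a short sketch. This is only an illustration in Python, with hypothetical requirement IDs and descriptions; it is not part of any standard:

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    FUNCTIONAL = "functional"          # things the system must do
    NON_FUNCTIONAL = "non-functional"  # qualities the system must exhibit

@dataclass
class Requirement:
    identifier: str
    description: str
    kind: Kind

# Hypothetical examples, one per category
reqs = [
    Requirement("R1", "Authenticate users before granting access",
                Kind.FUNCTIONAL),
    Requirement("R2", "Recover to a trusted state within 4 hours of failure",
                Kind.NON_FUNCTIONAL),
]

# Separating the two categories for review
functional = [r for r in reqs if r.kind is Kind.FUNCTIONAL]
print(len(functional))  # 1
```

Note that "recover to a trusted state within 4 hours" is a non-functional requirement: it describes a quality (recoverability) rather than a function the system performs.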
So the mechanisms that we're going to use to capture requirements will include these sorts of activities. Vulnerability assessments, of course: examining what we have in place, or the intended design, to see what potential vulnerabilities exist. Our traditional risk assessments and threat modeling: seeing what sorts of attacks are out there, determining what kind of system we're looking at, either a design or one currently in service, what sort of data it has, and what threats are likely to be inflicted upon it. What are the users' operational needs? What regulatory compliance items are we going to have to account for and then achieve? What historical issues have we had with an implementation like this or something similar? And what constraints do we have, coming from the business, from regulatory sources, from operational needs, from whatever source they may come? All of these will provide input that serves to inform the design and operational functions the system in question will have to perform.
As we collect requirements of all kinds from all sources, we should be considering what our policy ought to be. Now, the policy at this point in the exercise needs to reflect, in broad general terms, what we want to achieve as an organization: anything the industry itself does from a regulation standpoint, reflective of best practices; what regulatory sources we have to account for; what legal liabilities we need to be prepared to deal with. Those sorts of things need to inform the kind of policy we're going to write. As we write the policy, we need to keep in mind that it will have to be modeled in some way, so that we know what it will look like when it becomes an implemented operational artifact within the system. It also means we're going to have to do feasibility research on whether we can actually achieve what we've written, because it will need to go into the system in the form of code. Then we're going to have to build a formal security model that verifies that everything built into the policy that ultimately makes it into an implemented form actually does the job it's supposed to do. As we've already said in this course, and as many management professors and consultants have said, "If you can't measure it, you can't manage it." We have to prove that the model works effectively, without undue interference or impediment to the business objectives it's supposed to be helping enable, or at least staying out of the way of. That means we'll have to come up with evaluation criteria. The aim of system assurance is to verify that the system meets the stated set of goals, but we have to have a way of measuring it.
Therefore, we have to establish what our evaluation criteria, and the specific metrics within those criteria, must be, so that we can measure them and determine whether what we are designing and building, or what we are proposing to modify, will achieve those goals. Now, there are a number of standards that have been developed over the years that can point us in the right direction, and for the next few slides, we're going to examine a few of those. One process that the US government has used for a few decades is called certification and accreditation; it is now called system security assessment. This is a two-part process. Overall, its objective is to determine how well a system, be that a design or one in operation that requires reexamination, measures up to the desired, preferred, or required level of security and performance in the real world, and then to determine whether or not we should proceed with it based on whatever the results may illustrate.
Now, the certification phase involves technical and non-technical evaluation techniques, where we actually go in and test the system itself in various ways. We review the documentation and policy and their implementation, so that we can see that what actually operates in the code and what is stated in the guidance documentation are in alignment. It's going to require substantive testing to make sure that what we think it does, it actually does. Or, if it doesn't do it exactly the way the documentation says, how it deviates, whether it still meets the objective, and whether in such ways it's acceptable. We compile our results and compare them to an established baseline, if there is one, to make sure that when the system meets all of these things, it's meeting them in the way that has been specified, in the way its operational environment is going to require of it. If we come out with a positive result, then we enter the next stage, which is the accreditation phase.
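The idea of comparing compiled results against an established baseline can be sketched as a simple comparison. The control names and pass/fail values here are hypothetical, purely for illustration:

```python
def certification_deviations(test_results: dict, baseline: dict) -> list:
    """Return the controls whose observed result deviates from the
    established baseline (hypothetical control names and values)."""
    return [ctrl for ctrl, expected in baseline.items()
            if test_results.get(ctrl) != expected]

# Hypothetical baseline and observed test results
baseline = {"password-policy": "pass",
            "audit-logging": "pass",
            "encryption-at-rest": "pass"}
observed = {"password-policy": "pass",
            "audit-logging": "fail",
            "encryption-at-rest": "pass"}

print(certification_deviations(observed, baseline))  # ['audit-logging']
```

Each deviation would then be assessed: does it still meet the objective, and is it acceptable in its operational environment?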
So in the accreditation phase, which follows the certification phase, we put together a very comprehensive report, including all aspects that had been projected to be tested and those that were, and all the results, and we make a presentation to management. During that presentation, a full discussion takes place of everything that was done and everything that was found out, along with any corrective action, remediation of gaps, insufficiencies, or other findings that required remedial action, and what the results of that were. We also have to discuss what residual risks remain and what steps need to be taken to manage those. We conclude with a recommendation for further action, which means we say either, "This is not acceptable in its current form and needs the following things," or, "It's acceptable in its current form so long as we take these steps," or, "It's acceptable and there is no corrective action that we recommend at this time." If the approving officer, that's what AO stands for, approves, they execute the acceptance statement and accredit the system.
Now, this is a very significant step, because in establishing the accreditation and signing off, the AO is accepting the risk on behalf of the organization he represents. He's accepting the responsibility to perform whatever critical actions need to be done to establish full compliance, and he's agreeing to see it through to get the system into the correct state, based on the recommendations and corrective action plan. The issuance is an authorization to operate, or ATO. If there are remedial actions that must be executed within a time period, then an interim ATO, or IATO, is issued, and a project goes into effect to get those corrective actions performed. If they're performed within the allotted period, 30, 60, 90, 180 days, whatever it might be, then it automatically transitions to the ATO. The basic source that produced this process of certification and accreditation is the famous "Orange Book," the Trusted Computer System Evaluation Criteria. This was the DoD standard, established in 1983, that set the basic standards for implementation of protections within the systems then most prevalent, which at the time, of course, were mainframes. It was intended as a guide to help the DoD find products that meet those basic standards. It focused strongly on enforcing confidentiality; it did not disregard availability or integrity, but its primary focus was indeed confidentiality. It stayed in effect as the accepted standard until the Common Criteria officially superseded it in the year 2000. As a historical artifact, the TCSEC, the so-called "Orange Book," was based on the Bell-LaPadula model that we discussed earlier.
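The ATO/IATO decision logic described above could be sketched roughly as follows. The function name, parameters, and the fixed set of statuses are illustrative assumptions, not part of any official process definition:

```python
from datetime import date, timedelta

def authorization_decision(residual_risk_accepted: bool,
                           open_corrective_actions: int,
                           remediation_days: int = 90) -> dict:
    """Sketch of the AO's decision: denial if residual risk is not
    accepted, full ATO if no corrective actions remain, otherwise an
    interim ATO (IATO) with a remediation deadline."""
    if not residual_risk_accepted:
        return {"status": "DENIED"}
    if open_corrective_actions == 0:
        return {"status": "ATO"}
    # IATO: remedial actions must be completed within the allotted period
    return {"status": "IATO",
            "deadline": date.today() + timedelta(days=remediation_days)}

print(authorization_decision(True, 0)["status"])  # ATO
print(authorization_decision(True, 3)["status"])  # IATO
```

Once the corrective actions tracked under the IATO are closed out within the allotted period, the status would move to the full ATO.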
Now, the "Orange Book" evaluation criteria set four divisions. Starting at the bottom: D, minimal protection; essentially, this meant that the system either hadn't been evaluated at all or met only the very barest minimums. One step up: C, discretionary protection, which specified characteristics we take for granted today, like usernames, passwords, and various rules. C1 is the lower of the two levels, and C2 was the one most commonly assigned to systems at the top level of non-classified processing for the US DoD. Moving up: B, mandatory protection, with three levels, beginning with B1, labeled security protection; then B2, structured protection; and B3, security domains. At this division, classified processing was allowed. At the top: A, verified protection, with the highest level, A1. There is an A2, called "beyond A1," but that was basically theoretical in concept.
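Since the divisions form a strict ordering from D up through A1, a rating comparison can be sketched as a simple ordered lookup. The class titles below follow the TCSEC's published names; the `meets_or_exceeds` helper is a hypothetical illustration, not part of the standard:

```python
# TCSEC classes in ascending order of assurance (published class titles)
TCSEC_LEVELS = {
    "D":  "Minimal protection",
    "C1": "Discretionary security protection",
    "C2": "Controlled access protection",
    "B1": "Labeled security protection",
    "B2": "Structured protection",
    "B3": "Security domains",
    "A1": "Verified design",
}

def meets_or_exceeds(level: str, required: str) -> bool:
    """True if `level` is at or above `required` in the TCSEC ordering."""
    order = list(TCSEC_LEVELS)  # dict preserves insertion order
    return order.index(level) >= order.index(required)

print(meets_or_exceeds("B2", "C2"))  # True
```

So a B2-rated system would satisfy an environment requiring C2, but a C1 system would not satisfy a B1 requirement.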
Now, as far as how the "Orange Book" and its evaluation system will appear on the test, current rumor has it that it's not likely to appear in any great profusion. It has been superseded, officially retired since the year 2005, but a question or two may still appear. So it should not be disregarded, but it doesn't hold the very large space it did in past versions of the exam. The ITSEC, developed in the European community, was a response to the fact that the "Orange Book" was built around US DoD requirements and focused primarily on confidentiality. The European Community, somewhat begrudgingly, used that system for a time while it developed its own. In the years 1992 to 1995, it developed the Information Technology Security Evaluation Criteria, which sought to broaden the areas of concern beyond confidentiality to include integrity and availability. It set a very methodical approach to evaluation and set the stage for what followed it, the Common Criteria. It was in effect from 1995, when it was finally signed off, until the year 2000, when it too was superseded by the Common Criteria.
Now, the Common Criteria started out as a multinational joint effort to establish a global standard. It was eventually submitted to the ISO and assigned ISO/IEC 15408 as the Common Criteria standard. It was truly the first international product evaluation criteria system, and it borrowed heavily from existing standards: the United States' own "Orange Book" and the European Union's ITSEC that we just covered. What it specified was a process that developed two sets of requirements, functional and assurance, for categories of vendor products deployed in a particular type of environment. One of the most important things it did was establish the nomenclature you see here. The protection profile, or PP, is a statement of a security need, usually by a customer or prospective customer, for a particular kind of product, be it hardware or software. The response from a potential vendor is the security target, or ST: a narrative describing how the potential or existing product offering would satisfy that protection profile. The TOE, or target of evaluation, is the actual product specified in the security target that embodies all the functions and features described in the ST. The Evaluation Assurance Level, or EAL, is the level established once the evaluation process has concluded and the product has a final rating. And the evaluated products list, or EPL, contains all of the successfully evaluated products, both hardware and software; anyone can consult this list when selecting products to buy.
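The relationship between a protection profile, a security target, and the TOE can be sketched as a simple subset check: the ST satisfies the PP if every required function is claimed. The product and function names below are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class ProtectionProfile:
    """PP: a customer's statement of security need."""
    name: str
    required_functions: set

@dataclass
class SecurityTarget:
    """ST: a vendor's claim of how its product (the TOE) satisfies a PP."""
    toe_name: str           # the TOE is the actual product under evaluation
    claimed_functions: set

def satisfies(st: SecurityTarget, pp: ProtectionProfile) -> bool:
    """The ST satisfies the PP if every required function is claimed."""
    return pp.required_functions <= st.claimed_functions

# Hypothetical firewall PP and a vendor's response
pp = ProtectionProfile("Firewall PP", {"packet-filter", "audit-log"})
st = SecurityTarget("ExampleFW 2.0", {"packet-filter", "audit-log", "vpn"})
print(satisfies(st, pp))  # True
```

In the real process, of course, the claims in the ST are then verified against the actual TOE through evaluation, producing the final EAL rating.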
The Common Criteria produced a seven-level rating system. EAL0, which is not listed here, simply means the product has not been evaluated at all, so by default, every product prior to its evaluation is rated EAL0. EAL1 through 7 then rate, with increasing rigor, the product's ability to meet the security target statements. Going through the list, we move from functionally tested to structurally tested to methodically tested, increasing in rigor and depth all the way up to EAL7, and there are products on the list at all levels. These levels can be set as targets for the development of a particular product, or they can be set by the actual evaluation: a level may serve as the goal a vendor wants to achieve, even though the product might attain a higher level if tested for it, while other products are tested to the limit to see how high they get. This provides the functionality and the global reach that a security standard needs, so that it can be explored, evaluated, and used in a wide variety of environments.
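The seven levels can be laid out as a simple lookup using the published EAL titles; the `describe` helper and its treatment of EAL0 are illustrative assumptions, not part of the standard:

```python
# Published Common Criteria EAL titles, in ascending order of rigor
EAL_NAMES = {
    1: "Functionally tested",
    2: "Structurally tested",
    3: "Methodically tested and checked",
    4: "Methodically designed, tested, and reviewed",
    5: "Semiformally designed and tested",
    6: "Semiformally verified design and tested",
    7: "Formally verified design and tested",
}

def describe(eal: int) -> str:
    # EAL0 is used informally for "not evaluated", per the discussion above
    return EAL_NAMES.get(eal, "Not evaluated (EAL0)")

print(describe(4))  # Methodically designed, tested, and reviewed
print(describe(0))  # Not evaluated (EAL0)
```

EAL4 is the highest level commonly pursued for general commercial products; the formal-methods levels above it are typically reserved for high-assurance systems.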
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in that role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. As a professional educator, he has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.