This course is the second and final of two modules within Domain 2 of the CISSP, covering asset security.
Learning Objectives
The objectives of this course are to provide you with an understanding of:
- How to ensure appropriate retention using archiving, retention policies, and best practices
- How to determine data security controls, focusing on critical tenets, data encryption, and the Security Content Automation Protocol (SCAP), in addition to considerations and baselines
- How to establish handling requirements, where we look at the importance of labeling and the destruction of different media types
Intended Audience
This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.
Prerequisites
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.
Feedback
If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.
So we're going to move on to our next section, in which we're going to determine data security controls.
Now, one of the forms of documents that we have is a baseline. The baseline, just to give it a definition, establishes a minimum set of safeguards to protect all or some of the enterprise's IT systems and, with that, the information they hold. Baselines are built using a framework and enforce the fundamental security concepts we've spoken of to protect the confidentiality, integrity, and availability of the systems and the information assets that are contained within or flow through those systems. In that, a baseline specifies the minimum level that has to be achieved and maintained. It describes and lays the foundation for our defense-in-depth program through the administrative, technical, physical, and governance aspects of the enterprise and our information protection program.
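To make the idea of a "minimum set of safeguards" a little more concrete, here is a minimal sketch in Python of a baseline expressed as machine-checkable data. The control names and threshold values are entirely hypothetical, chosen for illustration; a real baseline would be drawn from a framework such as those discussed below.

```python
# A minimal sketch of a security baseline expressed as data: each entry states
# a minimum safeguard that every in-scope system must meet or exceed.
# Control names and values are hypothetical, for illustration only.

BASELINE = {
    "password_min_length": 9,        # a minimum, not a recommended maximum
    "disk_encryption_required": True,
    "audit_logging_enabled": True,
    "max_patch_age_days": 30,
}

def baseline_gaps(system_config: dict) -> list[str]:
    """Return the baseline controls this system fails to meet."""
    failures = []
    if system_config.get("password_min_length", 0) < BASELINE["password_min_length"]:
        failures.append("password_min_length")
    if BASELINE["disk_encryption_required"] and not system_config.get("disk_encryption_required", False):
        failures.append("disk_encryption_required")
    if not system_config.get("audit_logging_enabled", False):
        failures.append("audit_logging_enabled")
    if system_config.get("max_patch_age_days", 9999) > BASELINE["max_patch_age_days"]:
        failures.append("max_patch_age_days")
    return failures

# Example: a system that falls short on two of the four minimums.
print(baseline_gaps({"password_min_length": 8, "disk_encryption_required": True,
                     "audit_logging_enabled": False, "max_patch_age_days": 14}))
# -> ['password_min_length', 'audit_logging_enabled']
```

The point of the sketch is simply that a baseline states minimums that can be checked and enforced; anything above the minimum is acceptable, anything below is a gap.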
So what considerations should we make? In looking at baselines and all of the different governance guidance that we have, we must address these questions. Which parts of the enterprise or systems can be protected by the same baseline? It would be fallacious to assume that we can establish the same baseline across the entire enterprise, except in its most general form: protect everything in accordance with its value, so to speak. Different needs arise in different areas, and whatever our baseline might be, it needs to recognize and take account of those. Next question: should the same baseline be applied throughout the whole enterprise? Again, only as long as your baseline is very general. My example was to protect everything in accordance with its value. That could be effectively applied throughout the entire organization, but beyond that, trying to break it down into details would be a substantially more difficult problem.
Next question: at what security level should the baseline aim? Here, the question of a security level may not even make sense unless you're thinking in terms of high, medium, and low; a grade of one, two, three, four, or five; or some other appropriately and well-defined scale. Typically, the protection that has to be applied will have to be applied in context with whatever the asset happens to be. General controls, yes, of course. But specific things have to be defined. Final question: how will the controls forming that baseline be determined? Again, you have to look at what the asset is, what its value is, what its meaning is to the organization, what effect its loss might have, a whole variety of factors. A lot of that comes down to the attributes of the asset being protected. There are many questions that these four will cause us to ask and answer. In the end, they will drive how we make our controls and our program appropriate to the protection needs we have. When we look at the controls in detail, we need to look at controls catalogs, and there are several different ones; here are a few. We have international and national standards. We have industry sector standards or recommendations; you could call them best practices or standard practices. And then, our own company: what we've done in the past, what we do in the way of solving our own problems, possibly even aligning those with industry standards or tailoring them to fit our unique situation.
Now, some examples, and these of course are quite well-known ones: ISO 27002, the code of practice that accompanies ISO 27001, the governance framework; in the U.S., NIST Special Publication 800-53, which is effectively a controls catalog of some 400 pages now; and the Information Security Forum's Standard of Good Practice. Now, if you compare these documents line for line, you will find that they differ quite widely, but in looking at the actual controls, the specifications, and how you go about analyzing the problem, you will probably find that they resemble each other down into the details as well. In one of the NIST guides from the Special Publication 800 series, SP 800-14, we have our generally accepted principles, sometimes referred to as the GASSP, the Generally Accepted System Security Principles. They begin with establishing information system security objectives for the given system, and the guide describes a general process for determining what those are. It recommends programs that focus on prevention, detection, response, and recovery types of controls. It directs that the protection of information should be done as we've already discussed: when it's being processed, when it's in motion, and when it's in storage. It makes certain kinds of assumptions, one of which, and this is a very important one, is that external systems are assumed to be insecure. To put that in its correct form, systems that are external to ours are unknown in terms of what their security level is, and therefore, to err on the side of caution, we assume that they are insecure when compared to our own, since we know what ours is.
We have to establish resilience for critical information systems. Now, that's a particularly heavily charged line. Calling an information system critical implies that it plays an extremely important role in our organization, and by saying that we need to establish resilience for a critical information system, it means that we have to engineer, operate, and take other steps to ensure, in accordance with what defines that critical nature, that it has resilience, which includes things like recoverability, resistance, and other traits meaning that the system, in being critical, can be relied upon or can be recovered and restored in ways that minimize exposures and losses. And in all cases, we have to have auditability and accountability. When we put controls programs in place, it's almost a given that we have to be able to audit them. As many management consultants and professors have said over the years, if you can't measure a thing, you can't manage a thing. Well, auditability, and establishing accountability through that audit, is the way that we measure, which means, of course, that we have to establish criteria and metrics. If we don't measure it through audit, then we don't know whether or not it's meeting its goal. It seems an obvious thing to say, and yet, in actual practice, auditing is something that oftentimes goes begging. But there's no overstating its importance.
Now, whatever our program elements might be, they are general. They're taken from one of these catalogs. They're taken from various sources. Our business, our organization, our government, our hospital, whatever it might be, is a unique organization. And so, we must tailor these things to fit the needs that we have identified in our own organization. We will need to scope these things to set the determined extent of applicability of these controls, and we have to decide what will be included and what will not be. To try, as the old saying goes, to eat this elephant all in one bite would be an exercise in futility, and we certainly wouldn't achieve the results that we need to. So we need to decide from the outset what is going to be part of this and what will not be. Maybe we do it all in phases, but regardless, we must set a scope so that we can accomplish it and move on to the next thing. And within that scope, we have to take these controls and tailor them. We have to customize them to meet the specific needs of the unique situations we're going to encounter. We need to structure these measures and methods so that we can integrate them with what we have in place already. It is frequently a mistake to take workflows that we have had in place and, with the idea in mind that what we used to do is, to use the hackneyed phrase, "how we've always done it," throw it all out and start from scratch. That frequently causes disruptions. A revolutionary approach, as revolutions often do, can cause a lot of carnage; it can cause a lot of disruption of a destructive nature. What we need to do instead, in our scoping and tailoring exercises, is integrate what we need to do so that we elevate, upgrade, and improve what we are doing, making it more secure in its performance, rather than simply taking what we do have, throwing it out, and starting from scratch.
Now, the Center for Strategic and International Studies has a group they call the 20 Critical Security Controls Initiative. So let's explore these for a few minutes. First, the five critical tenets. Offense informs defense: this is a fairly obvious thing to say, and yet it is something that is oftentimes not exploited nearly well enough. To say that our defense should be informed by the offense is really saying that if we know how we are being attacked, then by dissecting those attacks, we should be able to determine what the targets of those attacks are, what the intent is, what their goals are, and that will inform our defense, and we will be able to erect controls along those lines because it reflects what the priorities of the attackers are. We have to prioritize things because we can't make everything priority number one, and we really shouldn't fool ourselves; we shouldn't try to make them 1A, 1B, 1C. Things need to be done in a proper order, emphasizing the most critical things to us first and working our way down that list. We will always have to have metrics. As I said earlier, if we can't measure it, if we don't have our metrics to be able to measure this stuff, we won't know whether or not our program really is as good as we would like it to be, or how to tune it. We're not going to be able to fix everything. For some things, all we can do is set up that final control, continuous monitoring. Continuous monitoring is important because it informs our response capability. We may not be able to do anything of a preventive or countermeasure form for a particular situation, but we can respond to it if and when the conditions arise, and without continuous monitoring, we very deeply cripple that effort. Most standards these days extol the virtues of doing automation wherever and whenever possible. Automation has to be done in many different places, for example, having a firewall, having anti-malware, having IDS and IPS in place. These are clear examples of automation doing what a human being simply cannot do. And so, in finding those areas where automation provides us a service that we need, one that we really can't do ourselves, or one that automation simply does much better than we do, we would be foolish not to employ it.
Now, NIST has put out a standard known as the Security Content Automation Protocol, or SCAP. What this provides is a suite of specifications that establish standard formats and nomenclature by which software flaws and security configurations are communicated, both to machines and to the human beings that operate those machines. SCAP is a multi-purpose framework of specifications that supports automated configuration, vulnerability, and patch checking, technical control compliance activities, and security measurement. So as you see, it's looking not only at the mechanics, but at how the mechanics are going to be measured. It also establishes goals for the development of SCAP, including standardizing system security management, promoting interoperability of security products, and fostering the use of standard expressions of security content. One of the hampering effects that we have in most of our operations these days is the fact that, while we each speak in our own standardized terms, we don't speak across industries, across organizations, across to our peers, or our competitors, or to the government. We don't speak in a common language. We don't have a common taxonomy and nomenclature. There has been much movement in that direction, but we still lack a lot of it. We still lack a lot of sharing of information; there doesn't seem to be a proper way that we can do that without creating exposures. SCAP can provide a vehicle through which we can define those things.
Now SCAP, currently in version 1.3, covers five areas. You'll find SCAP defined in NIST SP 800-126, which has recently been updated. The five categories are languages, reporting formats, identification schemes, metrics and scoring, and integrity. As I said, being able to measure these things is extremely important, and we have to look at integrity as well. Now, a couple of things become clearer as we read the more detailed levels of this slide. It becomes obvious that this is aligned with the Common Vulnerabilities and Exposures (CVE) standard maintained for the U.S. government by the MITRE Corporation, and with the Common Vulnerability Scoring System (CVSS). So it's already tied very closely to nationally accepted standards, and this can provide a very strong basis for moving forward with our own internal program.
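To show why this shared nomenclature matters in practice, here is a small Python sketch, not part of any SCAP tool, of how findings expressed with standard CVE identifiers and CVSS base scores can be merged and prioritized mechanically, no matter which scanner produced them. The two CVE entries used are real, published identifiers (Log4Shell and Heartbleed) included purely as examples; the severity bands are the standard CVSS v3 ranges.

```python
# Sketch: standard identifiers (CVE) plus standard metrics (CVSS) let results
# from different tools be combined and compared without translation.
import re
from dataclasses import dataclass

CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")  # e.g. CVE-2021-44228

@dataclass
class Finding:
    cve_id: str        # standardized identifier, maintained by MITRE
    cvss_base: float   # standardized metric, 0.0-10.0
    host: str

    def severity(self) -> str:
        """Map a CVSS v3 base score onto its standard severity band."""
        if self.cvss_base >= 9.0:
            return "Critical"
        if self.cvss_base >= 7.0:
            return "High"
        if self.cvss_base >= 4.0:
            return "Medium"
        if self.cvss_base > 0.0:
            return "Low"
        return "None"

def ingest(findings: list[Finding]) -> list[Finding]:
    """Keep only well-formed records, sorted worst-first, regardless of source tool."""
    valid = [f for f in findings if CVE_PATTERN.match(f.cve_id)]
    return sorted(valid, key=lambda f: f.cvss_base, reverse=True)

# Example: findings from two hypothetical scanners merged into one prioritized list.
for f in ingest([Finding("CVE-2021-44228", 10.0, "web01"),
                 Finding("CVE-2014-0160", 7.5, "vpn02")]):
    print(f.cve_id, f.severity(), f.host)
```

Without the common taxonomy, each tool's output would need a bespoke translation step before any of this aggregation could happen.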
The Framework for Improving Critical Infrastructure Cybersecurity, in its way, suffers from some of the same limitations, some of the same unique characteristics, that afflict others. The current taxonomy has organizations start by describing their current security posture, the as-is, and then set their target end state for the cybersecurity program, the should-be. This typically will be derived from an external source, a compliance standard, or something of that sort. We need to be able to identify and prioritize our opportunities for improvement within the context of a continuous and repeatable process. We need to be able to assess the process, of course, and our progress towards that target state, as we would with proper project management techniques, methods, and controls. And we need to be able to communicate the cybersecurity risk to all of the stakeholders involved. One of the things that needs to be part and parcel of this type of program is the ability to switch our philosophy from repetitive remediation to one of continuous improvement. Now, continuous improvement is a term that has been tarnished over the years, because continuous improvement doesn't seem to make things easy, but continuous improvement is the way that we move towards more effective, and more cost-effective, solutions, whereas repetitive remediation is how we've always done things. We know that it works, but it remains as costly as ever, and in fact, moving forward in time, it grows in cost because the damage is greater than it has ever been. So we need to take this framework and move ourselves from a philosophy of repetitive remediation to one of continuous improvement.
In this framework, there are profiles, tiers, and the framework core. The integration of the framework with our organization's infrastructure is a fundamental pillar of how the framework is going to function. We define these things realizing that we need to do this in bits and pieces, but it needs to be based on a strategic model so that all the bits and pieces are tied together, so that they flow together, and so that, as necessary, they pass information from one to another through their modules and their functions. It's not a piecemeal series of point solutions; it needs to be tied together and then integrated into the infrastructure of our business to ensure it's appropriate and cost-effective. Part of that, of course, will always be protecting data at rest. To put this in the simplest terms, this means making sure that data of a sensitive or individually identifiable form is always encrypted, so that it cannot be seen, obtained, or broken by an unauthorized person, and can be returned to a human-readable form only by authorized persons through authorized processes. This applies to drives, cloud, backup tapes, or any other form of at-rest storage.
So, to examine this in greater detail, we have data at rest, where malicious users are, of course, going to attempt to gain physical or logical access to the device through some method, all of which would be unauthorized. From that, they will transfer the information from the device they have gained access to and put it on their own system, where they can do what they will with it. They may seek to take other actions that jeopardize the confidentiality of the data on that device, or take steps to ensure that we no longer have access to the information they have now obtained. Therefore, our protection program should address how we detect attempts at unauthorized access through logging and through real-time alerts, how we make every reasonable effort to prevent unauthorized access to the human-readable form, and how we verify the success or failure of those attempts in order to facilitate our own optimal responses. So what we recommend is this. We implement a data loss prevention program as a persistent preventive control. We develop a data recovery plan to address event response needs, because every security program will require proactive and reactive activities; we can't prevent everything and we can't rely on response for everything, so it must be a balance between the two. And we use compliant encryption algorithms and tools at commensurate strength, meaning that you use the strongest form of these to protect data in accordance with its value.
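As a minimal sketch of what "encrypting covered data at rest" can look like at the file level, here is an example using the third-party Python cryptography package's Fernet recipe (authenticated symmetric encryption). The record contents and file name are hypothetical, and key management is deliberately simplified; in a real deployment the key would come from a key vault or HSM and the protection would more likely be full-disk or database-level encryption.

```python
# Minimal sketch of encrypting covered data before it is written to storage,
# using the "cryptography" package's Fernet recipe. Key handling is simplified:
# in practice the key lives in a key-management service, never beside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: retrieve from a key vault
cipher = Fernet(key)

record = b"patient_id=12345, diagnosis=..."   # hypothetical sensitive record
token = cipher.encrypt(record)                # ciphertext safe to write to disk

with open("record.enc", "wb") as f:
    f.write(token)

# Later, only an authorized process holding the key can recover the plaintext.
with open("record.enc", "rb") as f:
    restored = cipher.decrypt(f.read())
assert restored == record
```

The essential point is the separation of duties: the storage medium can be lost or copied, but without the key, the attacker holds only ciphertext.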
We should always enforce strong password requirements for complexity, cycles, and so forth. And we should not use the same password for other systems, to reduce potential single points of failure. The optimal standards that should be used for passwords, as you see here, call for nine characters or more; nine to 12 is common. A password should contain characters from at least two of the three possible character classes: alphabetic, numeric, and special characters. Most policies include elements from all three. We should, of course, employ other methods that defeat the patternization of passwords across systems. Users fall into bad habits, and they tend to replicate those bad habits across multiple systems, making them all vulnerable. Another method is to implement generational cycles so that when users have to change their passwords, they're not able to reuse them except after a very long number of generations. If we're going to have secure password management tools, we should be sure that they actually work as advertised, so that our passwords and recovery keys are in fact secured within them. Where passwords need to be shared with other users, and there are, of course, plenty of cases where that's true, we have to ensure that the passwords are sent separately from the encrypted files, and that users all know that the protective measures must be enforced. Now, it's very common advice to say not to write the password down and store it in the same location as the storage media. We have to be sure that the password policy we have doesn't undermine this by creating such difficulty that a user, limited by their memory, is going to break this particular rule because they simply can't remember the password. Take a password of 20 or more characters, generated at random by a system, for example. Is it strong? Quite likely. Is it memorable? No.
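To tie those requirements together, here is a small Python sketch enforcing the minimums just described, nine or more characters and at least two of the three character classes, plus a generational reuse check against salted hashes. The iteration count and the example passwords are illustrative assumptions, and a single salt is used for the whole history only to keep the sketch short; real systems store a distinct salt per stored password.

```python
# Sketch of a password policy check: length >= 9, at least two of the three
# character classes (alphabetic, numeric, special), and no reuse of a password
# already in the user's generation history (compared via salted hashes, so old
# passwords are never stored in the clear).
import hashlib
import os

def meets_policy(password: str) -> bool:
    if len(password) < 9:
        return False
    classes = sum([
        any(c.isalpha() for c in password),   # alphabetic
        any(c.isdigit() for c in password),   # numeric
        any(not c.isalnum() for c in password),  # special characters
    ])
    return classes >= 2

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 here is purely illustrative; the iteration count is indicative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def not_reused(password: str, salt: bytes, previous_hashes: list[bytes]) -> bool:
    return hash_password(password, salt) not in previous_hashes

salt = os.urandom(16)                               # one shared salt: sketch only
history = [hash_password("Winter2023!", salt)]      # hypothetical prior password
candidate = "Spring2024!"                           # hypothetical new password
print(meets_policy(candidate) and not_reused(candidate, salt, history))  # True
```

Note that the candidate above passes the letter of the policy while still being an obvious seasonal pattern, which is exactly the kind of cross-system bad habit the surrounding guidance warns about.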
Data at rest: recommendations. After the covered data is copied to removable media, we need to take a couple of other steps. We need to verify that the removable media works by following the instructions to read back the encrypted covered data. If applicable, we should securely delete the unencrypted covered data following secure deletion guidelines, as we've already discussed. Now, when we look at the containers for this data, as mentioned earlier, we need to be sure that any removable media has a proper title, a reference to the data owner, and the encryption date, to make sure it is properly identified as to ownership and control. Now, for the various forms of this media, we will find encryption tools such as self-encrypting USB drives, media encryption software, and file encryption software, and in the next few slides, we're going to examine a few examples. Here you see a table that shows self-encrypting USB drive brands, the Imation S520 and the Kingston DataTraveler 4000 and 6000, and it describes what each is best used for. Now, as you read this table, bear this in mind: these are vendor products, and though they are high quality, they will not be asked of you on the exam, because this test, as I've said, is vendor-neutral. But be aware that these make use of all of the things that we have spoken of: proper encryption algorithms, proper procedures for use, proper performance of destruction. All of those things are present in all of these products. So even though these may not be asked of you on the exam, it's important to see that they follow the best practices that we're speaking of.
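The verification step described above can be made concrete with a short sketch: read the encrypted copy back from the removable media, decrypt it, and confirm it matches the original before the unencrypted source is securely deleted. The file paths and the use of Fernet (as in the earlier data-at-rest sketch) are illustrative assumptions, not part of any particular product.

```python
# Sketch of verifying a removable-media copy: decrypt the copy and compare its
# hash against the original. Only on a successful match should the plaintext
# source be securely deleted per the guidelines discussed earlier.
import hashlib
from cryptography.fernet import Fernet

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_copy(original_path: str, media_path: str, key: bytes) -> bool:
    with open(original_path, "rb") as f:
        original = f.read()
    with open(media_path, "rb") as f:
        restored = Fernet(key).decrypt(f.read())   # raises if tampered or wrong key
    return sha256(original) == sha256(restored)

# Usage (hypothetical paths): proceed to secure deletion only if this is True.
# verify_copy("covered_data.dat", "/media/usb/covered_data.enc", key)
```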
So we move on to speak about data in motion. This, of course, is the case where we are using various technologies to prevent the contents of the message or the traffic being sent from being revealed, even if the message is intercepted. We all know that there are people who will be reading traffic as it travels through the wire or through the airwaves. Knowing that, we have to be sure this data is protected from their snooping. One form of this in-transit protection is link encryption, also known as hop-by-hop encryption. Now, this kind of encryption is typically done by common carriers across the public network for their own purposes; it's not typically the kind of thing that users will ask them to do. But what is important is that it provides protection between the links, as you see in the picture. Between the hops, we have encrypted channels. As with a tunnel, all data packets are encrypted, including headers, addresses, and the routing information, contained within a wrapper that routes them from one hop to the next. This is typically done at OSI Layer 2. Now, the disadvantage of link encryption produced by this method is that at each hop, the wrappers are stripped off and the headers are interpreted so that routing decisions can be made. Before the data leaves, it is given a new encryption wrapper and then sent on its way, but it does mean that at each hop, the data is, if only for a moment, vulnerable. From hop-by-hop, we go to point-to-point.
Now, in the picture, you see that we have the encrypted segment between the perimeter firewalls, but behind that, the data will not be encrypted, because it is internal to that particular local network. Here, the encryption is the same from the standpoint that the original data-carrying packet will be encrypted within the wrapper. Here, though, it will not be decrypted until it reaches the external interface of its destination firewall. When it reaches that, it will have its carrier stripped off so that the content can be filtered through the rules of the perimeter firewall as it passes into the destination network. But all across the public network, it will be encrypted the entire way, with no decryption at any hop. And then we have transport mode, or end-to-end. This involves encryption of the content only; the original headers and trailers are visible. The content is encrypted, and as it moves from one desktop to the other, routing is happening, the packet is traveling, but the content of the packets is encrypted and cannot be seen by anybody except at its destination desktop. So this end-to-end type protects everything within the packet's data-carrying container only. So, the data-in-motion description of the risk is that malicious users can intercept or monitor any data that is in plain text as it is transmitted across these networks in unencrypted form, which, of course, leaves these packets open to any form of capture or alteration of their data content between source and destination.
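The contrast between hop-by-hop and end-to-end protection can be shown with a small thought experiment in code. The "hops" below are just per-hop keys in a Python script, purely illustrative rather than a network implementation: in link encryption the payload is decrypted and re-encrypted at every intermediate node, so it exists briefly in plaintext at each hop, while in end-to-end encryption only the two endpoints ever see the plaintext.

```python
# Thought experiment: link (hop-by-hop) vs. end-to-end encryption of a payload.
from cryptography.fernet import Fernet

hop_keys = [Fernet(Fernet.generate_key()) for _ in range(3)]  # three successive links
end_to_end = Fernet(Fernet.generate_key())                    # shared by endpoints only

message = b"covered data in motion"

# Link encryption: each hop decrypts with the inbound link key and re-encrypts
# with the outbound one, exposing the plaintext at every intermediate node.
packet = hop_keys[0].encrypt(message)
for inbound, outbound in zip(hop_keys, hop_keys[1:]):
    plaintext_at_hop = inbound.decrypt(packet)   # visible to the intermediate node
    packet = outbound.encrypt(plaintext_at_hop)
print(hop_keys[-1].decrypt(packet) == message)   # True at the final link

# End-to-end: intermediate nodes only ever handle ciphertext.
sealed = end_to_end.encrypt(message)
# ...routed through any number of hops unchanged...
print(end_to_end.decrypt(sealed) == message)     # True only at the destination
```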
So the recommendations here are, of course, to recognize that these risks are real and, as we have learned from the news and various announcements from government agencies, really quite common. When a covered device is reachable via a web interface, the web traffic should be protected in transit using only strong security protocols such as TLS 1.1 or better, SSL itself being a deprecated protocol. Covered data transmitted over e-mail must be secured using similarly cryptographically strong e-mail encryption tools such as PGP or S/MIME, or TLS in transport mode. Non-web covered traffic should be encrypted via application-level encryption. Where an application database resides outside the application server, all connections between the database and the application should be encrypted using FIPS-compliant cryptographic algorithms. Where application-level encryption is not available for non-web covered data, it should be protected with network-level encryption such as IPsec, SSH, or TLS.
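As a minimal sketch of enforcing that "strong protocol only" recommendation on the client side, here is an example using Python's standard ssl module. It sets a floor of TLS 1.2, which is stricter than the TLS 1.1 minimum mentioned above, reflecting the fact that TLS 1.0 and 1.1 have themselves since been deprecated; the host name is hypothetical.

```python
# Sketch: a client context that refuses anything weaker than TLS 1.2 and
# verifies the server certificate (both defaults of create_default_context,
# plus an explicit minimum protocol version).
import socket
import ssl

context = ssl.create_default_context()            # certificate verification on by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSL, TLS 1.0, TLS 1.1

hostname = "example.internal"  # hypothetical covered service
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Negotiated protocol:", tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
```

The same idea applies server-side and in application configuration: make the strong protocol a hard minimum rather than a preference that can be negotiated away.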
These are some examples of very commonly employed encryption protocols. For web access, HTTP in the open; HTTPS for secured. For file transfer, we have FTP and RCP; for secured versions, we have FTPS, SFTP, and SCP. For a remote shell, instead of using telnet, use SSH version 2. For remote desktop, instead of VNC, radmin or RDP are suitable secure alternatives. Now, when you go about the business of selecting your encryption algorithms, it is typically assumed that a longer encryption key generally provides better protection, and it's likewise assumed that long, complex passphrases are stronger than shorter passphrases. Well, on their faces, yes, both of these are true. However, assuming that these are at the heart of the strength is really not the best way to proceed, because we have to consider things other than simply the encryption algorithm and key length. For example, we must consider the quality of the implementation, if we have any control over that, along with the encryption algorithm selection and the randomization of the seed values and initialization vectors we use, which is typically what these long, complex passphrases contribute to. So simply trusting to longer keys and more complex passphrases is insufficient; we must consider the problem more broadly. And in our wireless connections, of course, since we can't see who is doing what, and someone who's not visible, possibly not even in the same room as you, may be accessing our wireless connection, we have to be sure that we take a very cautious approach and put the appropriate controls into the connection to ensure that, wherever they might be, they won't see things they're not authorized to see. Therefore, we want to be sure that we are employing encryption as strong as the wireless connection permits, using nothing less than WPA2.
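The point about seed values and initialization vectors can be illustrated with Python's secrets module: keys and IVs should come from a cryptographically secure random source rather than from anything predictable such as a timestamp or the general-purpose random module. The sizes shown (a 256-bit key, a 128-bit IV) are common choices for AES, used here purely as an example.

```python
# Illustration: key material and initialization vectors must come from a
# cryptographically secure random source; key and IV sizes shown are typical
# AES choices and are illustrative only.
import secrets

key = secrets.token_bytes(32)   # 256-bit key from the OS CSPRNG
iv = secrets.token_bytes(16)    # fresh 128-bit IV, never reused with the same key

# What NOT to do: the 'random' module is predictable and unsuitable for keys.
# import random; bad_key = random.randbytes(32)   # predictable -> breakable

print(len(key), len(iv))
```

A long key generated from a weak or reused seed is weaker than its length suggests, which is exactly why implementation quality matters as much as the algorithm and key size themselves.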
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standing as a professional educator, having trained and certified nearly 8,500 CISSP candidates since 1998 and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.