This course is the 3rd of 6 modules within Domain 3 of the CISSP, covering security architecture and engineering.
Learning Objectives
The objectives of this course are to provide you with an understanding of:
- Vulnerabilities of security architectures, including client-based systems, server-based systems, large-scale parallel data systems, and distributed systems
- Cloud Computing deployment models and service architecture models
- Methods of cryptography, including both symmetric and asymmetric
Intended Audience
This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.
Prerequisites
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.
Feedback
If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.
Welcome back. We're going to continue now with Cloud Academy's presentation of the CISSP examination preparation review seminar. In this module, we're going to cover the vulnerabilities of security architectures: the system itself, technology and process integration, database security, client-based systems, server-based systems, large-scale parallel data systems, distributed systems, and cryptographic systems. Then we're going to get into a long discussion of cryptography, which, as we all know, is a very important technology that we must use in today's world.
So let's begin. What are we faced with? We're faced with threats of all different kinds and descriptions. Some of them are human-motivated; some are simply failures of the hardware that humans design and build. Then we have misuse of system privileges, and buffer overflows and other memory attacks, which often arise from less-than-perfect programming. We have denial of service, usually caused by outside agents but occasionally caused by us in the way that we use our own systems. We have reverse engineering, a method used by both our adversaries and our allies to examine how things work, possibly with the idea of improving them, possibly with the idea of learning how better to exploit them. And then, of course, there is the very common system hacking.
One of the areas that has faded from current attention in recent years is the idea of emanations. System emanations security, commonly called EMSEC in military circles, is the analysis of a system's vulnerability to unauthorized access as a result of the electromagnetic emissions from its hardware. This can apply to telecom systems, radio networks, cryptographic systems, or other kinds of radio-emitting systems. In the days when this was primarily a concern for the military, they came up with a standard known as TEMPEST. TEMPEST is a short name referring to investigations of compromising emanations: unintentional, intelligence-bearing signals that, when picked up by sensitive detection equipment, can reveal what is going on inside a system, or, if you are the one doing the listening, what a system is communicating to others. These emanations can be intercepted and analyzed without any physical connection whatsoever, and they can disclose the data transmitted, received, handled, or otherwise processed by any IT equipment.
The TEMPEST standard thus employs methods of shielding, electrical and physical separation, and various other techniques, such as Faraday cages, to suppress compromising emanations and prevent their escape from the emanating source. The Faraday cage, a fine copper mesh built into the casing around such components, is one of the most commonly known of these techniques.

There are also state attacks, better known as race conditions, which attempt to take advantage of how a system handles multiple simultaneous requests. These arise in software when an application depends upon the sequence or timing of processes but fails to sequence or time them correctly. Many software race conditions have security implications: a race condition allows an attacker with access to a shared resource to cause other actors that utilize that resource to malfunction, resulting in a variety of effects, including denial of service and privilege escalation. We'll look at a small illustrative sketch of this just after the covert channel discussion below.

We also have an attack known as covert channels. Covert channels are, as their name indicates, channels that are hidden, not obvious to the system, and in fact may be of a kind that cannot even be defined to the system as a channel. As such, these mechanisms are hidden from the access control and standard monitoring systems inside the computer that would otherwise reveal their presence and, by recognizing them as defined resources, enable controls to be placed over them to prevent misuse. The person or party who exploits a covert channel is using some form of irregular method of communication to transmit information.
Now, the Orange Book, the TCSEC, identifies two different types of covert channels. The first kind is the storage channel, which involves the actual movement of data through an irregular form of channel that is not recognized by the system; as I said, not being recognized, it cannot be defined as a thing that can be controlled by the access control methods or mediated by the reference monitor. The other kind is the timing channel. These are non-normal communication methods that, once again, cannot be identified in such a way that the system recognizes them, so that they could be created as defined resources and have access controls placed over them. Consequently, covert channels can be very dangerous if they are created within a system and exploited by a hostile actor.
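As promised above, here is a minimal sketch of the race condition idea, in this case the classic time-of-check-to-time-of-use (TOCTOU) pattern. The file name and permission check are purely hypothetical; the point is the window between checking a shared resource and actually using it.

```python
import os
import tempfile

def insecure_read(path: str) -> str:
    # Step 1: check -- the program verifies it is allowed to read the file.
    if not os.access(path, os.R_OK):
        raise PermissionError(f"not allowed to read {path}")
    # Race window: between the check above and the open() below, another
    # actor sharing the filesystem could replace `path` (for example with a
    # symlink to a more sensitive file), and the open() would follow it.
    # Step 2: use -- the program opens whatever `path` points to *now*.
    with open(path) as handle:
        return handle.read()

# Demonstration with a throwaway file; in a real attack the swap would
# happen inside the window noted above.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("harmless contents")
    target = tmp.name

print(insecure_read(target))
os.unlink(target)
```

The usual mitigation is to avoid the separate check entirely: open the resource once and operate on the handle that is returned, relying on atomic operations provided by the operating system rather than a check followed by a use.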
Now, mainframes and other thin-client systems. From a security perspective, they have an advantage: being centralized, the architecture of the mainframe, or of the thin-client system, allows us to focus on the design and implementation of that centralized environment. By focusing on the mainframe, the heart of the system, whatever form that may take, all of the control sits in an essentially controllable and operable location and is not materially dependent on any of the endpoint clients. When we get away from the centralized systems of mainframes and thin clients, we get into the world of client-server, a very, very common way of doing computing today. In client-server, instead of all the control being in the mainframe, or all the control being on the PC on your desktop, we now have to have middleware that connects the two endpoints. This connectivity software enables the combined system, a thick client, you might say, and a back-end server, each providing its respective advantage to the computation at its endpoint. It allows them to work jointly to process multiple threads of information at the same time and to enable kinds of interaction that a simple mainframe with dumb endpoint terminals could not provide. In this way, we solve application and connectivity problems that a simple mainframe environment, simple by comparison to the client-server world, could not solve.
We also have embedded systems. These take a board in a small form factor and put computing resources on it, and even though they have limited processing power, they are able to accomplish the task assigned to them as a complete system in that small form factor. Inside these embedded systems will typically be the hardware, firmware, and software needed to perform those functions without depending upon another system to enable them. Then there is pervasive computing, the notion that computing is available to whoever has an appropriately empowered device, wherever they happen to be, whether in Africa, Russia, China, the US, Canada, or out in the middle of the Pacific Ocean. Pervasive computing, being available everywhere, makes it possible for people to connect to the worldwide planetary area network that we know as the internet, but it brings a whole new family of security concerns that these devices all share. Most of the devices, such as cellphones, tablets, and other kinds of limited-function devices, are not fully enabled and have constraints placed on whatever resources they do have. The computing issues that we deal with on desktops, laptops, and other, more robust computing systems still exist for these smaller, more limited platforms. Consequently, there needs to be some form of anti-malware software that runs on them, and there needs to be a way to enable secure mobile communications.
Ideally, we should have the ability to institute strong, multi-factor authentication. There has to be control of some kind over third-party software, and there should be separate, secured mobile gateways for different kinds of computing access. We should have a family of devices with increasing levels of security, so that consumers have more choices of more securable mobile devices. And then, as with our larger systems, we have to have a way of measuring these things so that we know how secure they are or are not, as the case may be, and are able to take appropriate corrective action for these smaller, more constrained systems within the realm of their capabilities. Regardless of the system context, we always need to be on the lookout for single points of failure. A single point of failure audit needs to be done, and it needs to focus on all aspects of the system.
There is typically a single point of failure that resides within each system context, whether it is a mechanical element, a software-driven element, or a human element. When we perform our single point of failure audit, we need to cross-reference those results against the outcomes of the overall risk analysis and the business impact analysis to be sure that each of these informs the others, so that we have a complete, consolidated picture of our entire environment. What needs special attention are the mission-critical systems, processes, and supporting components, and, as I mentioned, we need to include the human element in all of these, because the human element can be the most critical and the most obvious single point of failure in any of them. The desktops, laptops, and thin clients should be treated with as much seriousness as our enterprise-oriented systems. Because these are the points where people come into contact with the larger systems and networks and the more mission-critical information, we need to be sure that the operating systems on them are currently supported by their vendors and are in a current state of patching. We have to be sure that anti-malware, antivirus, and other protective capabilities are likewise installed and kept up to date. On the machines that can support it, an intrusion detection system of some sort would be good to have. Each one is going to have a drive of some description: it might be a rotating drive, a solid-state drive, or flash memory. There should be some form of drive encryption utility inside the device to enable strong encryption over whatever data might be on it. It may be patient data, it might be financial data, it may be your contact list or your email. In any case, these are all considered sensitive, and this capability should exist within the device. And there should be no lessening of the seriousness of configuration management and change control over these endpoints.
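In practice, drive encryption on endpoints is normally provided by the platform itself (for example BitLocker, FileVault, or LUKS). The sketch below only illustrates the underlying principle of protecting data at rest with a symmetric key; it assumes the third-party Python `cryptography` package is installed, and the sample data is invented.

```python
# Illustration only: real endpoints should use platform full-disk encryption.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice the key lives in a TPM or keychain, not beside the data
cipher = Fernet(key)

sensitive = b"patient record: Jane Doe, MRN 000000"   # made-up example data
token = cipher.encrypt(sensitive)                      # what would actually be written to the drive

# Only a holder of the key can recover the plaintext.
assert cipher.decrypt(token) == sensitive
print(token[:40], b"...")
```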
The mobile devices, ideally, should be controlled by a mobile device management system that can provide a number of these functions. It can control and manage aspects of the devices such as applications, device authentication, and enrollment in the system, so that each device can be brought under this kind of control. It can provide a consolidated information archive with integrity validation, for legal hold situations perhaps, and it can provide secure container technology for organizational system access and management. When we look at designing server security architecture, we need to look at some very basic things. How is remote access going to be established? In our world today, more and more remote access is required; in fact, it is becoming more the standard than the exception, as it was at one time in the past. We therefore need to think about the unique nature of this arrangement and how configuration management will be performed in a way that is both convenient and effective. How updated code and new versions of software will be deployed is a fundamental question within that, and, without question, the business continuity requirements will have to be thought through very carefully as well.
One of the inventions we have seen in recent decades is data warehousing. The data warehouse, in its classical form, is constructed as a layer of software on top of multiple database versions or instances. Sometimes these are multiple instances of the same product, and sometimes they are instances of different products. The warehouse homogenizes all of these for the end user by providing a common interface and a common method of interaction with whichever data repository holds the data they seek. In doing so, it effectively supersedes the original information structures and access controls of the individual repositories, homogenizing them and bringing its own layer of these same functions. By combining the access methods used to reach all of the data in all of these different repositories, it makes it possible to access data in a much broader, much more homogeneous fashion, enabling the end user to get much more value out of this combination than they might otherwise get out of a single version or a single instance of a database housing the data.
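To make the "homogenizing layer" idea concrete, here is a minimal sketch of a warehouse-style interface sitting on top of several independent data stores. The table layout, class name, and data are invented purely for illustration; a real warehouse product does far more (staging, cleansing, its own access controls).

```python
import sqlite3

def make_source(rows):
    """Create one independent 'repository' with its own copy of the schema."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    return conn

# Two separate repositories, each with its own structures and data.
north = make_source([("north", 120.0), ("north", 80.0)])
south = make_source([("south", 200.0)])

class Warehouse:
    """Presents a single access method over all underlying repositories."""
    def __init__(self, sources):
        self.sources = sources

    def query(self, sql, params=()):
        results = []
        for conn in self.sources:      # fan the query out to every source
            results.extend(conn.execute(sql, params).fetchall())
        return results                  # return one homogenized result set

wh = Warehouse([north, south])
print(wh.query("SELECT region, SUM(amount) FROM sales GROUP BY region"))
# -> [('north', 200.0), ('south', 200.0)]
```

The end user writes one query against one interface; the warehouse layer worries about which repository actually holds the data.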
To make optimal use of the data warehouse, the user needs to become familiar with data mining, and that means that, to really get into the data and extract the best value from it, they need to learn the effects, both positive and negative, of inference and aggregation. Data mining uses queries to explore what is inside a database. Sometimes it is trial and error: you write a query, you submit it, and it returns a negative answer. No, that is not in there. Other times it returns a positive response. Yes, it is in there, and it looks like this. Once we have established what is and is not in there, the user can start making inferences about what else might be in the database that the warehouse enables them to reach. Inference is the ability to deduce that other, more sensitive or even restricted information might be in there, and it helps craft a query that might actually reach it or return attributes about it. As we go through this trial and error of discovering what is in there, submitting queries and getting responses of improving quality, we begin to aggregate the data we are discovering. This aggregation can lead to learning things about the consolidated whole that the individual pieces we were authorized to see would not have revealed. Aggregation can thus also lead to exceeding one's authority: the more one pieces together from the queries and the responses received, the more complete the picture of the entire data set becomes.
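A classic way to illustrate inference through aggregation is the "difference of two aggregates" trick: a user who is only permitted to run aggregate queries (counts and sums) can still deduce an individual, restricted value by combining the results. The names, table, and salaries below are invented for the sketch.

```python
import sqlite3

# A small personnel table; assume the user is authorized to see aggregates
# (SUM, COUNT) but NOT any individual salary.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO staff VALUES (?, ?, ?)", [
    ("Alice", "eng", 95000),
    ("Bob",   "eng", 90000),
    ("Carol", "eng", 105000),
])

# Two individually "harmless" aggregate queries...
total_all = conn.execute("SELECT SUM(salary) FROM staff").fetchone()[0]
total_without_carol = conn.execute(
    "SELECT SUM(salary) FROM staff WHERE name != ?", ("Carol",)
).fetchone()[0]

# ...aggregated together, they reveal a specific restricted fact.
print("Carol's salary is", total_all - total_without_carol)   # 105000
```

This is exactly the exceeding-of-authority problem described above: neither query on its own violates the user's permissions, but the combination does.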
Large-scale parallel data systems are built in various ways. Most computing systems today are, in fact, parallel and widely distributed. We have cluster computing, we have grid computing, and we have the increasingly pervasive cloud computing, which is built on top of the physical layer we know as the internet, which is itself built on the even deeper physical layer of telecom networks. We have what has been, over the past three or four years, the more rapidly emerging issue of cyber-physical systems, and then we have the very pervasive mobile-to-mobile computing. In all of these distributed environments, users are able to log into their own computers and reach data that is saved locally or stored remotely on various sites. In these distributed systems, we are not connecting to one specific server; no central server is actually required, though we may in fact be reaching multiple servers out there on the internet somewhere, and each one of these servers will have its own architecture and its own data repository through which it accesses and returns information to the authorized user. But along with these distributed computing architectures come grave concerns about trust; about privacy, or the loss of it, which has reared its head very prominently in the past few years; and about just what the general security posture of any of these systems we are trying to use really is. Part and parcel of this entire discussion is Big Data. What is it? What is driving it?
Well, simply put, Big Data is a name for the mountains of data that contain, or are presumed to contain, valuable information. One of the things that the cloud enables, with respect to Big Data, is that, with a great deal of cheap commodity resources available, Big Data can be accumulated and stored at relatively low cost. Some vendors have made free analytics tools available, and that is "free" very much in quotes; free tools are usually worth what you have paid for them. True, very sophisticated tools can be very, very expensive, but in order to get the value out of Big Data, it has to be structured so that it can be analyzed and whatever value it holds can be gleaned from it, and the tools are generally worth their price tag for the value that can be derived from them. Like anything else in computing, though, you have to have an objective in mind for what the data is to provide you, in the way of a product that can then be used to advance your business.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in that role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has continued his work as a professional educator, training nearly 8500 CISSP candidates since 1998 and nearly 2500 students in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.