This course is the second of three modules covering Domain 5 of the CISSP, Identity and Access Management.
Learning Objectives
The objectives of this course are to provide you with an understanding of:
- How to manage system features supporting and enforcing access control
- Authentication methods and techniques
- Accountability and controls
Intended Audience
This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.
Prerequisites
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.
Feedback
If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.
Welcome back to the Cloud Academy presentation of the CISSP Exam Prep Seminar. We're going to continue our discussion of Domain 5, beginning with Section 3 entitled Manage Systems Features Supporting and Enforcing Access Control.
For every form of access control there needs to be an authoritative source from which the authentication and authorization can be drawn. This means there should be a central repository that can be checked against and verified through the transactions of authentication and authorization. It also means there must be agreement about where that repository is and what it is going to hold in order for it to be considered the authoritative repository for the entire enterprise. This, of course, will be built around some form of directory system.
Now, a typical directory contains a hierarchy of objects such as you see here: users, groups, systems, servers, printers, and other subdivisions that denote all the different subjects and objects within the system. The directory technologies currently in most common use include these: X.500; a derivation of it that we know as LDAP; Active Directory Domain Services; and a companion to X.500, X.400.
Now, X.500 is a directory service that goes along with the OSI seven-layer model of the network. It includes several different protocols: the directory access protocol (or DAP), the directory system protocol (or DSP), the directory information shadowing protocol (or DISP), and the directory operational bindings management protocol (DOP). Now, a side note: this test is not one that is going to ask you detailed questions about any of these. Having a general level of knowledge such as you're going to gain from this particular class will be sufficient to answer the questions that you will face on the test. It is not intended to be an educational course for someone who will be actively administering these particular protocols day to day in an X.500 or other network repository type of environment.
Designed to run over the four-layer TCP/IP network stack is the Lightweight Directory Access Protocol, which we know as LDAP. It was defined in RFC 4510, which specified LDAP version 3 in 2006. LDAP uses a hierarchical tree structure for directory entries, and it supports the distinguished name (DN) and relative distinguished name (RDN) concepts. Common attributes include the distinguished name, common name (CN), domain component (DC), and organizational unit (OU) within the hierarchical structure.
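To make the DN and RDN concepts a little more concrete, here is a minimal sketch in Python, not drawn from the course slides, that splits a hypothetical distinguished name into its relative distinguished names; the entry and attribute values are invented purely for illustration.

```python
# Minimal sketch: splitting a hypothetical LDAP distinguished name (DN)
# into its relative distinguished names (RDNs). Example values are
# illustrative only; real DNs may contain escaped commas and other
# special characters that this simple split does not handle.

dn = "cn=Alice Smith,ou=Engineering,dc=example,dc=com"

# Each comma-separated component is one RDN: an attribute=value pair.
rdns = [component.split("=", 1) for component in dn.split(",")]

for attribute, value in rdns:
    print(f"{attribute} = {value}")

# Expected output:
# cn = Alice Smith
# ou = Engineering
# dc = example
# dc = com
```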
For its own systems, Microsoft has its own directory service, which we know as Active Directory Domain Services, or ADDS. As directory services go, this one is very similar in that it provides central authentication and authorization capabilities for users and system services on an enterprise-wide level. It has the ability to enforce organizational security and configuration policies across the enterprise, it is compatible with LDAP versions 2 and 3, and it employs Microsoft's version of Kerberos along with DNS, the domain name service.
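As a rough illustration of how an application might look a user up in an LDAP-compatible directory such as ADDS, here is a hedged sketch using the third-party ldap3 Python library; the server name, service account, and search base are hypothetical placeholders, not values taken from the course.

```python
# Hedged sketch: querying an LDAP v3 directory (such as ADDS) with the
# third-party ldap3 library. The hostname, account, and search base are
# hypothetical placeholders.
from ldap3 import Server, Connection, ALL, NTLM

server = Server("ldaps://dc01.example.com", get_info=ALL)

# Bind (authenticate) to the directory as a service account.
conn = Connection(
    server,
    user="EXAMPLE\\svc_lookup",       # hypothetical account
    password="change-me",             # placeholder
    authentication=NTLM,
    auto_bind=True,
)

# Search the hierarchy for a user object and pull common attributes.
conn.search(
    search_base="dc=example,dc=com",
    search_filter="(&(objectClass=user)(cn=Alice Smith))",
    attributes=["cn", "distinguishedName", "memberOf"],
)

for entry in conn.entries:
    print(entry.entry_dn, entry.cn, list(entry.memberOf))
```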
Now, as I mentioned, a companion to X.500, the formal directory service for network services, is X.400, which defines standards for data communications networks for message handling systems, more commonly known as email. X.400 has been effectively replaced by SMTP as the predominant email protocol in use. But like all directory services, X.400 addresses the necessary hierarchical structure of this type of directory service, and an X.400 address contains a series of name/value pairs separated by semicolons, as you see here: the country name (C); the administration management domain (ADMD), short form A, which is usually associated with a public mail service provider; the private management domain (PRMD), short form P; O for the organization name; and OU for organizational unit names, where OU is equivalent to OU0 and deeper levels can be expressed as OU1, OU2, and so on; along with given name, initials, and surname. So, like all hierarchical directory structures currently in use, X.400 maps to that same standard.
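To show that name/value structure in action, here is a small Python sketch that parses a made-up X.400-style address string into its labeled fields; the address itself is fabricated for illustration.

```python
# Minimal sketch: parsing a made-up X.400-style address into its
# name/value pairs. The address below is fabricated for illustration.

address = "C=US;A=PublicMail;P=ExampleCorp;O=Sales;OU1=West;S=Smith;G=Alice"

fields = dict(pair.split("=", 1) for pair in address.split(";"))

labels = {
    "C": "Country name",
    "A": "Administration management domain (ADMD)",
    "P": "Private management domain (PRMD)",
    "O": "Organization",
    "OU1": "Organizational unit (level 1)",
    "S": "Surname",
    "G": "Given name",
}

for key, value in fields.items():
    print(f"{labels.get(key, key)}: {value}")
```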
Now, we have the situation today where people have to access multiple systems, which typically require multiple usernames and multiple passwords and oftentimes are in no way connected to each other. This creates an administrative nightmare not only for the administrators, but for the users as well, because now they have to manage multiple usernames, multiple passwords, and systems with inconsistent and incompatible password rules, and so on. And so the idea of having a single sign-in process was invented, called single sign-on. Now, this goes by several different names: reduced login, simplified login, single sign-on, and an analog to it in use on the web these days, federated identity. But the idea behind single sign-on is exactly what you think: you sign in once, and then, through that one sign-in, the facility in its turn logs you into the various resources connected to it, so that by logging in one time you get logged into everything else that you're authorized to access.
Here we have a diagram of a typical single sign-on system. Alice, our subject, sits down in front of a terminal and logs in. Her initial login goes through the single sign-on server, through which she is authenticated and legitimized as an actual user. The back-end processing that the single sign-on server does connects with the other servers that Alice has rights to. Through those transactions, the single sign-on server authenticates her to the resources for which she's authorized, and that access is sustained throughout her session until she logs out entirely, at which point those sessions are closed out. It takes over and manages her credentials and her passwords for her. One login grants ongoing access: the user authenticates once and then has access to all the resources connected in the model through the single sign-on server. This is implemented as a service model, so that the session layer protocols connecting the single sign-on server with the other servers to which she is authorized stay in session in accordance with policy.
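To make that service model a bit more concrete, here is a simplified, hypothetical sketch: the SSO server verifies Alice once and hands back a signed session token that each connected service validates instead of prompting for her password again. The token format, secret, and service names are invented for illustration and do not represent any particular product's protocol.

```python
# Simplified, hypothetical sketch of service-based SSO: one authentication
# to the SSO server yields a signed token that connected services trust.
import hashlib
import hmac
import json
import time

SSO_SIGNING_KEY = b"shared-secret-known-to-sso-and-services"   # placeholder

def sso_login(username: str, password: str) -> str:
    """Authenticate once and return a signed session token."""
    assert (username, password) == ("alice", "correct-horse")  # stand-in check
    claims = json.dumps({"sub": username, "iat": int(time.time())})
    signature = hmac.new(SSO_SIGNING_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return f"{claims}|{signature}"

def service_accepts(token: str) -> bool:
    """Each connected service verifies the token instead of re-prompting."""
    claims, signature = token.rsplit("|", 1)
    expected = hmac.new(SSO_SIGNING_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

token = sso_login("alice", "correct-horse")
for service in ("mail", "hr-portal", "file-share"):
    print(service, "session established:", service_accepts(token))
```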
An alternative form is the scripted kind, where the user authenticates once to gain access to the single sign-on service and is then authorized to each service as access to that service is attempted. This is accomplished via scripts sent from the single sign-on server to the target system. This approach is simpler to implement, but the drawback is that the scripts are frequently sent in the clear, and the scripts mimic the keystrokes that a user would put in when logging directly into that server. So comparing the two: the service-based approach is more difficult to implement yet more reliable and more consistent in its performance, while the scripted approach is easier to implement but requires a great deal more interaction at the administrative level.
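Since the scripted approach essentially replays the keystrokes a user would type, a rough sketch of the idea, using the third-party pexpect library against a hypothetical legacy host, might look like the following; it also shows why the weakness above matters, because over an unencrypted channel those keystrokes, password included, travel in the clear.

```python
# Rough sketch of scripted sign-on: the SSO service replays the keystrokes
# a user would type at a login prompt. Host and credentials are hypothetical.
# Note: if the underlying channel (e.g., telnet) is unencrypted, these
# keystrokes, including the password, travel in the clear.
import pexpect

HOST = "legacy-app.example.com"        # hypothetical target system
USER = "alice"
PASSWORD = "retrieved-from-sso-vault"  # placeholder

child = pexpect.spawn(f"telnet {HOST}")
child.expect("login:")
child.sendline(USER)            # mimic the username keystrokes
child.expect("Password:")
child.sendline(PASSWORD)        # mimic the password keystrokes
child.expect(r"\$ ")            # wait for a shell prompt
print("Scripted sign-on to", HOST, "completed")
```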
Now, the weakness of centralized single sign-on systems is, of course, that all of the user's credentials are protected by the single username and password associated with the single sign-on server. Many of these systems store all the user credentials and authentication information in a single database contained within them, which, of course, means that if we compromise the SSO server, we compromise everything that it contains.
Kerberos is a system that guards the network using three elements: authentication, authorization, and auditing. It's based on the interaction between three systems: a requesting system, an endpoint destination server, and the Kerberos key distribution center, or KDC. So here you see a common Kerberos setup. At login, the user receives a ticket-granting ticket, or TGT. And at this point, I want to draw your attention to the term ticket. A ticket as envisioned in this system functions in a manner similar to a digital certificate, but it is not a digital certificate as defined by X.509 version 3. That's just to make sure that you draw the distinction between the two as different vehicles. With each service request, the TGT is presented to the ticket-granting server, the TGS, which checks the user's permissions and issues a service ticket, or ST, for the requested resource. The access is established and a secure session is created. So as you see, in step one the client requests authentication from the authentication server; if successful, it sends back the ticket-granting ticket and a session key that the user will then use when requesting access to the server they want to use. Then in step two, they present the TGT and request an application ticket; if successful, the ticket-granting server returns the application ticket. Then the user at their workstation requests access to that application, and access is granted because of the service ticket that the user presents. The application server there in process step three sets up a secure session between itself and the user, who has successfully authenticated to it. Once the session is over, it is torn down, and the key is destroyed when the user ends the session and clears the cache.
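To help fix those three exchanges in mind, here is a highly simplified Python mock of the flow. It uses plain dictionaries with HMAC tags standing in for the symmetric encryption that real Kerberos performs, and every principal name and key is invented for illustration.

```python
# Highly simplified mock of the Kerberos exchanges. Real Kerberos encrypts
# tickets with symmetric keys; here HMAC tags merely stand in for that so
# the three steps are visible. All names and keys are invented.
import hashlib
import hmac
import time

def seal(key: bytes, text: str) -> str:
    return hmac.new(key, text.encode(), hashlib.sha256).hexdigest()

TGS_KEY = b"key-shared-by-as-and-tgs"    # known to the AS and the TGS
APP_KEY = b"key-shared-by-tgs-and-app"   # known to the TGS and the application server

# Step 1: client -> authentication server (AS); the AS returns a
# ticket-granting ticket (TGT) sealed for the TGS, plus a session key.
tgt = {"principal": "alice@EXAMPLE.COM", "issued": int(time.time())}
tgt["seal"] = seal(TGS_KEY, tgt["principal"] + str(tgt["issued"]))

# Step 2: client presents the TGT to the ticket-granting server (TGS),
# which validates it and issues a service ticket (ST) for one application.
assert tgt["seal"] == seal(TGS_KEY, tgt["principal"] + str(tgt["issued"]))
st = {"principal": tgt["principal"], "service": "fileserver/host1"}
st["seal"] = seal(APP_KEY, st["principal"] + st["service"])

# Step 3: client presents the ST to the application server, which
# validates it and establishes the secure session.
assert st["seal"] == seal(APP_KEY, st["principal"] + st["service"])
print("Secure session established for", st["principal"], "to", st["service"])
```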
Now, the drawbacks of Kerberos include that the system as a whole depends entirely on careful implementation. The KDC itself can be seen as a single point of failure and, therefore, should be supported by backup and continuity plans. Part of the implementation, of course, is to determine the length of the keys that will be needed for this particular system to provide adequate protection and strength for the sessions. Being part of the implementation, this decision is therefore critical. We also have perimeter-based web portal access. If an organization has a directory such as LDAP in place, it is possible to quickly leverage that directory to manage user identity, authentication, and authorization data across multiple web-based applications using a portal system tied to a web access management (WAM) solution.
The various factors used in single- or multi-factor authentication must be decided upon as a way to gain greater assurance that the individual logging in is who they claim to be. So we're going to discuss type one authenticators, something that you know, such as a password or a PIN; type two authenticators, something you have, such as a token or a smart card; and type three authenticators, something you are or do, such as the biometric of a fingerprint or an action such as a voice print.
So first, single-factor authentication. This is typically going to be a username and a password. The subject that wants to be authenticated must provide that factor, and of course we have our password policy and the rules associated with it. It could be the common password that we know, or it could be a personal identification number used as a password. Next we have the type two authenticator, a token, a thing that you have. Now, tokens are used by their claimants to prove their identity and authenticate to a system or application, and they can be either hardware- or software-based. The soft token is typically implemented using access to a mobile device such as a phone or a tablet. We must, of course, come up with guidelines for users and for administrators to ensure proper operation and implementation, along with the various security controls, such as how the token itself is constructed, how the token is going to be delivered, and the various protective measures around it. Historically, we have, of course, the hard tokens. The implementations for these hard tokens have been either synchronous, where the token itself, in some form of a key fob, has to be synchronized with the server that will be receiving the challenge-response codes and the transactions from the person as they authenticate, or asynchronous, also called event-driven, where the codes are generated as the individual initiates the transaction to authenticate. Typical tokens are the RSA SecurID, coming in the form of a small card about the size of a credit card, or possibly a small pocket calculator, or they can simply be code generators in the form of a key fob.
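As an illustration of the synchronous, time-based style of soft token, here is a short sketch of the standard TOTP algorithm (RFC 6238) using only the Python standard library; the shared secret is a placeholder, and a production system would rely on a vetted library rather than hand-rolled code.

```python
# Sketch of a synchronous (time-based) soft token: the standard TOTP
# algorithm from RFC 6238, using only the standard library. The shared
# secret below is a placeholder for illustration.
import base64
import hashlib
import hmac
import struct
import time

SHARED_SECRET_B32 = "JBSWY3DPEHPK3PXP"   # placeholder secret, base32-encoded

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    # Both token and server derive the same counter from the current time,
    # which is why the two sides must stay synchronized.
    counter = int(time.time()) // period
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print("Current one-time code:", totp(SHARED_SECRET_B32))
```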
Now, multi-factor authentication typically combines the type one and the type two. Because two factors are required, this kind of authentication gives much stronger assurance that users are who they claim they are. Note that we do not use the word guarantee. The more factors that are used to determine a person's identity, the greater the trust in the authenticity. So you see, it's not a matter of a guarantee, it's a matter of increased assurance and trust. And as I've already said, it is most often a combination of types one and two.
We come finally to our type three, the biometric. Biometric devices rely on measurements of biological characteristics of an individual. This technology involves data that is unique to the individual and is difficult, if not impossible, to counterfeit. Selected individual characteristics are scanned and stored in an alphanumeric form as a reference template, and the sample presented at authentication time is then compared with that template. That is, when you present, for example, your fingerprint to be read by the reader, it is read, converted to a code, and compared to the stored value, and when what is presented matches the stored value closely enough, the person is authenticated.
Here we have a typical process diagram showing how this works. It works by scanning the characteristic, such as a fingerprint, a characteristic of your eye, the hand, or some other physical trait, and then processing it in preparation to be matched against the template. Once the scanned elements are mapped into the template, the identifying information is added to tie the trait to the authentic individual, and their enrollment is completed. During normal operations, the scan of the trait is matched to the stored template and the person is either accepted or rejected. And as you see here, we have the sensor, the pre-processing, the feature extractor, the template generator, the stored templates, the matcher, and the application device, and all of these function together to map the characteristics of the presented form, your fingerprint, your eye, or some other trait, which is then matched against what is stored.
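The enrollment and matching loop just described can be sketched very simply. Here is a toy Python example, assuming the extracted features have already been reduced to a numeric vector, in which a similarity score against the stored template is compared to a tunable threshold; the vectors and the threshold are fabricated for illustration only.

```python
# Toy sketch of biometric matching: a presented sample, already reduced to
# a numeric feature vector, is compared to the stored enrollment template.
# Vectors and threshold are fabricated; real systems use far richer features.
import math

def similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

stored_template = [0.91, 0.10, 0.33, 0.72]   # created at enrollment
presented_scan  = [0.88, 0.12, 0.31, 0.75]   # captured at authentication time

THRESHOLD = 0.98   # tunable: raising it rejects more impostors and more
                   # genuine users; lowering it does the opposite

score = similarity(stored_template, presented_scan)
print("match score:", round(score, 4),
      "-> accepted" if score >= THRESHOLD else "-> rejected")
```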
Now, because of how this works, we have to worry about the accuracy of the device, and we have two different types of errors: type one, called false rejection, and the second type, which is false acceptance. Now, the biometric accuracy chart that you see here shows a smooth descending curve for the type one error, false rejection, and an equally smooth ascending curve for type two errors, false acceptance. The type one, the false rejection, is, of course, the rejection of authentic subjects, and the type two is the reverse, the acceptance of an impostor. Now, there near the bottom, you see the CER, or crossover error rate. Given the slope of either curve, the CER denotes the lowest possible point of intersection of the two. The lower this point of intersection, the greater the precision. The thing to bear in mind is that, considering the slopes of the two curves, any change in either one produces a proportional and direct change in the other. If, for example, you adjust the system to reduce type one errors, false rejects, by moving that curve to the right, the point of intersection slides up the type two error curve, which means that as you remove false rejections you are loosening the criteria and causing more false acceptances in direct proportion; the short sketch below puts made-up numbers behind this tradeoff.
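Here is that sketch: it sweeps a decision threshold across two made-up sets of match scores, one for genuine users and one for impostors, reports the false rejection and false acceptance rates, and locates the approximate crossover point. All of the numbers are fabricated purely to illustrate the shape of the tradeoff.

```python
# Sketch of the FRR/FAR tradeoff: sweep the acceptance threshold over
# made-up match scores for genuine users and impostors, then locate the
# approximate crossover (equal) error rate. All data are fabricated.
genuine  = [0.95, 0.92, 0.90, 0.88, 0.85, 0.83, 0.80, 0.78]   # true users
impostor = [0.70, 0.72, 0.75, 0.78, 0.80, 0.65, 0.60, 0.74]   # pretenders

def rates(threshold):
    frr = sum(score < threshold for score in genuine) / len(genuine)     # type one
    far = sum(score >= threshold for score in impostor) / len(impostor)  # type two
    return frr, far

# Find the threshold where the two error rates are closest (the CER region).
best = min((abs(rates(t / 100)[0] - rates(t / 100)[1]), t / 100)
           for t in range(50, 100))
print("approximate crossover threshold:", best[1])

for t in (0.70, best[1], 0.90):
    frr, far = rates(t)
    print(f"threshold {t:.2f}: FRR {frr:.2f}  FAR {far:.2f}")
```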
The reverse is equally true. In order to reduce false acceptances, you bring the threshold back, moving the green curve to the left, and that raises the point of intersection on the red line, which means that although you're reducing false acceptances, you are increasing the false reject rate. So it's advisable to be very careful in considering what you're trying to achieve before making adjustments to either one. We have, of course, multiple types of biometric readers. Common ones are fingerprint, iris pattern, and retinal scanning. We also have facial image, hand geometry, voice recognition, signature dynamics, cutaneous heat maps and other types of vascular patterns, and keystroke dynamics. All of these characteristics, whether a passive trait such as the retina or an active trait such as voice recognition, are unique to the person, and so great reliance is placed on them to be unique and, thus, give us very high levels of assurance that we are authenticating the genuine individual. The drawbacks of biometrics, though, include expense, administrative overhead, and technological fragility, among others. The readers don't tend to be very small or conveniently sized, either, except for fingerprint readers, and those have been proven not to be fail-safe: they are not foolproof, and it has been determined that they can be defeated in ways that are surprisingly simple. We're going to continue our discussion now, moving into Section 4, Managing Systems Features Supporting Access Control.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator and has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.