CISSP: Domain 5 - Identity and Access Management (IAM) - Module 2

Accountability


Overview

Difficulty: Intermediate
Duration: 39m
Students: 48

Description

This course is the second of three modules covering Domain 5 of the CISSP: Identity and Access Management.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • How to manage system features supporting and enforcing access control
  • Authentication methods and techniques
  • Accountability and controls

Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

Prerequisites

Any experience relating to information security would be advantageous, but is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

One of the characteristics we need to establish is accountability. Fundamentally, this is the trait that means we are able to determine who or what is responsible for an action, and that that entity can be held responsible for taking that action. Characteristics of the system that help us manage access include things like screensavers, timeouts, automatic logouts, session or login limitations, and scheduling limitations. These support sound access control policy by providing a level of control in each of these areas.

Screen savers, for example, are built into almost every computer, whether to put a fancy design on the screen or simply to blank it. They were originally added to console displays to prevent burn-in on cathode-ray tube (CRT) screens. Many can be configured to lock the user's session after a predefined period of inactivity, requiring re-entry of the original login password to unlock it. If a different, authorized user came along, logging in with their own credentials would terminate the prior session and initiate a new one. A secondary but very important safety feature.

Timeouts, of course, were originally there to conserve resources: a user sitting idle would give up their session, allowing someone else to take up a new one, so that resource sharing in the computer system could be enhanced and extended. These timeout controls exist in many different forms and are often cascaded to further restrict access to unattended sessions as time passes.

Additional controls include session or logon limitations. These limitations address the usability and security trade-offs associated with the session of the given user attempting login. We also have the opportunity to impose schedule limitations. These are based around time: a user may only sign in during a defined window, and once the session passes a particular point in the schedule, it either locks or logs the user out. Take, for example, a nurse coming on shift at 6:30 in the morning. A system with schedule limitations rejects any attempt to log in before 6:30 a.m. With the normal shift ending at 3:30 p.m., one of two things happens: if the nurse is still logged in when 3:30 arrives, that session, once terminated, cannot be re-initiated; and logging off after 3:30 prevents logging back in between 3:30 p.m. and 6:30 the following morning.
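The nurse example above can be sketched in a few lines. This is a hypothetical illustration, not any particular product's implementation; the shift times and function names are assumptions for the example.

```python
from datetime import datetime, time

# Hypothetical schedule limitation: logins are only permitted inside the
# shift window (6:30 a.m. to 3:30 p.m. in the nurse example).
SHIFT_START = time(6, 30)   # earliest permitted login
SHIFT_END = time(15, 30)    # no session may run past this point

def login_permitted(now: datetime) -> bool:
    """Return True only when the clock time falls inside the shift window."""
    return SHIFT_START <= now.time() <= SHIFT_END

def session_still_valid(now: datetime) -> bool:
    """Run periodically; forces lock/logout once the window closes."""
    return login_permitted(now)
```

A periodic check like `session_still_valid` is what turns a login-time rule into the ongoing control described above.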

And then we have logical sessions. As more information systems become service-based, typically through web browsers, understanding web-based sessions and their weaknesses and how to protect them is critical. And so, having a logical session, based around a profile associated with a particular subject, is the manner of achieving this type of control. 

So, let's walk through a session hijack attack sequence. The target user starts a web browser and navigates to her bank's website. The attacker, watching this, inserts himself between the user and the bank as a man in the middle, creating a session between the user and himself. Now there is one session between the user's browser and the attacker, and another between the attacker and the bank's web server. The user clicks the secure login link to get to the bank's login page. The attacker, being in session with the target user, intercepts this request and sends a forged web login page for the bank with invalid certificates. The target user now has a session with the attacker, and the attacker with the bank, typically using SSL. The user then enters their credentials and authenticates, still having no idea or any visual indication that they're in session with the attacker. Now the attacker is able to view and capture the user's credentials as they're decrypted at the attacker's session endpoint. The user conducts banking as usual and logs off the banking website. At this point, the attacker has captured the credentials of the user and will replay them to log into that user's account and transfer money out of it. And the user will never have known that any of this happened.
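The forged-certificate step is exactly what strict client-side TLS validation is meant to catch. As a small illustration, Python's standard `ssl` module builds client contexts that both require a certificate chaining to a trusted CA and check that the certificate's name matches the site being visited:

```python
import ssl

# A default client-side TLS context refuses the forged-certificate step in
# the hijack sequence above: the server's certificate must chain to a
# trusted CA, and its name must match the hostname the user typed.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # certificate must validate
print(context.check_hostname)                    # hostname must match
```

A browser that enforces these checks (and a user who does not click through certificate warnings) breaks the attack at the point where the invalid certificate is presented.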

Now, one of the things that has become a very common practice is this idea of identity proofing. Through it, we collect and verify information about a person for the purpose of proving that the person is indeed who he or she claims to be. The answers involved are therefore sometimes called cognitive passwords. Identity proofing establishes a reliable relationship between the individual and a credential that can then be trusted electronically for authentication. Familiar forms of this are the questions asking which high school you went to, the make and model of your first car, who your first prom date was, and a host of others. When you build this profile, you answer those questions, or even choose which questions to answer from a list, and provide the correct answers. Then, when your identity needs to be verified through this method, you'll be expected to replay those answers back to the system asking the questions.
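Since these answers function as passwords, a system storing them should treat them the same way: salted and hashed, never in clear text. The sketch below is a hypothetical illustration (the function names and normalization rule are assumptions), showing one reasonable way to enroll and verify a cognitive password:

```python
import hashlib, hmac, os

# Hypothetical sketch: store a security-question answer like a password,
# salted and hashed. Answers are normalized so that "Lincoln High" and
# "  lincoln high " compare equal, since users rarely type them identically.
def _normalize(answer: str) -> str:
    return " ".join(answer.lower().split())

def enroll(answer: str) -> tuple:
    """Create the stored record: a random salt and a slow PBKDF2 hash."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", _normalize(answer).encode(), salt, 100_000)
    return salt, digest

def verify(answer: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", _normalize(answer).encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

The constant-time comparison matters for the same reason it does with ordinary passwords: it avoids leaking information through response timing.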

We have, of course, electronic authentication, or e-authentication, and this is the process of establishing confidence in the user identities electronically presented to an information system.

Now we move on to federated identity management. This is a relatively recent development, a form of implicit access management based on trust. It is used in web-based applications to provide access to users who may not be explicitly trusted by the target site. So let's step through the diagram on this slide. In step one, the user authenticates and obtains a SAML token from an identity provider. In step two, that token is presented along with the request to access resources. In step three, the service provider validates the token, and in step four, access is granted.
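The four steps above can be sketched as a toy token exchange. To be clear about the assumptions: real federations use SAML assertions or OAuth tokens with public-key signatures, not the shared-secret HMAC scheme below, and all of the names here are invented for the example. The shape of the flow, however, is the same: the identity provider issues a signed token, and the service provider validates the signature before granting access.

```python
import hashlib, hmac, json, time
from typing import Optional

# Assumed pre-established trust between the IdP and the service provider.
IDP_KEY = b"shared-secret-between-idp-and-sp"

def idp_issue_token(user: str) -> str:
    """Step 1: the user authenticates at the IdP and receives a signed token."""
    payload = json.dumps({"sub": user, "iat": int(time.time())})
    sig = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def sp_validate_token(token: str) -> Optional[str]:
    """Steps 2-4: the token arrives with the request; the SP validates it
    and, only if the signature checks out, grants access to the subject."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(payload)["sub"]
    return None  # invalid or tampered token: access denied
```

The key property, which carries over to real SAML and OAuth deployments, is that the service provider never sees the user's credentials; it trusts the identity provider's signature instead.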

Now, each organization in the federation scheme subscribes to a common set of policies, standards, and procedures for provisioning and managing user identification, authentication, and authorization information. Access control through these systems relies on SAML, OAuth, and related technologies. The entities involved include the user, the identity provider or IdP, and the service provider. One form of implicit authentication is based on the cross-certification model of public key infrastructure. Each organization must individually certify that every other participating organization is worthy of its trust. The organizations review each other's policies and procedures and perform their due diligence, because in the end they will have to come up with compatible policies so that the cross-certification can establish the necessary trust.

This structure appears in two different forms. When two PKIs are established between organizations or businesses, we call this cross-certification, using public key infrastructure technology to make it happen. There is the nonhierarchical trust path, which is a direct mutual trust. It requires two certificates, one for each direction and for each entity. And of course, there has to be a cross-certification agreement so that all of the necessary parameters can be properly defined. The upper diagram shows one enterprise with CA1 and the other enterprise with CA2. Through the cross-certification, all the persons at CA1 and CA2 are trusted through inheritance from the cross-certification of the servers of CA1 and CA2. In the more complicated form, CA1 is cross-certified to a bridge CA, which is in turn cross-certified to CA3; the individuals at CA1, the bridge, and CA3 are all trusted implicitly by the other parties. Now the bridge CA can perform an additional function. If CA3, for example, sets up trust between itself and the bridge but not between itself and CA1, the bridge can block any attempt by either party to go through it to cross-certify to, or successfully access, the other.

There are some drawbacks to this cross-certification model. Once the number of participating organizations reaches more than a few, the number of trust relationships that must be managed grows rapidly and tends to make for a very complex spiderweb type of setup. The process, of course, must be thorough and it takes considerable time and resources to complete. Now, the complexity can be mitigated through the use of a federating hub that serves as a centralizing and translating or switching trusted third party for all the connecting entities.
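The scaling problem can be made concrete with a little counting. A full cross-certification mesh needs a bilateral agreement for every pair of organizations (and two certificates per agreement, one in each direction), while a federating hub needs only one relationship per organization. A quick back-of-envelope sketch:

```python
# Why the spiderweb does not scale: pairwise cross-certification grows
# quadratically, while a federating hub grows linearly.
def mesh_relationships(n: int) -> int:
    """Bilateral agreements in a full mesh: one per pair of organizations."""
    return n * (n - 1) // 2

def hub_relationships(n: int) -> int:
    """With a federating hub, each organization certifies only the hub."""
    return n

print(mesh_relationships(5), hub_relationships(5))    # 10 vs 5
print(mesh_relationships(20), hub_relationships(20))  # 190 vs 20
```

At five participants the difference is modest; at twenty, the mesh requires 190 separately negotiated trust relationships against the hub's 20, which is why the trusted third party model described next becomes attractive.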

In this trusted third party model, each participating organization subscribes to the standards and practices of a third party that manages the verification and due diligence processes for all participating companies. Once the third party has verified the participating organization, that organization is automatically considered trustworthy by all other participants. Note again that this is an inherited or implied trust, not a direct trust between the first company and the others. Later, when a user from one of the participants attempts to access a resource from another participant, that organization needs only to confirm that the user was certified by the trusted third party before allowing access.

The standards that SAML relies on include these: XML, of course, XML Encryption, XML Schema, XML Signature, HTTP, and SOAP.

Now, all of these are representative of credential management systems or their various characteristics. Credential management systems must enforce several different attributes. They need to keep a history. For example, when you have a password policy that denies the reuse of passwords within some number of generations, say within five, keeping a history is what makes that possible. They have to enforce stronger passwords, which leads to the complexity policies we use these days. And we want password generation to be a relatively easy process for the user, even though many things must happen properly behind the scenes.
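The history rule can be sketched simply: keep salted hashes of the last N passwords and refuse any new password that matches one of them. This is a hypothetical illustration (the class shape and depth of five are assumptions drawn from the example in the text), not any vendor's implementation:

```python
import hashlib, os

HISTORY_DEPTH = 5  # deny reuse within five generations, per the example

class PasswordHistory:
    """Hypothetical sketch of a credential manager's history attribute."""

    def __init__(self):
        self.entries = []  # list of (salt, digest) pairs, newest last

    def _hash(self, password: str, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    def was_used(self, password: str) -> bool:
        """Check the candidate against every retained generation."""
        return any(self._hash(password, s) == d for s, d in self.entries)

    def set_password(self, password: str) -> bool:
        if self.was_used(password):
            return False  # reuse within HISTORY_DEPTH generations: denied
        salt = os.urandom(16)
        self.entries.append((salt, self._hash(password, salt)))
        self.entries = self.entries[-HISTORY_DEPTH:]  # retain only last N
        return True
```

Note that the history stores only salted hashes, consistent with the rule below that passwords are never present in clear text anywhere.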

In order to keep latency down and authentication for properly authorized users at a convenient level, the system needs to be able to search and find passwords fast. The same applies to rejecting invalid passwords: they should be denied quickly rather than giving someone an opportunity to stage an attack based on a lengthy delay. We need fine-grained access control; there has to be a way to actually limit access through profiling. Passwords themselves, as we know, must be stored in encrypted or hashed form, so they're never present in clear text anywhere. We need administrative benefits such as the ability to migrate passwords with ease from one system to another. Fundamental, of course, is the ability to track and audit all access, whether successful or failed. We have to keep control of all credentials all the time. The system must be always on, always reliable, and always available, and we need to plan for disasters.

All of those things, and the importance of the credential management system, place it at grave risk, which means we have to put in strong controls to prevent the risks from becoming reality. Should attackers gain control of the credential management system that issues these credentials, it effectively makes them an insider; it also gives them an incredible amount of insight and control over who gets access to the system, including themselves. Compromised credential management processes result in the need to reissue credentials, which can be a very expensive and very time-consuming process, to say nothing of the inconvenience and lost productivity that can be created. Credential validation rates can vary and easily outpace the performance of a credential management system, which jeopardizes business continuity. Business application owners' expectations around security and trust models are rising, and these can expose credential management as a weak link.

Now, the benefits, of course, include things like adding higher levels of assurance (please note again that we do not use the word guarantee) to maximize the value of the investment in a credential management system. It needs to be able to meet the highest security standards while ensuring that performance and resilience are also provided. We would like it to simplify administration to a point, but not so simple that systems become more easily attacked and compromised. It should support compliance and auditing against a common baseline for trust. And future-proofing the enterprise to support more stringent trust models and policies as they emerge means that our credential management systems must be able to be reconfigured and to evolve, so that when these new models come out, we can implement them easily.

One of the guides that we can use is the National Strategy for Trusted Identities in Cyberspace. The NSTIC, as it's called, aims to reduce online fraud and identity theft by increasing the level of trust associated with identities in cyberspace, which of course means the mechanisms for establishing higher levels of assurance that the identities that we're dealing with are, in fact, proper representation of who the human being behind them is. This outlines the needs for parties involved in electronic transactions that require a high degree of trust. And it presents a framework for raising the level of trust associated with defined identities, involved in certain types of online transactions.

One system that implements most of these things is the GSA GAMS system. It offers a form of single sign-on. It provides self-service capability: once registered, a user having trouble can re-provision themselves, correct passwords, reclaim forgotten passwords, create new passwords, and do a lot of self-administration. It has all the necessary protections against unauthorized access. It provides audit capabilities and reduces audit reporting time. Through its single sign-on, it enables the reuse of identity data, and it expedites employee and contractor onboarding. So we're going to end this module here. We'll begin again next time with Section 5, when we talk about integrating identity as a service. Thank you. We'll see you next time.

About the Author
Students: 1422
Courses: 29
Learning paths: 1

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.


Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 (a professional role) as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. As a professional educator, he has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.