
Public Key Infrastructure (PKI)

The course is part of this learning path

Preparation for the (ISC)² CISSP Certification (Preview)
Overview
Difficulty: Advanced
Duration: 48m
Students: 1

Course Description

This course is the 4th of 6 modules within Domain 3 of the CISSP, covering security architecture and engineering.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • The history of cryptography across the eras
  • The principles and life-cycles of cryptography
  • Public Key Infrastructure (PKI) and the components involved
  • Digital signatures and how they are used
  • Digital rights management (DRM) and associated solutions

Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

Prerequisites

Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Now public key infrastructure is another form of usage. It of course uses asymmetric public and private keys, but it is a hybrid that brings in symmetric keys for specific uses. It is a system used in systems, software, and communications protocols to control most, if not all, aspects of cryptography. Its primary purposes are to create and issue digital certificates and the accompanying key pairs, to certify that a key is tied to an individual or entity through the issuance of the digital certificate, and to provide verification of the validity or the loss of any public key through this digital certificate mechanism.
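
As a rough illustration of that hybrid pattern, and not something taken from the lecture slides, here is a minimal Python sketch using the third-party cryptography package: an asymmetric key pair protects a random symmetric session key, and the session key does the bulk encryption. The key sizes, payload, and variable names are illustrative assumptions.

```python
# Minimal sketch of the hybrid pattern PKI relies on: asymmetric keys protect
# a symmetric session key, and the symmetric key encrypts the actual data.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver's asymmetric key pair (in real PKI, certified by a CA).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: a random symmetric session key encrypts the message itself.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"confidential payload", None)

# Sender: the session key is protected with the receiver's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Receiver: recover the session key with the private key, then the message.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"confidential payload"
```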

Now the certificate authority is a service, a server, or a company acting in this role that sits at the top of the public key infrastructure pyramid. It creates and issues the public and private key pair and the digital certificate that accompanies them. As it sends the package to its owner, whose identity it has verified, it first signs it and then sends it so that the certificate owner can receive it, decrypt it, and install the keys. The process is what you see here: the CA signs a message digest and sends a certificate holding the necessary public key. Now, the steps to verify a digital signature. The receiver runs the certificate through a hashing algorithm. They decrypt the hash in the certificate to ensure that the trusted CA did in fact sign the certificate. Then they compare the two results, the one they received and the one they generated at their own end, to ensure that there is an exact bit-for-bit match. As long as that happens, they extract the public key from the certificate. Then they run the message through a hashing algorithm to calculate a new hash, and as long as these comparison operations reveal no discrepancies, they are able to take the digital certificate and the public and private key pair, install the digital certificate and the public key in their own system or in their enterprise directory, and install the private key in their own system so that it remains secure, unshared, and undistributed to anyone else.
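
The verification step just described can be sketched in code. The following minimal Python example, using the third-party cryptography package, assumes two hypothetical PEM files, leaf.pem (the certificate being checked) and ca.pem (the issuing CA's certificate), and shows the RSA/PKCS#1 v1.5 case; verify() re-hashes the signed portion of the certificate and checks it against the CA's signature, raising InvalidSignature on any mismatch.

```python
# Sketch: check that a certificate was really signed by the expected CA,
# then extract the subject's public key for later use.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

with open("leaf.pem", "rb") as f:          # hypothetical certificate under test
    leaf = x509.load_pem_x509_certificate(f.read())
with open("ca.pem", "rb") as f:            # hypothetical issuing CA certificate
    ca = x509.load_pem_x509_certificate(f.read())

ca_public_key = ca.public_key()

# For an RSA-signed certificate: hash the to-be-signed bytes and compare
# against the CA's signature; verify() raises InvalidSignature on mismatch.
ca_public_key.verify(
    leaf.signature,
    leaf.tbs_certificate_bytes,
    padding.PKCS1v15(),
    leaf.signature_hash_algorithm,
)

# Only after the signature checks out is the subject's public key trusted.
subject_public_key = leaf.public_key()
print("certificate verified; subject key extracted:", subject_public_key)
```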

Now the current standard for digital certificates and the public and private key generation process comes from X.509 version 3. The digital certificate, whatever its physical appearance might be, carries information in the fields you see on the left-hand side of this slide; on the right-hand side is a description of each element. For example, there is the algorithm, such as RSA, a very common one. Then the X.500 name of the certificate authority issuing the digital certificate, and bear in mind that in saying the digital certificate, I also mean the public and private key pair that accompanies it and is employed in its usage. The period of validity is the duration, with a from date and a to date, during which it will be valid. The owner of the public key is displayed, and the public key itself and the algorithm used to create it are there. Remember, having the public key will not give the possessor any opportunity or capability to calculate the private key, even though the two are mathematically related and generated simultaneously at the CA. If anyone holding the public key were able to derive the private key from it, the system would be completely unusable, so knowing this information will not help them.
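
To make those fields concrete, here is a minimal Python sketch, again using the third-party cryptography package and a hypothetical leaf.pem file, that prints the main X.509 v3 fields the slide describes.

```python
# Sketch: read a certificate and print the X.509 v3 fields discussed above.
from cryptography import x509

with open("leaf.pem", "rb") as f:           # hypothetical certificate file
    cert = x509.load_pem_x509_certificate(f.read())

print("Version:             ", cert.version)                     # e.g. Version.v3
print("Serial number:       ", cert.serial_number)
print("Signature algorithm: ", cert.signature_algorithm_oid)
print("Issuer (CA):         ", cert.issuer.rfc4514_string())     # X.500-style name
print("Valid from:          ", cert.not_valid_before)
print("Valid until:         ", cert.not_valid_after)
print("Subject (key owner): ", cert.subject.rfc4514_string())
print("Subject public key:  ", cert.public_key())
for ext in cert.extensions:                                      # optional v3 extensions
    print("Extension:           ", ext.oid)
```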

Now the issuer's unique identifier is an optional field, as is the subject's unique identifier, and there are extensions that can be added. The digital signature of the CA is a hash of the certificate encrypted with the private key of the CA to certify and attest to the legitimacy of the certificate. Now in encryption, Kerckhoffs's principles are well known. These were put together by Auguste Kerckhoffs, a Dutch-born cryptographer who, in the 19th century, around 1883, published a paper setting out these principles, one of which we still hold to be inviolable. The system must be practically, if not mathematically, indecipherable, which means that as a practical matter anyone intercepting the message should have great difficulty, if not find it actually impossible, to decipher it within the period of time that the information still possesses its value.

The second principle: it must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience. This basic rule reflects the fact that if the enemy gets hold of the system, that should not compromise everything about the system we are using; otherwise the system itself is completely useless from that point on. Its key must be communicable and retainable without the help of written notes, and changeable or modifiable at the will of the correspondents. Now, this one poses a certain problem, because as keys get longer and longer, retaining one without the help of written notes becomes something of an impossibility, so this principle may not be quite as durable as the prior one. It must be applicable to telegraphic correspondence.

In the 1880s we did have telegraphy, so the system obviously had to work through it, and a way to translate what we were doing into a telegraphic form was needed. We have of course extended this a great deal, because the system now has to pass through many different forms rather than just telegraphy. And it must be portable, and its usage and function must not require the concourse of several people, meaning that one person should be able to perform all of the necessary functions. In some cases that can still hold true, but we find that for cryptographic systems to be strong and very resistant to attack, they do in fact require the coordination of multiple persons and some systems in between them. One of the compensating controls, not electronic in nature, that we have to install to make sure our systems and our management processes do not weaken the overall system is segregation of duties, which comes in a couple of different forms. At a general level, segregation of duties is done to prevent too much unsupervised control resting in too few hands and to ensure that the combined duties placed in those few hands do not themselves create a conflict of interest. One method is dual control.

Think of this as the time you go to your bank to visit your safety deposit box. The bank officer accompanies you into the vault. They have a key, you have a key. Each of you must put your key into its respective slot to open the box so that your contents can be withdrawn, and it requires both keys performing their function at the same time, each one in the possession of a separate person, so that neither key can work on its own. That way you are assured that the bank can't get into it, and they are assured that you can't sneak into the bank's safety deposit vault and get into your box without them supervising your being in the vault in the first place.

And then we have one called split knowledge, which may be something as simple as a spy-versus-spy kind of setup where one person says one thing, "the quick brown fox," and the other person, to confirm their authentic identity, states "jumped over the lazy dog," and between the two of them the combined information authenticates them and allows access.
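
A minimal sketch of the split-knowledge idea, written for illustration rather than taken from the course, splits a key into two random shares so that neither custodian alone learns anything, while the two shares together recover it.

```python
# Sketch: split knowledge via a simple XOR secret split.
import secrets

def split_secret(key: bytes) -> tuple[bytes, bytes]:
    """Split 'key' into two shares; each share alone is indistinguishable from noise."""
    share_a = secrets.token_bytes(len(key))
    share_b = bytes(a ^ k for a, k in zip(share_a, key))
    return share_a, share_b

def combine_shares(share_a: bytes, share_b: bytes) -> bytes:
    """Recombine the two shares to recover the original key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = secrets.token_bytes(32)              # e.g. a 256-bit symmetric key
part_one, part_two = split_secret(key)     # give one part to each custodian
assert combine_shares(part_one, part_two) == key
```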

Now the creation of keys is one of the places that requires the most careful attention in the design and implementation of a cryptographic system. In our systems today we have to have automated key generation. This is not the kind of thing that can be done by a human, because humans, being patterned creatures from our DNA upwards, are based on patterns. In automated key generation we must ensure that we have maximized randomness throughout all phases of the process. The generation of the key should produce a key with a sufficiently high work factor that it is economically or mathematically impractical, verging on impossible, for a codebreaker to create a key that will undo our cryptographic work. The basic rule is that the length and strength of the key should be commensurate with the asset value of the information being protected by it. The system itself must be implemented in such a way that a cryptographically secure random function selects the key with a highly random, pattern-destroying result, ensuring that no pattern in key selection can be detected or predicted. The thing to bear in mind with this trait is that if the person inventing the algorithm and testing the math could in any way predict which key would be picked, then an attacker would be able to do the same thing.
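
As a small illustration of automated key generation from a cryptographically secure random source, the following Python sketch uses the standard-library secrets module; the 256-bit length is an illustrative choice, to be matched to the value of the asset being protected.

```python
# Sketch: draw key material from the OS cryptographically secure random source,
# never from anything a human (or an attacker) could predict.
import secrets

def generate_symmetric_key(bit_length: int = 256) -> bytes:
    """Return 'bit_length' bits of unpredictable key material."""
    if bit_length % 8:
        raise ValueError("key length must be a whole number of bytes")
    return secrets.token_bytes(bit_length // 8)

key = generate_symmetric_key(256)   # longer keys for higher-value assets
print(len(key), "bytes of key material generated")
```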

Now asymmetric keys, as a general rule, must be longer than a correspondingly strong symmetric key for equivalent resistance to attack, and that is based largely on the math that goes into producing asymmetric keys versus the math used to generate symmetric keys. One of the great challenges in cryptography is how to distribute keys once they are created so that senders and receivers can exchange, share, or publish them and yet not face the risk of their being captured and compromised by someone acting as a man in the middle. So we use key wrapping and key-encrypting keys. These are part of the key distribution or exchange system, and they are intended to protect the keys that are actually used for the decryption of the message traffic that will be sent or exchanged. They can be either symmetric or asymmetric, and asymmetric is preferred because of the public/private key pairing that goes with it. The process of using a key-encrypting key to protect session keys is called key wrapping. Now, when you have a protected or trusted channel, using a key to protect another key of the same type, that is, a symmetric key to protect another symmetric key, may be an acceptable practice. But in general practice, if you use a symmetric key to wrap and protect another symmetric key as it passes over a public network link that could be listened to by a man in the middle, all you have done is double the opportunity for someone to capture both keys, and if decrypting one gives them a clue about the other, then you haven't really solved your problem. This is the very essence of the problem that public key encryption was designed to resolve.
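
Here is a minimal sketch of key wrapping with a symmetric key-encrypting key, using the AES Key Wrap construction (RFC 3394) from the third-party cryptography package; as the discussion above notes, a symmetric KEK only helps if it can itself be delivered or stored safely, which is why in practice the wrapping step is often handled with an asymmetric public key instead. The key values are illustrative.

```python
# Sketch: a key-encrypting key (KEK) wraps a session key for transit or storage.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)          # key-encrypting key; must itself be shared/stored securely
session_key = os.urandom(32)  # the key that actually protects message traffic

wrapped = aes_key_wrap(kek, session_key)     # safe to transmit or store
unwrapped = aes_key_unwrap(kek, wrapped)     # receiver recovers the session key
assert unwrapped == session_key
```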

Now every key management system must handle key storage and destruction. The methods for protecting stored key material include containers and trusted, tamperproof hardware security modules, such as the TPM we find in many computers, in particular laptops. We have to use other controls such as passphrase-protecting smart cards, using key wrapping for the session keys, and using long-term-storage key-encrypting keys; splitting cipher keys and storing the various pieces in physically separate locations; and protecting keys with strong passwords or passphrases, an appropriately short key expiry period, and other kinds of compensating controls. One of the things we have to consider whenever we use encryption is that there are costs associated with it. Sometimes that cost is measured by the machine time it takes to properly encrypt things. Other times it is the cost of the downtime or the cycle time involved in replacing keys and digital certificates. Sometimes the costs can be painfully high in terms of how much time is lost or, if you buy one commercially, the purchase price of the key and the digital certificate associated with it. In some cases the expense of the security measures necessary to support longer crypto periods may be justified. In other words, the value of the asset to be protected by the key, and the impact that will be suffered if it should be compromised, will justify the additional expense necessary for the appropriately long-lived key that will protect it.
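
One of the stored-key controls mentioned above, protecting key material with a strong passphrase before it is written to disk, can be sketched as follows with the third-party cryptography package; the filename and passphrase are purely illustrative.

```python
# Sketch: serialize a private key under a passphrase before storing it.
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    # The key material is encrypted at rest; without the passphrase the
    # stored file is useless to whoever obtains it.
    encryption_algorithm=serialization.BestAvailableEncryption(b"correct horse battery staple"),
)

with open("stored_private_key.pem", "wb") as f:
    f.write(pem)

# Loading it later requires the same passphrase.
with open("stored_private_key.pem", "rb") as f:
    restored = serialization.load_pem_private_key(
        f.read(), password=b"correct horse battery staple"
    )
```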

Now, invariably, keys will get lost, and when they do we are going to have to work through the problem of key recovery. A properly implemented PKI system typically has a key recovery function built into it. But symmetric key systems typically have to have this built as a process for key escrow and key recovery, because if the key is lost, then we, the genuine and authentic owners of the data, of whatever the asset might be, and of the key systems, will have to go through some form of an attack ourselves to attempt to get the data back. And so we establish trusted directories and processes for multiparty key recovery to make those processes easier, but these are reactive. We should be doing proactive protection as well to complement them, hopefully reducing the time involved and the number of incidents where these occur. One form of that is key escrow, where we bring into play a third party that maintains a copy of all the keys we are currently using, making it relatively simple for us to recover a key from escrow so that we can try it and get back our information, because the most important thing is to ensure that the information is not lost. So key escrow is one compensating control that we can use to prevent this from happening. In a purely symmetric system this is really a requirement; otherwise we truly run the risk of losing these keys and possibly losing the information they protect forever. So it should be mandatory that for certain kinds of information we have this kind of protective measure to prevent an outright loss of these keys, and that we use cryptography to protect our most valuable asset: the information.

About the Author

Students: 306
Courses: 16
Learning paths: 1

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.

 

Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator and has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.