
# Key Encryption and Ciphers


**Difficulty** Advanced

**Duration** 1h 22m


### Description

This course is the 3rd of 6 modules within Domain 3 of the CISSP, covering security architecture and engineering.

## Learning Objectives

The objectives of this course are to provide you with an understanding of:

- Vulnerabilities of security architectures, including client-based systems, server-based systems, large-scale parallel data systems, and distributed systems
- Cloud Computing deployment models and service architecture models
- Methods of cryptography, including both symmetric and asymmetric

## Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

## Prerequisites

Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.

## Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

### Transcript

Now, in all of these environments, encryption is going to play a role, so we need to look at the elements of encryption and define some terms. As you might imagine, on an exam of this type, terms, definitions, and question types will be present. Key clustering, synchronous, asynchronous, hash functions, and digital signatures are the terms we're going to spend a few minutes exploring. Let's start with key clustering.

Every encryption algorithm, whether public key or secret key, will have a key space defined by the length of the key itself: two to the power of n, where n equals the length of the key in bits, is how we determine how large the key space is. Within this key space, the algorithm selects keys at random, using parameters and constraints built into the algorithm to ensure that key clustering does not take place, though a 100% guarantee that this won't happen is extremely difficult to obtain. Key clustering means that each key within a cluster of keys has the ability, in whole or in part, to decrypt a message that was encrypted with a different key. So the algorithm needs to select keys at random to minimize the possibility that this situation will arise.
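
To make the key space formula concrete, here is a minimal Python sketch of two to the power of n (the function name is ours, for illustration; it is not from the course):

```python
# Key space: 2**n possible keys for an n-bit key.
def key_space(n_bits: int) -> int:
    """Number of distinct keys an n-bit key can take."""
    return 2 ** n_bits

# A 56-bit DES key versus modern AES key lengths:
for bits in (56, 128, 256):
    print(f"{bits}-bit key: {key_space(bits):.3e} possible keys")
```

Every added bit doubles the key space, which is why moving from a 56-bit key to a 128-bit key is not merely a bit more work for an attacker but about 2^72 times more.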

We have synchronous, where the encryption or decryption function is processed immediately upon access of the object, and asynchronous, where the encryption or decryption of objects is placed into a request queue and then processed sequentially. Now, a complementary function to encryption is hashing.

Hashing is not encryption. It is a one-way process that is mathematically related to encryption. What it produces is something we commonly call a message digest, which is a fingerprint of whatever object has been passed through the hashing algorithm. In public key encryption, we have the digital signature, which is a product that makes use of both the public and the private key. The digital signature takes an input and passes it through a hash algorithm to generate that fingerprint I just mentioned; then the private key of the sender is used to encrypt that fingerprint. Whatever has been encrypted is then sent to a destination, or stored in a file, along with the digital signature, and the digital signature must be decoded by someone who has the public key related to the private key that was used to create it. When the public key is used to decode the digital signature, it extracts the hash that is at the heart of the digital signature, and then the integrity check operation can be performed. Digital signatures are a product solely of public key encryption and cannot be created by symmetric key encryption.
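
As a small illustration of the message digest as a fingerprint, this sketch uses Python's standard hashlib (the function name is ours; it is not from the course):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 message digest of the input as hex."""
    return hashlib.sha256(data).hexdigest()

d1 = fingerprint(b"Pay Alice $100")
d2 = fingerprint(b"Pay Alice $900")            # one character changed

print(d1)
print(d2)
assert d1 != d2                                # any change alters the fingerprint
assert d1 == fingerprint(b"Pay Alice $100")    # same input, same digest
```

In a digital signature, it is this fixed-length digest, not the whole message, that the sender's private key encrypts.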

Now, asymmetric is another term for public key encryption, and it's called asymmetric because there is a key pair: these keys, generated simultaneously by the Certificate Authority, are mathematically related, but neither can be derived from having the other one. Were that possible, since the public key is created for the owner to distribute to all persons they're going to communicate with, this would be no better than symmetric, or secret key, encryption. So asymmetric encryption must use the public and private keys in all operations, and what one key does, that same key cannot undo; what one key does must be undone by its mate in the hands of the other party. The public key, as I mentioned, is going to be distributed to all persons with whom I, for example, am going to communicate. Along with my public key will go my digital certificate. This can be stored on my own workstation, or it can be stored in a directory system that my enterprise uses and provides for this reason. The digital certificate is an electronic document that attests to the validity of my public key, so that anyone receiving my public key, or obtaining a copy by accessing the directory structure where it's stored, is able to evaluate the key, look at its components, and make sure that it is valid and assigned to whom it represents itself as being assigned. This digital certificate is created and issued by the Certificate Authority, a party that sits at the very top of this particular pyramid. It creates and issues key pairs and digital certificates, plus it performs all of the other lifecycle operations: issuance, revocation, management, and being used to validate whether keys are current and acceptable or not.
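
The "what one key does, its mate must undo" property can be sketched with a toy RSA key pair. These are tiny textbook primes for illustration only; real keys are thousands of bits, and real systems never use raw RSA like this:

```python
# Toy RSA key pair (NOT secure; illustration only).
p, q = 61, 53
n = p * q                   # public modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent: modular inverse of e

m = 65                      # a small numeric "message"
c = pow(m, e, n)            # transform with the public key...

assert pow(c, d, n) == m    # ...only the private mate undoes it
assert pow(c, e, n) != m    # the same key cannot undo its own work
```

Note that even though e and d are mathematically related through phi, recovering d requires factoring n, which is what makes the scheme asymmetric.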

Now, an administrative helper of a sort that can work with the Certificate Authority is the Registration Authority. This is another entity that supports a large network of digital certificates but is unable to create and issue digital certificates and key pairs; it supports the operation of the Certificate Authority by handling much of the local administration and management of these keys and digital certificates as they're being used. Now, continuing our discussion of key encryption concepts and definitions, these are very common terms, but ones that you must be familiar with. We have plaintext, or cleartext. This is the human-readable form that is either an input to an encryption process or the output of a decryption process.

Then we have ciphertext, or the cryptogram, which is the output of an encryption process or the input to a decryption process. The cryptosystem is the complete system: the keys, the algorithm, the key space, the randomness functions, the key management functions, all the different components that make it up. AES, the Advanced Encryption Standard, is a cryptosystem. The Data Encryption Standard of many years before is also a cryptosystem. And each one has its respective algorithm, which goes by a different name. Now, the key, or cryptovariable, is a string of zeros and ones in a specific combination generated by the cryptographic algorithm. Encryption takes plaintext and turns it into ciphertext, while decryption reverses that process.

Non-repudiation is a characteristic that we derive from the public key environment. Literally translated, it means the inability to deny. When non-repudiation is established, it means that the creator of a particular article, the sender of a particular note, or the signer of a particular thing such as a document or an email cannot deny that they were the one who did it, because the digital signature purported to be their product could only have been created by the private key corresponding to the public key that verifies it. And the algorithm, in any of these cases, is the mathematical transformative process that creates the encrypted version, or is used to undo that and recreate the human-readable version.

Cryptanalysis is the study of analytical techniques for attempting to defeat cryptographic methods and information services. This is the family of methods that code breakers will use to examine how an encryption algorithm works; they take it apart to study its strengths and its weaknesses. Cryptography, literally meaning hidden writing, is the science that deals with hidden, disguised, or encrypted communications. Synonymous with cryptography is the term cryptology, which literally translated means the study of things hidden.

One of the things that we are concerned with in hashing is the idea of collision. Each hash algorithm has its own space: two to the power of the length of the hash that's produced is the size of the space from which the hash values can be drawn. Collisions are produced when two different inputs produce the same fingerprint output. Producing a collision is one way an attacker attempts to have something that is not the genuine or original article taken as the genuine or original article. Collisions that are easy to produce represent serious flaws in a hash algorithm. Theoretically, collisions are possible in every algorithm that has ever been produced or, under our current mathematics, is ever likely to be produced. What we're attempting to do is ensure that they are neither easy nor feasible within a short period of time; the goal is to make it more and more difficult for collisions to be produced within anything less than an extremely long period of time or an extremely high number of attempts. And as I've defined the key space, it represents the total number of values for any cryptographic or hash algorithm, and the formula is two to the power of n, where n equals the length of the item in bits.
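
The collision idea can be demonstrated by deliberately weakening a hash, truncating SHA-256 to 16 bits so its output space is only 2^16 (a sketch of ours, not from the course):

```python
import hashlib

def tiny_hash(data: bytes) -> str:
    """Deliberately weak 16-bit hash: first 4 hex digits of SHA-256."""
    return hashlib.sha256(data).hexdigest()[:4]

# With only 2**16 outputs, the birthday bound predicts a collision after
# roughly 2**8 = 256 random inputs; the pigeonhole principle guarantees
# one within 2**16 + 1 distinct inputs.
seen = {}
for i in range(2 ** 16 + 1):
    msg = f"message-{i}".encode()
    h = tiny_hash(msg)
    if h in seen:
        print(f"collision: {seen[h]!r} and {msg!r} both hash to {h}")
        break
    seen[h] = msg
```

A full 256-bit hash has 2^256 possible outputs, which is why collisions exist in theory for every hash algorithm yet are not feasible to find in practice.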

We have our work factor. This is the time and effort required to break a protective measure. One of the key elements of work factor is finding the balance between the protective value needed and the value of the article being protected. We can set the work factor at the very highest level by selecting the longest encryption key available, but if that isn't commensurate with the value of the thing we're protecting, we're exerting an awful lot of effort, compute time, and energy to protect something that isn't worth it. While it certainly will provide adequate protection, it's far more than is needed, so having a very good sense of the work factor involved will help us align the protective value of the components we select with the value of the thing we're going to protect with them.

Now, encoding and decoding are the actions that change a message into another format through the use of a code, with decoding being the reversal. This would be something along the lines of taking an English-language message and changing it into Egyptian hieroglyphics. The initialization vector is a term for a part of the key of any cryptographic system that is used to initiate the randomization process for generating keys or for starting the encryption process for a given input. Transposition and permutation are two mathematical techniques used to rearrange the characters of the original plaintext into the jumbled version we know as the cryptogram, or output. These are used to make certain that the randomness contained within the product is as high as it can reasonably be raised, so that any sort of pattern, any representation of anything that might correspond back to the original plaintext input, is destroyed, making it that much more difficult for a code breaker to reassemble the original human-readable form.

Substitution is a complementary technique to transposition and permutation. This is the technique of changing one letter from the source into a different letter in the product, as in the case of the Caesar Cipher. Now, the SP-network is the formal name for what we call rounds; SP stands for substitution and permutation. Block ciphers use a number of rounds of substitution and permutation to heighten the randomness produced through the encryption process. Accompanying this are the terms confusion and diffusion: confusion is produced by mixing or changing the key values, while diffusion mixes up the location of the plaintext throughout the ciphertext. All told, these are further methods for producing a heightened amount of randomness and pattern destruction in the ciphertext output.

Now, the avalanche effect is typically related to hashing, where a change of some sort on an input produces a change of some order of magnitude on the output. For example, if we have a hash value output from an email and we change as little as one bit of the input to that message, which is not even as great as changing a period to a comma, on average about 50% of the total bits in the output of the new hash will change. This is our detection mechanism, used commonly in email systems, so that our systems will very quickly detect a change of virtually any magnitude on our input as compared to the true original. So, as I was saying about the high work factor, this is measured in hours of computing time necessary to retrieve a plaintext from a ciphertext; this is what it costs to break it. As I mentioned earlier, what we're trying to do is balance the work factor necessary to protect an informational asset at a level of strength equivalent to the value of that asset. Trying to protect it at the highest level possible is far too expensive in computing resources and not in line with the value of the asset itself, so seeking the balance is what we're attempting to do. Now, cryptosystems typically come in a couple of forms. One form is the stream-based cipher. This form encrypts on a bit-by-bit basis and is most commonly associated with streaming types of applications, such as audio or visual media. This slide, as a symmetric stream cipher example, shows how this is done.

Working from left to right, we have the plaintext at the source. We create the keying material from which we are going to generate a key stream, which is the product of a block of pseudorandomly generated bits; you might see this abbreviated as PRG, as RNG (random number generator), or as PRNG (pseudorandom number generator), so please be aware of them. The key stream generator will put out a stream of bits, which is then mixed with the stream of the input through the XOR, or exclusive-or, operation. This performs the encryption operation; the result is transmitted as ciphertext, encrypted in transit, and at its destination this process is directly reversed. If this is something like a video, it is returned to its plaintext, in other words a watchable video, at the destination. The cryptographic operation for a stream-based cipher relies to a great degree on this exclusive-or operation. It generates the ciphertext by doing an apparently random bit-flipping operation, flipping a bit from zero to one or from one to zero depending upon the truth table operation you see here. This Boolean operator states that the output is one if either one input or the other is one, but not both and not neither. So in the truth table, with inputs A and B, two zeros produce an output of zero, and two ones also produce an output of zero, but if the two bits differ, the output is one, and this apparently random flipping serves to heighten the randomness of the overall data stream as it travels.
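
The keystream-and-XOR arrangement just described can be sketched as follows. The keystream generator here is a stand-in of ours, built from a hash with a counter, not a production PRNG:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy pseudorandom keystream: hash the key with a running counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR the data with the keystream; the same call also decrypts,
    because flipping a bit twice restores it (x ^ k ^ k == x)."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

ct = xor_cipher(b"streaming media frame", b"shared secret")
assert xor_cipher(ct, b"shared secret") == b"streaming media frame"
```

Encryption and decryption are the same operation here, which is exactly why the diagram shows the process being "directly reversed" at the destination.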

Now, the operation of the cipher relies primarily on substitution, but these requirements must be met in order for it to be of sufficiently random strength that it cannot be broken. The key stream should bear no linear relationship to the cryptovariable. It must be statistically unpredictable, meaning that no matter how many bits you've collected, you can't do any better at predicting the very next bit than a 50/50 chance. Statistically unbiased means that in the entire key stream of whatever the broadcast is, audio or video, you will have the same number of zeros as you do of ones. Any sort of pattern or bit-stream repetition will happen only once; if you've missed it, there won't be a repetition. And functional complexity means that the key stream is put together in such a way that trying to deconstruct it by figuring out the algorithm is a practical impossibility. The other mode is block mode.

Now, a block mode cipher operates on blocks, or chunks, of text. As the plaintext is fed into the cryptosystem, it is divided into blocks of a preset size, the most common being 64 bits, though 128, 192, and other sizes are also present, and these are based on ASCII character size. Now, the initialization vectors, as I mentioned, are used to heighten randomness. These are a fixed-size input to the cryptographic primitive that is typically required to be random, or pseudorandom if you're a mathematical purist. Randomization as a characteristic of encryption systems cannot be overstated in its importance. The whole idea is to ensure that in what is produced, whether it's the functional complexity of the system itself or the randomness of the stream of characters that make up the output, there is no way that a pattern can be discerned, described, or discovered, regardless of how much time or effort a code breaker puts in.

Now, some cryptographic systems require only that the initialization vector be non-repeating, with the required randomness derived internally from the operation of the algorithm. In this case, the initialization vector is called a nonce, which stands for number used once. An example of a stateful encryption scheme is the counter mode of operation, which uses a sequence number as a nonce, each sequence number being used only once. The size of the IV is generally related to the cipher's block size; ideally, the predictable part of the IV has the same size as the key, to compensate for time-memory-data trade-off attacks. Now, the block cipher modes we commonly find are these. We have Electronic Code Book, abbreviated ECB, and because no IV is used in ECB, messages are typically best encrypted by this method only if they are short, say less than 64 bits in length, such as the transmission of a 56-bit DES key.

Now, Cipher Block Chaining mode is a block mode that employs initialization vectors to heighten its randomness. Cipher Feedback and Output Feedback are both stream mode data encryptions. The Cipher Feedback mode has one drawback: it is susceptible to forward error propagation, because it has no mechanism built into it to correct for errors carried forward. Stream mode data encryption using Output Feedback does not propagate such errors forward. Now, the counter mode that was mentioned is used in high-speed computing applications such as IPSec and ATM. In all cases of encryption, key length plays an extremely important part. As I mentioned, the work factor is an attempt to balance the strength of the encryption mechanism with the value of the asset being protected, and obviously a critical aspect of that will be key length. Generating keys of any length requires computing resources, which means time and compute cycles. The key itself, even if the cryptographic algorithm in a particular use is generically known, should be concealed from any sort of discoverable knowledge. Along with key size will be the block size. Now, the key size and the block size are related; like the key length, the block size has a direct bearing on the security of the key. Block ciphers produce fixed-length blocks of ciphertext, and in some cases this may require that padding be added, as it did in the Data Encryption Standard.

Now, within encryption systems there are, of course, many variations on this particular theme of encrypting and decrypting information, and many of them share the same characteristics and operations. One of them is the null cipher; another is the substitution cipher. The null cipher, also known as a concealment cipher, is an ancient form of encryption where the plaintext is mixed with a large amount of non-cipher material. Today this is regarded as a very simple form of steganography, which can be used to hide the ciphertext. In classical cryptography, a null is intended to inject confusion. In a null cipher, the plaintext is included within the ciphertext, and one needs to discard certain characters in order to decrypt the message. Most characters in such a cryptogram are nulls; only some are significant, and some others can be used as pointers to the significant ones. Various techniques, as you see, have been added over the centuries to heighten the strength of a null cipher. The substitution ciphers are based on the idea of substituting one letter for another based on some cryptovariable or other formula; this involves shifting the positions of the alphabet by a defined number of characters.
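
A minimal sketch of a null cipher, assuming the sender and receiver have agreed in advance that only the first letter of each word is significant (the cover text is invented for illustration):

```python
# Null cipher: most characters are nulls; by prior agreement, only the
# first letter of each word carries the hidden message.
cover = "susan entered north doorway hearing every loud protest"

hidden = "".join(word[0] for word in cover.split())
print(hidden)  # -> "sendhelp"
```

Everything except the significant letters is discarded at decryption, which is why this is considered steganography rather than true encryption.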

Transposition ciphers, on the other hand, use transposition or permutation as their methods. These rely on concealing the message through transposing, or interchanging, the order of the letters of the plaintext in the output product. Now, a simple transposition cipher, known as the Rail Fence, takes a message that is written, in this particular example, on two lines. The message, purchase gold and oil stocks, would be written in diagonal rows as shown. As you see, starting at the upper left with P, we move down one and over one to find the U, straight back up to find the R, and then repeat this action as we go through. We then transcribe the message starting at the upper left and going directly across until we reach S, O, K, discarding the blanks, and then continue with the lower row, starting with the U and ending with the S. The ciphertext, thus transcribed, would read as the string of characters you see at the bottom. Now, all other things being equal, there is no detectable pattern within this, and as a very simple sort of transposition cipher it could be very successful in a one-time-use scenario. A variation on this same sort of thing is the rectangular substitution table.
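
The two-line Rail Fence just described reduces to reading the even-position characters (the upper rail) and then the odd-position characters (the lower rail). A sketch; the function name is ours:

```python
def rail_fence_2(plaintext: str) -> str:
    """Two-rail Rail Fence: zigzag the message across two rows,
    then read the top row followed by the bottom row."""
    top = plaintext[0::2]      # characters landing on the upper rail
    bottom = plaintext[1::2]   # characters landing on the lower rail
    return top + bottom

print(rail_fence_2("purchasegoldandoilstocks"))
# -> "prhsgladisokucaeodnoltcs"
```

Note that the upper rail ends in S, O, K, exactly as the transcript describes, before the lower rail beginning with U is appended.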

Now, as an early form of cryptography, this relied on the sender and the receiver having decided on the size and structure of the table from which to draw the message, and the order in which to read it. As you see, this particular block, which was at the heart of the Vigenère ciphering system, uses 26 alphabets both down and across and then uses character shifting as it picks out the characters from the plaintext input to develop the ciphertext output. Over time, ciphering systems have employed mono-alphabetic systems, where a single alphabet is used, or, as you saw on the previous slide with the Vigenère cipher, poly-alphabetic systems, 26 alphabets in that particular example. Now, in a running key cipher, which can make use of one alphabet, the key is repeated, or run, for the same length as the plaintext input. The operation looks like this: the ciphertext equals the plaintext plus the key, mod 26, which is based on the number of characters in the alphabet. So the formula would be written C = (P + K) mod 26, and the ciphertext is equal to the value of the plaintext plus the value of the key, where A = 0, B = 1, C = 2, and so on through Z = 25. Now, one of the cryptographic techniques that has been used since 1914 has been the Vernam cipher, which we also know as the one-time pad.
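
The formula C = (P + K) mod 26 translates directly into code. This is a sketch of ours, using the classic ATTACKATDAWN/LEMON example values rather than anything from the course:

```python
def encrypt(plaintext: str, key: str) -> str:
    """Running key cipher: C = (P + K) mod 26, with A=0 ... Z=25.
    The key is repeated ('run') to the length of the plaintext."""
    return "".join(
        chr((ord(p) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
        for i, p in enumerate(plaintext)
    )

def decrypt(ciphertext: str, key: str) -> str:
    """P = (C - K) mod 26 reverses the shift."""
    return "".join(
        chr((ord(c) - 65 - (ord(key[i % len(key)]) - 65)) % 26 + 65)
        for i, c in enumerate(ciphertext)
    )

ct = encrypt("ATTACKATDAWN", "LEMON")
print(ct)  # -> "LXFOPVEFRNHR"
assert decrypt(ct, "LEMON") == "ATTACKATDAWN"
```

Subtracting 65 (the code point of "A") maps each letter onto the 0-25 values the formula assumes.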

Since the work of Gilbert Vernam in that period, it has been proven that this ciphering system is the only unbreakable form, so long as it meets certain criteria. Proven unbreakable by Claude Shannon in 1949, the result holds as long as the key material that makes up the pad has, in fact, been generated by a sufficiently random process, is exactly the same length as the text that will be enciphered with it, and is never reused, thus giving it its name, the one-time pad; the unbreakable characteristic is a result of that true randomness. Now, in symmetric algorithms, that being secret key, managing the key and protecting it from disclosure is, of course, one of the most important aspects of keeping the content encrypted by that key secret, kept away from those who are not authorized for it. So, as you see in the process here, the plaintext is encrypted using a specific key, the ciphertext is transmitted, and then at its destination it is processed by the decryption method using the same key, returning it to plaintext in the hands of the authorized receiver. But in order to keep the key secret, it has to be transmitted or delivered by some mechanism out of band, so that should some party be listening in, as it were, to the line where the ciphertext travels, they will not also pick up the keying material.
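
A one-time pad in code: the pad is drawn from a cryptographically random source, is exactly the message length, and must never be reused. A minimal sketch using Python's secrets module:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple:
    """Generate a truly random pad as long as the message and XOR with it.
    Returns (ciphertext, pad); the pad must be delivered out of band."""
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    """XOR with the same pad restores the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

ct, pad = otp_encrypt(b"meet at dawn")
assert otp_decrypt(ct, pad) == b"meet at dawn"
```

If the pad is ever reused, XORing two ciphertexts together cancels the pad out and leaks the XOR of the two plaintexts, which is exactly why the pad is one-time.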


Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.

Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.