
Digital signatures and Digital Rights Management (DRM)

The course is part of this learning path

Preparation for the (ISC)² CISSP Certification (Preview)
Overview

Difficulty: Advanced
Duration: 48m
Students: 4

Course Description

This course is the fourth of six modules within Domain 3 of the CISSP, covering security architecture and engineering.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • The history of cryptography across the eras
  • The principles and life-cycles of cryptography
  • Public Key Infrastructure, known as PKI, and the components involved
  • Digital signatures and how they are used
  • Digital rights management (DRM) and associated solutions

Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

Prerequisites

Any experience relating to information security would be advantageous, but is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Now, speaking about the digital signature services provided to us by public key encryption: these are things that purely symmetric systems cannot do. Without the use of hashing, a symmetric system does not provide integrity or authentication, and it cannot provide non-repudiation at all.

Now a message can be encrypted, which of course provides us the confidentiality we need. It can be hashed, and frequently is, which provides our integrity check value so that we know if it's been tampered with or in any way altered. It can be digitally signed, which provides authentication, supports the integrity function, and provides non-repudiation. If we combine all of these by encrypting the message and adding a digital signature, we get confidentiality in addition to authentication, non-repudiation, and integrity. So, what is a digital signature actually?

Well, as you see here, we have a MAC, a message authentication code, which is a message digest computed over some input and encrypted with a secret key. Now look at the digital signature: a message digest encrypted with a private key. The thing to note is this: the secret key is a shared item. By its very nature it must be, because a recipient cannot verify the MAC without it. But the digital signature, a message digest encrypted with a private key, can be done and undone only with the matching public and private key pair: the private key used by the sender, the public key used by the recipient to decode it, extract the message digest, and in doing so verify the identity of the sender. Now the use of digital signatures is first seen when the CA signs the digital certificate it issues to the purchaser, the owner of that certificate.
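The sign-with-the-private-key, verify-with-the-public-key relationship just described can be sketched with a toy RSA example. The numbers here are textbook-sized for illustration only; real signature schemes use keys thousands of bits long with proper padding:

```python
import hashlib

# Toy RSA key pair -- illustrative numbers only, never use at this size
p, q = 61, 53
n = p * q            # public modulus (3233)
e = 17               # public exponent
d = 2753             # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

def sign(message: bytes) -> int:
    # Hash the message, then encrypt the digest with the PRIVATE key
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Decrypt the signature with the PUBLIC key and compare digests
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"wire $100 to Alice")
assert verify(b"wire $100 to Alice", sig)                # sender verified
assert not verify(b"wire $100 to Alice", (sig + 1) % n)  # forged signature fails
```

Anyone holding the public key can run the verification, but only the holder of the private key could have produced the signature, which is what gives us authentication and non-repudiation.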

Now the digital certificate is an electronic document that attests to authenticity and data integrity. It is tied to a sender, and it attests to the validity of the key itself, because it is tied to the public key and travels with the public key as a recipient acquires that key from its owner, the sender. Many governments and courts recognize digital signatures and other forms of electronic signatures as a verifiable form of authentication. This public key infrastructure is a fundamental ingredient in the digital rights management world. Now DRM, as it's known, which can also be called IRM for information rights management, has three key components: the creation of the content to be secured by this method, the distribution and upkeep of the various components, and the various allowances or restrictions on the content's use. DRM comprises a broad range of technologies that grant content providers control and protection over their own digital media. It should account for all three of these components if it is to be effective and define the interactions between users, permissions, and the content itself.

One form is always-on, meaning that every time a user activates and uses content that carries this technology, the protection must be accessed each and every time. It may have an expiration or it may not, but unless it's activated, the content it secures cannot be used or even viewed. There is the digital watermark, which might be an embedded symbol within the content; it must always be there. Without that digital watermark, which must be verified by the boot-up process that brings up the content, the content may not activate and will no longer be available to the user. Sometimes DRM uses a USB key or dongle, where the key must be in place and the code on it verified by the application that opens the content; without this key, of course, the content can't be accessed. And then there are other forms of fingerprinting, whereby the ownership rights are asserted each time a licensed user accesses the content it protects. Now non-repudiation and integrity have to be established, and they are typically established through the use of the public key and hashing.

Non-repudiation ensures that the sender cannot deny that a message was sent and that the integrity of the message is intact. Non-repudiation is typically accomplished through the use of digital signatures and PKI. Hashing provides the fingerprint of a message, but it does not contain, transform, secure, or obscure the original message content as encryption does; what it generates is an output. This is an alteration-detection technology used to ensure the authentication and integrity of the information, but it isn't used to ensure the confidentiality of the content. That is reserved for encryption. Now for a hash function to be considered useful and usable, it typically has to reflect these properties. It needs to be flexible, that is, it needs to accept an input of basically any length, anything from "hello world" to War and Peace, and from that generate a fixed-length output. Its outputs should be uniformly distributed. It needs to be widely available: if, for example, you use email and hash with MD5 or an SHA-2 algorithm, then whoever you're sending to, and that means literally whoever, should have access to those same algorithms so that they can generate the hash on their own from what they've received and have a reasonable chance of confirming that it arrived in an untampered, unmodified form. Without the ability to run those very same algorithms, they wouldn't be able to do that. A hash algorithm must be deterministic, meaning that while the output it generates is related to the input without being a transformation of it, the same input must always produce the same hash value. And it does need to be difficult to invert.
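A few of these properties, fixed-length output, determinism, and sensitivity to any change in the input, can be checked directly with Python's standard hashlib; this is a sketch using SHA-256:

```python
import hashlib

# Flexible: any input length produces the same fixed-length output
short = hashlib.sha256(b"hello world").hexdigest()
long_ = hashlib.sha256(b"x" * 1_000_000).hexdigest()
assert len(short) == len(long_) == 64      # 64 hex chars = 256 bits

# Deterministic: the same input always yields the same digest
assert hashlib.sha256(b"hello world").hexdigest() == short

# Any change to the input yields a completely different digest
assert hashlib.sha256(b"hello world!").hexdigest() != short
```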

Now hashing is a one-way process. So trying to reverse it would be reverse engineering to determine how the mathematics works to produce the outputs that it does. But unlike encryption, where reverse engineering means figuring out how keys are generated so that we might look for a weakness to break it, there would be no point in reverse engineering a hash to determine the input from which it generated its output, because the output is not going to give you back the content of the actual input. It should be collision-resistant in both a weak and a strong sense. Weak collision resistance means that, given one specific input, it should be difficult to find a second input that hashes to the same value. Note the language here: weak starts from a specific message. Strong collision resistance means it should be difficult to find any two inputs at all that hash to the same value. Now the hashed message authentication code takes a small block of data that is generated using a secret key and appends it to the message itself, so it's used much like a message digest is used. When the message is received, the recipient generates his or her own MAC using the very same secret key and compares it to the one received, to know that the message has not changed.
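The MAC exchange just described, where the sender appends a keyed tag and the recipient recomputes it with the same secret key, can be sketched with Python's standard hmac module:

```python
import hashlib
import hmac

secret = b"shared-secret-key"            # both parties must hold this key
message = b"invoice 1042: pay $250"

# Sender: compute the MAC and send it along with the message
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Recipient: recompute with the same key and compare;
# compare_digest does a constant-time comparison
expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(expected, tag)        # message unchanged

# A tampered message produces a different tag
tampered = hmac.new(secret, b"invoice 1042: pay $9250",
                    hashlib.sha256).hexdigest()
assert not hmac.compare_digest(tampered, tag)
```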

Now using an HMAC specifically is intended to foil the multi-collision attack, because what we have is a message digest keyed with a secret and then hashed a second time; with two layers of hashing having been performed, the attacker has two hashes to resolve instead of one. Now the MD5 message digest algorithm is very widely used. It is described in RFC 1321; RFCs will not appear on the exam, by the way. MD5 generates a 128-bit fixed-length digest from any message put through it. This is done using 512-bit blocks processed in four rounds of hash processing. Now because of its 128-bit length, finding a collision in MD5, that is, any two messages that hash to the same value, is estimated to take on the order of two to the power of 64 operations, while finding a message that hashes to the same value as a specific other message is given the full value of two to the power of 128. Now, to begin an estimation of how difficult it is to find either kind of collision, we have the birthday paradox.
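The two layers of hashing inside an HMAC can be seen by building one by hand. This sketch follows the HMAC construction (an inner hash keyed with an ipad, then an outer hash keyed with an opad) and checks it against the standard library:

```python
import hashlib
import hmac

def manual_hmac_sha256(key: bytes, msg: bytes) -> bytes:
    block = 64                                    # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()        # long keys are hashed first
    key = key.ljust(block, b"\x00")               # then zero-padded to block size
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()   # first hash layer
    return hashlib.sha256(opad + inner).digest()  # second hash layer

key, msg = b"secret", b"attack at dawn"
assert manual_hmac_sha256(key, msg) == hmac.new(key, msg, hashlib.sha256).digest()
```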

Now the birthday paradox poses the question: how many people would I have to gather to have a greater than 50% probability that two of them share the same birthday, any one of the 366 possible dates, counting February 29th in leap years? For a match between any two people, the answer is 23 selected at random: with 23 people, there is a greater than 50% likelihood that two of them share a birthday. For a specific chosen birthday, say your own, you would require roughly ten times that many people, 253 on average, to reach that same 50% probability. Now in classes that I have conducted over the years, I've taught several thousand students this class, and I've tried this experiment in virtually every one. Only on a couple of occasions have I ever done better than this. On one occasion in particular, I had three people with the same birthday, two of them in the same year.
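Those two thresholds, 23 people for any shared birthday versus 253 for one chosen birthday, can be verified with a few lines of Python, using the conventional 365-day model:

```python
from math import prod

def p_any_match(people: int, days: int = 365) -> float:
    # P(at least two people share a birthday) = 1 - P(all distinct)
    return 1 - prod((days - i) / days for i in range(people))

def p_specific_match(people: int, days: int = 365) -> float:
    # P(at least one person has a particular chosen birthday)
    return 1 - ((days - 1) / days) ** people

assert p_any_match(23) > 0.5        # any pair: 23 people suffice
assert p_any_match(22) < 0.5
assert p_specific_match(253) > 0.5  # a chosen date: about ten times as many
assert p_specific_match(252) < 0.5
```

The gap between 23 and 253 mirrors the gap between MD5's birthday-bound collision estimate of two to the power of 64 and its full second-preimage strength of two to the power of 128.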

Now statistically, that is a possibility in any setting, just as it's possible that the very first key or the very first hash value you attempt will be the one that you want. So probabilities are always what we have to work with here, not possibilities but probabilities. Now the likelihood of finding a collision between two messages and their hashes may be a lot greater than many have believed, meaning that collisions are actually easier to produce than was originally thought. Now the Secure Hash Algorithm is a family of algorithms, SHA-0, -1, -2, and -3 currently. SHA started out being based on the MD4, or message digest version four, algorithm, whereas SHA-1 follows the design logic of MD5, the successor to MD4. SHA-1 operates on 512-bit blocks and can handle any message up to two to the power of 64 bits in length. SHA-0 and SHA-1 both produce a fixed-length output of 160 bits, and this processing is done in four rounds of 20 steps each. Now SHA-1 was moved to deprecated status in September of 2012 because collisions became more feasible, economically and mathematically, than had been shown before then. At this point in time, SHA-2 remains an acceptable mode in all of its output lengths, though SHA-3 is now preferred over it. The winning algorithm used in Secure Hash Algorithm version three was Keccak, which was named in October of 2012.
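The digest lengths quoted here can be confirmed with Python's hashlib, which exposes the whole family:

```python
import hashlib

# SHA-1: 512-bit blocks, 160-bit digest (deprecated for new designs)
assert hashlib.sha1(b"").block_size * 8 == 512
assert hashlib.sha1(b"").digest_size * 8 == 160

# SHA-2 family members produce 224-, 256-, 384-, or 512-bit digests
assert hashlib.sha256(b"abc").digest_size * 8 == 256
assert hashlib.sha512(b"abc").digest_size * 8 == 512

# SHA-3 (Keccak) offers the same output lengths under a new construction
assert hashlib.sha3_256(b"abc").digest_size * 8 == 256
```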

Now the standard hashing algorithm is SHA-3, because no collisions have yet been demonstrated in it. It augments the hash algorithms currently specified in Federal Information Processing Standard 180-4, which specifies the Secure Hash Standard that these are members of. Now HAVAL breaks the mold in that it produces a variable-length output. It performs a variable number of rounds of operation on 1,024-bit blocks of input. The output may be 128, 160, 192, 224, or 256 bits, and the number of rounds may vary from three to five. HAVAL has the advantage that it operates 60% faster than MD5 when only three rounds are used and is as fast as MD5 when it does its full five rounds. Now HAVAL is actually an acronym, a contraction of "hash of variable length." It is the only hash algorithm of its generation that produces a variable-length output.

So, we're going to stop at this point at the end of this module. We will continue in the next module with our discussion on encryption and talk about common attacks against cryptography beginning on section eight. But for now, we're going to pause here. I'll see you next time. Thank you.

About the Author

Students: 378
Courses: 16
Learning paths: 1

Mr. Leo has been in Information Systems for 38 years and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.

 

Upon attaining his CISSP license in 1997, Mr. Leo joined (ISC)² in a professional role as Chairman of the Curriculum Development Committee, serving until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. As a professional educator, he has since trained and certified nearly 8,500 CISSP candidates since 1998 and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an (ISC)² Certified Instructor.