This course covers the foundations of information security and prepares students for the CISM (Certified Information Security Manager) exam. It starts off by taking a comprehensive look at the fundamental, core concepts of information security. We then move on to governance, goals, strategies, policies, standards, and procedures of information security, before finally doing a deep dive on security strategy.
If you have any feedback relating to this course, feel free to reach out to us at support@cloudacademy.com.
Learning Objectives
- Prepare for the CISM exam
- Understand the core concepts of information security
- Learn about governance, goals, strategies, policies, standards, and procedures of information security
Intended Audience
This course is intended for those looking to take the CISM (Certified Information Security Manager) exam or anyone who wants to improve their understanding of information security.
Prerequisites
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.
Welcome to the first part of the CISM examination prep course. This is the first half of our course presentation and it covers security foundations. This is intended to serve as a refresher of the fundamental concepts and processes that a CISM candidate should have a solid grasp on. Re-familiarizing yourself with the information covered here will also serve as a basis for the second part of the course, which will cover the four domains of the CISM.
This part is divided into six modules, each one of which will take you about an hour to cover. This format should make your in-depth coverage of these topics easier by providing you several self-contained subject areas in readily explored segments. It will also help you discover areas that may require deeper exploration in order to help you gain an enhanced understanding of the given area.
The information presented here will very likely appear in some form on the CISM exam, whether it's an actual question to be answered, as the context for a question on one of the CISM principles, or as a part of a scenario style question.
Although we cannot predict which form it will appear in, there can be little doubt that the information presented here will be very valuable in any case. So be sure to focus on the various concepts to be shown to you and make sure you grasp them well.
So, we're going to begin on slide two, which starts section one, entitled Security Concepts. The first principle we're going to discuss is the principle of least privilege. Stated simply, the principle of least privilege means giving a person the access they need, at the proper level, to accommodate all of their job duties, but with the lowest possible exposure of the information. Giving people only the minimum resources and access they need decreases the overall risk to that information: it prevents a person from having too much access and being able to adversely impact the data as a whole.
Once the information access has been granted, it can of course be changed either up or down or in some other way expanded or extended, if the person's job duties change. There are of course caveats that come with trying to apply the principle of least privilege. In every case of identity and access management, the administrator should begin with a clear plan for how access will be defined and provisioned. This requires paying increased attention to the segmenting of resources to ensure that resources can be properly aligned with the access required by the individual's jobs.
With regard to any access being granted, there are trust and other requirements that must be met before a person is ultimately provisioned access. A person needs to be authorized for access and have a valid need for that access. The process, as we know, begins with identification, followed by authentication, authorization, and accountability. To begin this process, we must first establish that a person has the level of trust and the need to know for the information to which they are requesting access.
One thing to be careful of is that access is not granted to such a degree that the authority exceeds the actual need that has been defined, but enough that the person for whom the request is being submitted has sufficient access to be successful in their role. Going through this process adds an extra layer of security: the access granted to individuals does not exceed their actual need, which prevents the exposure of the information through excessive levels of privilege and the potential corruption of that information.
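To make the idea concrete, here is a minimal, hypothetical sketch of provisioning access by role so that a person receives only the permissions their job duties require, and nothing more. The role names and permission sets below are invented for illustration, not values defined by the CISM material:

```python
# Minimal least-privilege provisioning sketch.
# Role names and permission sets are hypothetical examples.

ROLE_PERMISSIONS = {
    "accounts_payable_clerk": {"read_invoices", "enter_invoices"},
    "payroll_analyst":        {"read_timesheets", "run_payroll_report"},
    "security_admin":         {"read_audit_logs", "manage_user_accounts"},
}

def provision_access(user: str, role: str) -> set[str]:
    """Grant only the permissions defined for the user's role (least privilege)."""
    permissions = ROLE_PERMISSIONS.get(role)
    if permissions is None:
        raise ValueError(f"No permission set defined for role '{role}'")
    print(f"Provisioning {user} ({role}) with: {sorted(permissions)}")
    return set(permissions)  # copy, so later changes don't alter the role definition

def is_authorized(granted: set[str], requested: str) -> bool:
    """An action is allowed only if it falls within the granted (minimal) set."""
    return requested in granted

granted = provision_access("jdoe", "accounts_payable_clerk")
print(is_authorized(granted, "enter_invoices"))      # True  - within job duties
print(is_authorized(granted, "run_payroll_report"))  # False - exceeds defined need
```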
Here, we see how that access is going to be granted through the initial process of establishing a level of trust and the requirements of need for the person for whom the access will be granted. We begin with a clearance for that subject. Even though it has a military sort of connotation, a clearance means we're going to look into that person's background and investigate various aspects of it to establish a level of trust.
Now, the clearance will equate to a characteristic possessed by the object, the data that the individual is going to be accessing. The clearance itself establishes a level of trust for the subject. This parameter is going to be compared with a comparable one assigned to the object that the subject will be accessing.
The clearance itself is hierarchical in nature. An example would be the clearance levels assigned to people in the military: someone with a secret clearance is also cleared for confidential, and someone with a top secret clearance is cleared for secret and confidential as well.
Now, along with the clearance, a characteristic known as need to know, or NTK for short, will be established. This parameter must match the comparable level assigned to the data asset the individual will be accessing. Unlike the clearance, however, need to know must precisely match the comparable characteristic of the data object.
The object, or passive entity, that will be accessed by the individual carries the classification parameter. Classification equates to clearance in terms of how the two are matched: classification defines the data sensitivity level and is compared to the subject's clearance level in order to equate the sensitivity of the object with the trust placed in the individual. As before, the classification is typically hierarchical in nature.
Now, to compare with the need-to-know parameter, the object will have a compartment, or more commonly, a category. This parameter must precisely match the need to know assigned to the subject before access is actually granted.
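As an illustration only, the matching rules just described can be sketched in a few lines of code. The clearance levels and category names below are hypothetical; the point is that the clearance must be at least the classification (hierarchical comparison), while need to know and category must match exactly:

```python
# Hypothetical sketch of the access decision described above.
# Clearance/classification are hierarchical; need to know / category must match exactly.

LEVELS = {"confidential": 1, "secret": 2, "top secret": 3}  # example hierarchy

def access_permitted(subject_clearance: str, subject_need_to_know: set[str],
                     object_classification: str, object_category: str) -> bool:
    # Hierarchical check: the subject's clearance must dominate the object's classification.
    trusted_enough = LEVELS[subject_clearance] >= LEVELS[object_classification]
    # Exact-match check: the object's category must be within the subject's need to know.
    need_matches = object_category in subject_need_to_know
    return trusted_enough and need_matches

# A subject with secret clearance and need to know for "payroll":
print(access_permitted("secret", {"payroll"}, "confidential", "payroll"))  # True
print(access_permitted("secret", {"payroll"}, "top secret", "payroll"))    # False - not trusted enough
print(access_permitted("secret", {"payroll"}, "confidential", "mergers"))  # False - no need to know
```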
Along with clearance and need to know for the subject, and classification and category for the object, there is another general characteristic that governs how access is granted. This is called segregation of duties. The purpose of segregation of duties is to prevent a single role from having too much authority in its ability to access information.
Along with segregation of duties is another characteristic, known as separation of duties. Separation of duties is different from segregation in the sense that it prevents a combination of duties that would create a conflict of interest.
As an example, a person who prints checks should not also be allowed to change the names on the checks. Combined with their access to print checks, this would enable them to print checks and change the amounts, as well as the names, for anyone they liked, creating a serious conflict of interest.
Segregation and separation of duties therefore mean that different people should be required to execute distinct actions in cooperation with each other, to prevent this sort of conflict of interest and collusion from occurring. This reduces the chance of fraudulent activities and behaviors.
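The check-printing example can be expressed as a simple conflict check. The duty pairs below are hypothetical; the idea is simply to flag any single role that combines duties declared to be in conflict:

```python
# Hypothetical separation-of-duties check: flag roles that combine conflicting duties.

CONFLICTING_PAIRS = [
    ("print_checks", "modify_payee_names"),      # the example above
    ("approve_payments", "reconcile_accounts"),  # another illustrative pair
]

def sod_violations(role_duties: dict[str, set[str]]) -> list[tuple[str, str, str]]:
    """Return (role, duty_a, duty_b) for every conflicting pair held by a single role."""
    violations = []
    for role, duties in role_duties.items():
        for duty_a, duty_b in CONFLICTING_PAIRS:
            if duty_a in duties and duty_b in duties:
                violations.append((role, duty_a, duty_b))
    return violations

roles = {
    "treasury_clerk": {"print_checks", "modify_payee_names"},  # conflict
    "auditor":        {"reconcile_accounts"},
}
print(sod_violations(roles))  # [('treasury_clerk', 'print_checks', 'modify_payee_names')]
```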
One of the defining attributes of data is criticality. Part of defining criticality is to assess and assign what level of impact, meaning what type and order of magnitude of impact, the loss of the given asset will have. This establishes the level of importance that the given asset has to the business.
For example, the loss of a specific IT system will prevent orders from being processed in a timely manner. Because processing orders is critical to the company, this system is critical to the business, and its loss or impaired operation has a critical impact on the business itself.
Another example would be the payroll system going down. Although the loss of the payroll system is, in the minds of the workforce, a critical loss, it is less critical because the business can survive longer without payroll than it can without the IT system that processes orders.
Criticality is also looked upon as a binary sort of trait. By this we mean that its presence enables business functions to take place; if it is lost, its absence means a critical function does not take place. In a similar fashion, certain assets will have an assigned sensitivity. This characteristic, in the same sense as criticality, defines and provides an indicator of a system's greater or lesser importance to the business operation.
While criticality is defined as a binary quality, sensitivity defines more of a gradient. As an example, a system operating at a certain level of function produces a certain amount of output. As that system's performance degrades, its output necessarily decreases. A decrease in performance can magnify business losses and reduce the revenue generated from that system's output, so that when the system's performance degrades, rather than the system being taken out of operation altogether, the benefit it provides degrades in proportion to its decreasing performance. This is indicative of sensitivity and the gradient that sensitivity follows.
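To illustrate the binary-versus-gradient distinction with invented figures: a critical asset's contribution is all or nothing, while a sensitive asset's contribution falls off in proportion to its degraded performance:

```python
# Illustration only; the output figures are invented.

def critical_asset_output(is_operational: bool, full_output: float) -> float:
    """Criticality is binary: the business function either happens or it does not."""
    return full_output if is_operational else 0.0

def sensitive_asset_output(performance_fraction: float, full_output: float) -> float:
    """Sensitivity follows a gradient: output falls in proportion to degraded performance."""
    return full_output * performance_fraction

print(critical_asset_output(False, 100_000.0))   # 0.0     - order processing is down entirely
print(sensitive_asset_output(0.60, 100_000.0))   # 60000.0 - system degraded to 60% performance
```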
Part of the definition of sensitivity also concerns certain kinds of information and the inherent value they have. For example, consider a recipe based on a secret sauce: if that sauce should be compromised, it will have less value. While the leakage of this information may not impact day-to-day operations, it does degrade competitive advantage, because the information is sensitive and therefore key to the owner's success.
Next, we have the characteristic of assurance. This is the level of confidence that we have in the system, or the security systems, performing their assigned tasks to the level that we expect. This is the main process for managing our security risks. It includes measuring the performance of the characteristics, configurations, software, and other items that ensure our program elements perform as desired.
One of its functions is to ensure that vulnerabilities are identified and properly and appropriately addressed, and that threats are identified and mitigated. Taken together, the assurance program keeps vulnerabilities and threats at an acceptable level, within the risk appetite and tolerance of the organization.
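As a purely illustrative sketch, with invented metric names and thresholds, an assurance activity comes down to measuring whether each control performs to the level expected and flagging anything that falls short of the organization's tolerance:

```python
# Hypothetical assurance check: compare measured control performance against expectations.

CONTROL_EXPECTATIONS = {  # invented metrics and targets
    "firewall_rule_review":    {"measured": 0.98, "required": 0.95},  # fraction of rules reviewed
    "patching_within_30_days": {"measured": 0.88, "required": 0.90},  # fraction of systems patched
}

def assurance_report(expectations: dict[str, dict[str, float]]) -> list[str]:
    """Flag every control whose measured performance falls below the required level."""
    findings = []
    for control, levels in expectations.items():
        if levels["measured"] < levels["required"]:
            findings.append(f"{control}: measured {levels['measured']:.0%} "
                            f"< required {levels['required']:.0%}")
    return findings

for finding in assurance_report(CONTROL_EXPECTATIONS):
    print(finding)  # patching_within_30_days: measured 88% < required 90%
```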
For example, deploying a firewall and encryption to protect a database provides assurance that the database will be available and perform when it's needed, and that its contents are safeguarded. TCO, of course, means total cost of ownership. TCO represents the full ownership burden associated with any asset, including controls and countermeasures, and covers the acquisition price, maintenance, upgrades, repair, and all other facets of the cost associated with a given asset.
What TCO also reflects is the concept of value contribution, rather than a mere balance sheet entry. TCO reflects the actual value of an asset and provides the basis of comparison between the cost to protect and the cost of loss or compromise. Used correctly in risk or other security-related decisions, TCO is the best measure of how much an asset is actually worth to an organization, and similarly, how much the mitigation technique or technology under consideration will cost in comparison to the value of the asset it will be protecting.
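A minimal worked example, using invented figures, shows how TCO rolls the acquisition price and the recurring costs into a single lifecycle number that can then be compared against the cost of loss or compromise:

```python
# Hypothetical TCO calculation; all figures are invented for illustration.

def total_cost_of_ownership(acquisition: float, annual_maintenance: float,
                            annual_operations: float, upgrades_and_repairs: float,
                            lifecycle_years: int) -> float:
    """Full ownership burden of an asset or control over its lifecycle."""
    recurring = (annual_maintenance + annual_operations) * lifecycle_years
    return acquisition + recurring + upgrades_and_repairs

tco = total_cost_of_ownership(acquisition=50_000, annual_maintenance=5_000,
                              annual_operations=8_000, upgrades_and_repairs=12_000,
                              lifecycle_years=5)
print(f"Lifecycle TCO:  ${tco:,.0f}")        # Lifecycle TCO:  $127,000
print(f"Annualized TCO: ${tco / 5:,.0f}")    # Annualized TCO: $25,400
```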
In looking at Economic Value Added, or EVA as it's known, we consider several items, beginning with the total cost of ownership of the asset to be protected and of the control to be implemented to protect it. Looking at the slide, we see the calculation of the value of a security control beginning with the effectiveness premise. The first point is that the control under consideration must obviously result in a material risk reduction, meaning that it results in a measurable, positive benefit in terms of cost.
Next, the controls that are chosen must also be cost-effective, and this is shown by a cost-benefit analysis comparing each candidate with the others. Finally, from an organizational perspective, the chosen control must achieve the desired goal and enable business success without unacceptable impediment or increased complexity resulting from its employment.
As a summary of the calculation, we take the annualized loss expectancy before implementation, representing the total loss potential, and subtract from it the annualized loss expectancy following implementation of the control. The difference between these two numbers is the unincurred cost of loss, which is the positive monetary offset obtained through the introduction of the control under consideration. We then subtract the total cost of ownership of the control, which is its annual depreciation plus its operational expense.
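Using invented figures, the calculation just described looks like this: the pre-implementation ALE minus the post-implementation ALE gives the unincurred cost of loss, and subtracting the control's annual TCO gives the economic value added by the control:

```python
# Hypothetical EVA calculation for a control; all figures are invented.

ale_before = 400_000.0          # annualized loss expectancy before the control is implemented
ale_after  = 100_000.0          # annualized loss expectancy after the control is implemented
control_annual_tco = 120_000.0  # annual depreciation plus operating expense of the control

unincurred_loss = ale_before - ale_after               # positive monetary offset from the control
economic_value_added = unincurred_loss - control_annual_tco

print(f"Unincurred cost of loss: ${unincurred_loss:,.0f}")        # $300,000
print(f"Economic value added:    ${economic_value_added:,.0f}")   # $180,000 - control is cost-justified
```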
In the end, this will equate to the economic value of the control, as well as support the cost-benefit analysis that led to its being chosen in the first place. The cost trade-off calculation we're discussing makes a comparison between the total cost to protect and the total cost of loss or compromise. This is the basis for the cost-benefit analysis for the security control and requires that we consider two basic effectiveness factors.
The first one is operational effectiveness. This might seem an obvious point, but to assess operational effectiveness, the control must be compared to the requirements before us and found to meet them all satisfactorily.
The second one is commonly referred to as most bang for the buck, or cost-effectiveness. What this is intended to reflect is not purely the lowest cost, but the lowest reasonable cost for the control that most effectively does the job required. It is one factor in calculating EVA, or Economic Value Added, of a control.
As has been discussed, all values used must reflect the total cost of ownership over the life cycle of both the asset to be protected and the control doing the protecting, to ensure that we have an apples-to-apples comparison.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standing as a professional educator, having trained and certified nearly 8500 CISSP candidates since 1998, and nearly 2500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.