Annualized Loss Expectancy & Identifying Vulnerabilities & Threats
Difficulty: Advanced
Duration: 1h 7m
Students: 602
Rating: 4.7/5
Description

This course covers the third of four modules in CISSP Domain 1, Security and Risk Management. It focuses on risk and risk assessments, annualized loss expectancy, vulnerabilities and threats, risk response, countermeasures, considerations and controls, assessment types, penetration testing, and reporting.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • An introduction to risk, including qualitative and quantitative risk assessments
  • How to identify threats and vulnerabilities
  • The risk assessment analysis process, including risk assignment or acceptance
  • Different security and audit frameworks and methodologies and how to implement the program elements
  • Risk frameworks

Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

Prerequisites

Any experience relating to information security would be advantageous, but is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience in the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Now, the Annualized Loss Expectancy calculation is one that is very famous within CISSP practice, and it's one that we have borrowed from insurance companies. It's a calculation to estimate a rough order of magnitude for the decrease in value or capability of an asset after an adverse event occurs. We perform this for each critical asset, either asset by asset or by a category of assets. The result is a budget amount that will be used for controls or countermeasures employed to mitigate the loss in some effective way. So let's walk through this.

We have AV, which is the Asset Value. This needs to reflect the total cost of operation or ownership of the given asset, so it reflects everything material about that asset. We multiply that by the Exposure Factor (EF), which represents the amount of capacity, capability, or value that will be lost through the adverse event occurring. Taking the product of these two, we produce the Single Loss Expectancy, shown there as the SLE. Then, from the characteristics of the particular event, we derive the Annual Rate of Occurrence (ARO): how often the effective element that causes this loss will happen, producing the Single Loss Expectancy, over a period of time. The ARO is normally represented as a decimal. If it's greater than one, that means it happens more than once per year. If it is one, that obviously means once per year, and if it's a decimal less than one, that means it doesn't happen every year but at some periodic interval over a period of years. So we multiply those together and come up with the Annualized Loss Expectancy. This number, the ALE, is the budget amount that we'll use for our decision-making process about what controls or countermeasures we will put in place to mitigate the Single Loss Expectancy for each event occurrence. From these, we first have to assess what the Acceptable Risk would be that we have to meet, and we have to compare that to the Residual Risk. In other words, we compare our target, the Acceptable Risk, to what we actually achieve in the form of Residual Risk.
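
As a quick illustration of the arithmetic just described, here is a minimal sketch in Python; the asset value, exposure factor, and ARO below are hypothetical placeholder figures, not values from the course.

# Minimal sketch of the SLE/ALE arithmetic described above.
# All input figures are hypothetical placeholders.

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV x EF, where EF is the fraction of asset value lost per event."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO, where ARO is the expected events per year (may be < 1)."""
    return sle * aro

av = 100_000   # total cost of ownership/operation of the asset
ef = 0.40      # 40% of the asset's value is lost per event
aro = 0.25     # the event occurs roughly once every four years

sle = single_loss_expectancy(av, ef)
ale = annualized_loss_expectancy(sle, aro)
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")  # SLE = $40,000, ALE = $10,000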

So, there you see the calculation. The ideal relationship between Total Risk, Acceptable Risk, and Residual Risk is this: Total Risk, of course, will be the greatest amount. Less than that will be the Acceptable Risk. Acceptable Risk and Residual Risk could be equal, but Acceptable Risk should be greater than Residual Risk. Where the Residual Risk is, at its greatest, less than or equal to the Acceptable Risk limit set by management, this has to be regarded as encompassing compliance items as part of the attainment of the Acceptable Risk. And all of these should be based on the total cost of ownership or operation of the asset, in order to arrive at an appropriate valuation method. So we have Acceptable Risk, which, as I said, represents a level of allowable exposure, loss, or outage, as defined by management, that an enterprise can absorb and continue operating without any sort of crippling effects. Acceptable Risk does contain any compliance requirements that have to be met, and it can even be somewhat arbitrary, but it is decided upon by management, and it becomes the target that we have to achieve, as a minimum, for our risk assessments to be successful. So it is defined as an SLE or ALE at or below a defined threshold. Examples could be a variance of up to 10% in operating expenses; a variance in uptime as measured against the Service Level Agreement commitment by a service provider of some kind; or a delay of five days in project completion, which could represent five days of float or slack in a project plan that makes no difference to the on-time delivery of whatever the product of a given project is. Now, along with Acceptable Risk is Residual Risk, and this is the level of remaining exposure, loss, or potential outage that remains following risk reduction and mitigation efforts. This is what we actually achieve, as compared to the Acceptable Risk, which serves, as I said, as our target.
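
Expressed as a simple check, and assuming only the ordering just described, the relationship might be sketched like this in Python:

def risk_posture_acceptable(total_risk: float,
                            acceptable_risk: float,
                            residual_risk: float) -> bool:
    """Ideal ordering: Residual Risk <= Acceptable Risk < Total Risk.
    Residual and acceptable risk may be equal; both sit below total risk."""
    return residual_risk <= acceptable_risk < total_risk

# Example: residual risk achieved (40) is within the management-set
# acceptable limit (50), and both are below the unmitigated total (100).
print(risk_posture_acceptable(100, 50, 40))  # True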

So examples of this could be achieving an ALE reduced by 40%, say by modifying the asset to reduce its exposure to power fluctuations. Another example would be decreasing data entry errors by 80% through operator training and better supervisory work-quality checking; adding hot failover to a critical application system to eliminate lost service due to system failures; or a process change where design review and QA processes improve time-to-market by 30 days per year by reducing in-stream design rework and break-fix activities. Again, these are simply examples of what might highlight the Residual Risk, what we've actually achieved through our mitigation efforts. Now, in this table there are other examples of SLEs and ALEs, so we'll take a few. Our threat, you see there on the far left: a fire to a facility, and the facility has a TCO value set at $560,000. This might also be the value represented on an insurance policy. Our exposure factor is 40%, which means that the fire has caused a 40% loss to the building, and that might represent a claim value, translating to $224,000 as the Single Loss Expectancy. Our frequency of occurrence is 0.25, which means that once in four years a loss of this magnitude occurs; that calculates to a $56,000 annualized loss. Next is the theft of a trade secret, where the trade secret itself, by some calculation, is valued at $43,500. 92% of that value is lost, probably through some form of exploitation, so our Single Loss Expectancy is $40,000 and change. This happens at a rate of 0.75, which means it happens three times in four years. Now, to be clear, that could mean it happens three times in one year and then not again for another three years, or one time in each of three years in a four-year period, or some other combination. But our Annualized Loss Expectancy is $30,000 and some change. The others are similar evaluation scenarios. In some cases, as with a file server, the loss is 100%, so the Single Loss Expectancy is the assessed value of the server itself. This should include the value of the data that might be lost on the server as well. An ARO of 0.50 simply means it happens every other year. For the insider threat, the data is valued at $325,500; an 83% exposure factor might mean that 83% was in an exploitable condition and was in fact exploited in some way by the insider, possibly sold to a third party who might be hacking into it. That gives a Single Loss Expectancy of $270,000, and a frequency of occurrence of 0.65, which means that in a ten-year period it happens six-plus times, for an Annualized Loss Expectancy of $175,000. Now, the ALE values shown in this table would be used to put together a budget that would then be expended on various types of controls, changes in procedure, different kinds of employee training perhaps, or a mixture of these things, and this money would be spent to stop these losses from happening, or to reduce them if there is no way to effectively stop them from occurring. Whenever we're considering controls, there is never a time we're going to consider controls that are not operationally effective in achieving the desired results. So the cost/benefit analysis is always done on the controls that will produce that result.
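
The worked figures narrated above can be reproduced with the same two-step multiplication; the sketch below simply recomputes three of the table's rows, so small rounding differences from the quoted "and change" amounts are expected.

# Recomputing the table examples narrated above: SLE = AV x EF, ALE = SLE x ARO.
scenarios = [
    # (threat, asset value, exposure factor, annual rate of occurrence)
    ("Facility fire",      560_000, 0.40, 0.25),
    ("Trade secret theft",  43_500, 0.92, 0.75),
    ("Insider data theft", 325_500, 0.83, 0.65),
]

for threat, av, ef, aro in scenarios:
    sle = av * ef
    ale = sle * aro
    print(f"{threat:<20} SLE = ${sle:>10,.2f}  ALE = ${ale:>10,.2f}")
# Facility fire        SLE = $224,000.00  ALE = $ 56,000.00
# Trade secret theft   SLE = $ 40,020.00  ALE = $ 30,015.00
# Insider data theft   SLE = $270,165.00  ALE = $175,607.25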

The annualized cost of these safeguards to protect against the threats is going to be compared to the expected cost of the potential loss itself. So, if a server is worth $10,000, and one suggested security safeguard would cost a company $12,000 to protect that one server, that would not be a cost-effective comparison. The server itself is $10,000; a solution proposed at $12,000 costs 20% more than the server itself is worth. There is nothing cost effective about that. Now, when we do cost/benefit comparisons, we have to include all the same sorts of business logic and business values, so that this comparison is done with the same sort of rigor and consideration that any business case would contain. So it will involve tangible and intangible values, and again, it needs to be based on the Total Cost of Ownership or operational expense associated with the given asset. Now, Return on Investment is a common term that is thrown around when these comparisons are made. It's the term that the business folks will continuously use to rate which option would be the best one to invest in. The classic ROI model does not work when it comes to selecting security countermeasures, because the assets that usually bring that countermeasure effect into reality typically begin depreciating from the moment they go into operation until they're changed out for something new. So classic ROI, as I say, does not work. In contrast to that, what we want to do is calculate the worth of the security control by the Economic Value Add of the control. We start with the premise that, operationally, the control's effectiveness produces a material risk reduction in the given scenario; that, economically, the chosen controls are cost effective; and that, organizationally, the control achieves the goal and enables business success without undue or unacceptable impediment, which could include things like administrative complication, delicacy, technological fragility, or other characteristics that might make a particular control undesirable. So our Economic Value Add formula is this: we calculate the pre-implementation ALE, our total loss potential; from that we subtract the post-implementation ALE, which is our residual loss potential; and then we subtract the annual cost of the control, based on its Total Cost of Ownership or Operation. That gives us the Economic Value Add of the control.
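
Putting that formula into a short sketch, with hypothetical figures, gives something like the following; the variable names are my own, not the course's notation.

# EVA = (pre-implementation ALE) - (post-implementation ALE)
#       - (annual TCO of the control). All figures are hypothetical.
ale_before  = 56_000   # annualized loss potential with no control in place
ale_after   = 14_000   # residual annualized loss potential after the control
control_tco = 20_000   # annual total cost of ownership/operation of the control

eva = ale_before - ale_after - control_tco
print(f"Economic Value Add of the control = ${eva:,.0f}")  # $22,000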

Now, there will always be the question of what cost trade-offs need to be made. What we do is compare the Cost to Protect and the Cost of Loss or Compromise, because this forms the basis for a cost/benefit analysis for a security control, when we consider, A, the operational effectiveness and, B, the actual cost effectiveness of the control, which we can sum up by saying: most bang for the buck. But, and this is an important point, it needs to be reflective of more than just the lowest possible cost. One factor in calculating the EVA of a control is this cost comparison. Now, all values, as I've said, need to reflect the Total Cost of Ownership or Operation over the life cycle of the given asset and of each candidate control. This is to make sure that the business logic is sound and that it is fully reflective of the true cost of purchase and operation, and of the value of what is being replaced by the new item. The Cost to Protect equates to the cost of the candidate control to be implemented on that basis. The Cost of Loss equates to the value of the asset at risk, its loss or its compromise, and that, again, is a full cost analysis. So, in figuring the cost trade-offs: if the Cost to Protect is less than the Cost of Loss or Compromise, and there you see the formula showing this, the result of this calculation shows that it is less expensive to use a control than to risk the loss of the asset. If the comparison shows this, then it financially validates the decision to actively mitigate the risk to this asset and supports the business decision to choose in favor of mitigation. Compare that to the case where the Cost to Protect is greater than the Cost of Loss or Compromise, reflected in the formula that you see. This shows that it is more expensive to use a control than to risk the loss of the asset. That validates the decision to accept the risk of loss for this asset, but it also validates choosing an alternative control that produces a better cost-effectiveness outcome. And of course, we have the cost trade-off where the Cost to Protect is approximately equal to the Cost of Loss or Compromise. This result shows that it is no more expensive to use a control than it is to lose the asset itself.
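
A minimal decision sketch for the three outcomes just described, using the $10,000 server and $12,000 safeguard from earlier as the example input:

def cost_trade_off(cost_to_protect: float, cost_of_loss: float) -> str:
    """Compare the cost of a control with the cost of losing the asset."""
    if cost_to_protect < cost_of_loss:
        return "Mitigate: the control costs less than the potential loss."
    if cost_to_protect > cost_of_loss:
        return "Accept the risk, or look for a more cost-effective control."
    return "Break-even: decide on compliance, safety, or similar criteria."

print(cost_trade_off(12_000, 10_000))
# Accept the risk, or look for a more cost-effective control.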

Now, that may, in the minds of some, justify doing nothing, but literally doing nothing is not an acceptable outcome, because that does not show proper risk management. The decision based on this formula to mitigate or accept the risk is typically based on criteria other than cost alone, such as requirements for compliance or safety or similar kinds of factors. But the decision has to be made in full knowledge of what this means. So, some definitions, then, that will go into the process we have for doing these kinds of calculations and making these kinds of decisions. A vulnerability is defined as an inherent weakness in an information system, its security procedures, internal controls, or an implementation that could be exploited by a threat source. Unlike a risk assessment, though, a vulnerability assessment tends to focus on the technological aspects of an organization, such as networks or applications. But in the end, a vulnerability is the thing that we are attempting to identify, crystallize, and in some way compensate for. The threat sources that we have, of course, fall into several different categories, and depending on the type of threat we've got, each one may demand a different sort of solution to properly address it.

We have, of course, human, natural, technical, physical, environmental, and operational threats, and each of these categories may require us to take on a different strategy. For human threats, the approach might be training, or vetting our employees a little differently, so that we get a higher level of assurance that they're reliable, properly trained, and so on. With nature, the threat sources come from natural events, and the truth is there's very little we can do about natural threats; picture hurricanes, tornadoes, anything of that nature. That means we're going to have to focus more on the asset itself and what we can do to protect it from these things, rather than focusing on the threat, because natural threats don't lend themselves to us doing very much, or anything, about them. Technical, physical, and environmental threats may all be factors in the environment, and in examining them we have to look at how we can remove them, reduce them, or compensate for them, or, failing any of those, how we can accept these things and hopefully reduce the chances of a compromise to the asset. Operational threats tend to reflect a failure on the part of a human in following certain kinds of procedures, such as following a particular process and failing to take into account certain steps in it; omitting them, abbreviating them, or taking shortcuts can raise operational types of threats. And all of these are going to be part of the overall strategy we're going to take when we try to compensate for their possible effects on our assets. Once we have these things figured out and reasonably clear, then we're going to look at likelihood and consequences, and likelihood qualification and rating. What we're going to do is look at qualification as it relates to the attributes of the event or actor and the attack that is carried out. In some cases, an attack will require a highly skilled individual. There may be a high incentive in attempting whatever attack they're going to make; in other words, there might be a great financial reward for them. It could be that carrying out a particular attack is a very costly thing for them to do, but it might be a high-value target that they decide to attack anyway, because the payback will be very great.

Now, looking at the grid that you see: likelihood runs from certain, highly likely, moderately likely, unlikely, to rare, graded A through E, and then the impact magnitude runs from catastrophic to negligible. In this kind of a rating, we might rate an event as CB, C being moderately likely and B being major in terms of the impact magnitude. This gives us an effective rating of what we might expect from an actor or an event: moderately likely in probability of occurrence, but a major impact if it is successful. Along with that, we're going to calculate our likelihood qualification and rating. The definition of impact carries elements of type and order of magnitude: financial, reputational, market position, or any one of a number of other traits that can reflect type and order of magnitude. And here we have a grade for the skill level required of the actor, one to five, meaning one is low and five is high; the ease of access; the incentives they have to commit this act; and then the resource requirements that will help them accomplish their goal.

So, between these factors, we could have a rating of CB, moderately likely with major impact, and then we determine all of these others, skill level, ease of access, incentive, and resources, and we calculate those and come up with some form of exposure scoring. This gives us the essence of how likely the event is to occur, the impact that might result, and whether the actor would have to be a wizard like Harry Potter, or could be a script kiddie, someone with very low skills who can't really do much except be an annoyance, to accomplish it. Then, as we take that risk exposure scoring, we can put it into this table, and as you see, it varies from low risk to very high. That can lend clear definition to the types of impact that we can have and help us set priorities for certain kinds of threat-vulnerability-asset scenarios. Very high risk, of course, should require immediate, effective attenuation of that risk, whereas low risks might be candidates for risk acceptance. But we have to take the values and plug them into a matrix of this kind to visually represent exactly the kind of priorities, high or low, that result from the risk assessment calculations we've made thus far. Now, the determination of impact will vary greatly. It could be danger to human life, it could be purely dollars, or it could be something more intangible, such as prestige; or, if we're a market leader or seeking to obtain a higher market share, any sort of impact that can damage our ability to achieve that goal. These are the sorts of things that will be defined by management. On the exam, as an example, human life and safety will always be a very high priority, and it should be in our businesses as well. But the organization must assign these definitions in order to give us clear guidance as to how we are to treat the various impacts that will occur from these risk-threat scenarios. And the risk itself, high, medium, low, or however you rate it, is determined as a byproduct of likelihood and impact.
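
As an illustration only, a matrix of this kind might be sketched as below; the grade scales and the score-to-rating cutoffs are assumptions for the example, not the exact matrix used in the course.

# Hypothetical likelihood/impact matrix. Grades run A (worst) to E (least).
LIKELIHOOD = ["A", "B", "C", "D", "E"]  # certain ... rare
IMPACT     = ["A", "B", "C", "D", "E"]  # catastrophic ... negligible

def risk_rating(likelihood_grade: str, impact_grade: str) -> str:
    """Combine the two grades into a coarse priority band (assumed cutoffs)."""
    score = LIKELIHOOD.index(likelihood_grade) + IMPACT.index(impact_grade)
    if score <= 2:
        return "very high"
    if score <= 4:
        return "high"
    if score <= 6:
        return "medium"
    return "low"

print(risk_rating("C", "B"))  # the "CB" example: moderately likely, major impact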

About the Author
Students: 8674
Courses: 76
Learning Paths: 24

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.

Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, having trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.
