This course covers section one of CSSLP Domain Four: Common Software Vulnerabilities and Countermeasures. You'll learn the elements, ideas, concepts, and principles concerning the issues that must be considered before embarking on a program of building secure software.
Learning Objectives
- Understand programming fundamentals
- Become familiar with different development methodologies
- Learn about common software attacks and the means of exploitation
Intended Audience
This course is intended for anyone looking to develop secure software as well as those studying for the CSSLP certification.
Prerequisites
Any experience relating to information security would be advantageous, but is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.
Now, we have input validation failures, and these are very frequently occurring conditions in all programming areas. It's well understood that nearly every program requires some form of input, be it fixed, variable, password, data, alphabetic, or numeric. The philosophy that has to be adopted, then, is 'trust no one'. A system of defensive checks is used to ensure that no input is accepted until it passes post-parsing validation, meaning that while we might not trust the input itself, we make certain that we can trust the output of the parsing operation.
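As a concrete illustration, here is a minimal C sketch of that "trust the parsing output, not the raw input" idea: the raw string is never used directly, and only the fully parsed, range-checked result is accepted. The port-number use case and the 1-65535 range are hypothetical choices for the example.

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Parse an untrusted string as a port number (1-65535).
 * Returns the validated value, or -1 on any failure.
 * The raw input is never trusted; only the post-parsing
 * result is checked and used. */
static int parse_port(const char *raw)
{
    char *end = NULL;
    errno = 0;
    long value = strtol(raw, &end, 10);

    if (errno != 0 || end == raw || *end != '\0')
        return -1;                 /* not a clean, complete number */
    if (value < 1 || value > 65535)
        return -1;                 /* parsed, but outside the allowed range */
    return (int)value;
}

int main(int argc, char **argv)
{
    if (argc != 2)
        return 1;
    int port = parse_port(argv[1]);
    if (port < 0) {
        fprintf(stderr, "rejected input: %s\n", argv[1]);
        return 1;
    }
    printf("accepted port %d\n", port);
    return 0;
}
```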
Now, 'trust but verify' might sound like a good philosophy to have, but I think perhaps it should be 'no trust without verification' instead. There's also the problem of content checking and contextual neutralization of the inputs, and the question is: is the input received dead or alive? That is, is it inert data, or active content that can execute? Not performing these checks because of processing or contextual complexities is hardly an excuse. What usually happens is that we simply assume trust if we know where the input comes from, forgetting that many things that appear to be valid and have the right name frequently are not.
Other input validation failures include failing to compare inputs, outputs, and other controls against the rules for the processing to be done. This would include failing to perform before-and-after validation checks, and failing to employ increased rigor in programming methods for security, choosing performance over a balance with security instead. Input validation failures also include the very well-known buffer overflow. This is probably the best known, and quite possibly the most damaging, type of input validation attack vector.
Now, estimates suggest that this type has figured in at least 50% of recorded attacks. Some very well-known examples include the Morris finger worm of 1988 by Robert Morris Jr., the Code Red blended-threat worm of 2001, and the SQL Slammer worm of 2003. The cause is well known: it stems from an incorrect calculation of the required buffer sizes, usually as a result of poor programming practices, the so-called quick-and-dirty shortcuts that make something happen quickly on the promise that someone will go back and fix it later, which frequently never happens. It also stems from programming language weaknesses, such as weak typing and poor memory management controls.
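To make the cause concrete, here is a minimal C sketch contrasting the quick-and-dirty pattern with a bounded alternative. The 16-byte buffer and the greeting functions are hypothetical; the point is the unchecked strcpy against the size-aware snprintf.

```c
#include <stdio.h>
#include <string.h>

/* Unsafe: the buffer is sized for a "typical" name, and strcpy
 * copies with no bounds check, so a longer input overflows the
 * stack buffer. This is the classic quick-and-dirty shortcut. */
void greet_unsafe(const char *name)
{
    char buf[16];
    strcpy(buf, name);             /* overflows if strlen(name) >= 16 */
    printf("Hello, %s\n", buf);
}

/* Safer: the copy is explicitly bounded to the buffer size and
 * the result is always NUL-terminated. */
void greet_safe(const char *name)
{
    char buf[16];
    snprintf(buf, sizeof(buf), "%s", name);   /* truncates, never overflows */
    printf("Hello, %s\n", buf);
}

int main(void)
{
    /* Only the safe variant is exercised here; the unsafe one
     * is shown for illustration of the defect. */
    greet_safe("a deliberately long input string that would overflow");
    return 0;
}
```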
Effective mitigation begins with improving programming methods and making better choices with language checking options. This is another case of 'trust no one': in other words, treat all input external to a function as unknown and potentially hostile. Next, we have failures of canonical forms. Generally speaking, canonicalization is a programmatic manipulation that reduces primary inputs to a common foundational encoded representation, such as hex, ASCII, or others, for subsequent processing in diverse contexts. This makes the input much more transportable, much more widely accepted, and it helps with legacy systems.
Whether to validate before or after canonicalization is the question. If validation is done before resolution, it may still miss issues: validation that takes place beforehand may pass the submitted input and yet overlook certain character strings that perform disallowed functions, because they have been obfuscated by encoding. Validation after resolution into a canonical form can ensure that the resulting input stream is, or does, only what is expected or allowed. In other words, unlike input checked before resolution, what it appears to be after resolution is what it actually is.
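Here is a minimal sketch of validating after resolution, using POSIX realpath() to canonicalize a file path before checking it against an allowed directory. Checking the raw string first could be fooled by "../" sequences or encoded characters; after canonicalization the path is what it appears to be. The /var/app/data/ sandbox root is a hypothetical example.

```c
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Canonicalize an untrusted path, THEN validate it against an
 * assumed sandbox root. Paths that cannot be resolved are rejected. */
int is_path_allowed(const char *untrusted)
{
    const char *base = "/var/app/data/";   /* hypothetical allowed root */
    char resolved[PATH_MAX];

    if (realpath(untrusted, resolved) == NULL)
        return 0;                          /* cannot resolve: reject */

    /* Validate the canonical form, not the original input. */
    return strncmp(resolved, base, strlen(base)) == 0;
}

int main(int argc, char **argv)
{
    if (argc != 2)
        return 1;
    printf("%s\n", is_path_allowed(argv[1]) ? "allowed" : "rejected");
    return 0;
}
```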
So, we need to look at defensive programming functions and the fact that, if they are not present, certain other bad effects can result. Here we have several. The first is critical functions that are not authenticated, which comes from CWE-306: the interface is accepted as trusted, or authentication is skipped to emphasize performance. And even though performance is of course extremely important, we must again emphasize that security needs have to be balanced with performance.
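As a sketch of the countermeasure to CWE-306, the critical function below performs its own explicit authentication check and fails closed, rather than assuming the caller arrived over a trusted interface. The session structure and function names are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical session state carrying the authentication result. */
struct session {
    bool authenticated;
};

/* A critical function guarded by an explicit authentication check. */
int reset_all_passwords(struct session *s)
{
    if (s == NULL || !s->authenticated) {
        fprintf(stderr, "denied: caller is not authenticated\n");
        return -1;      /* fail closed */
    }
    /* ... perform the privileged operation here ... */
    return 0;
}

int main(void)
{
    struct session anon = { .authenticated = false };
    return reset_all_passwords(&anon) == 0 ? 0 : 1;   /* denied */
}
```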
Another one is the unrestricted upload of dangerous file types, CWE-434. This is another case where an integrity check has not been performed, again to streamline processing, and where unvalidated sources are inherently trusted.
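A minimal sketch of a countermeasure to CWE-434: the upload is checked against an allow-list of extensions and, for PNG files, against the file's actual content signature, rather than trusting the client-supplied name alone. The allow-list policy and names here are hypothetical.

```c
#include <stdio.h>
#include <string.h>

/* Allow-list of permitted upload extensions (hypothetical policy). */
static const char *allowed_ext[] = { ".png", ".jpg", ".pdf", NULL };

/* Reject an upload unless its extension is on the allow-list AND
 * its content starts with the matching signature. (PNG check shown;
 * other types would get their own signature checks.) */
int upload_is_acceptable(const char *filename,
                         const unsigned char *data, size_t len)
{
    const char *ext = strrchr(filename, '.');
    int ok = 0;
    if (ext == NULL)
        return 0;
    for (int i = 0; allowed_ext[i] != NULL; i++)
        if (strcmp(ext, allowed_ext[i]) == 0)
            ok = 1;
    if (!ok)
        return 0;

    /* Content check: a real PNG begins with an 8-byte signature. */
    static const unsigned char png_sig[8] =
        { 0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A };
    if (strcmp(ext, ".png") == 0)
        return len >= 8 && memcmp(data, png_sig, 8) == 0;

    return ok;
}

int main(void)
{
    const unsigned char fake_png[8] =
        { 0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A };
    printf("%s\n", upload_is_acceptable("logo.png", fake_png, 8)
                       ? "accepted" : "rejected");
    return 0;
}
```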
We also have execution with unnecessary privileges, which comes from two different CWE entries, number 250 and number 269. Here, the function executes at a higher privilege level than it needs, allowing a temporary or persistent escalation because its level has not been restricted to what is commensurate with the necessary privileges. In other words, this does not follow the principle of least privilege. Perhaps simply to avoid aborting for lack of sufficient privilege, we sidestep the problem by giving the function full privilege to everything, and this, of course, is what gets taken advantage of. All of these represent avoidable failures: avoiding them ensures consistent security performance throughout, leaves no single point of failure, and places trust only in what has been validated.
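As a sketch of applying least privilege on a POSIX system, a program that must start as root can permanently drop to an unprivileged uid/gid as soon as the privileged step is done, and abort if the drop fails or does not stick. The helper name and the choice of ids are hypothetical.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Permanently drop root privileges once they are no longer needed,
 * instead of running the whole program with full privilege. */
static void drop_privileges(uid_t run_uid, gid_t run_gid)
{
    /* Order matters: drop the group first, while we still can. */
    if (setgid(run_gid) != 0 || setuid(run_uid) != 0) {
        perror("failed to drop privileges");
        exit(EXIT_FAILURE);     /* fail closed rather than run as root */
    }
    /* Verify the drop is irreversible before continuing. */
    if (setuid(0) != -1) {
        fprintf(stderr, "privilege drop did not stick\n");
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    if (geteuid() == 0)
        drop_privileges(1000, 1000);   /* hypothetical unprivileged ids */
    /* ... continue with non-privileged work ... */
    return 0;
}
```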
General programming failures should be included in this discussion as well, because these are errors that, again, are largely avoidable. A general programming failure occurs because we fail to be rigorous enough, we adopt a philosophy of trusting everything and everyone, we do not follow our defense-in-depth guidelines in our programming environment, or we fail to ensure that security is built in as a set of requirements just as critical as those emphasizing performance. Some of the mistakes we make include trusting and reusing old code: since it's been trusted before, why not trust it again here and now, instead of going the extra step or two to revalidate it? It's like reusing a saw that hasn't been sharpened in quite a long time.
We think it's still going to work, and yet it disappoints by its performance, or by possibly not working at all. We also use code obtained from an apparently trustworthy source without any sort of pedigree validation. This would include previously generated code from internal sources, with the logic that, since it came from inside, it must be trustworthy; again, trusting everything rather than trusting no one. We also reuse previously used programming methods: no harm was caused before, so we ignore the fact that contexts differ, that conditions change from one usage scenario to another, and that in each previous use, errors may not have surfaced simply for not having been triggered. But a following or future context may produce the conditions that make those very errors appear, quite possibly at the most inconvenient times.
Now, a certain flexibility is often required to meet changing objectives; this is well understood. Security should likewise adapt to changing conditions, but on balance it should always remain commensurate with what the program is meant to do and the sensitivity of what it will be processing. So, we should establish guidelines, set standards, and build models that provide structure and rigor, and yet give us the flexibility we need to ensure that we always remain commensurate with the needs of the job at hand.
Changes in context will undoubtedly drive changes in the code, and that in turn will produce changes in the approach as changes appear in the details.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has continued as a professional educator, having trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 candidates in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.