
CSSLP Domain 4:2 - Defensive Coding Practices

Contents

  • Defensive Coding Practices
  • Primary Mitigations (1m 22s)

This course is part of a learning path.

Defensive Coding Practices - Introduction
Overview
Difficulty
Beginner
Duration
14m
Students
64
Ratings
3.6/5
Description

This course is the second course in our series covering domain 4 of the CSSLP certification and explores defensive coding practices and how they can help secure your software.

Learning Objectives

  • Understand the foundations of defensive coding practices
  • Learn about the primary threat mitigations that can be employed
  • Understanding how we can learn from our mistakes in the context of software security

Intended Audience

This course is intended for anyone looking to develop secure software as well as those studying for the CSSLP certification.

Prerequisites

Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.

Transcript

In this course, we're going to cover the topics of declarative versus programmatic security, memory management, error handling, primary mitigations, and a very important area, our lessons learned. We're going to address execution of the secure design: routine re-evaluation of the attack surface, reduction in the use of unnecessary code elements and simplification of those being used, acting on areas of consistent security failure through the PDCA (plan-do-check-act) model, and exploring the primary mitigations to be used effectively in defensive coding.

Now the question arises: how can we do a better job, and what produces defense in the code that we write? We're going to explore employing the tools, practices, and components that accomplish this, so that we can achieve improved software quality, which in itself adds to improved security. We want to reduce our attack surface where possible, and generally enhance security for the data and information products.

We need to consider that the moment a single line of code is written, the attack surface has potentially increased. It is therefore important that the attack surface of the software is not only evaluated, but also reduced. Examples of attack surface reduction related to code include reducing the amount of code and the number of services executed by default, reducing the volume of code that can be accessed by untrusted users, and limiting the damage when the code is exploited. So we have to ask ourselves questions along these lines: what do we validate? Where and when do we validate? And, one of the most important questions, how do we validate?
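Those validation questions can be sketched as a minimal allowlist check at a trust boundary; the field name and pattern below are hypothetical, chosen purely for illustration:

```python
import re

# Hypothetical allowlist rule: validate at the trust boundary,
# before untrusted input reaches any business logic.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(raw: str) -> str:
    """Reject anything that does not match the allowlist pattern."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

print(validate_username("alice_01"))           # accepted
try:
    validate_username("alice; DROP TABLE users")  # rejected at the boundary
except ValueError as e:
    print("rejected:", e)
```

Allowlisting (define what is valid, reject everything else) is generally preferred over blocklisting, which tries to enumerate every bad input and inevitably misses some.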

In defensive coding practices, let us begin with declarative security, which specifies what is to be verified, but not the mechanism of how that verification will be performed. Security functionality is not intended to be, and should not be, embedded in executable code, but instead expressed as a set of rules and conditions implemented in the contextual environment, consulted any time a security function is called. This makes the program more flexible and adaptable to changes in contextual situations and circumstances.

The what should thus be kept separate from the processing logic that performs the how. In imperative, or programmatic, security, the how is implemented directly in the executable code. This enables more precise, finely grained security fitted to the contextual environment, but it tends to be much less flexible, and it reduces, or at least adversely impacts, portability. We also have bootstrapping: certain types of programs require direct involvement with system elements, such as system variables, to perform their functions and internal sequencing. Such critical programs are very high-value targets due to their global impact on the system context.
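The contrast can be sketched in a few lines; the policy table, action name, and roles below are hypothetical. The declarative rule lives in data outside the code path (as it would in a policy file), while the programmatic rule is hard-coded into the executable logic:

```python
# Declarative: the "what" lives outside the code path, e.g. loaded
# from a policy file. Changing the rule needs no code change.
POLICY = {"delete_report": {"admin"}}  # hypothetical action -> allowed roles

def is_allowed(role: str, action: str) -> bool:
    """Consult the externally defined rules at call time."""
    return role in POLICY.get(action, set())

# Programmatic: the "how" is embedded directly in the executable.
# Precise and finely grained, but inflexible and less portable.
def delete_report_programmatic(role: str) -> str:
    if role != "admin":          # hard-coded rule
        raise PermissionError("forbidden")
    return "deleted"

print(is_allowed("admin", "delete_report"))   # True
print(is_allowed("guest", "delete_report"))   # False
```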

Management of the variables employed should be rigorously controlled within the application itself, to prevent any form of external influence from causing uncontrolled changes. We also have the alternative of configuration files. These hold parameters that are acted upon during processing and control how a system or an application will behave.

With the potential impacts ranging from negligible to catastrophic, security over configuration files must be commensurate with their importance to the operational context. This will prevent unauthorized changes and functional subversion by unmanaged, uncontrolled, or unauthorized sources. One of the most important defensive coding practices we're discussing is memory management. This is a dynamic and highly variable goldmine where all the data, process activities, and other resources are active and held in open states in anticipation of imminent use. This creates a shared frontier involving actions of both the operating system and the applications or systems running within it.
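One way to sketch commensurate security over a configuration file is an integrity check before its contents are trusted. The file contents below are hypothetical, and in practice the expected digest would come from a protected store rather than being computed inline as it is here for illustration:

```python
import hashlib
import json

def load_verified_config(raw: bytes, expected_sha256: str) -> dict:
    """Parse the config only if its digest matches the protected reference."""
    actual = hashlib.sha256(raw).hexdigest()
    if actual != expected_sha256:
        raise ValueError("config integrity check failed")
    return json.loads(raw)

raw = b'{"max_retries": 3}'                    # hypothetical config file
good = hashlib.sha256(raw).hexdigest()         # reference digest (illustrative)

print(load_verified_config(raw, good))         # parsed and trusted
try:
    load_verified_config(b'{"max_retries": 99}', good)  # tampered copy
except ValueError as e:
    print("blocked:", e)
```

A digest only detects tampering; stronger setups would sign the file or restrict write access to it as well.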

The concerns of running with managed versus unmanaged code can vary greatly, because we find both in these environments. Managed code is itself stronger in memory management, hence the "managed" in its name: it manages all resources over the lifetime of the application's operation. Combined with type-safety elements, this simplifies the safe handling of elements in memory even further. Unmanaged code, by contrast, has the potential to improve performance, but it brings less inherent memory-management strength: it places all resource control and security concerns in the developer's hands.

This can add significant complexity, and with it comes increased technical fragility, as well as the opportunity to forget, or consciously neglect, to include something. That increases the chances of overlooked control points and mitigations that otherwise should be employed. And speaking of type-safety practices, employing these adds to the overall security of the program, because they enable better memory management through predefined variables; the pre-definition enforces the memory parameters and controls memory access and buffer size.

Type-safe code typically stays within the defined boundaries and does not encroach on memory resources outside these defined ranges. Locality, of course, concerns where things are located in memory. When primary memory addresses have references that are known, subsequent accesses become predictable, and this can lead to various memory-targeting attacks, such as buffer overflows.
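The bounds-enforcement idea behind type safety can be sketched with a fixed-size buffer that refuses any write outside its defined range; the class name and sizes are hypothetical, and a managed runtime would perform this kind of check for you:

```python
class FixedBuffer:
    """A fixed-size buffer whose writes are bounds-checked, so data
    cannot encroach on anything outside its defined range."""

    def __init__(self, size: int):
        self._buf = bytearray(size)

    def write(self, offset: int, data: bytes) -> None:
        if offset < 0 or offset + len(data) > len(self._buf):
            raise IndexError("write outside buffer bounds")
        self._buf[offset:offset + len(data)] = data

buf = FixedBuffer(8)
buf.write(0, b"safe")                  # fits within the 8-byte boundary
try:
    buf.write(6, b"overflow")          # would spill past the boundary
except IndexError as e:
    print("blocked:", e)
```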

Using ASLR, address space layout randomization, a scheme for randomizing where code and data are loaded, memory addressing becomes less predictable, and this reduces the possibility. But as has been noted, ASLR is a longstanding technology, and it may be aging; it's either time for an update or a replacement. A defensive coding practice will also include something as mundane as error handling. Truly, error handling is not mundane, but a very critical function that goes on within every program in existence, because the proper handling of exceptional conditions allows the various processes that are invoked, and both the system and the user, to respond, take care of the exception, and return things to a normal processing state.

Proper resolution for authorized users is crucial. The information provided must be accurate, but it should not create secondary problems by displaying unnecessary information, such as personally identifiable information or other sensitive components. Otherwise, if an adversary can force errors on the hunch that doing so will reveal sensitive information, whether PII or details about the system itself, the resulting messages may reveal information about the system or what it is processing that enables further exploitation attempts.
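A pattern consistent with this advice is to log full detail server-side and hand the user only a generic message plus a correlation ID; the logger name and message format below are assumptions for illustration:

```python
import logging
import uuid

log = logging.getLogger("app")  # hypothetical application logger

def handle_error(exc: Exception) -> str:
    """Log full detail internally; return only a generic, accurate
    message with a reference ID the user can quote to support."""
    incident = uuid.uuid4().hex[:8]
    log.error("incident %s: %r", incident, exc)   # detail stays server-side
    return f"An error occurred. Reference: {incident}"

msg = handle_error(KeyError("ssn=123-45-6789"))
print(msg)                      # no sensitive detail reaches the caller
```

The reference ID lets an authorized operator correlate the user's report with the full server-side record, so accuracy is preserved without disclosure.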

Any form of software should be expected to produce errors and exceptions at some point, and should therefore be subject to threat modeling and use/abuse case analyses to visualize high-probability exploitation scenarios. And while these may not be likely exceptions in a direct sense, secondary effects and uses should also be explored. Unhandled exceptions and errors can also result in operational degradation and failures: an operational concern, but clearly one with an effect on the security of the overall system.

Another defensive coding practice involves interfaces. One of the things we do early is source verification, along with testing and exercising of the use/abuse case analyses, to enhance operational integrity. We also need to establish the trustworthiness of any third-party-approved APIs; just because a third party trusts them doesn't mean we should. Again, trust no one. Data flow analysis and correct implementation, applied with this, can further diminish potential exploitation and should be employed.
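Treating a third-party interface as untrusted can be sketched as shape-and-range validation of its responses before use; the "price" field and its rules below are hypothetical:

```python
# Sketch: treat a third-party API response as untrusted until validated.
# The "price" field and its type/range rules are hypothetical.
def parse_price(payload: dict) -> float:
    """Accept the upstream value only if it has the expected type and range."""
    price = payload.get("price")
    if isinstance(price, bool) or not isinstance(price, (int, float)) or price < 0:
        raise ValueError("untrusted payload failed validation")
    return float(price)

print(parse_price({"price": 9.99}))    # well-formed: accepted
try:
    parse_price({"price": "free"})     # wrong type: rejected before use
except ValueError:
    print("rejected malformed upstream data")
```

The same discipline applies at every interface: validate data flowing in from the API exactly as you would validate direct user input.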

About the Author
Students
6530
Courses
75
Learning Paths
17

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.


Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in that role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.