CISSP: Domain 8 - Software Development Security - Module 3
Considerations for Secure Software Development
Difficulty
Intermediate
Duration
35m
Students
209
Ratings
3/5
Description

This course is the final module of Domain 8 of the CISSP, covering Software Development Security.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • Considerations for secure software development
  • How to assess the effectiveness of software security
  • Assessing software acquisition security

Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

Prerequisites

Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Welcome back to the final section of domain eight, software development security and the Cloud Academy presentation of the CISSP exam preparation review seminar. We're going to begin in section five entitled Considerations for Secure Software Development. So the basis of all of this is what we call the trusted computing base. The TCB is the collection of all the hardware, software, including firmware, and the controls within a computer system that can be trusted to adhere to the security policy. And there in the diagram you see the TCB with the hardware, software, firmware, and controls inside the circle with the perimeter of the circle establishing the security perimeter.

Now, the security perimeter is, of course, the logical line that separates the TCB from non-TCB objects. It enables controlled access while blocking all others. It provides this logical separation and allows interaction through controlled interfaces, through which all traffic between TCB and non-TCB components is passed and filtered. Now, the trusted computing base contains within it a function that historically has been implemented in software inside an operating system, but that must in fact be a larger function encompassing all objects within the trusted computing base. This is known as the security reference monitor.

Now, historically, this reference monitor is an abstract machine that mediates and controls all subject access attempts to all objects within the TCB. The security reference monitor utilizes the security kernel, a rule set inside the OS kernel that embodies the system security policy and enforces the reference monitor function. The three main requirements of the kernel and the reference monitor are these: the security kernel must provide isolation for the processes carrying out the reference monitor concept and must be tamper-proof; the reference monitor must be invoked for every access attempt and must be impossible to circumvent; and the reference monitor must be small enough to be tested and verified in a complete and comprehensive manner.

Now, given the role of the reference monitor inside a system, these three concepts must be met entirely. When it says every, it means every. When it says comprehensive, it means comprehensive in the total sense because the job of the security reference monitor is to make sure that mediation happens for every subject to object attempt. And here you see a graphic depiction of a reference monitor. We have on the left subjects, processes, or users. On the right we have the objects they wish to get access to to do whatever it is they intend to do. The reference monitor sits in the middle, the embodiment of the policy of the enterprise. It interfaces interactively with the security kernel rules database.

As the subjects attempt access, the reference monitor consults the security kernel rules database to determine whether or not the subject attempting that access is allowed to get to that object in that mode. If it is allowed then, as indicated by the green arrow, the access goes through successfully. If it is not allowed, as indicated by the red arrow, it is blocked. In both cases, successes and failures, everything is written to an audit file to record all interactions between all subjects and all objects.

Now, the security reference monitor enforces two different qualities. The first is state, which reflects the security level of a system and is related to the system being stratified into hierarchical levels, where we have security clearances, which establish trustability or reliability, and object classifications, which characterize an object's sensitivity, whether oriented toward confidentiality or integrity. Also reflected is mode, which reflects the partitioning of a system and its data into compartments or categories. This means the system is partitioned functionally. The subjects have to have a need-to-know to be in one or another of the compartments, and this reflects the subject's functional role. The object compartment reflects the object's functional use, and by mating together the subject's need-to-know with the object's classification and compartment, and building rules around that, we either enable or prevent access. Consequently, the reference monitor protects the processor and the activities that it performs through all of these states and modes of access. Privilege levels are referenced in a ring type of structure, as you'll see on the next slides.
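To make the mediation process concrete, here is a minimal Python sketch of the reference monitor concept just described. The subjects, objects, clearance levels, and the contents of the rules database are all invented for illustration; a real reference monitor lives inside the kernel, not in application code.

```python
# Hypothetical sketch of a reference monitor. All names, levels, and rules
# are invented for illustration only.
from dataclasses import dataclass

LEVELS = {"public": 0, "confidential": 1, "secret": 2}  # invented hierarchy

@dataclass(frozen=True)
class Subject:
    name: str
    clearance: str             # hierarchical level ("state")
    compartments: frozenset    # need-to-know categories ("mode")

@dataclass(frozen=True)
class Resource:
    name: str
    classification: str
    compartments: frozenset

audit_log = []  # every decision, success or failure, is recorded

def reference_monitor(subject, obj, access_mode):
    """Mediate a single subject-to-object access attempt."""
    # State check: the subject's clearance must dominate the classification.
    level_ok = LEVELS[subject.clearance] >= LEVELS[obj.classification]
    # Mode check: the subject needs need-to-know for every object compartment.
    compartments_ok = obj.compartments <= subject.compartments
    allowed = level_ok and compartments_ok
    audit_log.append((subject.name, obj.name, access_mode,
                      "ALLOW" if allowed else "DENY"))
    return allowed

analyst = Subject("analyst", "secret", frozenset({"ops"}))
report = Resource("ops-report", "secret", frozenset({"ops"}))
payroll = Resource("payroll", "secret", frozenset({"hr"}))
print(reference_monitor(analyst, report, "read"))   # True: green arrow
print(reference_monitor(analyst, payroll, "read"))  # False: no need-to-know
print(audit_log)                                    # both attempts recorded
```

Note that both the allowed and the denied attempts land in the audit log, mirroring the green-arrow and red-arrow paths in the diagram.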

The original conception of the Protection Ring Model put everything within four concentric rings. Ring zero held the OS kernel and all internal operations; being the highest privilege level, it covered the entire core of the system. Ring one sat outside of that, encompassing ring zero, and included internal device drivers and an external command set. Ring two covered the file system: external file drivers, file systems, and a host of other things. Ring three was where the users lived; users were able to execute certain commands, and of these four rings it was the lowest privilege level.

Now, in the Protection Ring Model, one of the unbreakable rules was that any subject attempting access to any object, as enforced by the reference monitor function, had to enter one ring at a time. If the subject had privileges at ring three and attempted to access something contained within ring two, it would have to go through the perimeter of ring two and the rules that enforce it. If it needed to do something it was allowed to do involving an object within ring one, it would have to pass through ring two first. There was no bypassing of an intervening ring; these were the hard and fast rules in the security kernel, enforced by the reference monitor.
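As a purely illustrative sketch (no real operating system works this literally), the following models the no-skipping rule: an access from ring three to ring one must pass the gate checks of every intervening ring.

```python
# Illustrative only: the "no ring skipping" rule. A subject moving inward
# toward higher privilege must pass the gate of every intervening ring.
def gate_check(ring, subject):
    # Stand-in for the per-ring rules a real security kernel would consult.
    print(f"{subject}: passing the gate checks at ring {ring}")
    return True

def access_inward(subject, from_ring, to_ring):
    """Traverse the rings one at a time; no intervening ring is bypassed."""
    for ring in range(from_ring - 1, to_ring - 1, -1):
        if not gate_check(ring, subject):
            return False  # blocked at an intervening ring
    return True

# A ring-3 user process reaching for a ring-1 object crosses ring 2 first.
access_inward("user-process", from_ring=3, to_ring=1)
```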

Now, this particular four-ring model was conceptualized and implemented as part of the Multics operating system that ran at MIT under Project Guardian. Today's operating systems, on our laptops and desktops, run a somewhat simplified version of this model, using only two rings. Ring zero still contains the OS kernel and all internal operations and is the highest privilege level. This equates to a mode of access called administrator or superuser, and it allows a user possessing that level of privilege to execute all privileged and non-privileged instructions. Ring three still encompasses the user-level applications and is the lower of the two. It is user mode, called problem mode in mainframe days, and it allows for the execution of non-privileged instructions only.
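The sketch below is a simplified stand-in for the two-ring idea; the instruction names and the trap behavior are invented, approximating what real hardware does when user-mode code attempts a privileged operation.

```python
# Simplified stand-in for the two-ring model: privileged instructions may
# execute only in kernel mode (ring 0). Instruction names are invented.
PRIVILEGED = {"disable_interrupts", "load_page_table", "access_io_port"}

def execute(instruction, mode):
    if instruction in PRIVILEGED and mode != "kernel":
        # On real hardware this would trap into the kernel, not just print.
        return f"TRAP: '{instruction}' is privileged; {mode} mode may not run it"
    return f"executed '{instruction}' in {mode} mode"

print(execute("add", "user"))                   # non-privileged: allowed anywhere
print(execute("disable_interrupts", "user"))    # ring 3: blocked
print(execute("disable_interrupts", "kernel"))  # ring 0: allowed
```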

Now, this two-ring version of the ring model is what we find implemented in Windows, Apple's macOS, and the Unix/Linux variants. Part of the reference monitor's job is to enforce control over memory management functions within the operating system. Memory management in today's operating systems provides a level of abstraction for programmers. This allows them to work with their programs without ever directly accessing the parts of the operating system that the manufacturer, such as Apple or Microsoft, has restricted from direct view. It allows them to maximize performance with the limited amount of memory available, and it protects the operating system and applications once they're loaded into memory, while also protecting the intellectual property, that is, the source code, of the operating system.

Now, every operating system in existence must perform a couple of functions. One is process isolation, and that is simply to make sure that as one process runs and exercises its operations on its subject data, it is neither interfering with nor being interfered with by any other process. Failure to do so causes what we know as the blue screen of death, also known as a general protection fault or GPF. Process isolation functions have to be enforced ruthlessly by operating systems, and they must be enforced 100% of the time. Any operating system that cannot do this is an operating system that cannot be trusted, simply because it will fail at unpredictable times. Process isolation makes sure that multiple processes in our multitasking, multiprogramming, multiprocessing systems, which are commonplace today, do not attempt to access the same system resources at the same time or violate their bounds and confinement rules.
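The following small demonstration, illustrative only, shows process isolation at work: because each process receives its own address space, a child process mutating a variable never disturbs the parent's copy or a sibling's.

```python
# Illustrative demonstration of process isolation: separate address spaces.
from multiprocessing import Process, Queue

counter = 0  # each child process gets its own private copy of this

def worker(name, results):
    global counter
    counter += 100            # modifies only this process's copy
    results.put((name, counter))

if __name__ == "__main__":
    results = Queue()
    procs = [Process(target=worker, args=(f"p{i}", results)) for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    for _ in procs:
        print(results.get())           # each child saw its own counter == 100
    print("parent counter:", counter)  # still 0: isolation held
```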

Interrupts are common in programming. Interrupts allow the operating system to ensure that a process is given sufficient time on the CPU when necessary to carry out its required functions. Programs that have timing issues, such as race conditions and resource competition, oftentimes have to use interrupts to regulate internal timing and allow this sufficient time to transpire, so that they don't get into deadlock situations causing denial of service and system halts.
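Here is a short, hedged sketch of the kind of timing problem described: two threads acquire the same pair of locks in opposite order, a classic deadlock recipe, and acquiring the second lock with a timeout and backing off is one simple way to keep the race from halting the program.

```python
# Hedged sketch: two threads take the same locks in opposite order (a classic
# deadlock recipe); a timeout on the second acquisition breaks the standoff.
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()

def task(name, first, second):
    while True:
        with first:
            time.sleep(0.01)                 # widen the race window on purpose
            if second.acquire(timeout=0.1):  # give up rather than deadlock
                try:
                    print(f"{name}: holds both locks, doing its work")
                    return
                finally:
                    second.release()
        time.sleep(0.01)                     # back off, then retry

t1 = threading.Thread(target=task, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=task, args=("t2", lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
```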

Now, the memory manager function of an operating system has several other tasks it has to perform. It must track an object put into memory through all of the different addresses and states it will pass through from the time it is taken off a drive, through its sequence of executions, until it is put back on the drive; in other words, it must track the object through relocation. As the object moves, the rules of protection over it must go with it, so that the protection rules never fail to protect it regardless of its location or state. If sharing is allowed on a particular object, sharing can be enabled only in certain states, to ensure that there is no failure of process isolation or violation of access rules. The logical organization has to be followed and the physical organization has to be accounted for: whether or not we as users, programmers, or administrators know exactly where the data is located, at some level the operating system must know exactly where these data objects are.
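As a toy model of these memory manager duties, purely illustrative, the sketch below tracks an object through relocation while its protection rules travel with it, and the manager, not the user, always knows the object's current physical location. The class, object names, and frame numbers are invented.

```python
# Toy model of memory manager duties: protection rules travel with an
# object through relocation, and the manager always knows its location.
class MemoryManager:
    def __init__(self):
        self.table = {}  # logical name -> (physical frame, protection)

    def load(self, name, frame, protection):
        self.table[name] = (frame, protection)

    def relocate(self, name, new_frame):
        frame, protection = self.table[name]
        self.table[name] = (new_frame, protection)  # rules move with the object

    def access(self, name, mode):
        frame, protection = self.table[name]
        print(f"{name} is in frame {frame}, protection '{protection}'")
        return mode in protection  # e.g. "w" against read-only fails

mm = MemoryManager()
mm.load("payroll-data", frame=7, protection="r")   # read-only object
mm.relocate("payroll-data", new_frame=42)          # moved by the manager
print(mm.access("payroll-data", "w"))              # False: rules followed it
```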

Insecure memory management oftentimes results in fatal system problems. Most commonly, insecure memory management results in a buffer overflow. Now, a buffer overflow is caused by improper bounds checking on an input to a program, and it must be corrected by the programmer through a program of testing and then patching to ensure that the system handles memory properly. Controls for incomplete parameter checking and enforcement have to examine buffers to ensure that the parameters do not exceed the buffer's boundaries. And the operating system must provide some kind of buffer management to ensure that these controls are invoked each and every time the buffers are used, each and every time a program segment is put into a buffer, to make sure the rules of parameter checking and boundaries are not violated.
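Python's own memory safety prevents classic buffer overflows, so the sketch below models the bounds check itself: the control a C programmer or an operating system's buffer manager must apply before every copy into a fixed-size buffer. The buffer size and function name are invented for illustration.

```python
# Models the bounds check that prevents a buffer overflow. The buffer size
# and function name are invented for illustration.
BUFFER_SIZE = 16

def copy_into_buffer(data: bytes) -> bytearray:
    buffer = bytearray(BUFFER_SIZE)
    # The missing step in overflow-prone code: verify the parameter against
    # the buffer's boundary before every copy.
    if len(data) > BUFFER_SIZE:
        raise ValueError(
            f"input of {len(data)} bytes exceeds the {BUFFER_SIZE}-byte buffer")
    buffer[:len(data)] = data
    return buffer

print(copy_into_buffer(b"hello"))   # fits within bounds: copied
try:
    copy_into_buffer(b"A" * 64)     # would overflow: rejected instead
except ValueError as err:
    print("blocked:", err)
```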

Now, data moves and programs execute along various forms of channels. Channels typically are about how data moves and about how communications are passed from one process segment to another. Now, channels are controlled like all other resources. They must be identified and they must be allowed or restricted as the rules of the operating system enforced by the reference monitor function require. Those channels that are not known, not seen, not defined, are covert channels.

Now, these channels permit either communication or data movement in a way that the system neither sees nor controls. The types are timing, in which communications take place in non-traditional ways that the system does not see as a channel of communication, and storage, where data is moved along a pathway that, again, the system does not interpret as an actual defined channel or pathway.

Now, shared communication channels that could allow two cooperating processes to transfer information in a way that violates a security policy are something that must be inspected and tested for. Covert channels can be very damaging: if data movement or communications are not controlled properly, the system can't see them and can't control them, rules cannot be made, and the reference monitor function cannot be invoked to mediate them; in other words, almost anything could happen. And so covert channels are something that must be tested for on a regular basis.
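To show how little machinery a covert channel needs, here is a deliberately simple sketch of a storage channel: two cooperating processes pass bits through an attribute the system does not treat as a communication channel, in this case whether a temporary file exists. This is illustrative only, the kind of pathway the testing described above is meant to uncover; the file name is invented.

```python
# Deliberately simple storage channel: one bit per round, encoded in whether
# a temp file exists. Illustrative only; the file name is invented.
import os
import tempfile

SIGNAL = os.path.join(tempfile.gettempdir(), "covert_demo_signal")

def send_bit(bit):
    # The sender never writes data on any defined channel; it only creates
    # or removes a file, an attribute the system does not mediate as
    # communication.
    if bit:
        open(SIGNAL, "w").close()
    elif os.path.exists(SIGNAL):
        os.remove(SIGNAL)

def receive_bit():
    return 1 if os.path.exists(SIGNAL) else 0

for bit in (1, 0, 1):
    send_bit(bit)
    print("received:", receive_bit())
send_bit(0)  # clean up the signal file
```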

One particular set of practices that we must use on occasion is software forensics. This is used to determine the existence and usability of things like covert channels through analysis of program code and the system, in order to determine or provide evidence of the intent or the authorship of a program causing these things to occur. The runtime environments we spoke about earlier form what has come to be known as a sandbox.

Now, a sandbox provides a logical boundary around a segment of memory where a program is executing. Sandboxes are typically constructed around programs that are either of unknown character or non-native to the environment in which they are executing. Java programs, for example, execute in a sandbox we know as the Java virtual machine. The sandbox creates a safe environment inside the Java virtual machine, which itself, in turn, is contained within a browser, which is contained within an operating system environment, to ensure that the program acts only across controlled interfaces, exercising only allowed, authorized resources.
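The sketch below is a conceptual illustration of the sandbox idea, not a secure implementation: untrusted code receives only a controlled interface, and every resource request is filtered against an explicit allowlist at that boundary. The class and file names are invented.

```python
# Conceptual sandbox sketch, not a secure implementation: untrusted code can
# act only through a controlled interface over an explicit allowlist.
class Sandbox:
    def __init__(self, allowed_files):
        self.allowed_files = allowed_files  # the only authorized resources

    def read_file(self, path):
        # The controlled interface: every request is filtered at the boundary.
        if path not in self.allowed_files:
            raise PermissionError(f"sandbox blocked access to {path!r}")
        with open(path) as f:
            return f.read()

def untrusted_plugin(api):
    print(api.read_file("greeting.txt"))  # an allowed, authorized resource
    api.read_file("/etc/passwd")          # blocked at the interface

with open("greeting.txt", "w") as f:      # set up the one permitted file
    f.write("hello from inside the sandbox")

try:
    untrusted_plugin(Sandbox(allowed_files={"greeting.txt"}))
except PermissionError as err:
    print(err)
```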

Now, all of the things that we have discussed throughout this seminar, all the program changes, all of the policy changes, all of these attributes of the enterprise and the systems within them, must have some form of configuration management placed over them. The goal in all of these cases is to guarantee the integrity, availability, and proper usage of the policy or the software; to ensure that the correct version of any of these components is what is currently enforced; and to ensure that all persons and system components, the rules of the policy that go into the reference monitor, and the rules that are published to the workforce are all the most current ones, with full disclosure to all concerned parties.

About the Author

Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.

 

Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, training and certifying nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.