CSSLP Domain 7:2
This course covers the security aspects a CSSLP needs to keep in mind during the final stages of the software development life cycle, including operations, maintenance, incident management, and the software disposal process.
- Get a solid understanding of software sustaining engineering
- Understand the software maintenance process
- Learn about incident management and software disposal
This course is intended for anyone looking to develop secure software as well as those studying for the CSSLP certification.
Any experience relating to information security would be advantageous but is not essential. All topics are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience in the security field.
So in operations, we're going to have the maintenance process itself, with both planned and unplanned maintenance. Routine maintenance is the normal care and feeding: upgrading, patching, fixing, and so on. Unplanned maintenance occurs when flaws or breakage in various program routines, or outright failures, force us to take the system apart, perform electronic surgery on it, fix it, and get it back into operation as rapidly as we can.
There will be problems that arise: things that pop up and have to be fixed, and persistent problems that may require a fairly lengthy process of identification and resolution. And then, without question, we're going to have incidents from time to time, events with negative impacts, and, hopefully not, but on occasion, breaches. We're going to be involved in all of these processes, and we have to account for them in our performance targets and in our service level agreements as well.
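The distinction between problems, incidents, and breaches, and the SLA obligations attached to each, can be sketched as a simple classification. The severity tiers and response-time targets below are illustrative assumptions, not values from the course; real figures come from your organization's service level agreements.

```python
from enum import Enum

# Hypothetical severity tiers -- the names mirror the categories discussed
# above, but the SLA hours are invented for illustration.
class Severity(Enum):
    PROBLEM = 1    # routine issue, fixed through normal maintenance
    INCIDENT = 2   # event with a negative impact on operations
    BREACH = 3     # confirmed compromise of data or systems

# Maximum hours before a response must begin, per the (assumed) SLA.
SLA_RESPONSE_HOURS = {
    Severity.PROBLEM: 72,
    Severity.INCIDENT: 8,
    Severity.BREACH: 1,
}

def response_deadline_hours(severity: Severity) -> int:
    """Return the maximum hours allowed before response must begin."""
    return SLA_RESPONSE_HOURS[severity]
```

The point of encoding this is that severity drives the clock: the more severe the event, the tighter the SLA window the team must plan around.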
So the strategy for coping with these has to be grounded in understanding the impact first, of whatever the change or event might be, on the existing system and, by extension, on the user population. Now, the elements that are involved or affected have to be identified so that we know exactly what happened. But first we need to verify that whatever change may have caused this went in and was complete, so that nothing is left undone and nothing is missing.
Before it goes in, we have to be sure that all the requirements have been met, that the metrics are confirmed and informative, and that all intended outcomes were produced, including the baseline updates and the documentation. From there, we know we have a solid foundation to proceed, because everything that should have been done has been confirmed as done.
Now, in the event that we have to investigate an incident or an event, this depicts the process we will go through, the sources of evidence, and the steps we're going to take. We have three primary groups of activities. First is prevention: we're going to try to prevent incidents from happening, but you can't prevent everything, so we must also have a response capability, with proactive and reactive controls in balance across our system. What we can prevent, we seek to prevent through a proper, well-orchestrated design and build process.
Here, the graphic shows prevention and the sources of evidence that will alert us to things that may have been attempted or that have actually happened. If we can't prevent, we need detection mechanisms, such as audit logs, IDS/IPS, or possibly deception technology like honeypots, to give us information about what has happened. Ultimately, when something does proceed to causing its impact, our response capability comes in: we have an incident response team, we may go through forensics to dissect the event further, and then we go through our restorative processes to put things back to normal and resume operations.
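The detect-then-respond flow above can be sketched minimally: detection scans evidence sources (here, audit log lines) for indicators, and a positive finding hands off to the ordered response steps. The log format, the indicator strings, and the step names are all assumptions for illustration.

```python
# Hypothetical indicator strings a detection pass might look for.
SUSPICIOUS_MARKERS = ("AUTH_FAILURE", "PRIV_ESCALATION", "HONEYPOT_TOUCH")

def detect(audit_log: list[str]) -> list[str]:
    """Detection: flag log entries matching known indicators."""
    return [entry for entry in audit_log
            if any(marker in entry for marker in SUSPICIOUS_MARKERS)]

def respond(findings: list[str]) -> list[str]:
    """Response: the ordered steps the incident response team works through,
    mirroring the sequence described above (IR team, forensics, restore, resume)."""
    if not findings:
        return []
    return ["activate_ir_team", "collect_forensics", "restore", "resume_operations"]

# Example run over a toy audit log.
log = ["LOGIN ok alice", "AUTH_FAILURE bob x5", "HONEYPOT_TOUCH 10.0.0.9"]
findings = detect(log)
steps = respond(findings)
```

In practice, detection and response are separate systems (SIEM, IR playbooks); the sketch only shows the hand-off between them.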
Now, in our maintenance process, we have to monitor what's going on. This is a definition of the what, where, and how of monitoring. Some of this may seem quite obvious, but sometimes we lose sight of the fact that the obvious can still be absolutely necessary. The object here is to maintain situational awareness of the state of operations and assurance of the given system or application with adequate tools (this is the how), with a view to rapid response in the event of a disruption of any kind.
Monitoring has to be prioritized, of course (this is the what), because we can't monitor everything all the time; it's simply not practical, or even possible. In general, monitoring is supported by three things. First, it's integrated with the overall assurance process. Second, as in other areas, there must be an enforcement mechanism to ensure that all practices and procedures are being followed. Third, and this is where we build in the response capability, there must be a triage and conflict resolution process ensuring that all items, whether open, in-work, on hold, or in some other status, are actively being addressed and resolved, or that we can at least establish what status each is in at the moment.
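The triage requirement above amounts to a simple invariant: every item carries an explicit status, and anything not yet resolved is visibly awaiting action. A minimal sketch, assuming the status names mentioned in the text plus a "resolved" terminal state:

```python
# Status values follow the text: open, in-work, on hold, or other;
# "resolved" is an assumed terminal state.
VALID_STATUSES = {"open", "in-work", "on-hold", "resolved"}

class TriageItem:
    """A tracked work item -- a finding, defect, or change request."""

    def __init__(self, name: str):
        self.name = name
        self.status = "open"  # every item starts with a known status

    def set_status(self, status: str) -> None:
        if status not in VALID_STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.status = status

def unresolved(items: list["TriageItem"]) -> list["TriageItem"]:
    """Items still needing active attention -- nothing may silently stall."""
    return [item for item in items if item.status != "resolved"]
```

Rejecting unknown statuses is the enforcement piece: an item can never drift into a state the process doesn't recognize.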
Now, monitoring is going to have to be performed in various places (the where), meaning we have to know what to monitor for and where to find the artifacts and evidence we need from that monitoring activity to support what we're trying to do. Complex operations and critical functions may be technologically fragile and subject to breakage; if that's the case, we need to find out where breakage is most likely to occur, and that becomes the where from which we'll get the information we need to plan maintenance, or to plan a response in the event that something breaks. But again, we always have to know what the impact on the organization will be if the thing being monitored should break: what it will affect, what that impact will produce, and how rapidly our response is needed.
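One way to turn that impact-and-fragility reasoning into a monitoring priority is a simple score: rate each target on business impact if it breaks and on likelihood of breakage, and monitor the highest products most closely. The fields, scale, and example targets below are illustrative assumptions, not a method prescribed by the course.

```python
def priority(target: dict) -> int:
    """Higher score = monitor more closely.
    impact:    1-5, business impact if this target breaks
    fragility: 1-5, likelihood of breakage (complex or critical functions
               tend to score high here, as the text notes)
    """
    return target["impact"] * target["fragility"]

# Hypothetical monitoring targets for illustration.
targets = [
    {"name": "billing-api",  "impact": 5, "fragility": 2},
    {"name": "legacy-batch", "impact": 3, "fragility": 5},
    {"name": "static-site",  "impact": 1, "fragility": 1},
]

# Rank targets so monitoring effort goes where impact x fragility is highest.
ranked = sorted(targets, key=priority, reverse=True)
```

Note that the fragile legacy job outranks the higher-impact but more stable API; the point is that neither impact nor fragility alone decides where monitoring attention goes.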
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. As a professional educator, he has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.