Distributed Computing

This is the third course in Domain 3 of the CSSLP certification and covers the essential ideas, concepts, and principles that you need to take into account when building secure software.

Learning Objectives

  • Understand the differences between commonly used computing architectures

Intended Audience

This course is intended for anyone looking to develop secure software as well as those studying for the CSSLP certification.


Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.


So let's take a look at the current world of distributed computing. Now, as I mentioned, we began with mainframes, which, if you look back at history, was really the only way computing could have begun. But as we pursue the goal of a more distributed computing system, modeled loosely on the human nervous system, we have to move away from centralized architectures toward decentralized ones.

First we have client-server: a main server supporting many single-user client workstations. Another form of this might be considered to be peer-to-peer, where some number of peer devices share a computing workload without a hub. In other words, they share it directly amongst themselves, across, up and down, back and forth, without any central hub controlling it. Then there is a form of message queuing: a network of systems transferring information-bearing messages through a central intermediate server that acts as a transfer mediator, handling the logging and the security of this traffic. And we have the form we know as SOA, or service-oriented architecture.
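As a rough sketch of the message-queuing pattern just described, the snippet below models producers handing messages to a central intermediary that logs them before relaying them to a consumer. The broker, log, and function names here are illustrative, not any real product's API.

```python
import queue

# The central intermediate server: a FIFO queue standing in for the broker.
broker = queue.Queue()
# The broker's audit trail, standing in for its logging/security duties.
log = []

def produce(sender, payload):
    """A producer hands a message to the broker, which logs and queues it."""
    message = {"from": sender, "payload": payload}
    log.append(message)   # broker records the traffic
    broker.put(message)   # broker forwards it toward consumers

def consume():
    """A consumer receives the next message via the broker, never directly."""
    message = broker.get()
    broker.task_done()
    return message

produce("client-A", "hello")
produce("client-B", "world")
first = consume()
```

The point is that producers and consumers never talk to each other directly; every message crosses the mediator, which is where logging and security controls can live.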

Now, service-oriented architecture is a well-known form of distributed architecture oriented around the generation and delivery of a service or a capability, rather than a specific platform or hardware, whether it be Windows, Macintosh, Linux, or other. And it contains the characteristics inherent in this kind of concept, including platform neutrality, essentially meaning that the service can be delivered to any platform that can successfully connect to the source.

There is the ideal of universal interoperability; from a construction standpoint, modularity and reusability; and then abstracted capabilities, meaning that we don't get involved in the details, we get involved in the end product, so that we can work with it successfully without having to consider how it's built. To achieve this goal, we employ common technologies, which include COM, the Component Object Model; CORBA, the Common Object Request Broker Architecture; and Web Services for connectivity and messaging. Now, the typical construction base makes use of SOAP or REST for web service-based implementations, and these two approaches are increasingly common in usage. We also have one called the Enterprise Service Bus.

Now, this is a form of service-oriented architecture that handles and secures all inter-process communications between data producers and the consumers of that data. Typical services performed by an ESB include protocol conversion and translation, various forms of defined event handling, and message queuing for data flow management. Based on the concept we have in computing of a bus (essentially an exchange platform where things move in and are then distributed to their respective destinations), the ESB handles movement, translation, and conversion of traffic between different baseline producers and consumers. Examples include .NET to SOAP, SOAP to REST, Java to BPEL, BPEL to .NET, and other forms of multi-origin, multi-destination distribution.
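To make the protocol-conversion idea concrete, here is a toy translation step of the kind an ESB performs internally: a producer emits an XML message, the bus converts it, and a JSON-speaking consumer receives it. This is a minimal illustration using Python's standard library, not code from any real ESB product, and the order message is a made-up example.

```python
import json
import xml.etree.ElementTree as ET

def xml_to_json(xml_text):
    """Bus-side translation: flatten a simple XML message into JSON."""
    root = ET.fromstring(xml_text)
    # Map each child element's tag to its text content.
    return json.dumps({child.tag: child.text for child in root})

# What the producer puts on the bus (XML).
producer_message = "<order><item>widget</item><qty>3</qty></order>"
# What the consumer takes off the bus (JSON).
consumer_message = xml_to_json(producer_message)
```

Neither endpoint needs to know the other's native format; the bus owns the translation, which is exactly the decoupling described above.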

Now, Web Services, whether SOAP- or REST-based, together with JSON, or JavaScript Object Notation, serve as the method technologies that facilitate web-based communications, using the XML-based Web Service Description Language, or WSDL, to provide information about a service's parameters, program calls, and data structures. SOAP continues to dominate due to its longevity and its ability to handle XML well, but REST is rapidly catching up due to improved performance and flexibility. So far, REST is proving better when operating at internet scale. JSON originated with JavaScript but has become language-independent and can be supported by almost any other language platform, and it has so far become the most common data interchange format due to these characteristics and its flexibility. Another form is Rich Internet Applications.

Now, this is an architecture that employs the web as the transfer medium while the client acts as the processing and formatting device. The result is that, while on the internet, the application behaves the way common desktop applications seem to behave. Oftentimes components are built using Flash, Java, or, on certain platforms, Microsoft Silverlight. RIAs support diverse functionalities, including things like games and social media, and can support common complex business logic. The model does support security; however, it suffers from significant client-side exploits.

Now, the client-side threats most common to RIAs are most frequently due to misconfigurations, still a very common form of corruption of our security plans and configurations; malware, of course; and then program corruption. Remote code execution threats employ non-validated input, which leads to things like buffer overflows: input that is treated as data but triggered into execution by system conditions. These exploits operate using the security context settings of the application that triggered them, which means they tend to hide from our ability to detect them until the ill effects are felt.
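The defense implied here is to validate untrusted input before it ever reaches a buffer or an interpreter, rather than trusting it. Below is a minimal sketch of that idea; the length limit and character whitelist are illustrative choices, not values from the course.

```python
# Illustrative limits: a fixed maximum length (guarding against
# overflow-style attacks) and a whitelist of permitted characters
# (guarding against injected interpreter syntax).
MAX_LEN = 256
ALLOWED = set(
    "abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "0123456789-_"
)

def validate(user_input):
    """Reject input that exceeds the size bound or uses disallowed characters."""
    if len(user_input) > MAX_LEN:
        raise ValueError("input exceeds fixed buffer size")
    if not set(user_input) <= ALLOWED:
        raise ValueError("input contains disallowed characters")
    return user_input

safe = validate("session-42")
```

Rejecting input up front means a crafted payload is stopped before any system condition can ever trigger it as code.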

About the Author
Learning Paths

Mr. Leo has been in Information System for 38 years, and an Information Security professional for over 36 years.  He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant.  His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International.  A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center.  From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.


Upon attaining his CISSP in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, having trained and certified nearly 8,500 CISSP candidates since 1998 and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.

Covered Topics