This course is the 3rd of 6 modules within Domain 3 of the CISSP, covering security architecture and engineering.
The objectives of this course are to provide you with an understanding of:
- Vulnerabilities of security architectures, including client- and server-based systems, large-scale parallel data systems, and distributed systems
- Cloud Computing deployment models and service architecture models
- Methods of cryptography, including both symmetric and asymmetric
This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
Now we're going to explore cloud computing for a bit. To start off with, the cloud computing definition from NIST Special Publication 800-145 reads as, "A model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction."
Now, part and parcel of this is a set of five essential characteristics, four deployment models, and three service models. So, first come the five essential characteristics of cloud computing. These five characteristics are fundamental to the very nature of cloud. Without them, cloud computing really doesn't have any advantages over traditional, more centralized, on-premises computing.
Now, on-demand self-service is basically that: each user in a cloud computing environment is able to do for themselves what it would otherwise take some period of time, and several other members of staff, to do for them. Broad network access is very much a requirement; given the nature of how cloud is actually implemented, using anything less than high-speed connections is really a non-starter. Resource pooling is the way in which we gain the economies of scale that cloud computing promises. Having a set of drives in your own shop means two things: one, you are paying 100% of the carrying cost of those drives; and two, you're only consuming a certain percentage, much less than 100%, of their capacity. But you're still paying for 100% even if you're only consuming a fraction. Through the economies of scale of resource pooling, you are paying for what you consume and nothing more. The rapid elasticity feature means that whatever your cloud computing resource looks like, it can be grown or contracted at will, and very quickly, when your workloads change. And as a measured service, the economies of scale through resource pooling are realized because, whatever your resource is, whether it's compute, storage, or bandwidth, you're only paying for what you're consuming.
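The carrying-cost arithmetic above can be made concrete with a small sketch. All of the numbers here are hypothetical, purely to illustrate fixed on-premises cost versus a metered, pay-per-use cloud rate for the same workload:

```python
# Illustrative (hypothetical) figures: fixed on-premises carrying cost
# versus metered cloud pricing for the same storage workload.

ONPREM_MONTHLY_COST = 1000.0   # carrying cost of owned drives, paid in full
CLOUD_RATE_PER_TB = 100.0      # hypothetical metered rate per TB-month

def monthly_cost(used_tb: float) -> tuple[float, float]:
    """Return (on-premises cost, metered cloud cost) for a given usage."""
    # On-premises: you pay 100% of the carrying cost regardless of usage.
    # Measured service: you pay only for the capacity you consume.
    return ONPREM_MONTHLY_COST, used_tb * CLOUD_RATE_PER_TB

# At 3 TB of actual usage, you still pay the full on-premises cost,
# but only the metered fraction under a measured-service model.
onprem, cloud = monthly_cost(used_tb=3.0)
print(onprem, cloud)  # 1000.0 300.0
```

The point of the sketch is the shape of the two cost curves, not the rates: the on-premises figure is flat no matter how little you consume, while the measured-service figure scales with consumption.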
Now, the four deployment models are these. We have private, in which the cloud consumer has a logical partition in which their system operates, and it is shared by no one; even though it may be right next door to somebody else at the physical layer, this particular private cloud is theirs and theirs alone. We have the public cloud, which is usually equated with something like Google, where we as individuals are out there on a shared platform, each of us having our own account and our own space, but alongside a thousand or 10,000 other people. We have the community cloud, which includes a set of resources dedicated to a community of users who share many things in common, such that it gives rise to this community. Something like LinkedIn might be considered a community cloud, or a group of organizations that share a certain platform, consume parts of its resource body, and pay their respective shares of the carrying cost amongst the members of the community. Then we have the hybrid. The hybrid is a combination of private and public.
Now, the private cloud is where each individual operator works without sharing that space with anyone. But as happens, our workloads change, and additional resources must sometimes be brought in. In a hybrid cloud model, resources, be it storage, compute, or network, are brought in from the public cloud, allocated, and placed under the private cloud's rules of access and control for as long as they're needed. Then, when the workload reduces and the operation moves back within its normal parameters, the part that was brought in is cleansed and returned to the public cloud. In this way, the environment expands as workloads grow and contracts as they shrink.
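The borrow, cleanse, and return cycle just described is often called "cloud bursting," and its control logic can be sketched in a few lines. This is a toy model under stated assumptions: the function name, node counts, and threshold logic are all hypothetical, and a real deployment would drive a provider's autoscaling API rather than anything like this:

```python
# A minimal, hypothetical sketch of the hybrid "bursting" cycle:
# borrow public-cloud capacity when demand exceeds the private cloud,
# then cleanse and release it when the workload returns to normal.

def manage_capacity(private_nodes: int, demand: int,
                    burst_nodes: int) -> tuple[int, list[str]]:
    """Return the updated public burst-node count and the actions taken."""
    actions = []
    total = private_nodes + burst_nodes
    if demand > total:
        # Workload exceeds current capacity: bring in public-cloud nodes
        # and place them under the private cloud's access-control rules.
        needed = demand - total
        burst_nodes += needed
        actions.append(f"provision {needed} public node(s)")
    elif demand <= private_nodes and burst_nodes > 0:
        # Workload fits in the private cloud again: cleanse (wipe) the
        # borrowed nodes and return them to the public pool.
        actions.append(f"cleanse and release {burst_nodes} public node(s)")
        burst_nodes = 0
    return burst_nodes, actions

burst, acts = manage_capacity(private_nodes=4, demand=6, burst_nodes=0)
print(burst, acts)  # 2 ['provision 2 public node(s)']
```

The cleanse step matters from a security standpoint: borrowed capacity must be wiped before it leaves the private cloud's control boundary, so no residual data follows it back into the public pool.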
And these are our service models. We have Infrastructure as a Service (IaaS), which is built using APIs. It provides connectivity at the hardware layer, with all the facilities controlled and managed by the provider. IaaS is essentially your own computing environment, perhaps as a private cloud or a hybrid, on which you build all of the software that runs your actual application. Your provider supplies the physical platform, and the logical equivalent of that physical platform, into which you load your own software and run your own operation without having any of the computing inside your own shop. Infrastructure as a Service underlies the Platform as a Service (PaaS) model, which is typically equated with a development type of environment, such as Microsoft Azure or Salesforce's Force.com. Typically, PaaS enables an enterprise to load up an environment and shift it many times, changing that environment in different ways, allowing them to build, develop, and deploy an application system. Then we have Software as a Service (SaaS). Essentially, the users of Software as a Service are end users of a completed application environment. Just as Microsoft Office runs on your desktop or laptop, Office 365 in the cloud runs as an application system that you use as an end user would use an application system.
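One way to keep the three service models straight is to ask who manages each layer of the stack. The table below, expressed as a small Python structure, is a simplified summary (the layer names are a common convention, not part of the NIST definition): "P" marks a provider-managed layer and "C" a consumer-managed one.

```python
# Simplified responsibility split across the three service models.
# "P" = managed by the provider, "C" = managed by the consumer.
# Layer names are a common convention, assumed here for illustration.

LAYERS = ["hardware", "virtualization", "OS", "runtime", "application"]

RESPONSIBILITY = {
    # IaaS: the provider supplies the physical/virtual platform;
    # you load and run your own software stack on top of it.
    "IaaS": {"hardware": "P", "virtualization": "P",
             "OS": "C", "runtime": "C", "application": "C"},
    # PaaS: the provider also manages the OS and runtime; you build,
    # develop, and deploy your application (e.g. on Azure or Force.com).
    "PaaS": {"hardware": "P", "virtualization": "P",
             "OS": "P", "runtime": "P", "application": "C"},
    # SaaS: a completed application (e.g. Office 365) used by end users;
    # every layer is the provider's responsibility.
    "SaaS": {layer: "P" for layer in LAYERS},
}

print(RESPONSIBILITY["PaaS"]["OS"])  # P
```

Reading down the column for any one layer shows the progression the lecture describes: as you move from IaaS to PaaS to SaaS, responsibility shifts from the consumer to the provider.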
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, he served from 1998 to 2002 as Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in that role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, having trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.