This is the third course in Domain 3 of the CSSLP certification and covers the essential ideas, concepts, and principles that you need to take into account when building secure software.
Learning Objectives
- Understand the differences between commonly used computing architectures
Intended Audience
This course is intended for anyone looking to develop secure software as well as those studying for the CSSLP certification.
Prerequisites
Any experience relating to information security would be advantageous, but is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by anyone, regardless of their experience in the security field.
Now, this leads us to a discussion of cloud computing. We're going to cover it briefly, because the cloud is a fundamental aspect of pervasive, ubiquitous computing. We have the five key characteristics of cloud. First, On-Demand Self-Service: whenever you require it, whether as an individual or as an enterprise, you are able to get into your account and make the necessary changes, adding or reducing compute resources, storage resources, transactional bandwidth, or any other feature or resource available to you through your cloud service. Then we have Broad Network Access.
In days gone by, a 1200, 4800, or 9600 baud modem might have done the job, even up to the point of 56K modems. But in the world of the web and the world of the cloud, that will simply not do; high-speed, large-capacity access is truly a requirement. Next, we have the economies of scale that Resource Pooling provides. We get to use storage on the order of terabytes, petabytes, and larger, but we only pay for what we're consuming. So whatever we may need, we can get relatively easily, as On-Demand Self-Service enables, and we can acquire as much or as little as we need. And rather than paying for a total amount, as we would for what is present in our own data center, we pay only for what we're consuming. This gives way to Rapid Elasticity should our needs change.
We are able to scale up or down in any of the resources to which we subscribe, including compute, storage, and transactional bandwidth. And Rapid Elasticity means it happens now, not a month or a year from now. Finally, Measured Service highlights the fact that we pay as we go. If, for example, your resource requirements change weekly over a given month, each weekly change will be reflected in the bill you receive, with a line item to indicate every change. This means that as your needs change, the capacities required to meet the new demand can change when, and as quickly as, necessary; a brief sketch of this metered billing appears below.
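To make the Measured Service idea concrete, here is a minimal sketch of how per-line-item metered billing might be computed over a month. All resource names, quantities, and rates are invented for illustration; real providers publish their own metering units and prices.

```python
# Hypothetical month of metered usage, billed per line item.
usage_line_items = [
    # (week, resource, units consumed, rate per unit in USD)
    (1, "compute (vCPU-hours)",       500, 0.04),
    (2, "compute (vCPU-hours)",       900, 0.04),  # scaled up this week
    (3, "storage (GB-months)",       2000, 0.02),
    (4, "bandwidth (GB transferred)", 750, 0.08),
]

total = 0.0
for week, resource, units, rate in usage_line_items:
    charge = units * rate
    total += charge
    print(f"Week {week}: {resource}: {units} x ${rate:.2f} = ${charge:.2f}")

print(f"Monthly bill: ${total:.2f}")
```

The point is simply that the bill tracks consumption change by change, rather than charging a fixed amount for installed capacity.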
So those are the five key characteristics, and we have four cloud computing deployment models, beginning with the private cloud. Now, when you're running your own data center and it's only you, not providing services to anyone else, it is all you in there. This is what is called a logical-equals-physical implementation, because it is based on the hardware capacities you have and the software capabilities you employ, and in the end, it is all you. In the cloud, a private cloud also means it is all you, but as a logical construct rather than a physical one. It does not mean you are the only one on the given hardware platform; you will have neighbors, because it is a multi-tenant environment. But within the logical confines of your private cloud, it is truly all you.
At the other end, we have the public cloud. The public cloud is you, usually as an individual, having your email, say a Gmail account, and you have neighbors. The rules of isolation, and I want to emphasize this very clearly, apply in both cases: whether it's a private cloud for your enterprise or a public one for you as an individual, you are isolated from your neighbors. There is no sharing, no commingling; the difference is a matter of scale. If you are one of 10,000 users in a Gmail instance, you don't commingle with the others, but you are there alongside all of them, as you would be with multiple users in any other environment. Now we move on to hybrid.
Now, hybrid began as a way of migration. You have your own data center, and to move from there to the cloud, you start with hybrid. Alongside your in-house computing, you get a cloud instance that you can employ for cloud bursting, as it's called, when you require additional resources to sustain operations that have expanded, possibly suddenly, or for disaster recovery purposes. The hybrid capacity kicks on when needed and kicks off after you return to your normal capacities. Hybrid has since evolved so that it now offers that same service for a 100% cloud-native environment, and it can produce a multi-cloud arrangement: instead of having your hybrid capacity on the same cloud you use for normal operations, you have it with a separate provider, say Amazon Web Services for your primary and Rackspace or another provider for your backup. Again, it kicks on when needed and turns off when no longer needed. This is a case where it works entirely in the cloud without anything on-prem, although the original model still functions if that's how you began. Next is the community cloud.
Now, this is constructed as an enclave for a number of interrelated, intercommunicating types of subscribers. It is effectively a community restricted to those particular types, based on the kind of business they are in. In a community cloud, an operator offers a services package to deliver the capabilities that this particular community wants. For example, a healthcare community cloud may include the providers, an insurance company offering one or more health plans to subscribers, a health information exchange, sort of like a freeway interchange for the healthcare information superhighway, and other parties, all of whom are involved in a particular healthcare delivery network.
But only subscribers can participate, even though it is still out on the cloud, and once again, it is logically separated in its multi-tenant environment from all others around it. Then we have the three cloud computing service models, upon which all of the constructs I've mentioned deliver the five key characteristics, configured in one of the four deployment models. First is Infrastructure as a Service, exemplified by Amazon Web Services or Rackspace. In this environment, a lot of things may be provided to you, but when fully operable it is effectively a logical construct into which you load your own operating system and applications and run them. So it's you operating logically on someone else's operated hardware.
On top of that is built the Platform as a Service. Platform as a Service began with the idea of a development environment that can be used and reconfigured at will, depending on the project the subscriber is about to undertake, and that turns off when not needed and turns on when needed. This is a stark contrast to what we would have in our own data center: a standalone host or a standalone rack kept specifically for development and isolated from the operational environment. Having it in our own data center meant we were paying 100% of the carrying cost of that stack, whether or not it was in use.
And going back to our five key characteristics, this is a platform we can turn off, paying only a reservation fee, a small fraction of what we would pay in full use, to hold it for us. When the resource is needed, however it's configured, we simply activate it and pay the full operational cost, and when done, we turn it back off, again paying the much-reduced reservation fee and only paying for what we are using or have reserved. Then we have the Software as a Service model, and there are many of these. A very common, very well-known one is Office 365 in the cloud, offered by Microsoft. Here a subscriber, whether an enterprise or an individual, operates as an end-user. We're going to discuss some specific details, some of which are benefits and some of which are concerns, and which, depending upon the kind of development project you may be involved in, require close consideration for whether and how they will be built in.
So in Software as a Service, as I mentioned, you act as an end-user, as though you're running the application on your desktop, which is typically how it appears, usually through a web browser-type interface. It is constructed and delivered as software to you, the end-user, by a vendor, themselves running on a cloud backend of some construction. The vendor provides hosted application management, relieving us of that responsibility, and provides software on demand. Where traditional software may take us some time to buy, and then more time to configure and implement, here, since it's delivered over the web, adoption may be as simple as deciding it's what we want and signing up for it. It can then be available within a matter of moments to perhaps, at most, a week.
Now, the benefits here are that we are able to work as end-users with adept, desktop-like performance and convenience, and to do so very quickly after the decision to implement has been made. It gives the additional benefit of global teams being able to work in a shared but controlled environment in which they can collaborate. The reduced time-to-benefit means there is typically almost zero local installation; when applications are delivered through web browsers, which virtually every end-user computer in existence has, that is an enormous advantage for ease of use and quick implementation. It also means that the product we are using is maintained by the vendor, with the underlying promise that the only version we will ever see as end consumers is the most current, patched, up-to-date version.
In Platform as a Service, once again, this is where development typically takes place, following which a migration takes place to the next model, Infrastructure as a Service. Again, this has the advantage of someone else managing the environment, while we have the advantage of being users, able to reconfigure at will to meet the needs of development, patching, fixing, testing, and so on. It means we can change our operating environment when and as we need to. Again, globally scattered teams can work on the software together as though they were all in the same room, and services from other sources can be brought in and fed to us through a common interface; a sketch of switching such an environment off and on follows below.
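As a minimal sketch of that turn-it-off, turn-it-on pattern, here is what suspending and resuming a development environment might look like. Real PaaS offerings expose their own start/stop controls; this example uses EC2-style instance calls through boto3 purely as an analogy, and the instance ID is hypothetical.

```python
import boto3

# Hypothetical development host; a real PaaS would manage this for you.
DEV_INSTANCE = "i-0123456789abcdef0"

ec2 = boto3.client("ec2", region_name="us-east-1")

def suspend_dev_environment():
    """Stop the dev stack between projects; only the much smaller
    reservation/storage cost continues to accrue while it is off."""
    ec2.stop_instances(InstanceIds=[DEV_INSTANCE])

def resume_dev_environment():
    """Reactivate the same configuration when the next project begins."""
    ec2.start_instances(InstanceIds=[DEV_INSTANCE])
```

The configuration is preserved while the environment is off, so reactivation returns the team to exactly the state they left.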
Now, the upfront and ongoing costs of having, say, a stack of servers in a corner of our data center are significantly reduced, because, as I say, this is turned on when needed and turned off when not, paying again only the much-reduced reservation fee to guarantee the resource's availability when and as we need it. And finally, we have the Infrastructure as a Service model. Infrastructure as a Service is typically the foundational model on which all of the others are built. It provides the consumer a virtual environment into which they install an operating system that is then used to run their own applications. This becomes the logical equivalent of a physically populated data center on your own premises.
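For a concrete, if simplified, picture of that virtual environment, here is a short provisioning sketch using boto3's run_instances call. The machine image ID, instance type, and region are hypothetical placeholders; in practice you would choose an image carrying the operating system you intend to run your applications on.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one virtual machine sized to current demand. The AMI ID
# below is a hypothetical placeholder for your chosen OS image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

print("Launched:", response["Instances"][0]["InstanceId"])
```

From here up, everything above the virtual hardware, the operating system, middleware, and applications, is yours to install and operate, which is exactly the division of responsibility IaaS describes.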
Now, the benefit here is that we configure the infrastructure to conform to our needs, with a very great deal of control over what that looks like in terms of storage, compute capability, and transactional bandwidth. It has the built-in ability to scale up or down in any way we might need, within the envelope of the offered services. In a sense, it reduces the cost of ownership: instead of paying 100% of the carrying cost of an asset sitting in your own data center, regardless of what percentage of utilization you get out of it, you are able to keep capacities much more closely aligned with what you're actually consuming, and to change them up or down as your demand changes, without the delay of, say, going out and buying additional hardware, another server, or perhaps a mainframe, and waiting anywhere from weeks to months, or even a year or more, for delivery.
But it needs to be mentioned that the cost savings you get are overall savings. Incrementally, the utility costs of running the data centers from which your cloud is delivered are still prorated into the subscription fee you pay each month, regardless of your consumption level. And because those costs are calculated and evenly distributed across all consumers, you are paying, again, for what you consume, but at what is very likely a discount from what you would pay if it were all in your own shop; the back-of-the-envelope comparison below illustrates the idea.
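Here is a hypothetical back-of-the-envelope comparison of a fixed on-premises carrying cost against pay-per-use cloud pricing with a reservation fee. Every figure is invented purely to illustrate the billing model, not to reflect any provider's actual rates.

```python
# All figures below are hypothetical, for illustration only.
ONPREM_MONTHLY_CARRYING_COST = 10_000.0  # paid whether used or not
CLOUD_HOURLY_RATE = 25.0                 # while the resource is active
CLOUD_RESERVATION_FEE = 500.0            # monthly fee while turned off

hours_active = 160  # e.g., a development stack used part-time

cloud_cost = hours_active * CLOUD_HOURLY_RATE + CLOUD_RESERVATION_FEE
print(f"On-prem: ${ONPREM_MONTHLY_CARRYING_COST:,.2f}/month (fixed)")
print(f"Cloud:   ${cloud_cost:,.2f}/month at {hours_active} active hours")
```

Under these invented numbers the cloud arrangement comes to $4,500 against $10,000 on-premises; the saving comes entirely from not carrying idle capacity, which is the point being made here.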
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006, Mr. Leo was the Director of Information Systems and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in that role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standing as a professional educator, having trained and certified nearly 8,500 CISSP candidates since 1998 and nearly 2,500 candidates in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.