This course is the 2nd of 3 modules in Domain 4 of the CISSP, covering communication and network security.
The objectives of this course are to provide you with an understanding of:
- How to secure network components
- Instant messaging
- Virtual Private Networks (VPNs)
- In-transit encryption
- Remote Access
- Network casting
- Network topologies
- Virtual LANs (VLANs)
- SDN/SDS architecture
This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.
Any experience relating to information security would be advantageous, but is not essential. All topics discussed are thoroughly explained and presented in a way that allows the information to be absorbed by everyone, regardless of experience within the security field.
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
In all of these different forms, we have to have different ways of casting. We have unicast, which is sending things on a one-to-one association between sender and destination. We have multicast, which uses one-to-many-of-many or many-to-many-of-many associations, where datagrams are routed simultaneously in a single transmission to many recipients. We have broadcast, which expands this a little further into one-to-all associations. And then we have anycast addressing, which is a one-to-one-of-many association where datagrams are routed to any single member of a group of potential receivers, all identified by the same destination address. And then we have geocast, which refers to the delivery of information to a group of destinations in a network identified by their geographical locations. As a specialized form of multicasting, it is well suited to both mobile and ad hoc networks.
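To make the distinction concrete, here is a small illustrative Python sketch using the standard library's `ipaddress` module to classify IPv4 addresses into some of the casting categories above. Note that anycast cannot be identified from the address alone, since it is a routing arrangement rather than an address class:

```python
import ipaddress

def casting_type(addr: str) -> str:
    """Classify an IPv4 address as broadcast, multicast, or unicast."""
    ip = ipaddress.ip_address(addr)
    if ip == ipaddress.ip_address("255.255.255.255"):
        return "broadcast"   # limited broadcast: one-to-all
    if ip.is_multicast:
        return "multicast"   # 224.0.0.0/4: one-to-many-of-many
    return "unicast"         # ordinary one-to-one address

print(casting_type("192.168.1.10"))     # unicast
print(casting_type("224.0.0.251"))      # multicast (the mDNS group)
print(casting_type("255.255.255.255"))  # broadcast
```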
We have various types of switched networks. The original networks were circuit-switched: voice-quality circuits with fixed paths and fixed delays. More commonly today, we use packet-switched networks in a variety of flavors: IP-style packetization using routing algorithms for management, and virtual circuits created by the switching technology known as frame relay. Frame relay combines a pair of circuit types: permanent virtual circuits, which provide the fixed, basic service capacity billed per period, and switched virtual circuits, which kick in as surge conditions arise, provide additional capacity during the surge period, and kick off when demand diminishes back to normal levels.
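Packet-switched networks choose routes with routing algorithms. As an illustration of the general idea (not any particular protocol's implementation), here is a minimal Python sketch of shortest-path route selection using Dijkstra's algorithm, which underlies link-state protocols such as OSPF. The topology and link costs are invented for the example:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over a dict of {node: {neighbor: cost}}."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk the predecessor chain backward to reconstruct the route
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical four-router topology with per-link costs
net = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}
print(shortest_path(net, "A", "D"))  # (['A', 'B', 'C', 'D'], 3)
```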
Now, all of these different networks work in different topologies, and the topologies that we have are derivations of the bus style. In a bus topology, all stations connect to the data-carrying cable, which at one end is connected to the primary (a server, a mainframe, or another host) and at the far end has a terminator, which absorbs the signal so that it is not reflected back along the cable.
Now, a variation on this is the tree, which provides branches between the source and destination, between the primary and the terminator at the far end. In both the bus and the tree, the cable itself is the most common vulnerability: cutting or damaging it can bring down the entire segment.
We have the ring, typically a logical ring rather than an actual circle, where the traffic travels around in a loop; these are usually run as dual concentric contra-flowing rings, with each station having two connections. We have the star: an uplink brought into a hub, with distribution from the hub out to all the connected workstations. And then there is the mesh, which provides a connection from each node to every other node in the mesh. For example, in a four-node full mesh, each system has three NICs, one connecting it to each of the other three machines.
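The mesh arithmetic above generalizes simply: a full mesh of n stations needs one link for every pair, n(n-1)/2 links in total, and each station needs n-1 interfaces. A minimal Python illustration of that combinatorics:

```python
def mesh_links(n: int) -> int:
    """Number of links in a full mesh of n stations: one per pair."""
    return n * (n - 1) // 2

def nics_per_station(n: int) -> int:
    """Interfaces each station needs to reach every other station."""
    return n - 1

print(mesh_links(4), nics_per_station(4))    # 6 3
print(mesh_links(10), nics_per_station(10))  # 45 9
```

The quadratic growth in links is why full meshes are reserved for small, high-value cores; larger networks fall back on star or tree distribution.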
Now, the network protocols that we've been talking about all tend to stem from the IEEE 802.3 Ethernet baseline description. The physical topologies supported by Ethernet vary from bus to star to point-to-point, while the logical topology normally employed is the bus itself. The Ethernet standard supports many kinds of connection media as the transmission medium: coaxial cable, twisted pair (both shielded and unshielded), and fiber optic, with a specific set of protocol variants defined for each.
A variation on that is token ring, as written up in the IEEE 802.5 standard. Although not truly a ring, meaning it is not a circle, it is a physical loop-based topology. It uses a 24-bit token frame. Stations connect through the MAU, the multistation access unit, and a designated station acting as the active monitor creates the token used to deliver traffic within the ring, making sure that only one token is available for all the different stations to use in their sequence.
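As a rough, hypothetical model of the single-token discipline described above (station names and frames are invented for illustration), a few lines of Python can show the order in which stations get to transmit as the one token circulates:

```python
from itertools import cycle

def token_ring_order(stations, frames_to_send):
    """Toy model: the single token visits stations in ring order, and only
    the station currently holding it may put one frame on the wire."""
    sent = []
    pending = {s: list(f) for s, f in frames_to_send.items()}
    for station in cycle(stations):           # the token moves station to station
        if not any(pending.values()):         # nothing left to send anywhere
            break
        if pending.get(station):              # holder transmits one frame
            sent.append((station, pending[station].pop(0)))
    return sent

order = token_ring_order(
    ["A", "B", "C"],
    {"A": ["a1", "a2"], "C": ["c1"]},
)
print(order)  # [('A', 'a1'), ('C', 'c1'), ('A', 'a2')]
```

Because only the token holder transmits, there are no collisions; the cost is that every station waits its turn even when the ring is idle elsewhere.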
Now, a variation on the theme of token ring is the Fiber Distributed Data Interface (FDDI), specified by ANSI (X3T9.5) rather than by an IEEE 802 standard. Like 802.5 traditional token ring, this is a token-passing architecture that uses dual contra-flowing rings. It is based on a fiber-optic backbone running at 100 megabits per second, whereas traditional 802.5 token ring most often ran at 4 or 16 megabits. Only one ring, the primary, carries traffic, with the second ring running as a backup; if anything disables the first ring, traffic is switched to the second ring so that traffic and capacity are not lost.
The information flows in the two rings in opposite directions, which is why they are called contra-rotating. Now, when remote connections are set up using public keys and digital certificates, there needs to be an exchange. That exchange begins with the client contacting the server: a challenge is generated and the encryption protocols are negotiated. The server then responds by returning its certificate, along with its own parameters, which it sends back to the client.
Through this handshaking, a session key is generated and agreed upon between the two stations, the client and the server. The client generates the session key, encrypts it with the public key of the recipient, the server, and sends it; the server uses its private key to decrypt the session key. Read/write key pairs are then derived from it, and a final confirmation is sent back to the client. With that, the secure session is fully established and traffic can flow safely through the encrypted session.
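The key-wrapping step above can be illustrated with textbook RSA using tiny, deliberately insecure numbers (p = 61, q = 53). Real sessions use vetted TLS libraries; this sketch only shows the flow of the client encrypting a random session key under the server's public key, and the server recovering it with its private key:

```python
import random

# Toy server key pair: public (n, e), private (n, d).
# n = 61 * 53 = 3233; e * d = 17 * 2753 ≡ 1 (mod lcm(60, 52)).
n, e, d = 3233, 17, 2753

# Client side: pick a session key and wrap it with the server's public key
session_key = random.randrange(2, n)
wrapped = pow(session_key, e, n)

# Server side: unwrap the session key with the private key
recovered = pow(wrapped, d, n)

print("session key recovered intact:", recovered == session_key)
```

From here, both sides hold the same secret and can derive the symmetric read/write keys that protect the bulk traffic.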
Now, one of our recent inventions, recent meaning within the past 10 to 15 years, is the virtual local area network. This logically allows us to create the functional equivalent of a wired local area network. By programming groups of IP addresses within the logic of a switch, we're able to create a virtual equivalent of a physically wired LAN without having to run any additional wires. We can also put controls between VLANs to determine whether or not they are able to communicate with each other.
Now, VLANs are not immune to attack. They can be subjected to MAC flooding, Inter-Switch Link (ISL) protocol tagging attacks, double-encapsulated (nested) VLAN attacks, ARP attacks, multicast brute force, spanning-tree attacks, and random frame stress attacks, all of which target the logical switching and the logic that sets up the VLAN.
Now, one technology that is spreading faster and faster across networking is software-defined networking, or SDN. In SDN architecture, the control and data planes are decoupled. A management layer is placed between them, with the application layer (the northbound interface) and the infrastructure-facing southbound interface passing their traffic through that management layer. One notable aspect of software-defined networking is its learning capacity: through its deterministic logic, from the moment it's turned on and for as long as it runs, it learns and continuously optimizes the network data flow, adapting constantly as it finds more efficient ways of managing network traffic. And this, of course, is being extended to the WAN itself.
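As a minimal, invented sketch of that decoupling (the class and method names are hypothetical, not any real SDN controller's API): applications ask the control plane for behavior through a northbound call, and the controller pushes forwarding rules into switch flow tables through a southbound push, while the switches themselves only forward:

```python
class Switch:
    """Data plane: forwards packets using rules pushed by the controller."""
    def __init__(self):
        self.flow_table = {}                 # destination -> output port

    def forward(self, dst):
        return self.flow_table.get(dst, "drop")

class Controller:
    """Control plane: centralized intelligence, decoupled from forwarding."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def install_route(self, dst, out_port):
        # A northbound request becomes a southbound flow-table push
        for sw in self.switches:
            sw.flow_table[dst] = out_port

ctrl = Controller()
sw = Switch()
ctrl.register(sw)
print(sw.forward("10.0.0.5"))     # drop (no rule installed yet)
ctrl.install_route("10.0.0.5", 2)
print(sw.forward("10.0.0.5"))     # 2
```

The point of the split is visible even in the toy: forwarding logic lives in one place (the controller) and can be changed network-wide without touching each device.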
Now, SD-WAN, a broader form of this same technology, breaks out in the same way, with a northbound interface, an infrastructure-facing southbound interface, and a management plane in between. Decoupling the data (forwarding) plane from the control plane allows you to centralize the intelligence of the network and enables greater network automation, operational simplification, centralized provisioning, and easier troubleshooting.
As with other software-defined constructs, the SD-WAN architecture decouples the orchestration, management, control, and data planes to provide greater flexibility and, it should be said, a greater level of abstraction, so that we worry more about managing the functionality than about the actual physical layer. The deterministic logic in SD-WAN devices takes a learn-and-continuously-optimize approach to keep improving how traffic on the WAN is handled. So, as you see in this picture, the basic SD-WAN operation between the branch site and the enterprise data center allows the enterprise data center to use all of the services in between based on which path is the most optimal: cloud services, private MPLS, the public internet, and a wireless WAN. Depending on what the SD-WAN logic determines to be the most efficient route, traffic passes between the enterprise data center and the branch site over a combination of these, through dynamic multi-path optimization logic built into the SD-WAN equipment.
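A toy version of that path selection might look like the following Python sketch; the transport names, metrics, and scoring weights are invented purely to illustrate choosing the most efficient route among several available transports:

```python
def best_path(paths):
    """Pick the transport with the lowest effective cost.
    paths: {name: {'latency_ms': ..., 'loss_pct': ...}}"""
    def score(metrics):
        # Invented weighting: penalize packet loss much more than latency
        return metrics["latency_ms"] + metrics["loss_pct"] * 100
    return min(paths, key=lambda name: score(paths[name]))

# Hypothetical live measurements across the available transports
transports = {
    "mpls":         {"latency_ms": 30, "loss_pct": 0.0},
    "internet":     {"latency_ms": 20, "loss_pct": 0.5},
    "wireless_wan": {"latency_ms": 60, "loss_pct": 1.0},
}
print(best_path(transports))  # mpls
```

A real SD-WAN device re-evaluates metrics like these continuously, which is what lets it steer traffic dynamically as conditions on each transport change.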
Now, taking this one step further, we have software-defined storage. In software-defined storage, similar deterministic logic is used so that data availability is assured through software resilience rather than by buying the typical four-drive RAID appliance, as we have in the past. The software layer decides, based on administrator-defined policies, where it is optimal to place the various pieces of the data mass, so that we get much greater software resilience managing data availability.
Through software-defined storage services, we are able to virtualize our storage capacity and gain dynamic tiering, caching, replication, improved quality of service, virtual drive cloning, data compression, policy-driven deduplication, and, from time to time as necessary, snapshots. In essence, with software-defined storage we are relieving ourselves of the necessity of managing the physical resource and working instead with its logical equivalent, so that data placement, data access, and the pathways between user and data source are optimized to stay within latency and performance parameters while providing greater protection for the data through these various services.
SDS systems are thus involved in learning, as all deterministic-logic software is once in operation. Intelligent data placement puts the data where it needs to be for the user, based on usage and traffic patterns. The controllers are the primary source of this control and where the learning takes place, and this builds a software RAID: instead of having your data replicated across four hardware units, it might be replicated across 25 or 100 nodes, or even multiple data center locations, providing software resilience for our data equivalent to or better than what the RAID 5 array we bought from a vendor might have provided in the past.
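A hedged sketch of that policy-driven placement (the node names and hashing scheme are invented for illustration): each data chunk is replicated across k nodes chosen deterministically, so that losing any single node still leaves replicas elsewhere:

```python
import hashlib

def place(chunk_id: str, nodes: list, replicas: int) -> list:
    """Pick `replicas` distinct nodes for a chunk by hashing its ID.
    Deterministic, so any node can recompute where a chunk lives."""
    digest = int(hashlib.sha256(chunk_id.encode()).hexdigest(), 16)
    start = digest % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

# Hypothetical nodes spread across three data centers
nodes = ["dc1-n1", "dc1-n2", "dc2-n1", "dc2-n2", "dc3-n1"]
placement = place("chunk-0042", nodes, replicas=3)
print(placement)
assert len(set(placement)) == 3   # three distinct replica locations
```

Production systems layer policies on top of a scheme like this, for example requiring replicas to land in different racks or sites, which is what turns replication into genuine software resilience.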
All right, that brings us to the end of our second section. We're going to stop here and we're going to continue our discussion with section three of domain four. Thank you for joining us for this one. We look forward to seeing you next time.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP certification in 1997, Mr. Leo joined ISC2 as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. A professional educator, he has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.