CISSP: Domain 4, Module 2
The course is part of this learning path
This course is the 2nd of 3 modules in Domain 4 of the CISSP, covering communication and network security.
The objectives of this course are to provide you with an understanding of:
- How to secure network components
- Instant messaging
- Virtual Private Networks (VPNs)
- In-transit encryption
- Remote Access
- Network casting
- Network topologies
- Virtual LANs (VLANs)
- SDN/SDS architecture
This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.
Any experience relating to information security would be advantageous, but not essential. All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Welcome back. This is the Cloud Academy presentation of the CISSP review seminar. We're discussing Domain Four: Network Security, and we're moving into section two. Our topic here is going to be the subject of securing network components.
Now, in covering the devices that are in the network or have access to it, we're going to start at a fairly low level. A modem is considered a layer 1 device in that it has no intelligence and simply converts signals between analog and digital. It is, of course, one of the earliest methods for getting remote access to a mainframe computer: the modem converts the analog dial-in to a digital signal and transmits the circuit-switched call to the mainframe. Other layer 1 devices used for various purposes include concentrators, multiplexers, hubs, and repeaters. These devices oftentimes manipulate power levels to keep the power at the nominal level stated by the manufacturers in order to preserve data content.
Other devices, like hubs, act as distribution points, but because they lack any distribution intelligence or scheme, they are considered layer 1 devices. Multiplexers, based on statistical or time-division models, are also considered layer 1 devices because they, too, function on the basis of an algorithm and not any actual program intelligence.
Going to layer 2, we talk about bridges. The basic bridge filters out frames that are not destined for another segment; in other words, it lets through only the traffic designated for the segment behind the bridge. Bridges are often used to connect similar or dissimilar network architectures (in the dissimilar case, performing a translational function). A bridge can connect LANs with unlike media types and can filter traffic between segments based on MAC addresses.
Switches, also layer 2 devices, are more sophisticated and thus more intelligent than bridges. The switch is the core device we use to build a LAN. It establishes a collision domain per port and provides a more efficient method of transmission using CSMA/CD logic within Ethernet (CSMA: carrier sense multiple access; CD: collision detection). Security features can include port blocking, port authentication, MAC filtering and, of course, the ability to build virtual local area networks. Routers are layer 3 devices, and before we move on from this point, if you should encounter a question on the exam relating to a layer 3 switch, don't be misled; switches are classically placed at layer 2, while routers are layer 3 devices.
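The per-port behavior described above can be sketched in code. This is a minimal, hypothetical model of how a switch learns source MAC addresses and forwards frames only out the learned port (flooding when the destination is unknown); the class and method names are invented for illustration and do not reflect any vendor's implementation.

```python
class Switch:
    """Toy model of layer 2 MAC learning and forwarding."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive_frame(self, src_mac, dst_mac, in_port):
        # Learn: associate the source MAC with the ingress port.
        self.mac_table[src_mac] = in_port
        # Forward: if the destination is known, send out that port only
        # (each port is its own collision domain); otherwise flood.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = Switch(num_ports=4)
flooded = sw.receive_frame("aa:aa", "bb:bb", in_port=0)  # bb:bb unknown: flood
out = sw.receive_frame("bb:bb", "aa:aa", in_port=2)      # aa:aa learned on port 0
print(flooded)  # [1, 2, 3]
print(out)      # [0]
```

MAC filtering, mentioned above as a security feature, would simply add a check of `src_mac` against an allow-list before the learning step.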
A router routes packets from other networks into the network behind it. It is most commonly used to connect LANs to WANs. It reads the destination IP address in each received packet and then, using routing tables within the router's logic, determines the next device to send the packet to. If the destination address is not on a network directly connected to the router, it sends the packet to another router: the one likeliest to have the destination address somewhere in its table.
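The routing-table lookup just described can be sketched as a longest-prefix match. The routes and next-hop addresses below are invented for the example (using documentation address ranges), and this is an illustration of the general technique, not any router's actual forwarding code.

```python
import ipaddress

# Hypothetical routing table: (destination network, next hop).
routing_table = [
    (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.1"),   # directly connected LAN
    (ipaddress.ip_network("10.0.0.0/8"), "192.0.2.1"),   # via another internal router
    (ipaddress.ip_network("0.0.0.0/0"), "192.0.2.254"),  # default route to the WAN
]

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    # Of all routes whose network contains the destination,
    # choose the most specific one (longest prefix wins).
    matches = [(net, hop) for net, hop in routing_table if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))     # 10.1.0.1  (matches the /16, most specific)
print(next_hop("10.9.9.9"))     # 192.0.2.1 (falls back to the /8)
print(next_hop("203.0.113.5"))  # 192.0.2.254 (default route)
```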
Firewalls are very commonly used devices that enforce administrative security policies at whatever layer or level of sophistication they might employ. A firewall is basically a packet filter acting on incoming traffic based on a set of rules. Each rule instructs the firewall to block or forward a packet based on one or more of the conditions contained in the rules. Packets can be filtered based on address or service or, in the more sophisticated next-generation types, based on connection state.
The first generation is the static packet filter, usually based on some very simple filtration logic. Typically built on a router, it takes a look at each packet coming through without regard to the packet's context in a session. The packets are examined against static criteria (port, protocol, and source and destination IP address) and, based on very simple rules, a large amount of traffic is filtered out or let through according to those parameters.
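A first-generation filter of this kind can be sketched as an ordered rule list with an implicit deny. The rules, addresses, and function name below are invented purely for illustration; real rule bases are far larger, but the matching logic is the same first-match-wins idea.

```python
import ipaddress

# Hypothetical rule base: (action, protocol, source prefix, destination port).
RULES = [
    ("allow", "tcp", "0.0.0.0/0", 443),  # permit HTTPS from anywhere
    ("deny",  "tcp", "0.0.0.0/0", 23),   # block telnet
    ("allow", "udp", "10.0.0.0/8", 53),  # internal hosts may query DNS
]
DEFAULT_ACTION = "deny"  # implicit deny-all at the end of the rule base

def filter_packet(protocol, src_ip, dst_port):
    # Each packet is judged in isolation: no session context is kept.
    for action, proto, src_net, port in RULES:
        if (protocol == proto and dst_port == port
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_net)):
            return action  # first matching rule wins
    return DEFAULT_ACTION

print(filter_packet("tcp", "198.51.100.7", 443))  # allow
print(filter_packet("tcp", "198.51.100.7", 23))   # deny
print(filter_packet("udp", "10.2.3.4", 53))       # allow
```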
A proxy firewall is considered second generation because its rules are much more sophisticated. It mediates connections between trusted and untrusted endpoints. A proxy firewall usually sits on a dual-homed host, breaking the logical connection and then filtering traffic based on the rule set. It may forward traffic from an internal client to an untrusted external host. As a by-product, it hides the trusted internal client from potential attackers, as well as a large portion of the overall network architecture and nomenclature.
Now, the most advanced form of proxy firewall currently is the kernel proxy. It is part of the kernel, a kernel-critical process running at all times the system is online, but it still functions in a proxy type of service format. The two basic types of proxies are the circuit-level proxy, which creates a conduit through which a trusted host can communicate with an untrusted host and which can encompass a number of different protocols, and the application-level proxy, which relays traffic from a trusted endpoint running a specific application to an untrusted endpoint. The third-generation firewall is the stateful inspection firewall. Stateful inspection examines each packet in the context of its session. The firewall keeps track of the state of each network connection, such as TCP streams or UDP communications, and holds significant attributes for each connection in memory, so that by monitoring and defining the state based on these attributes, it is able to judge the connection and its flow.
The rules are written in such a way that they allow for dynamic adjustments, adapting to changes in the connection state. This approach can also be referred to as dynamic packet filtering.
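The state-table idea behind stateful inspection can be sketched in a few lines. This is a deliberately simplified, hypothetical model (the function names and addresses are invented): outbound connections are recorded, and inbound packets are allowed only when they match a tracked connection, which is exactly what lets the rules adjust dynamically.

```python
# Connection state table: (src_ip, src_port, dst_ip, dst_port) tuples.
state_table = set()

def outbound(src_ip, src_port, dst_ip, dst_port):
    # Record the connection so the reply can be matched later.
    state_table.add((src_ip, src_port, dst_ip, dst_port))
    return "allow"

def inbound(src_ip, src_port, dst_ip, dst_port):
    # Reply traffic reverses source and destination; allow it only
    # if it belongs to a connection we have already seen go out.
    if (dst_ip, dst_port, src_ip, src_port) in state_table:
        return "allow"
    return "deny"  # unsolicited inbound traffic is dropped

outbound("10.0.0.5", 49152, "93.184.216.34", 443)
print(inbound("93.184.216.34", 443, "10.0.0.5", 49152))  # allow (reply)
print(inbound("203.0.113.9", 443, "10.0.0.5", 49152))    # deny (unsolicited)
```

A real implementation would also track TCP flags and time out idle entries; those refinements are omitted here to keep the state-matching idea visible.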
We're going to talk about bound media for a moment and the effect it has on securing our components. When we talk about bound media, what we're concerned with is more or less the throughput of the physical signal that travels through the medium, be it metal or fiber optic (wireless being the unbound counterpart). We have to concern ourselves with the distance between devices: as the signal moves through the materials of the bound media, energy is lost as it passes up the line.
We have to concern ourselves with data sensitivity and various kinds of protections, some of which will be built into the bound media and some of which will be in the environment through which the medium passes. And then there is the environment itself. So here we have the very common cable types. First, the coaxial type: it has a solid center conductor made of copper, an insulating dielectric around it, surrounding that some form of shield, which can be braided or a single sheath, and then the jacketing, usually meant to provide environmental protection.
Then we have twisted pair, with the number of twists per inch and the overall amount of metal as the key factors in which category it belongs to. Surrounding this will be a shield, surrounding that an insulating jacket, and then again an external sheath.
Here we have a table of the usual standards that the manufacturers of these cable types publish so that, in keeping within these, we can expect top-level performance, all other factors being equal. The distance supported in this chart means the unamplified distance between amplification sources, such as repeaters. For thick coax, it is 500 meters; for thin coax, 185 meters. For the categories of twisted pair, the manufacturers typically set the standard distance at 100 meters, whether it's category 3, 5, 6, or 7. But it's certain that the higher you go, the greater the carrying capability of the cable, due to the greater amount of carrying medium, such as copper, within each one.
Now, preferable in the proper circumstances to copper-based cable is, of course, fiber optic. We have different modes: multi-mode and single-mode. Fiber offers very large bandwidth, many times the capacity of the equivalent copper. It typically goes long distances between amplification sources, and it is practically immune to electromagnetic interference. In compensation, however, it is much more expensive to buy than other cable types, considerably more expensive to install, and it uses expensive electronics, both for connections and splicing and for network cards and hubs.
Another thing to bear in mind is that it is actually glass inside the orange jacketing, so dropping it on the floor or stepping on it can fracture the fibers inside. We have to take steps to prevent this, because a damaged fiber degrades the signal, which will of course contaminate, garble, corrupt or even destroy the data passing through the cable.
The thing with metal cabling is that the medium itself has some issues. One of them is attenuation: the signal diminishes in strength as it passes up the line, going further and further without amplification to keep it at its proper strength. Then we also have crosstalk: unshielded cables carrying a great deal of electrical power along with the data will bleed over into each other, one confusing the signal inside the other. And we have to take steps to prevent this, because it creates dirty data, a dirty signal, which will of course contaminate, garble, corrupt or even destroy the data passing through the cables.
One of the things we've always been concerned about in IPv4 is the limitation of approximately 4.3 billion addresses. With the addition of network address translation (NAT), this became less of a concern. Starting on the inside of the network, we use an addressing scheme that is not routable across the Internet, such as 10.x. We route those addresses internally, and then pass them through a NATing device, typically a firewall at the perimeter. The device records the internal address, the protocol, and the service being used in a table, and maps them to the external routable address on the firewall that joins the enterprise to the Internet.
These internal addresses, thus, are mapped in the table and are able to pass their traffic out through this device so that it reaches the external IP at the destination. Return traffic coming back typically goes through the reverse process: the routable address and protocol are translated back through the map, seeking the proper destination inside the intranet within the destination enterprise, so the traffic can be delivered back where it belongs. And by doing this, we're relieved of the responsibility of having to obtain multiple registered addresses for use inside our network, aligned with the local intranet scheme.
Now we're able to do it using the single mappable address on the border device, the firewall or border router, to do the translation from internal to external. We can also do port address translation (PAT), an extension of the address translation just described, which permits multiple devices on a local area network to be mapped to a single public IP. The goal of PAT is thus to conserve IP addresses. When a computer connects to the Internet, the router assigns that client a port number, which is appended to the internal IP address; in effect, this gives the computer a unique address. If another computer connects to the Internet at the same time, the router maps it to the same public IP address but with a different port number.
Although both computers are sharing the same public IP address and accessing the Internet at the same time, because of this mapping, the router knows exactly which computer is sending which packets to which place and it is, therefore, able to keep them straight, having given each one what is, ultimately, a unique internal address.
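The PAT mapping described above can be sketched as a simple two-way table. The public IP, internal addresses, and port range below are invented for the example (drawn from documentation ranges); a real NAT device would also track protocol and expire idle entries.

```python
import itertools

PUBLIC_IP = "203.0.113.10"          # the single routable address on the border device
_next_port = itertools.count(40000)  # hypothetical pool of public ports
nat_table = {}                       # (internal_ip, internal_port) -> public port

def translate_outbound(internal_ip, internal_port):
    # Assign each internal (address, port) pair its own public port,
    # so many hosts can share one public IP.
    key = (internal_ip, internal_port)
    if key not in nat_table:
        nat_table[key] = next(_next_port)
    return (PUBLIC_IP, nat_table[key])

def translate_inbound(public_port):
    # Reverse lookup: find which internal host owns this public port.
    for (ip, port), pub in nat_table.items():
        if pub == public_port:
            return (ip, port)
    return None  # no mapping: the packet is dropped

a = translate_outbound("10.0.0.5", 51000)
b = translate_outbound("10.0.0.6", 51000)  # same internal port, different host
print(a)                          # ('203.0.113.10', 40000)
print(b)                          # ('203.0.113.10', 40001)
print(translate_inbound(40001))   # ('10.0.0.6', 51000)
```

Note how the two hosts even use the same internal source port, yet the router keeps them straight because each mapping gets its own public port.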
Now, some guidance on what we need to do for endpoint security. Our workstations should, of course, always have antivirus and other forms of anti-malware software installed, and it should go without question that these must always be kept turned on and up to date. We should have a configured and operational host-based firewall on the workstation, which serves as an augmentation of the enterprise policy for all firewall operations. Ideally, workstations should be fully hardened to present the smallest possible attack surface, and of course, all workstations, all devices, should be patched and kept current.
Our mobile devices are somewhat more problematic, though, being small, battery-powered and wireless in nature. There's less of a device, so to speak, to run this kind of protective application, because it eats up battery power and shortens the life between charges. We should have some form of encryption for the whole device.
We should have remote management capabilities, as might be provided through a mobile device management (MDM) system. And we should have policies and user agreements to ensure that, for those things for which we don't have proactive controls available, we have the ability to hold users and their behavior accountable to the policy, and ways to monitor compliance with it.
Our content distribution networks, of course, are made up of large farms of servers deployed in multiple data centers across the Internet, which means they can literally ring the globe. The point is to serve their content with high availability and high performance of delivery. These must also have the same sort of endpoint protection services, relative to the kinds of services these are. Here, high-speed, highly reliable delivery, and thus availability, are key elements, and so high-performance devices and technologies need to be employed to protect them from disruption.
We're now moving into section three in which we're going to discuss design and the establishment of secure communications channels. One of our oldest technologies is, of course, voice circuitry. The plain old telephone system, as we commonly refer to it, is based on a circuit-switched network that was originally designed for analog traffic. But in today's world, we have other kinds of switches, based on logic, that switch and convert all the traffic to digital.
In many locations, such as office parks or high-rise buildings, we have private branch exchanges (PBXs) that operate within the building to divide the incoming calls out to the various subscribers in the location. A PBX is an internal phone system that acts as a switch, as well as a device that handles the complex incoming traffic (voice, data, and video), and it's attached to a telecommunications trunk. Typically, you don't find this anywhere except in a business in an office park or a high-rise, as I say.
We have P2P, or peer-to-peer, applications. These typically open an uncontrolled channel through network boundaries, normally by some form of tunnel. The problem with peer-to-peer is that these tools are typically open source or freeware, and the channels are unregulated, oftentimes even unfiltered. These tools have been known to act as agents to spread botnet software, spyware and viruses.
LimeWire was notorious, as a peer-to-peer application, for doing this, until eventually it got shut down permanently. Ares and others were the same sort of thing and they fell under the same dark cloud as a way for hackers and hostile parties to spread their botnet agents, spyware applications and other sorts of malware.
We have, of course, our remote meeting technologies: freeconference.com, WebEx, Zoom, RingCentral and a host of others. These are typically web-based and require either that you install an extension in your browser or possibly an add-on agent in your email client, so that you're able to run the software on that host system. These technologies allow voice, video, data transfer and text messaging, and as a desktop sharing feature, they allow whoever is in charge as the presenter at that time to share their desktop with all the other attendees.
Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.
Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in the professional role of Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, having trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.