CISSP: Domain 4 - Communication and Network Security - Module 1

Wireless networks, network scaling, security issues and network segmentation

The course is part of this learning path

Preparation for the (ISC)² CISSP Certification (Preview)
Overview

Difficulty: Advanced
Duration: 43m
Students: 34

Description

This course is the first module of Domain 4 of the CISSP, covering communication and network security.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • How to apply secure design principles and network architecture
  • IP Version 6
  • Network ports and protocols
  • Network design patterns
  • Network scaling
  • Network segmentation

Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

Prerequisites

Any experience relating to information security would be advantageous, but not essential.  All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript


Now, our wireless metropolitan area network will connect several different LANs through connections like WiMAX. This is governed by a long-range standard, 802.16, sometimes referred to as metropolitan Wi-Fi, and then, of course, we have the wireless WAN, typically used to cover very large areas, connecting branch offices through radio frequency broadcast, terrestrial cables, and other sorts of methods. 

Running in parallel with this, but not necessarily part of the same network structure, of course, is our cellular network. Now, this is a radio network that is distributing signal over land in what are called cells. Each one is delivered by a fixed-location transceiver sitting in the center of the given cell, and this cell site or base station then, through radio frequency, overlaps with its neighbors so that, like with Wi-Fi, it will store and forward the traffic that it receives through the network from one cell to the next, to the next, to the next, and so on until it arrives at the destination.

So, let's kind of sum this up a little bit. Here you can see how the network scales. We have the PAN - a personal area network - which would typically be the smallest, a radius of around 10 meters. Next up from there would be a LAN - a local area network. Now, as terms go, LAN is probably the most fluid of all; it's basically what we use to describe any networking that we're dealing with. But a LAN has its limits. It's typically where we put a group of workstations together. There can be multiple LANs within a single room, say in a classroom or in an office space. Then, if we look at something like a university or a NASA site or a military base, we have a CAN - a campus area network.

Now, the campus area network tends to be logically and physically defined by the physical boundary of the campus. At that point, that's typically where the campus will join the local metropolitan area network that serves it, and a metropolitan area network generally defines exactly what the name suggests, a metropolitan area like Washington, D.C., like Chicago, San Francisco, Seattle, Houston, and you can have multiple MANs in each metropolitan area given its size. 

Now, as we join the MANs together (and they're called MANs, not men), we have a WAN - a wide area network - and this can span continents. It can span oceans, and when we join all the wide area networks together, we have what I call a PLAN - a planetary area network - which is, in fact, what the Internet itself actually is. It goes by several names: Internet, World Wide Web, PLAN, or the cloud, which is just another name for the internet.
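The scaling hierarchy just described can be sketched as a simple lookup. This is purely illustrative - the radii below are order-of-magnitude ballparks for the tiers named in the lecture, not standardized figures:

```python
# Illustrative only: the rough scale hierarchy from a personal area
# network out to the Internet ("PLAN"). Radii are ballpark figures.
NETWORK_SCALES = [
    ("PAN", "personal area network", 10),          # ~10 m radius
    ("LAN", "local area network", 500),            # a room or office
    ("CAN", "campus area network", 5_000),         # a campus boundary
    ("MAN", "metropolitan area network", 50_000),  # a metro area
    ("WAN", "wide area network", 10_000_000),      # continents, oceans
]

def classify_by_radius(meters):
    """Return the smallest tier whose rough radius covers the distance."""
    for abbrev, name, radius in NETWORK_SCALES:
        if meters <= radius:
            return abbrev
    return "PLAN"  # joined WANs: the Internet itself

print(classify_by_radius(8))       # → PAN
print(classify_by_radius(30_000))  # → MAN
```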

Now, across all of these different network configurations, we're going to have a lot of very similar kinds of issues. We have Open System Authentication, which is not really authentication at all because the connecting spot that you're going to join first doesn't ask you to authenticate in any way. At most, it verifies that the connecting hardware device is compatible with it.

We have Shared Key Authentication, which does provide authentication and an encrypted connection. We have the two basic connection modes used in Wi-Fi: ad hoc, where it's one-to-one, or infrastructure mode, where we connect to a wireless access point. We have the obsolete protocol, Wired Equivalent Privacy, which was our first generation, basically built as a stopgap to provide some level of protection and encryption to the traffic, but it suffered greatly from the very poor quality of its implementation, and so WEP served as a proving ground for a lot of attack methods for hackers. Unfortunately, they were very successful, and so Wi-Fi Protected Access - WPA - had to be invented to fix most of those flaws, and the next step was WPA2, which fixed the weaknesses that WPA did not.
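The progression just described - open, then WEP, then WPA, then WPA2 - can be expressed as a simple ranking. This is an illustrative sketch only; the names and ordering reflect the lecture's point that WEP and WPA are obsolete and WPA2 should be the deployed baseline:

```python
# Illustrative ranking of the Wi-Fi security generations discussed above.
WIFI_SECURITY_RANK = {
    "open": 0,   # Open System Authentication: no real authentication
    "WEP": 1,    # first-generation stopgap, thoroughly broken
    "WPA": 2,    # interim fix for WEP's flaws
    "WPA2": 3,   # the baseline that should actually be in use
}

def is_acceptable(mode, minimum="WPA2"):
    """True if a network's security mode meets the minimum generation."""
    return WIFI_SECURITY_RANK.get(mode, 0) >= WIFI_SECURITY_RANK[minimum]
```

A policy check like `is_acceptable("WEP")` would then flag any access point still running an obsolete protocol.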

So WEP and WPA both are obsolete, and both should've been retired, and WPA2 should be the one that is being used, but there's more that we must do because we still have other attacks that are possible. We have the TKIP - Temporal Key Integrity Protocol - attack. We have the parking lot attack, which is pretty much exactly what it sounds like, and then the Shared Key Authentication flaw, which enables an attacker to spoof the key exchange between the sharing parties.

One of the controls that we attempt to use is digital certificates. Now, the way the system works is that we have a client SSL certificate and a server SSL certificate, and the exchange of these two certificates authenticates each to the other - the genuine nature of the server to the client and then the client back to the server - and they set up the secure connection. We have S/MIME, of course. We have object signing, which provides greater assurance that the objects that we're going to be using, dealing with, interacting with, are from genuine, authentic sources, and the validity of all of these digital certificate technologies is certified and attested to by a certificate authority such as Verisign, Microsoft, eTrust, Entrust, and a host of others.
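The mutual certificate exchange described above can be sketched with Python's standard-library ssl module. This is a minimal illustration, not a complete deployment; the certificate file paths in the comments are placeholders:

```python
import ssl

# Server side: present our own certificate and demand one from the
# client, so that authentication runs in both directions.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.verify_mode = ssl.CERT_REQUIRED   # require a client certificate
# server_ctx.load_cert_chain("server.crt", "server.key")  # our identity
# server_ctx.load_verify_locations("ca.crt")              # trusted CA

# Client side: verify the server's certificate against the CA and
# present our own certificate back to the server.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# client_ctx.load_cert_chain("client.crt", "client.key")
# client_ctx.load_verify_locations("ca.crt")
```

With both contexts loaded with real key material, wrapping a socket on each side completes the two-way authentication the lecture describes.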

Now, the certificates themselves are generated, and they're stored in a directory form. Now, the Lightweight Directory Access Protocol - LDAP, a four-layer version of the X.500 Directory Service that was developed for the OSI model - runs over TCP/IP and provides great flexibility for the management and access of the digital certificates. System administrators can, of course, store a lot more information in there if they wish to, but the more information you store in the directory service, the more upkeep you have to do. So a balance always needs to be found for exactly how you want to use your directory service, but this at a minimum is what should be used with LDAP.

One approach that we can take is deterministic routing, where traffic only travels to predetermined locations by predetermined routes. These routes are either known to be secure or, at the very least, less susceptible to compromise. Now, the fact that these are predetermined routes also means that they're fixed or static, and so that characteristic alone might set them up to be assaulted more aggressively once it is learned.

We have to have boundary routers. The service they provide is advertising routes that external hosts can use to reach internal ones. They can be used to prevent inbound and outbound IP spoofing attacks, and, as the name suggests, they're placed at the boundary of our networks, and they require very substantial hardening and regular review and testing to make sure that hardening is either improved or, at the very least, maintained.
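The anti-spoofing role of a boundary router can be sketched as two simple checks: inbound packets must not claim an internal source address, and outbound packets must carry one. The internal prefix below is a placeholder for an organization's actual address space:

```python
import ipaddress

# Placeholder internal address space for the organization.
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")

def allow_inbound(src_ip):
    """Drop inbound packets claiming an internal source (inbound spoofing)."""
    return ipaddress.ip_address(src_ip) not in INTERNAL_NET

def allow_outbound(src_ip):
    """Drop outbound packets without an internal source (outbound spoofing)."""
    return ipaddress.ip_address(src_ip) in INTERNAL_NET
```

Real routers implement this as ingress/egress access-control lists, but the decision logic is exactly this membership test.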

A lot goes on at the security perimeter of any network, so at this particular point, we need to place a lot of our basic first-line defenses. In general, it will most likely include a firewall. It'll probably also include anti-malware, routers that can help filter traffic, and very likely, it will include an additional firewall type - the proxy - and IDS/IPS type systems as well to augment the protection that we get from the firewalls and the other filters.

One of the things that should not be overlooked as a way to protect our security methods, our data, and the flows of email and other data through the network is network partitioning. Networks have a history of growing organically, but growing organically, while it may sound very good, is really not a very good strategy. A network needs to grow in accordance with a good design model so that it continues to serve our corporate and government needs in ways that are very efficient. Growing organically doesn't necessarily do that. By following a good design - one which isn't rigid but instead adapts as the organization moves, flexes, grows, and shrinks - we can put in place a philosophy that allows us to control the traffic as it moves between the various partitions or network segments. In doing that, we can set up trusted pathways with varying levels of control as required, driven by design and organizational needs rather than uncontrolled, unmanaged growth. Because in the end, whatever philosophy we use, we have to be sure that our network, if assaulted, can stand up to the assault with resilience, robustness, and resistance, or, if it's simply an accident, that it can recover itself and continue to serve our needs.

One protective method is a dual-homed host. Typically home to a proxy style of firewall, the dual-homed host has two network interface cards, one on one side of a boundary layer and one on the other side, and between those two layers, the dual-homed host breaks the logical connection so that it can process the filtered traffic through its rules before it forwards it on, and that's going from inside to outside or the reverse, and it can be a very effective measure as part of our overall strategy to isolate a network or a network segment.
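The filtering a dual-homed proxy host performs before forwarding between its two interfaces can be sketched as a first-match rule base. The rule fields below are illustrative, not any particular product's syntax:

```python
# First-match rule base for a dual-homed proxy host. Traffic between
# the two interfaces is only forwarded if a rule allows it; the final
# rule is the default deny. Field values are illustrative placeholders.
RULES = [
    {"direction": "inbound",  "dst_port": 443,  "action": "allow"},
    {"direction": "outbound", "dst_port": 80,   "action": "allow"},
    {"direction": "any",      "dst_port": None, "action": "deny"},  # default deny
]

def decide(direction, dst_port):
    """Return the action of the first rule matching this traffic."""
    for rule in RULES:
        if rule["direction"] in (direction, "any") and \
           rule["dst_port"] in (dst_port, None):
            return rule["action"]
    return "deny"
```

The key property the lecture points out - that the host breaks the logical connection and only forwards what passes its rules - is captured by the default-deny final rule.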

There is, of course, the bastion host. Now, a bastion host, historically, is a very hardened kind of a device that is exposed to an untrusted or unknown network like the Internet, although that is not necessarily the only unknown or untrusted network that we're likely to come across. This kind of a fortified device can act as a protective measure, or it can act as the home to an application that will be served out through one of these methods. We harden it, of course, by disabling unnecessary services, patching the software, closing ports, and so forth so that we shrink it down to the smallest reasonable attack surface that we can that still makes it operable.
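The hardening step described above - disabling unnecessary services and closing ports to shrink the attack surface - can be sketched as an audit against an approved allowlist. The port numbers are examples; a real audit would enumerate listeners with a tool such as ss or netstat:

```python
# Example allowlist for a bastion host: only SSH for administration
# and HTTPS for the served application. Ports are illustrative.
APPROVED_PORTS = {22, 443}

def hardening_gaps(listening_ports):
    """Return open ports that are not approved - each widens the attack surface."""
    return sorted(set(listening_ports) - APPROVED_PORTS)

print(hardening_gaps([22, 25, 443, 3389]))  # → [25, 3389]
```

An empty result means the host is listening on nothing beyond what it needs to remain operable, which is the goal the lecture describes.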

As you see here, we have a firewall. We have different kinds of machines in the DMZ. Those machines, the authentication server, the server that is connected to the remote access network, the bastion or proxy server, these are all examples of bastion hosts that have been hardened so that they present the smallest attack surface to whatever hostile party might actually find their location and start assaulting them.

Okay, we've come to the end of our first module in Domain 4. We're gonna stop here. Next time, we're gonna continue on, and we're going to look at how we can secure network components as we begin our discussion of Domain 4, section two. Please join me for that. Thank you.

About the Author


Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years. He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant. His past employers include IBM, St. Luke's Episcopal Hospital, Computer Sciences Corporation, and Rockwell International. A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center. From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.

 

Upon attaining his CISSP license in 1997, Mr. Leo joined ISC2 in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004. During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide. He has maintained his standards as a professional educator, training and certifying nearly 8500 CISSP candidates since 1998, and nearly 2500 in HIPAA compliance certification since 2004. Mr. Leo is an ISC2 Certified Instructor.
