
IP Version 6, Ports, Protocols and network categories

The course is part of this learning path

Preparation for the (ISC)² CISSP Certification (Preview)
Overview
Difficulty: Advanced
Duration: 43m
Students: 8

Description

Course Description

This course is the first module of Domain 4 of the CISSP, covering communication and network security.

Learning Objectives

The objectives of this course are to provide you with an understanding of:

  • How to apply secure design principles and network architecture
  • IP Version 6
  • Network ports and protocols
  • Network design patterns
  • Network scaling
  • Network segmentation

Intended Audience

This course is designed for those looking to take the most in-demand information security professional certification currently available, the CISSP.

Prerequisites

Any experience relating to information security would be advantageous, but not essential.  All topics discussed are thoroughly explained and presented in a way allowing the information to be absorbed by everyone, regardless of experience within the security field.

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

IP version 6 brings a much larger address field, expanded from 32 bits to 128. It also brings improved quality of service, fuller IPSec integration, and just generally improved security. And there on the right-hand side of the screen, you see the 40-byte depth and the 32-bit breadth of how IPv6 was designed. IPv6 was designed for the express purpose, at least one of them, of integrating directly with IPSec, whereas getting IPv4 integrated with IPSec was something of a wrestling match. 
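As a rough sketch of that 40-byte fixed header (the values here are placeholders, not a real packet), Python's standard `struct` module can pack the fields and confirm the size:

```python
import struct

# IPv6 fixed header: version/traffic class/flow label (4 bytes),
# payload length (2), next header (1), hop limit (1),
# source address (16), destination address (16) = 40 bytes total.
version, traffic_class, flow_label = 6, 0, 0
first_word = (version << 28) | (traffic_class << 20) | flow_label

header = struct.pack(
    "!IHBB16s16s",
    first_word,
    0,           # payload length (empty payload here)
    59,          # next header: 59 means "no next header"
    64,          # hop limit
    bytes(16),   # source address (all zeros for illustration)
    bytes(16),   # destination address (all zeros for illustration)
)
print(len(header))   # 40
```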

Here, it's somewhat more straightforward. So let's look at what we have in the way of a comparison between IPv6 and IPv4. IPv4 was originally conceived in 1969 at DARPA, the Defense Advanced Research Projects Agency. The idea being the DOD recognized that a need existed for the military to have a protocol that would survive a nuclear holocaust, or so the legend goes. It was actually deployed in its final form, if you could call it that, in 1981. A 32-bit dotted decimal sort of notation - here we have 192.0.2.76 as representative of the very common example - with the prefix /24 and the number of addresses, two to the power of 32, being 4.2 billion. 

Now, IPv6 comes along, some 18 years later. Now we have a 128-bit address, and we have a much less readable hexadecimal notation, which you see fully expanded there: 2001, colon, etc, etc. Setting it up as a prefix, we have a much smaller one, but it's still nothing as convenient as the very old, very comfortable 192.0.2.76. But the number of addresses being two to the power of 128, this number - 340 followed by all of those digits - is a number that is so large it doesn't even have a name. But the benefit is, at least we don't have to worry about running out of IP addresses anymore - very much a concern in the days when IPv4 was the only game in town. 
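The two notations and address-space sizes compared above can be checked with Python's standard `ipaddress` module (the example addresses are documentation addresses, not anything from the course slides):

```python
import ipaddress

# IPv4: 32-bit, dotted-decimal notation
v4 = ipaddress.ip_address("192.0.2.76")
print(v4.version, 2 ** 32)   # 4 4294967296  (~4.2 billion addresses)

# IPv6: 128-bit, hexadecimal notation - compressed and fully expanded forms
v6 = ipaddress.ip_address("2001:db8::1")
print(v6.compressed)         # 2001:db8::1
print(v6.exploded)           # 2001:0db8:0000:0000:0000:0000:0000:0001
print(2 ** 128)              # 340282366920938463463374607431768211456
```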

Now, the ports as we know them are these. They break down into three common categories. The well-known ports, zero through 1,023, are the destination ports for traffic with their assigned protocols. There you see FTP assigned 20 and 21, SSH 22, and so on down the list. The registered ports, 1,024 through 49,151, are generally assigned to software vendors for their own specific uses, typically isolated to them. And then we have the ephemeral or dynamic ports, starting at 49,152 and running through the last one, 65,535. These are the source ports, opened when needed to send traffic to the well-known, or conceivably to the registered, ports. 
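The three IANA ranges just described can be captured in a few lines of Python; the small table of assignments is illustrative, not exhaustive:

```python
# A few well-known assignments mentioned in the lecture (IANA values).
WELL_KNOWN = {20: "FTP data", 21: "FTP control", 22: "SSH", 53: "DNS", 80: "HTTP", 443: "HTTPS"}

def port_category(port):
    """Classify a TCP/UDP port into the three IANA ranges."""
    if not 0 <= port <= 65535:
        raise ValueError("port must be 0-65535")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "ephemeral/dynamic"

print(port_category(22), "-", WELL_KNOWN[22])   # well-known - SSH
print(port_category(8080))                      # registered
print(port_category(49152))                     # ephemeral/dynamic
```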

The network can be broken down into three basic groupings. We have the Internet, that Planet-Wide Area Network or PLAN, as it's coming to be called. We have the intranet, that is your corporate network, typically what we consider behind your firewall or inside your company's logical perimeter. And then we have the extranet. I'm just pronouncing them to make sure that you get the reference. The extranet is oftentimes set up as a buffer zone. A DMZ is what it contains, and it sits between the Intranet and the Internet, acting as an isolation network so that additional services can be provided, while at the same time we provide greater protections for our intranet from the unknown and largely untrusted and untrustworthy Internet. 

Directory services are the means by which the things on the network, the things the network knows about and that we need to connect to and draw information from, get listed. As such, we impose proper organization through LDAP, NIS or NIS+, and NetBIOS. Then, on top of that, to ensure that everything stays as secure as we've designed and managed it to be, we put the DNS Security Extensions over it, so that the only time it changes is when we mean for it to, and in a controlled way. 
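For reference, these are the standard IANA-assigned ports for the directory services named above (a quick lookup table, not something from the course slides):

```python
# Standard assigned ports for common directory services (IANA values).
DIRECTORY_PORTS = {
    "LDAP": 389,
    "LDAPS (LDAP over TLS)": 636,
    "DNS": 53,
    "NetBIOS name service": 137,
    "NetBIOS datagram service": 138,
    "NetBIOS session service": 139,
}

for service, port in sorted(DIRECTORY_PORTS.items(), key=lambda kv: kv[1]):
    print(f"{port:>4}  {service}")
```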

A very, very commonly used protocol, HTTP. This, of course, is the protocol that we use through port 80 anytime we fire up a browser to go jump onto the Internet, the information superhighway, to find out things like the answer to CISSP questions, for example. Now, this is designed to transfer HTML-encoded web pages between a client and a server. Now, the traffic is typically sent in clear text, but of course, by employing SSL or TLS, we can encrypt the traffic. Our most commonly used options are SSL or, a much better way, TLS.
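The clear-text-versus-encrypted distinction shows up right in the URL scheme and its default port, which can be derived with Python's standard `urllib.parse` (the example URLs are placeholders):

```python
from urllib.parse import urlsplit

# http traffic defaults to port 80 in clear text;
# https wraps the same protocol in SSL/TLS on port 443.
DEFAULT_PORTS = {"http": 80, "https": 443}

def effective_port(url):
    """Return the explicit port if given, else the scheme's default."""
    parts = urlsplit(url)
    return parts.port if parts.port is not None else DEFAULT_PORTS[parts.scheme]

print(effective_port("http://example.com/"))        # 80
print(effective_port("https://example.com/"))       # 443
print(effective_port("https://example.com:8443/"))  # 8443
```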

Now, as we discussed in the previous module, we have to be concerned more and more with industrial control systems, one type of which is SCADA. As an example of this particular subspecies of network, we have a network that is powered by DNP3, the Distributed Network Protocol version 3, which is most commonly used in SCADA systems. Now, this is the primary protocol that connects the devices to their controllers, but sadly, it doesn't have any security features built into it. 

Now, the network perimeter vulnerabilities are the things that we concern ourselves with to a very great extent, because this is where hackers find a point of purchase that they can then start using to penetrate into the internals of our network. Protocol vulnerabilities do exist throughout the stack, and of course, we have to worry about not just them, but insecurities within the data itself. Because of the way these protocols perform setup, management, and flow control, session hijacking and man-in-the-middle attacks are possible. 

We have to worry about operating system and server weaknesses in all the devices that the network connects, because once connected, they can be reached by anybody who is able to get onto that particular avenue in the network. And then, of course, we have things that we do to ourselves. And we all know that device and vendor backdoors oftentimes populate the software that we acquire, and these have to be hunted down and they have to be removed or closed or deactivated in some way, because it's one of the very first things that hackers will look for when they get into our network system.

Now, it is a very common strategy to bring all of the IP-based devices together into a convergence model. Instead of having a series of proprietary or non-interoperable protocols, like we have with circuit switching for telephones, IP traffic for some, IPX/SPX traffic for others, and so on, we bring things together and put an IP-type layer on top of them to make them all appear, at least to the network management software, to be the same. What this offers is the ability to bring things together for multimedia support, something such as Skype or WebEx or RingCentral, or a host of other applications. The converged IP network puts everything on a very, very similar, if not exactly identical, physical layer, and handles all of the differences with software. It provides a much greater, much fuller integration of this componentry, and eventually it gets to the point where things are homologated on the software layer, and the hardware layer is all but removed from the equation. 

It therefore gives us a much more uniform hardware environment, requiring fewer differentiated components. So, if everything is running in software, and it's all running on things like computers or identical server types, it simplifies the hardware layer so that the software takes care of all of the busy work of making everything look and get handled the same way. This can streamline operations to a very large extent, and that translates into saving money. One example would be Fibre Channel over Ethernet, or FCoE. Now, this lightweight encapsulation protocol is one that lacks the reliable data transport that TCP provides. It has to operate on a DCB-enabled Ethernet and use lossless traffic classes. 

One of the advantages of using FCoE is that it mimics the lightweight nature of native Fibre Channel protocols and media. However, because we are layering it inside an Ethernet type of encapsulation, the exposures that come with Ethernet are going to be present here: the traffic can be sniffed by others on the network. It also travels only a short distance; it's used only within data centers to move large masses of data with high speed and high capacity. 

If we have to move large amounts of data a much greater distance, we use iSCSI instead. This, again, is an IP-based protocol that handles storage networking by linking data storage facilities. It facilitates data transfers over intranets and manages storage over long distances, unlike FCoE, which stays within a single storage location. iSCSI therefore enables location-independent data storage and retrieval. 

Now, moving into the WAN itself, we have some predominant WAN protocols, such as MPLS, Multi-Protocol Label Switching. Instead of doing complex table lookups, as happens with frame relay, MPLS first establishes a label by reading the underlying protocols, then slaps that label on all the packets, so that each subsequent switch needs only to read the label. This reduces the complexity of trying to determine packet content, protocol type, and what routing to give it. It speeds up the transfer of packets through the switches, which means greater throughput, hopefully lower packet loss, and higher capacity of transfer.
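The classify-once, switch-on-the-label idea can be sketched in a few lines of Python; the FEC table, labels, and router names here are entirely made up for illustration and are not real MPLS machinery:

```python
# Toy illustration of label switching: the ingress router classifies a
# packet once and attaches a label; every later hop forwards on the label
# alone, with a single table lookup, never re-reading the deeper headers.

def ingress_classify(dest_ip):
    """Expensive classification happens once, at the network edge."""
    fec_table = {"198.51.100.0/24": 17, "203.0.113.0/24": 42}  # hypothetical FECs
    prefix = ".".join(dest_ip.split(".")[:3]) + ".0/24"
    return fec_table[prefix]

def core_switch(label, label_table):
    """Each core hop is just a dictionary lookup on the label."""
    return label_table[label]

label = ingress_classify("198.51.100.7")
next_hop = core_switch(label, {17: "router-B", 42: "router-C"})
print(label, next_hop)   # 17 router-B
```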

Now, the protocols that are coming on now have to do with the software-defined WAN. What this does is apply machine learning and deterministic logic to optimize overall WAN performance. Whether it's SD-LAN, SD-WAN, or SDS, Software-Defined Storage, these machine-learning-amplified protocols show much promise. 

We also have our Content Distribution Networks. Examples of this would be Voice over IP. We have WebEx, e-learning sites, AWS CloudFront, the Azure version of this, Rackspace CloudFiles, Netflix, Facebook, and a myriad of gaming sites. Now, the essential basis of a Content Distribution Network is that it is something, like a game, augmented reality, converged communications like WebEx, or movie sites like Netflix or Amazon Web Services, Amazon Instant Video. And what they seek to do is, through lightweight protocols, distribute the various content that they offer. And these are becoming much more the standard than the unique and even luxurious things that they were as little as 10 years ago.

We have our Wi-Fi, our Wireless LAN. This is typically based on the family of IEEE 802.11 specifications. And what they do is, of course, they create a wireless version of what we are accustomed to in a wired local area network. Because they're on radio frequency carriers, we can't see them, obviously, and so what we need to do is ensure that what we have in the wire is what we have in the air, and that we have comparable security in both cases. 

The benefit, of course, of doing it wirelessly is we don't have to run wire anymore, or at least not in particular locations. It gives us a link through the air that will connect two or more stations through a distribution medium that doesn't require any wire. The Wireless Mesh Network looks something like a cellular network, with overlapping nodes and store-and-forward technology. These radio nodes work in a mesh: as you move between nodes, the node you're currently centered on forwards your messages, and each node between you and the point of sending passes them along, just as traffic goes through a switch in a wired network.
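The multi-route, self-healing behavior of a mesh can be sketched with a breadth-first search over a toy topology; the node names and links are invented for illustration:

```python
from collections import deque

def find_route(mesh, src, dst):
    """Breadth-first search for a hop-by-hop path through the mesh."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in sorted(mesh[path[-1]] - seen):  # sorted for determinism
            seen.add(neighbor)
            queue.append(path + [neighbor])
    return None   # no route survives

mesh = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
print(find_route(mesh, "A", "D"))   # ['A', 'B', 'D']

# Node B goes down: remove it, and the mesh "self-heals" around it.
degraded = {n: nbrs - {"B"} for n, nbrs in mesh.items() if n != "B"}
print(find_route(degraded, "A", "D"))   # ['A', 'C', 'D']
```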

They therefore have the ability to pick multiple routes as you move, which gives them the capability to self-heal if any one of the radio nodes should happen to go down. Now, one short-range form of wireless is Bluetooth. It's called a Wireless PAN, a Personal Area Network. It interconnects devices in a small area, generally about 10 meters in radius, and it's governed by the 802.15 standard. It uses a very low-power signal, so reaching out to about 10 meters is about the best we can expect. 

If we need to go further in the wireless realm, then we'll probably use something like WiMax, running in the 38 gigahertz range. WiMax has the potential to deliver data rates of up to 30 megabits per second, which is pretty good. Providers offer average data rates, however, of something a lot lower, something in the neighborhood of six megabits per second, and, in actual reality, something frequently a lot less.

About the Author


Mr. Leo has been in Information Systems for 38 years, and an Information Security professional for over 36 years.  He has worked internationally as a Systems Analyst/Engineer, and as a Security and Privacy Consultant.  His past employers include IBM, St. Luke’s Episcopal Hospital, Computer Sciences Corporation, and Rockwell International.  A NASA contractor for 22 years, from 1998 to 2002 he was Director of Security Engineering and Chief Security Architect for Mission Control at the Johnson Space Center.  From 2002 to 2006 Mr. Leo was the Director of Information Systems, and Chief Information Security Officer for the Managed Care Division of the University of Texas Medical Branch in Galveston, Texas.

 

Upon attaining his CISSP license in 1997, Mr. Leo joined (ISC)² in a professional role as Chairman of the Curriculum Development Committee, and served in this role until 2004.  During this time, he formulated and directed the effort that produced what became, and remains, the standard curriculum used to train CISSP candidates worldwide.  He has maintained his standards as a professional educator, and has trained and certified nearly 8,500 CISSP candidates since 1998, and nearly 2,500 in HIPAA compliance certification since 2004.  Mr. Leo is an (ISC)² Certified Instructor.
