
Network security

Developed with
QA

The course is part of this learning path

Foundation Certificate in Cyber Security
Overview
Difficulty: Beginner
Duration: 2h 45m
Students: 27

Description


This course introduces the basic ideas of computing, networking, communications, security, and virtualization and will provide you with an important foundation for the rest of the course.  

 

Learning Objectives 

The objectives of this course are to provide you with an understanding of: 

  • Computer system components, operating systems (Windows, Linux & Mac), different types of storage, file systems (FAT & NTFS), memory management. The core concepts and definitions used in information security 
  • Switched networks, packet switching vs circuit switching, packet routing delivery, routing, internetworking standards, the OSI model and its 7 layers. The benefits of information security 
  • The TCP/IP protocol suite, types of addresses, physical address, logical address, IPv4, IPv6, port address, specific address, network access control. How an organization can make information security an integral part of its business 
  • Network fundamentals, network types (advantages & disadvantages), WAN vs LAN, DHCP 
  • How data travels across the internet. End-to-end examples for web browsing, sending emails, and using applications - explaining internet architecture, routing, DNS 
  • Secure planning, policies, and mechanisms, Active Directory structure, introducing Group Policy (containers, templates, GPOs), security and network layers, IPSEC, SSL/TLS (flaws and comparisons), SSH, firewalls (packet filtering, stateful inspection), application gateways, ACLs 
  • VoIP, wireless LAN, network analysis and sniffing, Wireshark 
  • Virtualisation definitions, virtualisation models, terminologies, virtual models, virtual platforms, what is cloud computing, cloud essentials, cloud service models, security & privacy in the cloud, multi-tenancy issues, infrastructure vs data security, privacy concerns 

 

Intended Audience 

This course is ideal for members of cybersecurity management teams, IT managers, security and systems managers, information asset owners and employees with legal compliance responsibilities. It acts as a foundation for more advanced managerial or technical qualifications. 

  

Prerequisites  

There are no specific prerequisites to study this course; however, a basic knowledge of IT, an understanding of the general principles of information technology security, and awareness of the issues involved with security control activity would be advantageous. 

 

Feedback 

We welcome all feedback and suggestions - please contact us at support@cloudacademy.com if you are unsure about where to start or if you would like help getting started. 

Transcript

Welcome to this video on Network Security. 

 

We’ll look at some of the issues that can be encountered when trying to secure networks, and examine technologies that can assist us in achieving the most secure network we can. We’ll cover: 

  • Security planning 
  • Active Directory 
  • Group Policy 
  • Security and network layers, and SSH 
  • Firewalls 

 

 

 

First, it’s important to recognise that having a secure network is not something that will happen by magic, or wishful thinking. It requires a robust and thoughtful approach to policy creation and planning, and will always be an on-going activity. The network that is secure today may not be secure tomorrow. 

 

Security policies that are developed must ensure appropriate levels of security for the activities performed in the network by: 

 

 

 

Making it clear what is protected and why –  knowing what is being secured, and why it is important to secure it 

 

Clearly stating responsibility for providing that protection – any policy that is developed will be of no use unless there is some sort of chain of accountability. Who, ultimately, bears responsibility? 

 

Making it clear what users are allowed to do, and what they must/must not do – if users of a network are not clearly told exactly what sort of behavior is acceptable, we cannot complain if they do things that we don’t approve of 

 

Providing grounds on how to interpret and resolve conflicts in policies later on – the cyber security landscape is constantly evolving. A good policy today may be negated by changes in technology tomorrow. Policies can often interlink with others, and changes in one may cause conflicts in another. 

 

 

 

The most common security technology within the majority of organizations is Microsoft’s Active Directory, largely because most organizations run their internal network using Microsoft servers, with end-user machines running a Microsoft Windows operating system. 

 

Active Directory, or AD, is a Windows OS directory service that facilitates working with interconnected, complex and different network resources in a unified manner. It provides a common interface for organizing and maintaining information related to resources connected to a variety of network directories.  

 

The directories may be systems-based (like Windows OS), application-specific or network resources, such as printers.  

 

AD serves as a single data store for quick data access to all users and it controls access for users, based on the directory's security policy. 

 

AD provides the following network services: 

 

Lightweight Directory Access Protocol, or LDAP – An open standard used to access other directory services 

 

A security service using the principles of Secure Sockets Layer (SSL) and Kerberos-based authentication. Kerberos is a network protocol that uses secret-key cryptography to authenticate client-server applications.  Kerberos requests an encrypted ticket via an authenticated server sequence to use services. The protocol gets its name from the three-headed dog (Kerberos, or Cerberus) that guarded the gates of Hades in Greek mythology. 

 

This diagram gives a simplistic view of how AD is structured. 

 

The terms Object, organizational unit (OU), Domain, tree, and forest are used to describe the way AD organizes its directory data.  

 

Like all directories, AD is essentially a database management system. The AD database is where the individual objects tracked by the directory are stored. It uses a hierarchical database model, which groups items in a tree-like structure. Each node on the tree is referred to as an object and is associated with a network resource, such as a user or service.  

 

Like any database schema concept, the AD schema is used to specify attributes and types for a defined AD object, which facilitates searching for connected network resources based on those assigned attributes.  

 

For example, if a user needs to use a printer with color printing capability, the object attribute may be set with a suitable keyword such as ‘Color Printing’, so that it is easier to search the entire network and identify the object's location, based on that keyword. 
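The attribute-based search described above can be sketched as a small tree walk. This is a toy model, not the real AD API; the object names and the `capability` attribute are hypothetical.

```python
class ADObject:
    """A toy directory node holding a name, a type, attributes, and children."""
    def __init__(self, name, obj_type, attributes=None):
        self.name = name
        self.obj_type = obj_type
        self.attributes = attributes or {}
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def search(self, keyword):
        """Walk the tree and yield every object whose attributes match the keyword."""
        if keyword in self.attributes.values():
            yield self
        for child in self.children:
            yield from child.search(keyword)

# Hypothetical domain with an OU containing two printers.
domain = ADObject("example.local", "domain")
ou = domain.add(ADObject("Office Printers", "organizationalUnit"))
ou.add(ADObject("PRN-01", "printer", {"capability": "Color Printing"}))
ou.add(ADObject("PRN-02", "printer", {"capability": "Mono Printing"}))

matches = [obj.name for obj in domain.search("Color Printing")]
```

Searching the whole tree for the keyword returns only the colour-capable printer, mirroring how an attribute search narrows down network resources.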

 

A domain consists of objects stored in a specific security boundary and interconnected in a tree-like structure. You will have encountered the term domain before, likely relating to domain names for websites. The concepts involved are not too dissimilar, but here the term is used to mean the extent of the network. 

 

A single domain may have multiple servers, each of which is capable of storing multiple objects.  

 

In this case, organizational data is stored in multiple locations, so a domain may have multiple sites for a single domain.  

 

Each site may have multiple domain controllers for backup and scalability reasons.  

 

Multiple domains may be connected to form a Domain Tree, which shares a common schema, configuration and global catalogue (used for searching across domains).  

 

A Forest is formed by a set of multiple and trusted domain trees and forms the uppermost layer of the AD.  

 

Let’s consider some of the major components of an AD installation. The database schema is the skeleton structure that represents the logical view of the entire database. It defines how the data is organized and how the relationships among the data are associated. It formulates all the constraints that are to be applied to the data. The basic unit of data in AD is called an Object. AD can store information about many different kinds of objects. The objects you work with most are users, groups, computers, and printers. 

 

Attributes are the properties associated with any given Object, detailing how it can behave or be used within the confines of the domain. The domain is the large ‘container’, holding everything that belongs on the network. One or more servers, called Domain Controllers, are responsible for managing the domain, and every object within it.  

 

 

 

Having seen that Objects can be given Attributes, a simple method is needed for assigning these Attributes to the correct Objects. The way that this can be achieved at this scale is the use of Group Policy. Group Policy is a feature of the Microsoft Windows NT family of operating systems that controls the working environment of user accounts and computer accounts. It provides centralized management and configuration of operating systems, applications, and users' settings in an Active Directory environment. 

 

 

 

Group Policy, in part, controls what users can and cannot do on a computer system. For example: 

 

To enforce a password complexity policy that prevents users from choosing an overly simple password.  

 

To allow or prevent unidentified users from remote computers to connect to a network share;  

 

To block access to the Windows Task Manager; or  

 

To restrict access to certain data storage folders.  
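The password-complexity example above can be illustrated with a small checker. The specific rules (minimum length, mixed case, digits) are assumptions for this sketch; a real Group Policy setting defines its own criteria.

```python
import re

def meets_complexity_policy(password, min_length=8):
    """Check a password against a hypothetical complexity policy:
    minimum length plus upper-case, lower-case, and digit characters."""
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password) is not None,  # at least one upper-case letter
        re.search(r"[a-z]", password) is not None,  # at least one lower-case letter
        re.search(r"[0-9]", password) is not None,  # at least one digit
    ]
    return all(checks)
```

A policy engine would reject a password such as `password` while accepting one that satisfies every rule.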

 

A set of such configurations is called a Group Policy Object, or GPO. A GPO is a storage place for a collection of Group Policy settings that enable an administrator to control various aspects of the computing environment. All Group Policy settings are stored in a GPO along with the properties associated with the objects in the AD store. Policy settings for sites, domains, and organisational units are stored in GPOs. To create a GPO for a domain or an OU, use the Active Directory Users and Computers console or the Group Policy Management Console (GPMC). 

 

 

 

This slide shows the management console for Group Policies. The Group Policy Container, or GPC is the portion of a GPO stored in AD that resides on each domain controller in the domain.  

 

The GPC is responsible for: 

 

Keeping references to Client Side Extensions, or CSEs;  

 

The path to the Group Policy Templates, or GPTs;  

 

Paths to software installation packages; and other referential aspects of the GPO. 

 

For the GPC, we are concerned with the System container. By expanding this container, you will find a sub container named Policies.  

 

Expanding the Policies container will expose a list of Globally Unique Identifiers, or GUIDs, which correspond to all of the GPOs that exist within the domain. A GUID is a 128-bit (16-byte) number, shown under the name tag in the console, used by software programs to uniquely identify something. GUIDs are typically written in hexadecimal notation, containing 32 digits, with groups of digits separated by hyphens. 
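Python’s standard `uuid` module generates values in exactly this 128-bit, hyphenated-hexadecimal format, which makes the layout easy to see:

```python
import uuid

guid = uuid.uuid4()   # a random 128-bit identifier
text = str(guid)      # canonical form: 32 hex digits in groups of 8-4-4-4-12

assert len(text) == 36            # 32 hex digits plus 4 hyphens
assert text.count("-") == 4
assert guid.int.bit_length() <= 128
```

Each run produces a different value, but the shape is always the same five hyphen-separated groups seen in the GPMC console.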

 

The Group Policy Template (or GPT) is where the actual contents of any GPO can be found.  

 

What we see in the management console when we look at a GPO is only a pointer; the main portion of the GPO is held within the GPT. The two are tied together using their respective Globally Unique Identifiers (GUIDs). 

 

Group Policy Administrative Template settings are delivered only once.  

 

Once a setting based on the Group Policy Administrative Templates has been delivered and applied, it is never delivered again.  

 

Let’s say that you made a computer policy that turned off System Restore. That setting would then be delivered to the registry and prevent users from utilizing this feature. No matter how many times that computer boots up, it will never download that setting again.  

 

While Group Policy does a good job of enforcing settings for most users, it cannot prevent local admins or registry-savvy users from circumventing them.  

 

On top of that, there is no remediation. 

 

Having examined a software solution to network security, let’s now consider how physical devices can be used to secure networks. As discussed in other videos, networks can be described using the OSI seven-layer model, the ‘lowest’ of those layers being the physical layer. 

 

There isn’t technology available to secure the physical layer as such, although if we consider a wireless network’s ‘physical’ layer to be the air through which the radio signal is sent, then we can think of one way in which we can possibly secure it –  

 

– using spread spectrum techniques. 

 

Spread spectrum is a technique used for transmitting radio or telecommunications signals. The term refers to the practice of spreading the transmitted signal to occupy the frequency spectrum available for transmission. The advantages of spectrum spreading include noise reduction, security and resistance to jamming and interception. One way in which spread spectrum is implemented is through frequency hopping, a technique in which a signal is transmitted in short bursts, "hopping" between frequencies in a pseudo-random sequence.  

 

Both the transmitting device and the receiving device must be aware of the frequency sequence.  
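The shared hop sequence can be sketched with a keyed pseudo-random generator: both ends seed a PRNG with the same secret and so derive the same channel order independently. The key, channel list, and hop count here are hypothetical.

```python
import random

def hop_sequence(shared_key, n_hops, channels):
    """Derive a pseudo-random hop sequence from a shared key, so the
    transmitter and receiver each compute the same channel order."""
    rng = random.Random(shared_key)   # deterministic PRNG seeded with the secret
    return [rng.choice(channels) for _ in range(n_hops)]

channels = [2402, 2426, 2450, 2474]   # hypothetical channel centres in MHz
tx = hop_sequence("secret-key", 8, channels)
rx = hop_sequence("secret-key", 8, channels)
```

Because the sequence is derived from the shared secret rather than transmitted, an eavesdropper without the key cannot predict which frequency comes next. (A real system would use a cryptographic generator, not Python’s `random`.)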

 

Let’s look at the security considerations for the other layers in the model.  

 

The data link layer covers all traffic on that link, independent of any protocols.  

 

Some of the attacks that can be launched against the Data Link Layer, such as those which target DHCP, can be mitigated with technology like DHCP Snooping from Cisco. 

 

There are other attacks that can be launched at this layer such as poisoning the Address Resolution Protocol, or ARP.  

 

A Port stealing attack can occur when attackers trick network devices, such as switches, into thinking that their computer is actually another legitimate device on the network. They can then steal all of the traffic intended for that device. 

 

Mitigations for many of these attacks are implemented within network switches, as these are the devices that work most heavily with Layer 2 protocols. 

 

Next in the OSI model is the Network or Internet Layer.  

 

Traffic at this layer is carried using the Internet Protocol. This protocol is stateless, meaning that it doesn’t maintain any record of the ‘state’ of any connection. 

 

As this layer is removed from the Application layer of the OSI model, there are both positives and negatives.  

 

Firstly, the protection can be applied regardless of the Application involved, so there is a single point of processing for authentication or the exchange of security keys.  

 

The Application knows nothing of the work being done in protecting the traffic at this layer, so does not have to include any functionality relating to it. 

 

On the downside, the Application’s lack of control over its communications could be regarded as a risk as the Application has no idea whether the security provided is adequate or not. 

 

This layer of the OSI model works with stateless connections – not knowing the state of a connection could make it easier for attackers to interfere with network traffic. 

 

The lack of state checking also makes IP unreliable, which may not be helpful if a connection requires that all data is guaranteed to be delivered, and delivered in a certain order. 

 

TCP sits on top of IP, and is a connection oriented protocol, meaning it is stateful by definition.  

 

Were this not the case, TCP could not guarantee the ‘in order delivery’ of all of the bytes sent via TCP. 

 

TCP is a stateful protocol because of what it is, not because it is used over IP or because HTTP is built on top of it.  

 

TCP maintains state information in the form of a window size (endpoints tell each other how much data they’re ready to receive) and packet order (endpoints must confirm to each other when they receive a packet from the other).  

 

This state, how many bytes the other system can receive, and whether or not it did receive the last packet, allows TCP to be reliable even over inherently non-reliable protocols.  

 

Therefore, TCP is a stateful protocol because it needs state to be useful. 
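A minimal sketch of that receive-side state (an advertised window plus the next expected sequence number) might look like this. It is a deliberate simplification of real TCP, with hypothetical names:

```python
class TCPReceiverState:
    """Toy receive-side TCP state: next expected sequence number,
    an advertised window, and the in-order bytes received so far."""
    def __init__(self, window=4096):
        self.expected_seq = 0
        self.window = window
        self.buffered = b""

    def on_segment(self, seq, payload):
        # Accept only the next in-order segment that fits in the window.
        if seq != self.expected_seq or len(payload) > self.window:
            return None                   # rejected: duplicate, gap, or too large
        self.buffered += payload
        self.expected_seq += len(payload)
        return self.expected_seq          # cumulative acknowledgement number

rx = TCPReceiverState()
ack1 = rx.on_segment(0, b"hello ")   # in order, acknowledged
dup  = rx.on_segment(0, b"hello ")   # duplicate, rejected
ack2 = rx.on_segment(6, b"world")    # next in order, acknowledged
```

Without the `expected_seq` and `window` state, the receiver could neither reject the duplicate nor reassemble the bytes in order, which is exactly why TCP must be stateful.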

 

Each application can make its own decisions on whether to implement security at this layer, on a per-connection basis if required. This does mean that each application must be configured in such a way as to do this, paying attention to the actual protocol used for the communication. 

 

The presence of state information can mean that it is easier to implement some security services that rely on knowing state. 

 

At the top of the OSI and TCP/IP models, we find the Application layers. 

 

Here, there are a plethora of ways in which we can secure network traffic, not least because every application can take its own approach to handling security. 

 

The application’s approach to security can be tailored directly to the functionality and requirements of that application. 

 

At the application layer, we can also add in extra security requirements such as non-repudiation – proving that the particular user of an application was in fact the genuine sender of the information. 

 

However, this individualistic approach to security may cause its own problems.  

 

Without a standard approach to security for applications, each developer is free to implement their own design, and this can lead to conflicts and/or errors in the overall security strategy for any given set of applications or a computer. 

 

The Internet Protocol Security (IPsec) protocol offers confidentiality, integrity and authentication.  

 

However, unlike TLS which operates at the Transport/Application Layer and is therefore application-specific, IPsec is IP based and therefore operates at the Network Layer, meaning it can be applied to traffic regardless of the application producing the traffic. It authenticates and encrypts the packets of data sent over a network, and includes protocols for establishing mutual authentication at the beginning of a session, as well as negotiation of cryptographic keys to use during the session. IPSec can be configured to operate in two different modes: Tunnel and Transport mode. Use of each mode depends on the requirements and implementation of IPSec. 

 

Tunnel mode is the default mode. With tunnel mode, the entire original IP packet is protected by IPSec.  This means IPSec wraps the original packet, encrypts it, adds a new IP header and sends it to the other side of the tunnel – the IPSec peer. 

 

Transport mode is used for end-to-end communications. In transport mode, only the payload of the original IP packet is protected; the original IP header is left intact.  

 

Further to the two modes available for connections, IPSec has two protocols for protecting traffic: Encapsulating Security Payload, or ESP, and Authentication Header, or AH. 

 

ESP supports encryption-only and authentication-only configurations, but using encryption without authentication is strongly discouraged because it is insecure. Authentication Header, or AH, is part of IPsec; it confirms the originating source of a data packet and provides authentication, integrity, and anti-replay for the entire packet. It does not provide confidentiality, which means it does not encrypt the data. The data is readable, but protected from modification.  

 

ESP in transport mode does not provide integrity and authentication for the entire IP packet.  

 

However, in Tunnel Mode, the entire original IP packet is encapsulated, including the original IP header, with a new packet header added. The IPSec Security Association, or SA, is fundamental to IPSec.  

 

An SA is a relationship between two or more entities that describes how the entities will use security services to communicate securely.  

 

Each IPSec connection can provide encryption, integrity, authenticity, or all three services.  

 

Let’s look at some of the security issues that IPSec was designed to address. 

 

Most relate to interference with communications either listening in to data conversations and/or modifying their contents, or pretending to be a genuine participant in the conversation when in fact they are not. Many applications have their own solution for providing security to their communications, which can become problematic in its own right. 

 

As IPSec provides security at the network layer, it does not care about the originating application and simply applies its security to all traffic. This standardizes the security approach, whilst being transparent to users and applications.  

 

IPSec is mandatory for the next generation of IP, IPv6, but is optional for the current generation, IPv4.  

 

Transport Layer Security (TLS) primarily enables secure Web browsing, application access, data transfer and most Internet-based communication. It prevents the transmitted/transported data from being eavesdropped or tampered with and is used to secure Web browsers, Web servers, VPNs, database servers and more.  

 

The TLS protocol consists of two different layers of sub-protocols: 

 

TLS Handshake Protocol: Enables the client and server to authenticate each other and select an encryption algorithm and other parameters prior to sending the data. 

 

TLS Record Protocol: Works on top of the standard TCP protocol to ensure that the created connection is secure and reliable. It also provides data encapsulation and data encryption services. 

 

Two important TLS concepts are: 

 

Connection: a logical client/server link, associated with the provision of a suitable type of service; and 

 

Session: an association between a client and a server that defines a set of security parameters such as algorithms used, and a session number.  

 

Sessions are used to avoid negotiations of new parameters for each connection. 
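Using Python’s standard `ssl` module, a client-side context captures these ideas: the handshake sub-protocol negotiates the version and ciphers per session, and the context enforces certificate verification before any application data is sent. No network connection is made in this sketch.

```python
import ssl

# Build a client-side TLS context with certificate verification enabled.
# Parameters such as protocol version and ciphers are negotiated during
# the handshake; the session carries them across connections.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy TLS versions

assert ctx.verify_mode == ssl.CERT_REQUIRED    # server certificate must validate
assert ctx.check_hostname is True              # and must match the hostname
```

Wrapping a TCP socket with `ctx.wrap_socket(sock, server_hostname=...)` would then perform the handshake and hand back a socket whose reads and writes go through the record protocol.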

 

 

 

SSH, also known as Secure Shell or Secure Socket Shell, is a network protocol that provides administrators with a secure way to access a remote computer.  

 

It also refers to the suite of utilities that implement the protocol. OpenSSH and PuTTY are examples of these utilities. 

 

SSH applies security at the Application layer; 

 

it provides strong authentication and secure encrypted data communications between two computers connecting over an insecure network such as the Internet.  

 

SSH is widely used by network administrators for managing systems and applications remotely, allowing them to log in to another computer over a network, execute commands and move files from one computer to another. 

 

The current set of SSH protocols is SSH-2, which was adopted as a standard in 2006. It's not compatible with SSH-1 and uses a Diffie-Hellman key exchange alongside a stronger integrity check that uses message authentication codes to improve security. SSH clients and servers can use a number of encryption methods, the most widely used being the Advanced Encryption Standard (AES) and Blowfish. 

 

SSH has a number of features that are specifically designed to enhance its security. Authentication of the connection occurs at the beginning of the conversation, to ensure that each party is who they say they are. This authentication can be achieved in a number of ways, such as passwords, digital certificates or digital security tokens. The authentication process is repeated throughout the connection session, ensuring that even if one set of keys is compromised, this does not compromise the entire session. 

 

SSH provides an integrity checking mechanism to ensure that packets have not been altered during transit. If a packet is found to have been altered, and fails the integrity check, the connection will be terminated. 
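The integrity check can be illustrated with an HMAC, one common way such message authentication codes are computed. The key and packet contents here are hypothetical.

```python
import hmac
import hashlib

def tag(key, packet):
    """Compute a message authentication code over the packet bytes."""
    return hmac.new(key, packet, hashlib.sha256).digest()

def verify(key, packet, mac):
    """Constant-time comparison; a failed check would terminate the session."""
    return hmac.compare_digest(tag(key, packet), mac)

key = b"session-integrity-key"   # hypothetical per-session key
packet = b"uptime\n"
mac = tag(key, packet)

assert verify(key, packet, mac)                        # unmodified packet passes
assert not verify(key, b"uptime; reboot\n", mac)       # altered in transit: fails
```

Because the attacker does not hold the session key, any modification of the packet produces a MAC mismatch and the connection is dropped.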

 

SSH also uses techniques to validate the identities of the senders of any packets. Each packet is stamped with a signature which proves the identity of the sender. 

 

Firewalls are routing devices that forward packets from one logical network address space to another, but use rules to decide whether each packet should be forwarded or dropped.  

 

Firewalls can be located in several places within a network. 

 

The most common is between the internal network, and the wider Internet. 

 

They can also be deployed purely internally to the network and used to protect access to highly sensitive data. 

 

A firewall can also be deployed on an end-users computer, controlling their access to network resources and protecting the machine from external threats. 

 

Hardware firewalls are dedicated firewall devices with a simple operating system and are designed solely to be used as firewalls.  

 

Software firewalls are often installed on multipurpose devices, such as servers or end-point computers and, as such, are not single use devices.  

 

The major function of a firewall is to connect networks that have different IP network addresses and different levels of trust, for instance a corporate network and the Internet.  

 

Firewall administrators construct a set of rules that will allow certain kinds of traffic to cross the firewall, and block the remainder.  

 

The general content of the rule set will be defined by the organisation’s security policy. The default rule of a secure firewall should be to deny all traffic through the firewall and allow exceptions to enter or leave the more highly trusted network.  

 

Firewalls operate by inspecting the packets as they enter the device. There are several methods that are used to inspect packets that have evolved over the years. 

 

There are five different types of firewall architectures, broadly speaking: 

 

Packet-filtering firewalls 

 

Stateful inspection firewalls 

 

Circuit-level gateways 

 

Application-level gateways, also known as proxy firewalls 

 

Next-generation firewalls 

 

A firewall prohibits potentially vulnerable services from entering or leaving the network, and provides protection from various kinds of IP spoofing and routing attacks. It provides a location for monitoring security-related events; audits and alarms can be implemented on the firewall system. A firewall can also serve as the platform for IPSec, using tunnel mode. 

 

A firewall cannot protect against attacks that bypass the firewall.  

 

Internal systems may have a dial-out capability to connect to an ISP. An internal LAN may support a modem pool that provides dial-in capability for travelling employees and telecommuters.  

 

A firewall does not protect against internal threats, such as an employee who cooperates, intentionally or unintentionally, with an external attacker.  

 

A firewall cannot protect against the transfer of virus-infected programs or files. 

 

Packet filtering is the process of passing or blocking packets at a network interface.  

 

This is based on source and destination addresses, ports, or protocols.  

 

Packet filtering is often part of a firewall program for protecting a local network from unwanted intrusion. 

 

Packet filtering is decided on a per-packet basis, but does not take any account of the context, or state, around the packet.  

 

That is, it doesn't check that an incoming packet is a response to a known outgoing packet.  

 

Rather, the filtering is done by examining the header of each packet based on a specific set of rules, and on that basis, deciding to prevent it from passing, called a DROP, or allowing it to pass, called an ACCEPT. 
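A first-match rule table of this kind can be sketched in a few lines; the rule format, addresses, and ports are hypothetical.

```python
import ipaddress

# Hypothetical rule format: (action, protocol, source network, destination port).
# "*" is a wildcard, None matches any port; the first matching rule wins.
RULES = [
    ("ACCEPT", "tcp", "*",          443),   # allow HTTPS from anywhere
    ("ACCEPT", "tcp", "10.0.0.0/8", 22),    # allow SSH from internal hosts only
    ("DROP",   "*",   "*",          None),  # default deny
]

def decide(protocol, src, dst_port):
    """Return ACCEPT or DROP for a packet header, per-packet and stateless."""
    for action, proto, src_rule, port_rule in RULES:
        if proto not in ("*", protocol):
            continue
        if src_rule != "*" and ipaddress.ip_address(src) not in ipaddress.ip_network(src_rule):
            continue
        if port_rule is not None and port_rule != dst_port:
            continue
        return action
    return "DROP"   # deny anything that matches no rule
```

Note that the decision uses only the header fields of the single packet in hand, with no memory of earlier packets, which is precisely the stateless behaviour described above.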

 

There are three ways in which a packet filter can be configured, once the set of filtering rules has been defined:  

 

In the first method, the filter accepts only those packets that it is certain are safe, dropping all others. This is the most secure mode, but it can cause inconvenience if legitimate packets are inadvertently dropped.  

 

In the second method, the filter drops only the packets that it is certain are unsafe, accepting all others. This mode is the least secure, but it causes less inconvenience, particularly in casual Web browsing.  

 

In the third method, if the filter encounters a packet for which its rules do not provide instructions, that packet can be quarantined, or the user can be specifically queried concerning what should be done with it.  

 

This can be inconvenient if it causes numerous dialog boxes to appear, for example, during Web browsing. 

 

 

 

Stateful inspection firewalls, also known as dynamic packet filtering, are a firewall technology that monitors the state of active connections and uses this information to determine which network packets to allow through the firewall. 

 

They have largely replaced an older technology, static packet filtering. In static packet filtering, only the headers of packets are checked, which means that an attacker can sometimes get information through the firewall simply by indicating "reply" in the header.  

 

Stateful inspection, on the other hand, analyses packets down to the application layer. By recording session information such as IP addresses and port numbers, a dynamic packet filter can implement a much tighter security posture than a static packet filter can. 

 

It monitors communications packets over a period of time and examines both incoming and outgoing packets.  

 

Outgoing packets that request specific types of incoming packets are tracked and only those incoming packets constituting a proper response are allowed through the firewall. 
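The reply-tracking behaviour can be sketched as a connection table keyed on the flow’s endpoints; the class name and addresses are hypothetical.

```python
class StatefulFirewall:
    """Track outbound flows; admit inbound traffic only if it is a reply."""
    def __init__(self):
        self.table = set()   # (src, sport, dst, dport) of tracked outbound flows

    def outbound(self, src, sport, dst, dport):
        self.table.add((src, sport, dst, dport))
        return "ACCEPT"

    def inbound(self, src, sport, dst, dport):
        # A legitimate reply reverses the endpoints of a tracked outbound flow.
        if (dst, dport, src, sport) in self.table:
            return "ACCEPT"
        return "DROP"

fw = StatefulFirewall()
fw.outbound("10.0.0.5", 51000, "93.184.216.34", 443)     # client opens HTTPS
reply = fw.inbound("93.184.216.34", 443, "10.0.0.5", 51000)
unsolicited = fw.inbound("198.51.100.9", 443, "10.0.0.5", 51000)
```

The unsolicited packet is dropped even though its header looks like ordinary HTTPS traffic, because no matching outbound flow exists in the state table.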

 

In a firewall that uses stateful inspection, the network administrator can set the parameters to meet specific needs.  

 

In a typical network, ports are closed unless an incoming packet requests connection to a specific port and then only that port is opened. This practice prevents port scanning. 

 

This diagram gives a simple graphical representation of how a stateful inspection firewall might be implemented within a network. 

 

It shows an external facing firewall, connecting the organisation to the Internet on one side, with a De-Militarized Zone (DMZ) on the other. 

 

There is an internal facing firewall which connects the internal network to the DMZ. 

 

An application gateway or application level gateway, or ALG, is a firewall proxy which provides network security. It filters incoming traffic against certain specifications, which means that only data from the allowed applications will be able to pass.  Such network applications include File Transfer Protocol, or FTP; Telnet; Real Time Streaming Protocol, or RTSP; and BitTorrent. 

 

Application gateways provide high-level secure network system communication.  

 

For example, when a client requests access to server resources such as files, Web pages and databases, the client first connects with the proxy server, which then establishes a connection with the main server. 

 

The proxy server acts as the intermediary between the two ends of the communication, meaning that neither is actually aware of the other. 

 

Because only the traffic from certain applications is allowed to flow, ALGs can be more secure than packet filters which have to contend with all types of traffic. 

 

One disadvantage is that this type of network security places an additional processing overhead on each connection. 

 

A circuit-level gateway is a firewall that provides UDP and TCP connection security, and works at the session layer of the OSI model.  

 

Unlike application gateways, circuit-level gateways monitor TCP data packet handshaking and session fulfilment of firewall rules and policies. For example, when a user’s Web page access request passes through the circuit gateway, basic internal user information, such as an IP address, is exchanged for proper feedback.  

 

The proxy server then forwards the request to the Web server. Upon receiving the request, the external server sees the proxy server’s IP address but does not receive any internal user information.  

 

The Web server sends the proxy server a proper response, which is forwarded to the client or end user via the circuit-level gateway. 
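The relaying step above, where the external server only ever sees the proxy's address, can be sketched as follows. This is a simplified model, not a working proxy: the addresses are illustrative (drawn from the documentation-only TEST-NET ranges) and the `relay_request` helper is an assumption made for the example.

```python
# Sketch of a circuit-level gateway relaying a request: the external
# Web server only ever sees the proxy's IP, never the internal client's.
PROXY_IP = "203.0.113.10"   # illustrative public address of the gateway

def relay_request(request: dict) -> dict:
    """Rewrite the source address before forwarding to the Web server."""
    forwarded = dict(request)          # copy, so the original is untouched
    forwarded["src_ip"] = PROXY_IP     # the internal address is hidden
    return forwarded

req = {"src_ip": "10.0.0.42", "dst_ip": "198.51.100.7", "path": "/index.html"}
out = relay_request(req)
# the forwarded request carries 203.0.113.10; 10.0.0.42 never leaves the network
```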

 

One simple way to make a network more secure is to ensure that all of the internal machines in that network are not able to directly connect, and therefore expose themselves, to the wider Internet. 

 

The IP addressing scheme sets aside ranges of IP addresses which are intended to be used only within an internal network. That is to say they are not routable on the Internet.  

 

If you are able to hide your internal IP address from the outside world, it obviously makes it harder for external attackers to attack you. 
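Python's standard `ipaddress` module already knows these reserved internal ranges (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16), so checking whether an address is internal-only is a one-liner; the `is_internal_only` wrapper name is an assumption for this sketch.

```python
import ipaddress

def is_internal_only(addr: str) -> bool:
    """True if the address falls in a range reserved for internal networks."""
    return ipaddress.ip_address(addr).is_private

# Addresses in 10.x, 172.16-31.x and 192.168.x are not routable on the Internet.
```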

 

The most common way to achieve this is by using Network Address Translation, or NAT. When NAT is deployed, internal machines make their connections to the Internet through a NAT device. Their traffic is then sent off to the Internet with the NAT device's external IP address stamped on it. The replies from the Internet come back to the NAT device, which is clever enough to then route these replies on to the correct internal IP address. 
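The bookkeeping a NAT device performs can be sketched as a translation table. This is a toy model under stated assumptions: the external address is illustrative, real NAT also tracks protocols and connection state, and the `Nat` class and its method names are inventions for the sketch.

```python
# Sketch of a NAT device's translation table: outbound packets get the
# device's external IP and a fresh port; replies are mapped back inside.
EXTERNAL_IP = "203.0.113.1"   # illustrative external address of the NAT device

class Nat:
    def __init__(self):
        self.table = {}        # external port -> (internal ip, internal port)
        self.next_port = 40000

    def outbound(self, src_ip: str, src_port: int):
        """Stamp an outgoing packet with the NAT's external address."""
        ext_port = self.next_port
        self.next_port += 1
        self.table[ext_port] = (src_ip, src_port)
        return EXTERNAL_IP, ext_port

    def inbound(self, dst_port: int):
        """Route a reply back to the internal host that opened the connection."""
        return self.table[dst_port]

nat = Nat()
ext = nat.outbound("192.168.0.5", 51000)   # leaves as 203.0.113.1, port 40000
reply_to = nat.inbound(ext[1])             # reply routed to 192.168.0.5:51000
```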

 

IP routing works by advertising the routes that traffic should follow to reach its intended destination. If routes to internal hosts are advertised to the wider Internet, then attackers are able to interfere with the routing process, portraying themselves as being a shorter route to the required destination and therefore getting the network traffic sent to them. 

 

IP spoofing refers to connection hijacking through a fake IP address.  

 

IP spoofing is the action of masking a computer's IP address so that it looks like it is authentic. During this masking process, the attacker sends what is in fact a malevolent message coupled with a source IP address that appears to be authentic and trusted.  

 

In IP spoofing, attackers manipulate vital information contained in the IP header, such as the source and destination addresses.  

 

A popular misconception about IP spoofing is that it permits unauthorised access to computers; this is not the case. In fact, IP spoofing is typically used to hijack computer sessions or to mount denial-of-service attacks, which overwhelm the victim with traffic. 
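To make the header manipulation concrete, the sketch below packs a minimal IPv4 header with a forged source address using Python's `struct` module. This only builds the bytes: actually transmitting such a packet would require raw sockets and elevated privileges, and the checksum is left at zero here. The addresses are illustrative TEST-NET values.

```python
import socket
import struct

def build_ipv4_header(src: str, dst: str) -> bytes:
    """Pack a minimal 20-byte IPv4 header (no options, checksum left at 0)."""
    ver_ihl = (4 << 4) | 5                     # version 4, header length 5 words
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0, 20,         # TOS, total length
                       0, 0,                   # identification, flags/fragment
                       64, socket.IPPROTO_TCP, # TTL, protocol
                       0,                      # checksum placeholder
                       socket.inet_aton(src),  # forged source address
                       socket.inet_aton(dst))  # real destination address

hdr = build_ipv4_header("198.51.100.99", "203.0.113.7")  # spoofed source
src = socket.inet_ntoa(hdr[12:16])   # a receiver trusts this field blindly
```

The point of the sketch is that the source field is just twelve bytes into the header: nothing in IP itself verifies that the sender really holds that address.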

 

A buffer overflow occurs when more data is written to a buffer than it can hold. A buffer is a small storage area for data being processed.  

 

The excess data is written to an adjacent area in the computer’s memory, overwriting the contents of that location and causing unpredictable results in a program.  

 

Buffer overflows happen when there is improper validation of input data, and they are considered a bug or weakness in the software. 

 

Buffer overflows are one of the worst bugs that can be exploited by an attacker mostly because they are very hard to find and fix, especially if the software consists of millions of lines of code.  

 

Even the fixes for these bugs are quite complicated and error-prone. That is why it is almost impossible to remove this type of bug entirely. 
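Classic buffer overflows cannot happen in memory-safe languages like Python, but the mechanics can be simulated. In the sketch below, a fixed region of "memory" holds an 8-byte buffer immediately followed by a 1-byte flag, and a copy routine with no bounds check (the bug) clobbers the neighbouring value; all the names are inventions for the illustration.

```python
# Simulated memory: an 8-byte buffer followed immediately by a 1-byte flag.
memory = bytearray(b"\x00" * 8 + b"\x01")   # the adjacent flag starts as 0x01
BUF_START, BUF_SIZE = 0, 8
FLAG = 8                                     # index of the adjacent value

def unchecked_copy(data: bytes) -> None:
    """Flawed copy routine: no validation against BUF_SIZE (the bug)."""
    memory[BUF_START:BUF_START + len(data)] = data

unchecked_copy(b"A" * 9)    # 9 bytes written into an 8-byte buffer...
# ...so memory[FLAG] is silently overwritten with 0x41 ('A')
```

In a real program the overwritten bytes might be a return address or a security flag, which is what makes the unpredictable behaviour exploitable.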

 

Finally, whilst network protections such as firewalls can be quite effective at dealing with technology-based threats, they are not very effective against human-based attacks such as insider threats. 

 

Access Control Lists, or ACLs, are a concept that can be applied in many areas of computing where there is a requirement to control access to a computing resource. 

 

In this instance, we are talking about using ACLs to restrict the traffic that can enter or leave a computer network. 

 

The list contains a set of patterns which could potentially be found within the data in an IP packet. The contents of each packet are compared to the patterns in the list, and if a match is found then a decision is made on what should happen with the packet. If it matches a ‘bad’ pattern, then it can be blocked and dropped. A ‘good’ match will be allowed to continue on its path. 
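The match-and-decide process just described can be sketched as a first-match rule list. This is a simplified model: real ACLs match on header fields and address ranges, and the rule set here (blocking Telnet, allowing web traffic, denying everything else) is an assumption made for the example.

```python
# Sketch of an ACL: each rule pairs a match test with an action, and the
# first matching rule decides what happens to the packet.
ACL = [
    (lambda p: p["dst_port"] == 23, "drop"),          # 'bad' pattern: Telnet
    (lambda p: p["dst_port"] in (80, 443), "allow"),  # 'good' pattern: web
    (lambda p: True, "drop"),                         # implicit deny at the end
]

def decide(packet: dict) -> str:
    """Compare the packet against each pattern in order; first match wins."""
    for matches, action in ACL:
        if matches(packet):
            return action
```

The catch-all rule at the end reflects common practice: anything that matches no explicit pattern is dropped rather than allowed through.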

 

That’s the end of this video on Network Security. 

About the Author


Paul began his career in digital forensics in 2001, joining the Kent Police Computer Crime Unit. In his time with the unit, he dealt with investigations covering the full range of criminality, from fraud to murder, preparing hundreds of expert witness reports and presenting his evidence at Magistrates, Family and Crown Courts. During his time with Kent, Paul gained an MSc in Forensic Computing and CyberCrime Investigation from University College Dublin.

On leaving Kent Police, Paul worked in the private sector, carrying on his digital forensics work but also expanding into eDiscovery work. He also worked for a company that developed forensic software, carrying out research and development work as well as training other forensic practitioners in web-browser forensics. Prior to joining QA, Paul worked at the Bank of England as a forensic investigator. Whilst with the Bank, Paul was trained in malware analysis, ethical hacking and incident response, and earned qualifications as a Certified Malware Investigator, Certified Security Testing Associate - Ethical Hacker and GIAC Certified Incident Handler. To assist with the team's malware analysis work, Paul learnt how to program in VB.Net and created a number of utilities to assist with the de-obfuscation and decoding of malware code.
