
Network computing

Developed with
QA

The course is part of this learning path

Foundation Certificate in Cyber Security
Overview
Difficulty: Beginner
Duration: 2h 45m
Students: 27

Course Description 

This course introduces the basic ideas of computing, networking, communications, security, and virtualization and will provide you with an important foundation for the rest of the course.  

 

Learning Objectives 

The objectives of this course are to provide you with an understanding of: 

  • Computer system components, operating systems (Windows, Linux & Mac), different types of storage, file systems (FAT & NTFS), and memory management. The core concepts and definitions used in information security 
  • Switched networks, packet switching vs circuit switching, packet routing and delivery, routing, internetworking standards, and the OSI model and its 7 layers. The benefits of information security 
  • The TCP/IP protocol suite, types of addresses (physical address, logical address, IPv4, IPv6, port address, specific address), and network access control. How an organization can make information security an integral part of its business 
  • Network fundamentals, network types (advantages & disadvantages), WAN vs LAN, and DHCP 
  • How data travels across the internet. End-to-end examples for web browsing, sending emails, and using applications - explaining internet architecture, routing, and DNS 
  • Secure planning, policies, and mechanisms, Active Directory structure, an introduction to Group Policy (containers, templates, GPO), security and network layers, IPSEC, SSL/TLS (flaws and comparisons), SSH, firewalls (packet filtering, stateful inspection), application gateways, and ACLs 
  • VoIP, wireless LAN, network analysis and sniffing, and Wireshark 
  • Virtualisation definitions, virtualisation models, terminologies, virtual models, virtual platforms, what cloud computing is, cloud essentials, cloud service models, security & privacy in the cloud, multi-tenancy issues, infrastructure vs data security, and privacy concerns 

 

Intended Audience 

This course is ideal for members of cybersecurity management teams, IT managers, security and systems managers, information asset owners and employees with legal compliance responsibilities. It acts as a foundation for more advanced managerial or technical qualifications. 

  

Prerequisites  

There are no specific pre-requisites to study this course, however, a basic knowledge of IT, an understanding of the general principles of information technology security, and awareness of the issues involved with security control activity would be advantageous. 

 

Feedback 

We welcome all feedback and suggestions - please contact us at support@cloudacademy.com if you are unsure about where to start or if you would like help getting started. 

Transcript

Welcome to this video on network computing. 

 

In it, you’ll learn about some of the fundamental concepts involved with network computing and be introduced to some of the key components of a computer network. 

 

What do I mean by a network? Simply put, a network is a collection of computing devices that are able to communicate with each other, via whatever means. The basic reason for networking computers is that it allows users to share data easily, quickly and efficiently. Users are also able to share access to accessories such as printers. Having one network printer serve hundreds of users is far more cost-effective than each of those users having their own printer on their desk. 

 

There are two basic types of computer networks. In a peer-to-peer network, commonly referred to as a P2P network, no single machine is responsible for being the server. Each computer stores files and acts as a server, and each computer has equal responsibility for providing data. 

The client-server model is the relationship between two computers in which one, the client, makes a service request from another, the server. 

 

 The key point about a client-server model is that the client is dependent on the server to provide and manage the information. For example, websites are stored on web servers. A web browser is the client which makes a request to the server, and the server sends the webpage data back to the browser. 
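
This request/response pattern can be sketched in a few lines of Python using the standard library's sockets. This is a toy illustration, not part of the course materials: the "page" content and the use of a local loopback address are assumptions for the demo.

```python
import socket
import threading

# The server binds and listens first, so the client cannot connect too early.
# Port 0 asks the OS for any free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]
srv.listen(1)

def serve_once():
    # The server waits for a request, then sends the "page" data back.
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"page for {request}".encode())

server = threading.Thread(target=serve_once)
server.start()

# The client (like a browser) initiates the session and makes the request.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"index.html")
    reply = cli.recv(1024).decode()

server.join()
srv.close()
print(reply)  # page for index.html
```

Note how the roles are asymmetric: the server passively awaits connections, while the client actively initiates one.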

 

Popular websites need powerful servers to serve thousands or millions of clients, all making requests at the same time. The client side of a web application is often referred to as the front end. The server side is referred to as the back end. We will also look at the different scales we can have for networks, depending on the geographical areas they cover. 

There are a number of advantages to be gained from the establishment of a P2P network: 

 

  • They’re easy to install and configure, 

  • Don’t require a dedicated server, 

  • Users control their own shared resources, 

  • They’re inexpensive to purchase and operate, 

  • They don’t require additional equipment or software, 

  • They don’t require dedicated administrators, 

  • And work best with 10 or fewer users. 

 

One example of these advantages is that P2P is ideal for sharing files amongst a small group of users. Conversely, there are a number of disadvantages: 

  • Security applies to a single resource at a time, 

  • Users may have many different passwords, 

  • Each machine must be backed up individually to help ensure data is protected from loss or destruction, 

  • Machines sharing resources may suffer reduced performance, 

  • There is no centralized organization scheme to locate or control access to data, 

  • It doesn’t usually work well with more than 10 users 

One example of these disadvantages is that P2P would be unsuitable for a service such as booking tickets, as one server needs to keep track of how many tickets are left.  

 

As mentioned previously, an everyday example of a server based, or client/server network, is the Internet. The web servers hold the information we want to access. We use our web browser client program to make a request to the web server, which can then service that request. In a business environment, a company’s data will likely be held on a central server resource, rather than on individual end-user machines. This affords a greater level of control over the data, as when a client requests access, the data can be vetted to ensure that only those with the correct level of permission will be able to access the information. 

 

Servers need to have sufficient computing power and connectivity to be able to handle requests from many clients at once. In a server based network, there is the capability to share out the workload amongst several servers. Servers can be designated to handle certain types of client requests, such as requests to login to the system, and be granted access to requested data or resources. This task can be handled by one or more servers that will verify that users are who they say they are, and grant them access only to data or resources they are allowed to see. 
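
The idea of designating servers for certain request types and sharing the load among them can be sketched as a toy dispatcher. The server names and request kinds here are hypothetical, purely for illustration.

```python
from itertools import cycle

# Hypothetical pools: each kind of request is designated to a group of
# servers, and requests within a group are shared out in turn (round robin).
pools = {
    "login": cycle(["auth-1", "auth-2"]),  # servers that verify users
    "data":  cycle(["files-1"]),           # server that holds the data
}

def dispatch(kind):
    """Hand the request to the next server designated for this kind."""
    return next(pools[kind])

requests = ["login", "login", "data", "login"]
handled_by = [dispatch(kind) for kind in requests]
print(handled_by)  # ['auth-1', 'auth-2', 'files-1', 'auth-1']
```

The fourth login request wraps back around to auth-1, showing how the workload is spread across the designated servers.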

 

There are a number of advantages with the server based model: 

  • It simplifies network administration, 

  • Centralizes user accounts, security, and access controls, 

  • Has more powerful equipment, 

  • It provides more efficient access to network resources, 

  • Requires only a single password for network login, 

  • And is the best choice for networks with 10 or more users, or networks with heavily-used resources. 

 

There are also disadvantages: 

  • At worst, server failure renders the network unusable, 

  • Server failure causes loss of network resources, 

  • It is more expensive, 

  • It requires expert staff to handle complex server software, 

  • And it requires dedicated hardware and specialized software 

 

We have now arrived at what is known as the Client-Server paradigm. The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. 

Often clients and servers communicate over a computer network, and exist on separate hardware, but both client and server may reside in the same system.  

 

A server host runs one or more server programs which share their resources with clients.  A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are Email, network printing, and the World Wide Web. 

 

There are different scales of network. The first is a Local Area Network, or LAN. These are typically confined to a small geographical area. The Institute of Electrical and Electronics Engineers (IEEE) states that this area is 6 miles or less in radius, but most commonly you will encounter LANs within one building or site. 

 

In order to connect computers across greater distances, you must employ a Wide Area Network, or WAN. The IEEE suggests that the distances involved in a WAN would be in excess of 60 miles. 

 

In essence, a WAN can actually be thought of as a collection of LANs, which have been joined together over distance. 

 

The two elements required to join LANs into a larger WAN are the transmission lines (cables or some other medium capable of carrying signals from one LAN to another) and switching elements, also known as routers. Routers are specialized pieces of networking equipment able to connect two or more transmission lines, and correctly route data along them from one LAN to another. 

 

Let’s consider two computers communicating via a LAN, and compare them to two computers communicating via a WAN: You want to establish which pair will be exchanging data more quickly, and which pair will be communicating in the most secure fashion. By its very nature, a WAN will be moving data over a further distance, routing it through many way-stations along that route. This tells us that a WAN is likely to be both slower and potentially less secure as the data will be travelling via cables and way-stations that are not under the local control of the data owner. 

 

Let’s look at the four basic tenets for networking. Firstly, in order to have a network, there must be a link between computers. We can join together separate networks and links, to make an internetwork. The term internetwork is the root of the term Internet. If we have multiple networks joined together, then we need to have some means of routing the communications data to firstly the correct network, and then on to the correct end computer. 

 

Applications and programs that need to communicate across a network will have specific requirements that must be met by that network. In a simple LAN, you need a way to get your data communicated from one point, to its intended destination within the LAN. You can achieve this by using a switch. 

 

A switch, in the context of networking, is a high-speed device that receives incoming data packets and redirects them to their destination on the LAN. A LAN switch operates at the data link layer (Layer 2) or the network layer of the OSI Model and, as such, can support all types of packet protocols. We will discuss the concept of packets later on in this course. 
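
The behaviour of a Layer 2 switch can be modelled in a short sketch. This is a simplified toy, not a real implementation: a switch learns which port each source MAC address appears on, forwards frames out the known port, and floods all other ports while the destination is still unknown.

```python
# Toy model of a learning switch. The MAC values are illustrative.
mac_table = {}

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    mac_table[src_mac] = in_port                   # learn the sender's port
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]                # known: forward out one port
    return [p for p in all_ports if p != in_port]  # unknown: flood the rest

ports = [1, 2, 3, 4]
flood = handle_frame("aa:aa", "bb:bb", 1, ports)   # bb:bb not yet learned
direct = handle_frame("bb:bb", "aa:aa", 3, ports)  # aa:aa was learned on port 1
print(flood)   # [2, 3, 4]
print(direct)  # [1]
```

After the first two frames, the switch has learned both addresses and no longer needs to flood.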

Packet switching: Packet-switched describes the type of network in which relatively small units of data called packets are routed through a network based on the destination address contained within each packet.  

Breaking communication down into packets allows the same data path to be shared among many users in the network, or for the individual packets to choose one of many routes through the network in order to reach their destination. 
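
This splitting and reassembly can be sketched as follows. It is a toy model only: the addresses are illustrative, and real packets carry far more header information than shown here.

```python
import random

# Break a message into small packets, each carrying source and destination
# addresses and a sequence number.
def packetize(data, src, dst, size=4):
    return [{"src": src, "dst": dst, "seq": i, "payload": data[pos:pos + size]}
            for i, pos in enumerate(range(0, len(data), size))]

packets = packetize("hello, network!", "10.0.0.1", "10.0.0.2")

# Packets may take different routes and arrive out of order...
random.shuffle(packets)

# ...so the receiver uses the sequence numbers to reassemble the message.
message = "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))
print(message)  # hello, network!
```

Because every packet carries its own addressing and sequencing information, the network can route each one independently and the receiver can still rebuild the original data.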

 

Circuit Switching: A type of communication in which a dedicated channel (or circuit) is established for the duration of a transmission. The most ubiquitous circuit-switching network is the telephone system, which links together wire segments to create a single unbroken line for each telephone call. 

 

A packet switched network (PSN) is one of the most commonly used computer networks. It is widely implemented on local networks and the Internet. A PSN generally works on the Transmission Control Protocol/Internet Protocol (TCP/IP) suite.  

 

For data to be transmitted over a network, it is first broken down into small packets, which depend on the data's protocol and overall size. Each packet contains various details, such as a source IP address, destination IP address and unique data and packet identifiers. 

The segregation of data into small packets enables efficient data transportation and better utilization of the network medium/channel.  

 

More than one user, application and/or node may take turns sending and receiving data without permanently retaining the underlying medium/channel, as in a circuit switched network. 

 

Previously, I mentioned Internet Protocol, or IP, addresses. These addresses are one of the ways that you can direct your communications around networks, but there are other address schemes that you need to consider. 

 

Every device that can connect to a network does so via some sort of network interface card or NIC. Every NIC in the world has a unique identifying address, known as the Media Access Control or MAC address. To route your communications to the correct device, you need to have some means of marrying up the MAC address, which as a rule does not change, to an IP address which can be somewhat dynamic in nature. 

 

Within your LAN, you will use Address Resolution Protocol, or ARP to achieve this. 

The devices on your network will use ARP to establish and maintain lists of which IP addresses are associated with which MAC addresses at any given time. ARP relies on a feature of networking called broadcasting. 

 

On screen you can see the process of a broadcast message being issued to the LAN from one computer, to ask which MAC address is currently associated with the IP address 128.2.11.43. The reply comes back from one computer, stating that the given MAC address is currently associated with that IP address. The originating computer will update its list of MAC to IP address mappings, storing this information in its local ARP table. 
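
The exchange just described can be sketched as a toy model: one host broadcasts "who has this IP?", only the owner replies with its MAC address, and the asker caches the answer in its ARP table. The MAC addresses and host names below are illustrative.

```python
# Hypothetical LAN: each host knows its own IP and MAC address.
hosts = {
    "pc-1": {"ip": "128.2.11.43", "mac": "00:11:22:33:44:55"},
    "pc-2": {"ip": "128.2.11.60", "mac": "66:77:88:99:aa:bb"},
}

def arp_request(target_ip, arp_table):
    for name, nic in hosts.items():      # the broadcast reaches every host
        if nic["ip"] == target_ip:       # only the owner of the IP replies
            arp_table[target_ip] = nic["mac"]
            return nic["mac"]
    return None                          # no reply: the IP is not on this LAN

arp_table = {}
print(arp_request("128.2.11.43", arp_table))  # 00:11:22:33:44:55
print(arp_table)  # {'128.2.11.43': '00:11:22:33:44:55'}
```

Once the mapping is cached, the host can address frames directly without broadcasting again.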

 

There are different variants of broadcasting available in our networks. 

Broadcast transmission is supported on most LANs, and may be used to send the same message to all computers on the LAN.  

Network layer protocols (such as IPv4) also support a form of broadcast that allows the same packet to be sent to every system in a logical network.  

 

The multicast process involves a single sender and multiple receivers, as opposed to systems that are designed to be connection-dependent, like a client-server system. User Datagram Protocol (UDP) is the most common protocol used with multicasting. Email is a common example of multicast, where a user can choose to send an email to many different addresses, rather than to a complete contact list. Another example is the one-to-many multicasting of a streaming video toward many users from a single server. 

 

Unicast is a common network model where packets are sent to a single network destination with a particular address. The basic idea of unicast is that there is a specific channel created for the user. This is helpful when content transmission is based on a 'single-tenant' model, for example, when a content or service provider needs to send personalized and accurate information to individual users. 
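
The three transmission modes can be contrasted in a toy delivery model. The host names and group membership below are hypothetical.

```python
# Hosts on a hypothetical LAN, and the subset subscribed to a multicast group.
hosts = ["a", "b", "c", "d"]
group = {"b", "d"}

def deliver(mode, dest=None):
    if mode == "unicast":
        return [dest]                            # one specific destination
    if mode == "broadcast":
        return hosts[:]                          # every host on the LAN
    if mode == "multicast":
        return [h for h in hosts if h in group]  # only the group members

print(deliver("unicast", "c"))  # ['c']
print(deliver("broadcast"))     # ['a', 'b', 'c', 'd']
print(deliver("multicast"))     # ['b', 'd']
```

The key distinction is the receiver set: exactly one host, every host, or the subset that has opted in.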

 

Routing is the process of selecting a path for traffic in a network, or between or across multiple networks. Routing is performed for many types of networks, including circuit-switched networks, such as the public switched telephone network (PSTN), computer networks such as the Internet, as well as in networks used in public and private transportation, such as the system of streets, roads, and highways in national infrastructure. 

 

In packet switching networks, routing is the higher-level decision making that directs network packets from their source toward their destination through intermediate network nodes by specific packet forwarding mechanisms.  

Packet forwarding is the transit of logically addressed network packets from one network interface to another.  

 

In order to facilitate the selection of a route, network devices maintain a map of the routes that they discover in a routing table. These routing tables can be shared amongst network devices, and in fact this automatic sharing of information underpins the efficiency of all networks. For the original iterations of the Internet, routing tables were maintained by hand, and regularly updated by human operators. In today's connected world, this would be an impossible task! 
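
A routing table lookup can be sketched with Python's standard `ipaddress` module. The routes and next-hop names here are hypothetical; the lookup rule shown (prefer the most specific matching prefix) is how IP routers commonly choose among overlapping routes.

```python
import ipaddress

# A toy routing table: destination network -> next hop.
routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "router-a",
    ipaddress.ip_network("10.1.0.0/16"): "router-b",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gw",  # route of last resort
}

def next_hop(destination):
    """Pick the most specific (longest-prefix) route matching the address."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))   # router-b (the /16 beats the broader /8)
print(next_hop("192.0.2.9"))  # default-gw (only the catch-all matches)
```

The default route (0.0.0.0/0) matches everything, so a destination is never unroutable in this toy table; it simply falls through to the gateway.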

 

The Advanced Research Projects Agency Network (ARPANET) was an early packet-switching network and the first network to implement the protocol suite TCP/IP. Both technologies became the technical foundation of the Internet. 

 

The main design goal of TCP/IP was to build an interconnection of networks, referred to as an internetwork, or internet, that provided universal communication services over heterogeneous physical networks.  

 

Networks must support a wide range of applications and services, as well as operate over many different types of physical infrastructures.  

As the Internet, and networks in general, evolve, there are four basic characteristics that the underlying architectures need to address in order to meet user expectations: fault tolerance, scalability, quality of service, and security. 

 

You will learn about TCP/IP in more detail in a later course. 

 

One of the ways in which networking was codified in its early days was via the Open Systems Interconnection, or OSI, model. 

 

This model detailed seven layers involved in the communication of data across a network: 

 

The Physical Layer - This layer is responsible for the transmission of digital data bits from the Physical layer of the sending, or source, device over network communications media to the Physical layer of the receiving (or destination) device. 

 

The Data Link Layer - when obtaining data from the Physical layer, the Data Link layer checks for physical transmission errors and packages bits into data "frames". The Data Link layer also manages physical addressing schemes such as MAC addresses. 

 

The Network Layer - adds the concept of routing above the Data Link layer. When data arrives at the Network layer, the source and destination addresses contained inside each frame, are examined to determine if the data has reached its final destination. Layer 3 formats the data into packets to be delivered up to the Transport layer. To support routing, the Network layer maintains logical addresses, such as IP addresses, for devices on the network. 

 

The Network layer also manages the mapping between these logical addresses and physical addresses. In IP networking, this mapping is accomplished through the Address Resolution Protocol (ARP). 

 

The Transport Layer - delivers data across network connections. TCP is the most common example of a Layer 4 network protocol. Different transport protocols may support a range of optional capabilities including error recovery, flow control, and support for re-transmission. 

 

The Session Layer - manages the sequence and flow of events that initiate and tear down network connections. At Layer 5, it is built to support multiple types of connections that can be created dynamically and run over individual networks. 

 

The Presentation Layer - handles syntax processing of message data such as format conversions and encryption / decryption needed to support the Application layer above it. 

The Application Layer - supplies network services to end-user applications. Network services are typically protocols that work with users' data. For example, in a Web browser application, the Application layer protocol HTTP packages the data needed to send and receive Web page content. 
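
The seven layers work by encapsulation: on the sending device, each layer wraps the data it receives from the layer above with its own header; on the receiving device, each layer strips its header off and passes the rest upward. A toy sketch of that idea (the header format here is purely illustrative):

```python
layers = ["application", "presentation", "session", "transport",
          "network", "data link", "physical"]

def encapsulate(data):
    # Sending: from the application layer down to the physical layer,
    # each layer wraps the data with its own (toy) header.
    for layer in layers:
        data = f"{layer}({data})"
    return data

def decapsulate(frame):
    # Receiving: from the physical layer back up, each layer removes
    # its header and hands the remainder to the layer above.
    for layer in reversed(layers):
        assert frame.startswith(f"{layer}(") and frame.endswith(")")
        frame = frame[len(layer) + 1:-1]
    return frame

frame = encapsulate("web page data")
print(frame.startswith("physical(data link(network("))  # True
print(decapsulate(frame))  # web page data
```

Notice that the physical layer's wrapping ends up outermost: it is added last when sending and removed first when receiving.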

 

This brings us to the end of this video.  

 

About the Author


Paul began his career in digital forensics in 2001, joining the Kent Police Computer Crime Unit. In his time with the unit, he dealt with investigations covering the full range of criminality, from fraud to murder, preparing hundreds of expert witness reports and presenting his evidence at Magistrates, Family and Crown Courts. During his time with Kent, Paul gained an MSc in Forensic Computing and CyberCrime Investigation from University College Dublin.

On leaving Kent Police, Paul worked in the private sector, carrying on his digital forensics work but also expanding into eDiscovery work. He also worked for a company that developed forensic software, carrying out Research and Development work as well as training other forensic practitioners in web-browser forensics. Prior to joining QA, Paul worked at the Bank of England as a forensic investigator. Whilst with the Bank, Paul was trained in malware analysis, ethical hacking and incident response, and earned qualifications as a Certified Malware Investigator, Certified Security Testing Associate - Ethical Hacker and GIAC Certified Incident Handler. To assist with the team's malware analysis work, Paul learnt how to program in VB.Net and created a number of utilities to assist with the de-obfuscation and decoding of malware code.
