Course Description
This course introduces the basic ideas of computing, networking, communications, security, and virtualization, and provides an important foundation for the courses that follow.
Learning Objectives
The objectives of this course are to provide you with an understanding of:
- Computer system components, operating systems (Windows, Linux and Mac), different types of storage, file systems (FAT and NTFS), and memory management, plus the core concepts and definitions used in information security
- Switched networks, packet switching vs circuit switching, packet routing and delivery, routing, internetworking standards, and the OSI model and its 7 layers, plus the benefits of information security
- TCP/IP protocol suite, types of addresses, physical address, logical address, IPv4, IPv6, port address, specific address, network access control, and how an organisation can make information security an integral part of its business
- Network fundamentals, network types (advantages and disadvantages), WAN vs LAN, DHCP
- How data travels across the internet. End to end examples for web browsing, sending emails, using applications - explaining internet architecture, routing, DNS
- Secure planning, policies, and mechanisms, Active Directory structure, introducing Group Policy (containers, templates, GPOs), security and network layers, IPsec, SSL/TLS (flaws and comparisons), SSH, firewalls (packet filtering, stateful inspection), application gateways, and ACLs
- VoIP, wireless LAN, Network Analysis and Sniffing, Wireshark
- Virtualization definitions, virtualization models, terminologies, virtual models, virtual platforms, what is cloud computing, cloud essentials, cloud service models, security and privacy in the cloud, multi-tenancy issues, infrastructure vs data security, and privacy concerns
Intended Audience
This course is ideal for members of cybersecurity management teams, IT managers, security and systems managers, information asset owners, and employees with legal compliance responsibilities. This course acts as a foundation for more advanced managerial or technical qualifications.
Prerequisites
There are no specific prerequisites to study this course; however, a basic knowledge of IT, an understanding of the general principles of information technology security, and awareness of the issues involved with security control activity would be advantageous.
Feedback
We welcome all feedback and suggestions - please contact us at support@cloudacademy.com if you are unsure about where to start or if you would like help getting started.
Welcome to this lecture on computing foundations, data storage and memory.
To really understand cyber security, it is important that you understand basic computing. As such, we'll look at computer system components, input devices, primary memory, CPUs, data transfer, and operating systems.
A modern computer comprises a number of independent, yet closely related, components. The most fundamental of these is the hardware. An easy way to understand what hardware is, is to ask the question ‘Can I pick it up or touch it?’ If the answer is yes, then we are dealing with hardware. Hardware is simply the physical components that make up the computer.
As users of computers, we interact with hardware in order to achieve computing tasks. Different parts of the hardware will perform different functions in order to provide us with our answer. The operating system acts as the intermediary between us and the hardware, controlling our use of the computer.
Applications and programs give us specific ways of achieving computing tasks. For instance, we use a word processing program if we want to write a letter. The application works with the operating system, which in turn works with the hardware to achieve our end goal.
Finally, as users of a computer, we can also be considered a component of a computer system. Of course, users don't always have to be people – machines are able to deal with each other directly where required, as ‘users’.
Next, we examine some of the most common hardware devices used in everyday computing. To use a computer you need to be able to tell it what you want to achieve. Early computers relied on magnetic tapes, punch cards or the manual adjustment of switches to receive this input.
There are now many ways you can supply information to a computer. For instance, this presentation was created using a keyboard and mouse, but computers are capable of receiving and interpreting many different inputs these days. You can now talk to your devices, or have them interpret eye movements, breathing patterns, or any other way of conveying our needs that the computer can be programmed to understand.
Computers take an input and digitize it, rendering it into a mathematical representation of your information. Computers generally work with binary mathematics, or variants thereof. In binary, there are 2 possible values for any given digit - 1 or 0. This simple system can also be thought of as showing ON or OFF, allowing computers to make logic-based decisions depending on the values they receive.
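As a simple illustration, the short Python sketch below shows how a typed character becomes a number inside the computer, and how that number can be written as binary digits:

```python
# Illustration only: how typed characters map to numbers and binary digits.
for char in "Hi":
    code = ord(char)             # the numeric code the computer stores
    bits = format(code, "08b")   # the same value written as 8 binary digits
    print(char, code, bits)      # e.g. H 72 01001000
```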
Computers process data by using components that store that data while they are using it. The primary storage area for this work in progress is the Random Access Memory, or RAM. This is made up of a number of electronic circuits, making it very fast to place, store and retrieve information, such as programs or documents. RAM is volatile, which means that in all but very special circumstances, information stored there is lost when the computer is shut down.
A computer will also have an area of Read Only Memory, or ROM. This component will usually store some of the most basic settings of the computer, such as its internal clock and details of attached hardware. The ROM is queried when the computer first boots up, so that it can work out how it should go about setting up in a useable state. ROM is generally fixed, but can be changed in certain circumstances, such as when it needs to be updated to fix issues that have been discovered by the manufacturer.
The central processing unit (CPU) performs most of the processing inside a computer. The CPU relies heavily on a chipset - a group of microchips located on the motherboard - to control instruction and data flow to and from other parts of the computer. The CPU has two typical components:
The Control Unit (CU), which extracts instructions from memory, then decodes and executes them; and the Arithmetic Logic Unit (ALU), which handles arithmetic and logical operations.
The two components work in tandem to handle the majority of the data processing requirements for the computer.
To function properly, the CPU relies on the system clock, memory, secondary storage, and data & address buses. The information about these is stored in ROM, as part of the Basic Input Output System (BIOS).
CMOS (complementary metal-oxide-semiconductor) is the term usually used to describe the small amount of memory on a computer motherboard that stores the BIOS settings.
Let’s look at the two major components of the CPU – the CU and ALU – in more detail.
The control unit (CU) is in charge of the fetch cycle - the sequence by which the instructions within a program are read into the CPU from program memory, then decoded and executed. To complete even simple tasks, like showing a single character on the screen, the CPU has to perform multiple cycles. The computer does this from the moment it boots up until it shuts down.
The control unit requests each instruction from main memory, fetching it from the memory location indicated by the program counter (also known as the instruction counter).
Received instructions are decoded in the instruction register. This involves breaking the operand field into its components based on the instruction’s operation code (opcode).
Execution involves acting on the instruction's opcode, which specifies the CPU operation required. The program counter indicates the instruction sequence for the computer: each instruction is placed into the instruction register and, as it is executed, the program counter is incremented so that it points to the next instruction in memory. Appropriate circuitry is then activated to perform the requested task. Once an instruction has been executed, the machine cycle restarts with the fetch step. The whole process is known as the instruction cycle.
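To make the cycle concrete, here is a toy fetch-decode-execute loop in Python. The instruction names (LOAD, ADD, PRINT, HALT) are invented for this sketch rather than taken from any real processor, but the loop follows the same fetch, decode, execute and increment steps described above:

```python
# A toy instruction cycle: fetch, decode, execute, advance the program counter.
program = [
    ("LOAD", 5),      # put the value 5 into the accumulator
    ("ADD", 3),       # add 3 to the accumulator
    ("PRINT", None),  # display the accumulator
    ("HALT", None),   # stop the machine cycle
]

accumulator = 0
program_counter = 0

while True:
    opcode, operand = program[program_counter]  # fetch and decode
    program_counter += 1                        # point at the next instruction
    if opcode == "LOAD":                        # execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "PRINT":
        print(accumulator)                      # prints 8
    elif opcode == "HALT":
        break
```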
A non-maskable interrupt (NMI) is a type of hardware interrupt (or signal to the processor) that prioritizes a certain thread or process. Unlike other types of interrupts, the non-maskable interrupt cannot be ignored through the use of interrupt masking techniques.
The most widely known example of this is where a user presses control-alt-delete to create an immediate signal to the system when the computer is not responding. This is a good example because it illustrates a kind of “override” – rather than just following the general thread or process, the control-alt-delete produces a signal that the computer must and will deal with immediately.
An arithmetic logic unit (ALU), also sometimes known as an integer unit (IU), is a major component of the central processing unit of a computer system. It executes all processes related to arithmetic and logic operations that need to be done. In some microprocessor architectures, the ALU is divided into the arithmetic unit (AU) and the logic unit (LU). An ALU can be designed by engineers to calculate any operation.
As the operations become more complex, the ALU also becomes more expensive, takes up more space in the CPU and creates more heat when in use. That is why engineers make the ALU powerful enough to ensure that the CPU is powerful and fast, but not so complex that it becomes prohibitive in terms of cost and other disadvantages.
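The Python sketch below illustrates the kinds of operations an ALU carries out. The operation names are purely illustrative and do not correspond to any particular instruction set:

```python
# Illustrative ALU-style operations: arithmetic and bitwise logic.
def alu(operation, a, b):
    if operation == "ADD":
        return a + b           # arithmetic addition
    if operation == "SUB":
        return a - b           # arithmetic subtraction
    if operation == "AND":
        return a & b           # bitwise logical AND
    if operation == "OR":
        return a | b           # bitwise logical OR
    raise ValueError("unsupported operation")

print(alu("ADD", 6, 3))            # 9
print(alu("AND", 0b1100, 0b1010))  # 8, i.e. binary 1000
```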
A data bus transfers data to and from the memory of a computer, or into and out of the central processing unit (CPU). A data bus can also transfer information between two computers.
The use of the term "data bus" in computing is similar to the use of the term "electric busbar" in electronics. The electric busbar provides a way to transfer the current in a similar way to the way the data bus transfers data. In modern complicated computing systems, data is often in transit, running through various parts of the computer's motherboard and peripheral structures.
With new network designs, data is also flowing between many different pieces of hardware and a broader cabled or virtual system.
Data buses are fundamental in facilitating the data transfer that makes on-demand data transmission possible in consumer and other systems.
Although it would be technically possible for us to interact directly with a computer's hardware, placing information into the RAM, or directly requesting the CPU to perform a calculation, this would be a highly inefficient way of going about our daily computing tasks, and would be available only to highly skilled computer technicians. In today's computer-dependent world, this would be highly impractical to say the least.
Instead, we interact with the Operating System which translates our requirements to the underlying hardware of the device; asks for tasks to be completed; and finally returns the results of those operations to us, in a form which we can understand.
The operating system is a program - just a very large and complex one. Other programs, such as our word processor or calculator, work through the operating system in order to function.
The operating system can be thought of as a referee - allowing all of the competing programs and functions of a computer to have a fair turn at the computer’s hardware resources, ensuring that everyone is able to fulfil their role.
To summarise why we need an operating system:
Firstly, the lack of an operating system would make a computer virtually impossible to use without a deep knowledge of exactly how CPUs work, and how to place information into RAM or onto the bus.
Secondly, having an operating system there to make all of the decisions about resource management for us means that the computer can be set up to work in the most efficient manner possible.
Finally, computing is always in a state of change, with new programs or ways of working appearing all the time. As long as we can make these programs interact with the operating system in the way it is expecting, we will always be able to rely on the operating system to take care of the complex background work, without having to consider whether our new program will cause issues with other programs or devices.
There are 4 basic layers involved in any computing system, and with each layer come typical individuals who will generally be involved in interacting with it.
At the base of the stack there is the computer hardware. As you have learnt, we rely on the operating system to handle our interactions with the hardware layer, so it follows that anyone involved in the design of an operating system will have a particular focus on how the operating system will interact with the many components of computer hardware on which it may run.
A programmer is someone who creates new programs to run on the operating system. Programmers deal with the operating system because they need to ensure that any program they create will be able to interact with it. They will also be interacting with utilities, which can be thought of as programs that run on your computer and make it work efficiently, usually with little or no user interaction needed.
The top layer is for Applications and programs, which is where the vast majority of computer users will be working on a daily basis. An end user will be using the computer as a tool, to achieve a goal. To do this, they will make use of Applications or Programs specific to the task they have at hand.
Next, let’s examine some of the components of the operating system in more detail. The central element of any operating system is the kernel, or nucleus. The kernel can be thought of as being all-powerful, and the ultimate arbiter of exactly what can and can’t happen on any computer.
The kernel is the first part of the operating system that gets loaded into the computer’s memory when it starts. It is loaded into a protected area of memory, and is subject to stringent security controls. After all, given its power we don’t want just anyone to be able to easily take control of it and thus take control of your computer!
The kernel is retained in the computer’s memory while it is running - it needs to be, as it is responsible for the interaction between the operating system as a whole, and the computer’s hardware. No kernel means no working computer!
The shell is the part of the operating system which you interact with, and allows you to ask the computer to carry out tasks. Think of it as surrounding the kernel, hence the term shell.
Generally, users will use a Graphical User Interface, or GUI, to interact with the shell but technically adept users may also want to use a Command Line Interface, or CLI, to achieve their computing goals.
In Unix/Linux operating systems there are many different kinds of CLI shell, but one of the most popular is the Bourne Again Shell, or BASH. Technical users can create BASH scripts, allowing the automation of tasks.
In Windows, the CLI shell is also known as the command prompt. Windows users can create BATCH scripts to run tasks automatically or sequentially.
A microkernel is a piece of software that contains the near-minimum amount of functions and features required to implement an operating system.
It provides the minimal number of mechanisms - just enough to run the most basic functions of a system, in order to maximize the implementation flexibility. It allows for other parts of the OS to be implemented efficiently since it does not impose a lot of policies and requirements.
In computer architecture, multithreading is the ability of a CPU, or a single core in a multi-core CPU, to execute multiple processes or threads concurrently, with support from the operating system. This approach differs from multiprocessing: in multithreading, the processes and threads share the resources of a single core or multiple cores.
A thread, also called a lightweight process, is a dispatchable unit of work within a process. It has a definite beginning and end, and it runs inside a single process, sharing the address space, allocated resources and environment of that process.
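As an illustration, the Python sketch below runs two threads inside a single process. Both threads share the same memory - the counts dictionary - which is the key difference from running separate processes:

```python
# Two threads running concurrently within one process, sharing memory.
import threading

counts = {"worker-1": 0, "worker-2": 0}

def work(name):
    for _ in range(100_000):
        counts[name] += 1       # each thread updates the shared dictionary

threads = [threading.Thread(target=work, args=(name,)) for name in counts]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # wait for both threads to finish

print(counts)                   # {'worker-1': 100000, 'worker-2': 100000}
```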
Symmetric Multiprocessing (SMP) is primarily implemented in resource-intensive computing environments that require high computing power to execute program and application tasks and processes. It involves installing two or more processors on one machine. SMP combines hardware and software multiprocessing: the hardware provides raw computing power, and the software manages the segregation, selection and distribution of the workload.
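By contrast with threads, separate processes each have their own memory. The Python sketch below uses the standard multiprocessing module to spread work across several worker processes, which the operating system can schedule on different processors or cores where they are available:

```python
# Spreading work across multiple processes with a pool of workers.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:            # four worker processes
        results = pool.map(square, range(10))  # distribute the work
    print(results)                             # [0, 1, 4, 9, ..., 81]
```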
The Windows operating system is by far the most popular operating system for desktop computers globally, with over 85% of market share. The current version is Windows 10, released to the general public in July 2015.
Windows 10 released with a number of significant new features including some relating to the security of the operating system, and the computer on which it is running.
Like other major operating systems, Windows 10 now allows for the use of biometric characteristics to secure computers, such as facial recognition, iris scanning and fingerprint scanning.
It also contains a number of features and enhancements for on-line services and functionality. Microsoft created a new web browser, Edge, to replace its older Internet Explorer browser. Integration with Microsoft's personal assistant, Cortana, allows users to browse and search the web using text or voice commands.
Windows 10 has enhanced multimedia capabilities, supporting the latest formats for audio and video. FLAC stands for Free Lossless Audio Codec, an audio format similar to MP3 but lossless, meaning that audio is compressed in FLAC without any loss in quality. High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a video compression standard, one of several potential successors to the widely used AVC (H.264 or MPEG-4 Part 10). It supports resolutions up to 8K UHD.
The Linux operating system was created by Linus Torvalds in 1991 so that he could use a Unix-like operating system on the kind of hardware that typically ran Windows. It has always been developed by the computing community at large rather than one specific company, and is available for free, including all of its source code. By contrast, the source code for Windows is a closely guarded secret.
The operating system has been consistently updated since its inception, and has gained market share year on year. Whilst it is becoming more popular for home users, its market share only stands at around 2%. It has seen far greater uptake in server computing with market share figures well in excess of Windows, due to its reliability, stability and networking capabilities. Interestingly, it is also the operating system used on nearly 100% of super-computers.
Linux has hundreds of different distributions, or distros, all closely related under the hood. Distros have been created for specific types of computing, from Kali Linux, which is used for computer security testing, through to distros designed for heavy-duty server operation.
Unix is often regarded as one of the earliest operating systems for personal computing. It came out of attempts to develop a mainframe operating system that would allow multiple users to access a system at any one time.
Unix is very flexible and can be installed on many different types of machines, including mainframe computers, supercomputers, and micro-computers.
Unix is stable and does not crash as often as Windows does. It therefore requires less administration and maintenance.
Unix was designed with built-in security and permissions features from the beginning. By contrast, Windows only really began implementing these features with the release of Windows NT in the 1990s.
About 90% of the Internet relies on Unix/Linux operating systems running Apache, the world's most widely used web server, which is free.
Mac OS is an operating system designed to work with computer hardware produced by the Apple Company, and was created by Apple themselves.
It was the world's first commercial operating system with a graphical user interface.
What is now known as Classic Mac OS was released in 1984, to work on the first Apple Macintosh computers. The operating system was based on the earlier Lisa Operating System, used on Apple computers prior to the release of the Macintosh.
In 2001, Apple released a completely new version of the operating system, named OS X, which was based on the FreeBSD version of Unix.
The first 9 subversions of OS X were codenamed after big cats, with subsequent versions being named after mountain ranges. One significant change to the operating system came with Tiger. Macintosh computers up to that point had always had different CPUs to Windows-based machines, and accordingly the Mac OS worked very differently to Windows in the background. From Tiger onwards, OS X was designed to run on the same types of CPUs that power Windows computers.
Following Apple’s move into the mobile devices market, a variant of OS X, named iOS, was created specifically to work with portable Apple devices.
With the rise of the smartphone phenomenon, mobile phones became small portable computers. As such, new operating systems had to be devised which would allow these computers in our pockets to function, and provide all of the computing power we desired.
As mentioned previously, Apple created iOS to run on their portable devices.
In competition with this, Google created the Android operating system. This is based on Linux, and as such is open source and free.
Android and iOS are the dominant players in this market, but other vendors do still hold small shares of the market.
Microsoft did create a version of Windows especially for mobile devices, but it has never been a huge success. They have used some of the functionality developed for Windows Mobile in the latest versions of their main operating system.
That's the end of this video.
Paul began his career in digital forensics in 2001, joining the Kent Police Computer Crime Unit. In his time with the unit, he dealt with investigations covering the full range of criminality, from fraud to murder, preparing hundreds of expert witness reports and presenting his evidence at Magistrates, Family and Crown Courts. During his time with Kent, Paul gained an MSc in Forensic Computing and CyberCrime Investigation from University College Dublin.
On leaving Kent Police, Paul worked in the private sector, carrying on his digital forensics work but also expanding into eDiscovery work. He also worked for a company that developed forensic software, carrying out Research and Development work as well as training other forensic practitioners in web-browser forensics. Prior to joining QA, Paul worked at the Bank of England as a forensic investigator. Whilst with the Bank, Paul was trained in malware analysis, ethical hacking and incident response, and earned qualifications as a Certified Malware Investigator, Certified Security Testing Associate - Ethical Hacker and GIAC Certified Incident Handler. To assist with the team's malware analysis work, Paul learnt how to program in VB.Net and created a number of utilities to assist with the de-obfuscation and decoding of malware code.