Foundation Certificate in Cyber Security (FCCS)
The course is part of this learning path
This course provides a strong foundation on the fundamentals of cybersecurity, taking you through cyber risks, how to protect against them, and how cybercriminals can use their target's digital footprint to find exploits.
The objectives of this course are to provide you with an understanding of:
- Security Information and Event Management (SIEM) processes and architecture, SIEM features, user activity monitoring, real-time event correlation, log retention, file integrity monitoring, security auditing and automated auditing, what to audit, implementation guidelines, what to collect, Windows Event Log, UNIX Syslog, logging at an application level, audit trail analysis, approaches to data analysis
- Cyber exploits, understanding malware, cross-site scripting, SQL injection, DDoS, input validation, buffer overflow, targeted attacks and advanced persistent threats (APT)
- Uses of encryption technology, symmetric/asymmetric key encryption, public and private keys, weaknesses, decryption, hashing, digital signatures, PKI, certificate authorities, data at rest and in transit, SSL
- Internet foundations, the domain name system, WHOIS (including a worked example), traceroute, Internet analysis, search engines, tools for finding information on people and companies, username searches, email lookups, disposable emails, passwords, internet communities and culture, deep web directories and leaking websites
This course is ideal for members of cybersecurity management teams, IT managers, security and systems managers, information asset owners and employees with legal compliance responsibilities. It acts as a foundation for more advanced managerial or technical qualifications.
There are no specific prerequisites for this course; however, a basic knowledge of IT, an understanding of the general principles of information technology security, and an awareness of the issues involved with security control activity would be advantageous.
We welcome all feedback and suggestions - please contact us at firstname.lastname@example.org if you are unsure about where to start or if you would like help getting started.
Welcome to this video on protective monitoring.
In it, you’ll learn about protective monitoring and the technologies available to us to achieve it, including SIEM tools, security auditing, network analysis, network logging, data analysis and AI.
Firstly, what do I mean by the term protective monitoring?
Protective monitoring is the process of monitoring activity on a network, with a view to identifying potentially malicious behaviours and using this detection to protect our network from harm, or to mitigate any harm that is already occurring.
One of the technological solutions we could deploy to help us with this task is a Security Information and Event Management, or SIEM, tool.
A SIEM tool is basically a repository for any and all logging or telemetry recording systems that you have deployed within your network.
These types of tools are recording events that are happening on devices within the network, and they send this information to the SIEM tool for storage and processing.
Although logs or telemetry data can be useful in identifying suspicious activity, the process of examining them individually is both time consuming and prone to ‘slipping through the gaps’. The power of a SIEM tool lies not only in its ability to process multiple log sources at the same time, but to create correlations between events recorded in disparate logs.
Put simply, low level suspicious activity noted in one log source, combined with low level suspicious activity noted in another may indicate that something of a much higher level of suspicion is actually happening.
Creating appropriate correlations between log sources and log events requires a knowledge of statistical analysis and a flair for creative thinking. It is very easy to bolt together two disparate log sources and say that if x happens at the same time as y, then we definitely have a nation state trying to steal our intellectual property, but this does not necessarily make it the truth of the matter. Patterns must be thoroughly analyzed to try to reduce the instances of false positive alerts, or the SIEM tool simply becomes a somewhat overwhelming source of alerts for humans or other systems to action.
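To make this concrete, here is a minimal sketch of cross-source correlation. All of the data and names here are illustrative: two invented log sources, each holding low-level suspicious events as (timestamp, username) pairs, are joined when the same user appears in both within a short time window.

```python
from datetime import datetime, timedelta

# Hypothetical events from two separate log sources: each entry is a
# (timestamp, username) pair representing a low-level suspicious event.
failed_logins = [
    (datetime(2024, 1, 10, 2, 14), "alice"),
    (datetime(2024, 1, 10, 2, 15), "alice"),
]
large_downloads = [
    (datetime(2024, 1, 10, 2, 20), "alice"),
    (datetime(2024, 1, 10, 9, 5), "bob"),
]

def correlate(source_a, source_b, window=timedelta(minutes=10)):
    """Raise an alert when events for the same user appear in both
    sources within `window` of each other."""
    alerts = []
    for ts_a, user_a in source_a:
        for ts_b, user_b in source_b:
            if user_a == user_b and abs(ts_b - ts_a) <= window:
                alerts.append((user_a, ts_a, ts_b))
    return alerts

alerts = correlate(failed_logins, large_downloads)
```

Neither event is alarming on its own, but repeated failed logins followed minutes later by a large download from the same account is exactly the kind of combined signal a SIEM correlation rule is built to surface.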
Next, let’s consider the reasons why you should think about deploying a SIEM tool.
The number of threats to the information held by organizations is rising, and will continue to do so. It is becoming impossible for protective monitoring to be solely reliant on humans.
Whilst traditional security tools are very good at spotting traditional attacks, the attackers are smart and will always look to employ techniques that can bypass these traditional tools.
Sometimes it may be that the only way that an attack can be spotted is by combining activities recorded by several different security tools.
Having more and more security tools deployed within networks has led to an upsurge in the quantity of network events that get logged.
A SIEM tool is an efficient way to bring all of these logs into one big place for effective data mining. Organizations may also face regulatory or statutory requirements to be able to detect attacks and protect information.
There are a few key features that make a SIEM a useful tool.
The first thing any SIEM needs to do is to gather information. This information can be sent to it from any number of sources, and the SIEM must always be able to identify both the source of the information and the format it is transmitted in. Without knowing these, it will not understand how to process the information, and crucially what that data actually means.
Once the events have been collected, they must be stored. Most SIEMs will employ some sort of proprietary or open source storage mechanism. Importantly, whichever is chosen the SIEM tool must be able to quickly and easily find and retrieve data from this storage area for rapid processing.
The processing is carried out by the core engine of the SIEM tool. This engine is configured with sets of rules that allow it to understand the data being presented by each log source, and to appropriately correlate this data with other log sources to produce an overall picture of events occurring within the network.
Finally, the SIEM tool needs some way of conveying its findings to a human, through a user interface. Within this interface, the SIEM tool may represent alerts it has generated from its correlations of events; it may show its own telemetry, telling the user what logs it has processed and when; it may also give the user the opportunity to perform their own searches against the data sets.
This diagram shows the flow of processes within a SIEM tool.
Data is collected and processed to extract its core meaning.
The SIEM tool enriches these findings by correlating disparately sourced logs, or applying statistical analysis to spot anomalous behaviour.
Finally, this information is reported for further action.
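The collect, normalise, enrich and report stages described above can be sketched end to end as a toy pipeline. Everything here is illustrative: the log line format, field names and the "failed login outside business hours" rule are invented for the example.

```python
# A toy SIEM pipeline: collect raw lines, normalise them into a common
# schema, enrich them by flagging anomalies, then report.
raw_events = [
    "2024-01-10T02:14:00 auth FAIL user=alice",
    "2024-01-10T09:00:00 auth OK user=bob",
]

def normalise(line):
    # Parse a raw log line into a common event schema.
    ts, source, outcome, user_kv = line.split()
    return {"ts": ts, "source": source, "outcome": outcome,
            "user": user_kv.split("=", 1)[1]}

def enrich(event):
    # Flag failed events occurring outside business hours (08:00-18:00).
    hour = int(event["ts"][11:13])
    event["suspicious"] = event["outcome"] == "FAIL" and not 8 <= hour < 18
    return event

def report(events):
    # Surface only the events that warrant human attention.
    return [e for e in events if e["suspicious"]]

alerts = report(enrich(normalise(line)) for line in raw_events)
```

A real SIEM runs the same shape of pipeline over millions of events, with the enrichment stage driven by configurable rule sets rather than hard-coded checks.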
Another approach to protective monitoring involves security auditing.
Security auditing is the assessment of a system or application.
This audit can be achieved manually by talking to staff involved with the system or by performing scans of the system and looking for any vulnerabilities. We would also analyze the ways that access to the system is controlled, either through computer based controls or physical controls such as security doors.
In an automated audit a computer system is used to monitor and report on the systems that are under scrutiny.
The logs produced by systems or applications are examined to achieve automated security auditing, bringing us back to the use of a SIEM tool.
SIEM tools are able to handle a wide variety of log formats, from a wide variety of log sources. The difficulty will always be in determining exactly what the information in any given log source actually means. An application developer is free to log whichever events they choose, and to log those events in any format they choose.
Due to the increase in regulatory or statutory requirements that are being imposed on the information held by organizations, many modern logging tools and methodologies employ a standardized approach to the data they log, and the format they log it in.
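One widely used standardised log format is ArcSight's Common Event Format (CEF), in which a pipe-delimited header is followed by key=value extension fields. As a hedged sketch (it ignores CEF's escaping rules), a parser for such lines might look like this; the sample event values are made up.

```python
def parse_cef(line):
    """Parse a line in the Common Event Format: seven pipe-delimited
    header fields followed by a space-separated key=value extension."""
    parts = line.split("|", 7)
    keys = ["vendor", "product", "device_version",
            "signature_id", "name", "severity"]
    event = dict(zip(keys, parts[1:7]))
    event["cef_version"] = parts[0].split(":", 1)[1]
    if len(parts) == 8:
        # Naive extension parsing: real CEF values may contain
        # escaped spaces, which this sketch does not handle.
        for pair in parts[7].split():
            k, _, v = pair.partition("=")
            event[k] = v
    return event

sample = ("CEF:0|Security|threatmanager|1.0|100|"
          "worm successfully stopped|10|src=10.0.0.1 dst=2.1.2.2 spt=1232")
event = parse_cef(sample)
```

Because every CEF producer uses the same header layout, a SIEM can ingest events from many vendors without per-source parsing rules.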
These standards could be based on the Common Criteria – the international standard, ISO/IEC 15408, for computer security certification.
The Orange Book is a UK Government (HM Treasury) publication which sets out principles for the management of risk.
I have previously discussed the possibility of eavesdropping on data that is transiting our network, but this technology can also be employed to assist us with our protective monitoring.
If we are able to capture suspicious traffic directly from the network, in a format suitable for dissection and analysis, then we can understand exactly what this traffic means, and from there formulate strategies to mitigate the threat.
Network analyzers, or packet sniffers, are software tools that allow the data being transmitted over a network to be captured and inspected.
These tools can be used to troubleshoot problems on the network or to reveal information about the network, the devices on the network and the data they are transmitting.
While sniffers do not cause network damage, they have the potential to cause real harm because they can allow an attacker to find PINs, passwords and other confidential information, especially data transmitted in plain text.
The most popular and well known of these tools is probably Wireshark, formerly known as Ethereal.
Several other packet analysis tools are available, but Wireshark is a free download with versions available for Windows, macOS and Unix/Linux, and it is built into the penetration testing Linux distribution Kali.
Wireshark is frequently updated, and the current versions for a variety of different operating systems are available from the website wireshark.org.
The website contains many useful resources including extensive documentation, tutorial information and a library of sample captures.
During the installation process Wireshark will install a packet capture library called WinPcap (Npcap in current versions) for Windows devices, or libpcap for Linux devices, which will capture the data the network interface device receives and enable Wireshark to analyse it.
The pcap libraries are also used by other tools such as Nmap which copy network data for analysis.
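The files these libraries write follow the classic libpcap capture format: a 24-byte global header, then one 16-byte record header per captured packet. As a minimal sketch (little-endian, microsecond-timestamp files only), the format can be read with nothing but the standard library; the "capture" below is synthetic, built in memory for illustration.

```python
import io
import struct

def read_pcap(stream):
    """Minimal reader for the classic libpcap file format: parse the
    24-byte global header, then iterate 16-byte per-packet headers."""
    global_hdr = stream.read(24)
    magic = struct.unpack("<I", global_hdr[:4])[0]
    assert magic == 0xA1B2C3D4  # little-endian, microsecond timestamps
    packets = []
    while True:
        hdr = stream.read(16)
        if len(hdr) < 16:
            break
        ts_sec, ts_usec, incl_len, orig_len = struct.unpack("<IIII", hdr)
        packets.append((ts_sec, stream.read(incl_len)))
    return packets

# Build a tiny synthetic capture containing one 4-byte "packet":
# global header (magic, version 2.4, tz, sigfigs, snaplen, linktype),
# then one record header and its payload.
buf = io.BytesIO()
buf.write(struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1))
buf.write(struct.pack("<IIII", 1700000000, 0, 4, 4) + b"\xde\xad\xbe\xef")
buf.seek(0)
packets = read_pcap(buf)
```

Real captures would of course come from Wireshark or tcpdump rather than being hand-built, and dissecting the payload bytes into protocol layers is where tools like Wireshark do the heavy lifting.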
Wireshark is more than just a simple packet capture and display tool and contains modules that allow, for example, the decryption of encrypted wireless network traffic, the decryption of TLS/SSL encrypted traffic, and analysis, including replay of VoIP traffic.
This diagram shows a simple explanation for how a sniffing tool works.
The first thing that any sniffer must achieve is the ability to see all of the traffic that is flowing through a network.
Ordinarily, network attached devices will only see the information that is directed to them. Any traffic not directed towards a specific network device will just flow past, much like someone watching the postman walk past the end of their garden path without stopping to make a delivery. They can assume that the postman is carrying some sort of communication, but won’t be able to know exactly what that communication is.
If the NIC of a computer device is placed into promiscuous mode however, it will receive a copy of every single communication that is flowing around the network, much like waylaying the postman on their round and demanding a copy of every letter they are carrying!
If the network device being used to perform the sniffing is a switch, then it must be configured with a Switched Port Analyser, or SPAN, port. All of the traffic that passes through the switch is copied to the SPAN port, which can then forward it on to whichever network analyzer tool is being used.
Let’s now return to logs, and the different types of log information you may want to collect for analysis or for forwarding to your SIEM tool.
Having established that physical security is an integral part of an over-arching cyber security strategy, let’s consider the type of physical security information that could be useful.
Many organizations employ entry systems that require staff to swipe a card or present a fingerprint in order to gain access to the building, or areas within the building.
Access is usually granted by checking the presented credentials against a computer system, which will permit or deny access according to the rules it has stored. This computer system can log all of the access attempts made; where they were made; and perhaps most importantly, when they were made.
This information can provide a great deal of enrichment to other log sources – for example, a log source might show a particular user accessing a sensitive document in the middle of the night, with no corresponding record of them entering the building. This might be a sign of suspicious activity, particularly if the user has no means of remotely accessing the document.
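That middle-of-the-night scenario can be sketched as a simple cross-reference between two invented record sets: building badge swipes and sensitive-file accesses. All names, times and the 12-hour window are illustrative.

```python
from datetime import datetime, timedelta

# Illustrative records: building badge swipes and sensitive-file accesses.
badge_swipes = {
    "bob": [datetime(2024, 1, 10, 8, 55)],
    "alice": [],  # alice never badged in that day
}
file_accesses = [
    ("alice", datetime(2024, 1, 10, 2, 30), "payroll.xlsx"),
    ("bob", datetime(2024, 1, 10, 9, 10), "payroll.xlsx"),
]

def flag_ghost_access(accesses, swipes, max_gap=timedelta(hours=12)):
    """Flag file accesses with no badge swipe in the preceding window."""
    flagged = []
    for user, ts, doc in accesses:
        recent = [s for s in swipes.get(user, [])
                  if timedelta(0) <= ts - s <= max_gap]
        if not recent:
            flagged.append((user, ts, doc))
    return flagged

suspicious = flag_ghost_access(file_accesses, badge_swipes)
```

Bob's access is preceded by a badge swipe and passes; Alice's is not, so it is flagged for investigation (though remote access would still need to be ruled out before calling it malicious).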
The Microsoft Windows operating system captures a great deal of information relating to events that have occurred during its operations.
The number of event logs stored varies enormously, but a general rule of thumb would be that any modern Windows version could have anything between fifty and four hundred separate event logs stored.
Not all of these will be full of useful information, but there are three event logs that are amongst the most used, and therefore most useful.
System: Used by applications running under system service accounts (installed system services), drivers, or a component or application that has events that relate to the health of the computer system
Application: Events for all user-level applications. This log is not secured and it is open to any applications. Applications that log extensive information should define their own application-specific log if possible
Security: The ‘Audit’ log, this event log is for the exclusive use of the Windows Local Security Authority. User events may appear as audits if supported by the underlying application
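Windows events can be exported as XML (for example via the Event Viewer or `wevtutil`), which makes them straightforward to process programmatically. The sketch below pulls the common System fields out of one such event; the XML structure and namespace are real, but the sample values are invented (event ID 4625 is the Security log's failed-logon event).

```python
import xml.etree.ElementTree as ET

# A trimmed Windows event as exported to XML; values are illustrative.
SAMPLE = """
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Security-Auditing"/>
    <EventID>4625</EventID>
    <TimeCreated SystemTime="2024-01-10T02:14:00Z"/>
    <Channel>Security</Channel>
  </System>
</Event>
"""

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def summarise(xml_text):
    """Extract the fields most SIEM correlations key on."""
    system = ET.fromstring(xml_text).find("e:System", NS)
    return {
        "provider": system.find("e:Provider", NS).get("Name"),
        "event_id": int(system.find("e:EventID", NS).text),
        "channel": system.find("e:Channel", NS).text,
    }

summary = summarise(SAMPLE)
```

The provider name, event ID and channel are usually enough for a SIEM to classify a Windows event; the event-specific detail lives in an EventData section omitted here.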
Unix syslogs are made up of the elements on screen.
Whilst Windows stores its information in its event logs, any Unix based operating systems, which would include Linux or OS X, uses the Syslog mechanism.
Each operating system can implement its own variant of the syslog facility, with no specific requirements for consistency in formatting of the logged data. However, every implementation will have the same core elements in common.
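Those common elements can be seen by pulling apart a classic BSD-style (RFC 3164) syslog line: a priority number encoding facility and severity, a timestamp, a hostname, a tag identifying the process, and the message itself. A minimal sketch of such a parser (the sample line is the well-known example from RFC 3164):

```python
import re

def parse_syslog(line):
    """Parse a classic BSD-style (RFC 3164) syslog line into its common
    elements: priority, timestamp, hostname, tag and message."""
    m = re.match(
        r"<(\d+)>"                         # PRI: facility * 8 + severity
        r"(\w{3} [ \d]\d \d\d:\d\d:\d\d) " # timestamp
        r"(\S+) "                          # hostname
        r"([^:\[]+)(?:\[(\d+)\])?: "       # tag, optional [pid]
        r"(.*)",                           # free-form message
        line)
    pri = int(m.group(1))
    return {
        "facility": pri // 8,
        "severity": pri % 8,
        "timestamp": m.group(2),
        "host": m.group(3),
        "tag": m.group(4),
        "pid": m.group(5),
        "message": m.group(6),
    }

entry = parse_syslog(
    "<34>Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick")
```

Here priority 34 decodes to facility 4 (auth) and severity 2 (critical), which is exactly the kind of structured detail a SIEM uses to prioritise Unix events.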
One of the reasons that modern versions of Windows seem to store so many event log files is that many Windows based applications choose to log their activities into their own event log file, rather than using the native Windows event logs.
This may cause issues in understanding exactly what has occurred with an application, and these issues could be exacerbated should the application perform its duties with any sort of privileged access rights.
This privileged access may not be recorded in any of the logs that are being actively monitored, leading to an incomplete picture of what is actually happening on the network.
Another angle to this problem is that the application may log to the standard event log files, but in a way that the logged information does not conform to an expected format, causing errors within the log files themselves.
If there is a desire to correctly capture log information from applications with a non-standard approach to logging, it may require a programmatic solution to capture the information and process it into an acceptable format.
All logged information will assist in creating an audit trail of the sequence of events that has occurred at any given time, within the network.
Analyzing these audit trails can assist in responding to a security incident, or fine-tuning the logging and correlation processes within the SIEM tool.
It may also assist in verifying that logging is happening as predicted, and that no log sources have suddenly disappeared from view.
An essential part of preparing any protective monitoring regime is establishing a baseline of what is normal behavior on the network. It is by making comparisons to this baseline that it is possible to identify suspicious behavior. However, a baseline is not a static entity – it cannot be based on a single point in time snapshot.
Network systems are constantly evolving, even down to the smallest changes such as adding a new user to the system, or updating a particular application. This means that the base-lining process must always be a regular item of work.
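A simple statistical baseline makes the idea concrete. Here the history of hourly login counts and the three-standard-deviation threshold are both illustrative choices; real baselines are richer and, as noted above, must be refreshed as the network changes.

```python
import statistics

# Hypothetical hourly login counts observed over previous "normal"
# periods, used to build a simple statistical baseline.
history = [48, 52, 50, 47, 53, 49, 51, 50]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(count, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    from the baseline mean."""
    return abs(count - mean) > threshold * stdev

def update_baseline(new_count):
    # Baselines are not static: fold in new normal observations so the
    # model tracks the evolving network (re-deriving mean/stdev after).
    history.append(new_count)
```

With this history, a spike to 120 logins in an hour is flagged while 55 is not; in practice the baseline would be recomputed on a schedule rather than treated as a one-off snapshot.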
This diagram gives a basic explanation of how to perform data analysis.
With any analysis work, it is important to sieve out only the items of interest, whether that be by defining a timeframe or a particular set of events.
The baseline can assist in quick identification of events of interest, which can then be correlated against events found in other log sources.
One of the biggest opportunities afforded through carrying out protective monitoring using a SIEM tool is that of leveraging the power of machine learning and Artificial Intelligence, or AI.
AI and machine learning can help to scan the vast amounts of data coming into the Security Operations Centre, or SOC, and reveal patterns and inconsistencies. When looking for these patterns over time in very large volumes of data, humans cannot possibly keep up – this is a job perfectly suited to AI.
AI will not surpass humans as the best defense against cyberattacks, at least not the most complex ones.
It is believed that within the next five to ten years AI will be able to deal with some of the lower-level automated attacks, such as those used in financial crime, but the best defense against the truly targeted, government-grade attacker – where there is a human behind the keyboard – will still be a person.
This brings us to the end of this lecture.
Paul began his career in digital forensics in 2001, joining the Kent Police Computer Crime Unit. In his time with the unit, he dealt with investigations covering the full range of criminality, from fraud to murder, preparing hundreds of expert witness reports and presenting his evidence at Magistrates, Family and Crown Courts. During his time with Kent, Paul gained an MSc in Forensic Computing and CyberCrime Investigation from University College Dublin.
On leaving Kent Police, Paul worked in the private sector, carrying on his digital forensics work but also expanding into eDiscovery work. He also worked for a company that developed forensic software, carrying out Research and Development work as well as training other forensic practitioners in web-browser forensics. Prior to joining QA, Paul worked at the Bank of England as a forensic investigator. Whilst with the Bank, Paul was trained in malware analysis, ethical hacking and incident response, and earned qualifications as a Certified Malware Investigator, Certified Security Testing Associate - Ethical Hacker and GIAC Certified Incident Handler. To assist with the team's malware analysis work, Paul learnt how to program in VB.Net and created a number of utilities to assist with the de-obfuscation and decoding of malware code.