Module 5 - Technical Security Controls
This course defines the different types of malware and outlines the impact that each one can have on an organization’s computer systems. It also details the different methods through which networks can be accessed, and how the related security risks can be controlled. Finally, it defines what cloud computing is and explains the different deployment models, before looking at the security requirements of an organization’s IT infrastructure and the documentation required to support this.
The objectives of this course are to provide you with an understanding of:
- The different types of malware and the impact each one can have on an organization’s computer systems
- Methods of accessing networks and how related security risks can be controlled
- The security issues related to networking services, including mobile computing, instant messaging and voice over IP
- Cloud computing deployment models and the security implications of cloud services
- The security requirements of an organization’s IT infrastructure and the documentation required to support this
This course is ideal for members of information security management teams, IT managers, security and systems managers, information asset owners and employees with legal compliance responsibilities. It acts as a foundation for more advanced managerial or technical qualifications.
There are no specific prerequisites for studying this course; however, a basic knowledge of IT, an understanding of the general principles of information security, and an awareness of the issues involved with security control activity would be advantageous.
We welcome all feedback and suggestions - please contact us at email@example.com if you are unsure about where to start or if you would like help getting started.
Welcome to this video on IT infrastructure security.
In this video we’ll look at the security requirements of an organization’s IT infrastructure and its associated documentation. This will provide you with an understanding of the technical security controls that can be used to mitigate risk, including:
· System, application and software patching;
· Data back-up;
· Protective monitoring;
· Network intrusion detection and prevention devices; and
· Penetration testing.
Let’s start by looking at operational security.
One of the key functions in protecting systems is implementing a robust patching policy to protect the organization from attack. This applies to all systems and application software running on the managed host systems.
Some applications are more susceptible to attack than others. For example, Adobe Acrobat Reader and Adobe Flash have had many serious security flaws that have been exploited by hackers. As most computers have these two products installed, they’re obvious targets.
A patching policy should also apply to all embedded devices, like network infrastructure components and SCADA systems. Many organizations have a team dedicated to analysing emerging threats and patches offered by vendors. They establish the criticality of the patch and whether it impacts their estate.
While software vendors typically supply a severity rating for their patches, it might be that, for a given IT infrastructure, the potential impact of the vulnerability doesn’t warrant the immediate application of the patch. So, the security team should recommend the timeframe over which a patch or set of patches should be deployed – perhaps stating that a critical patch is deployed within 48 hours while standard non-critical patches are deployed within 4 weeks through a normal system update cycle.
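The policy timeframes described above can be expressed as a simple deadline calculation. This is a minimal sketch: the severity names and the 48-hour and 4-week windows are taken from the example in the text, not from any standard.

```python
from datetime import datetime, timedelta

# Illustrative deployment windows from the example patching policy:
# critical patches within 48 hours, standard patches within 4 weeks.
DEPLOYMENT_WINDOWS = {
    "critical": timedelta(hours=48),
    "standard": timedelta(weeks=4),
}

def patch_deadline(released: datetime, severity: str) -> datetime:
    """Return the date by which a patch of the given severity must be deployed."""
    return released + DEPLOYMENT_WINDOWS[severity]
```

A security team could run each vendor advisory through a function like this to produce a deployment schedule for the normal update cycle.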
Patches should be tested before they're deployed – they can cause adverse side effects if the vendor hasn't carried out enough regression testing.
All data, including transaction logs and audit trails, should be backed up. This means making copies of data so that a system can be restored in the event of data loss.
The organization's business requirements should dictate how long it should take to recover from a failure. This is typically documented in the Business Impact Assessment, performed during the risk assessment phase of the Information Security Management System implementation.
Archiving is the planned movement of old data from online storage onto a less expensive storage tier for long-term retention. Historically, physical tape was used because it was inexpensive. Now, there are many types of storage devices and options for backing up and archiving data, including Storage Area Networks and Virtual Tape Libraries.
A backup and recovery policy should incorporate the backup strategy. The factors to consider include:
· Whether the backups are full or incremental, or a combination of both;
· The frequency of backup;
· The backup rotation – a typical method is grandfather-father-son where three generations of backup are held to provide maximum protection against a malware infection or corrupt data; and
· Storing backups in a secure offsite location.
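The grandfather-father-son rotation mentioned above can be sketched as a classification rule. This is only an illustration: the choice of monthly backups on the first of the month, weekly backups on Sundays, and the retention periods in the comments are assumptions, not part of any particular policy.

```python
from datetime import date

def gfs_label(d: date) -> str:
    """Classify a backup date under a simple grandfather-father-son scheme.

    Assumed scheme: monthly 'grandfather' on the first of the month,
    weekly 'father' on Sundays, daily 'son' otherwise.
    """
    if d.day == 1:
        return "grandfather"   # e.g. retained for a year
    if d.weekday() == 6:       # Sunday
        return "father"        # e.g. retained for a month
    return "son"               # e.g. retained for a week
```

Holding three generations like this means a malware infection or corruption discovered weeks later can still be recovered from an older, clean generation.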
Recovery testing should also be covered in the policy. It's common for an organization to discover that their backups haven't been working, exposing them to a severe risk of data or service loss. Recovery tests should be performed at least once a month and event logs related to backups checked for errors.
An offsite storage facility should have the same physical security regime as the primary data centre – backup tapes could hold an organization's trade secrets, private client details and system code.
Now we’ll move on to look at auditing in relation to the collection of audit event information, the analysis of the data and protective monitoring.
Most organizations have an auditing policy that covers how each of these three aspects of the security service must be met. Audit information can be collected from almost any component in an operating system, network and application. As most applications generate audit event information, this event source should also be considered for collection.
The auditing policy should include what information should be collected for each type of event. For example, for logon events, the time of the logon and the workstation used is likely to be collected.
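A logon audit record of the kind described above might look like the following sketch. The field names are illustrative assumptions, not taken from any logging standard; real formats vary by platform.

```python
import json
from datetime import datetime, timezone

def logon_event(user: str, workstation: str, success: bool) -> str:
    """Build one JSON audit record for a logon event.

    Field names here are hypothetical, chosen to match the policy
    example: who logged on, when, and from which workstation.
    """
    record = {
        "event": "logon",
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "workstation": workstation,
        "success": success,
    }
    return json.dumps(record)
```

Defining the fields per event type in the policy, as here, makes later analysis and correlation far easier than free-text log lines.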
In a large IT infrastructure with many components, managing audit trails can be difficult – so, some organizations have implemented a centralised audit collection service through a Security Information and Event Management – or SIEM – solution.
In all cases, it’s important to have an accurate time source used across the infrastructure to allow audit events to be correlated between systems.
Audit information is generally collected for two primary reasons:
· To support incident management and forensic examination – if an incident has occurred, the audit information can establish what happened and support further investigations; and
· To enable protective monitoring.
The viewing and analysis of audit logs should be documented in the auditing policy which should also state the type of reports that need to be produced.
Protective monitoring is an emerging capability offered by advanced security operations centres. It is defined as:
“Ensuring that system owners are provided with a real-time feed of information regarding the status of ICT systems, providing awareness of activities of the threat sources and enabling security incidents to be detected, investigated and effectively remediated.”
Real-time is an important concept here. A system with protective monitoring should alert operators when critical events occur. When audit events are sent to a SIEM, they can trigger real-time alerts to an operator located in the Security Operations Centre.
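The kind of correlation rule a SIEM might evaluate as events arrive can be sketched in a few lines. This is a toy stand-in, not any vendor's API: the event shape matches the logon example earlier, and the threshold of five failures is an arbitrary illustration.

```python
from collections import defaultdict

def detect_bruteforce(events, threshold=5):
    """Flag accounts reaching `threshold` failed logons.

    A toy stand-in for a real-time SIEM correlation rule; each event
    is assumed to be a dict with 'event', 'user' and 'success' keys.
    """
    failures = defaultdict(int)
    alerts = []
    for e in events:
        if e["event"] == "logon" and not e["success"]:
            failures[e["user"]] += 1
            if failures[e["user"]] == threshold:
                alerts.append(e["user"])
    return alerts
```

In a real deployment the rule would run against the live event stream and raise an alert to an operator in the Security Operations Centre the moment the threshold is crossed.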
Closely related to auditing and protective monitoring are intrusion detection and intrusion prevention. An Intrusion Detection System – or IDS – monitors network and system activity and delivers an alert if it notices suspicious activity. In the same way as protective monitoring, the alert can be relayed in near real-time to the Security Operations Centre.
An Intrusion Detection System enables:
· Monitoring of users and systems to identify malicious or suspicious events;
· Auditing of configurations to alert on actual or attempted configuration changes; and
· Recognising known attacks.
An Intrusion Prevention System – or IPS – has all the capabilities of an IDS but can also attempt to prevent possible incidents. Most products in this area can be configured to operate as an IDS or an IPS.
There are two basic types of intrusion detection and prevention systems:
· A host based IPS relies on agents installed directly on the operating system which is being protected. This service binds closely with the operating system kernel and associated services, monitoring and intercepting system calls to the kernel or APIs to prevent and log attacks.
· A typical network-based IDS combines features of a standard IDS and a firewall. It monitors all traffic on a network segment, so it won’t be able to view communications traffic travelling over other network segments.
There are two forms of network intrusion devices.
· A network intrusion detection system – or NIDS; and
· A network intrusion prevention system – or NIPS.
In the de-militarized zone, you’ll see a network intrusion sensor. This is attached to a network switch which is also in the DMZ. All other components are attached to this.
The DMZ includes the external webserver. However, one of the network switch ports is configured to be a spanning (or mirror) port. This means that all traffic received by the network switch is forwarded to the switch port on which the network intrusion sensor is present. Hence, the sensor sees all traffic being transmitted within the DMZ LAN segment. In this case the sensor can be part of a NIDS to detect network intrusions.
On the connection between the inner firewall and the internal network you can see an inline network intrusion sensor. This means that all traffic going between the firewall and the internal network passes through the sensor. In this configuration, the sensor can be part of a NIPS because, if it senses abnormal traffic, it can block it.
Network intrusion prevention systems can also be configured as network intrusion detection systems.
As well as having different intrusion detection and prevention system configurations, there are some differences in how they detect attacks. The two basic approaches are:
· Signature-based which detects known intrusion attempts and maintains a database of attack signatures supplied by the vendor. The signature database must be current and complete for this to be effective, in the same way as an antivirus product must be up to date; and
· Knowledge-based which detects anomalous intrusions. This method builds a profile of what’s considered normal system activity over time, then triggers on thresholds that are outside the normal baseline.
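The anomaly-detection approach above amounts to comparing new observations against a learned baseline. A minimal sketch, assuming the baseline is simply the mean and standard deviation of past measurements and the three-sigma threshold is an illustrative choice:

```python
import statistics

def is_anomalous(history, value, sigmas=3.0):
    """Flag an observation more than `sigmas` standard deviations
    from the historical mean — a toy anomaly-detection baseline.

    `history` is a list of past measurements (e.g. logons per hour);
    real products build far richer profiles than a single statistic.
    """
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return abs(value - mean) > sigmas * sd
```

This also illustrates why such systems need a settling-in period: until `history` reflects normal activity, legitimate traffic will trip the threshold and generate false positives.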
Implementation of an intrusion detection system requires considerable skill. When first installed, false positives will often be displayed as the system stabilises and becomes familiar with the environment.
The set of critical IDS alerts needs to be established – and even though a product may be configured in either IDS or IPS mode, operating in IPS mode from the outset isn't recommended.
Most implementations require a settling-in period, allowing the NIDS to run live for some time and all false positives and false negatives to be tuned out before blocking mode is enabled, i.e. turning it into a NIPS.
Baseline controls assist in the selection of appropriate security controls for a system.
These are the starting point for the security control selection process and are based on the security category and associated impact level of the information system.
Baseline controls are the minimum set of security controls for an organization’s information systems. They’re intended to be a broadly applicable starting point so, following a risk assessment, it may be necessary to implement further controls to achieve adequate risk mitigation for the system.
Many organizations select their baseline controls from ISO 27002.
Configuration management is focused on establishing and maintaining the integrity of products and systems. This includes processes for initialising, changing and monitoring deployed configurations.
A configuration item – or CI – is an identifiable part of a system, for example hardware, software, firmware or documentation, that is placed under configuration management control.
A configuration management plan is a description of the roles, responsibilities, policies and procedures for maintaining the configuration of products and systems. Configuration change control is the process for managing updates to CIs. A configuration – or change – control board is the group responsible for controlling and approving changes throughout the development and operational lifecycle of a system. It’s good practice for a security representative to be on that panel as change often impacts the security posture of a system.
Finally, configuration monitoring relates to the process and technologies used to assess or test the configuration status of CIs placed under configuration management control. For example:
· Validating that all workstations have the latest antivirus signatures deployed; and
· Confirming if network routers have the correct configuration installed.
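Both of the configuration monitoring checks above reduce to comparing a deployed configuration against its approved baseline. A minimal sketch, where the setting names are hypothetical examples rather than real product keys:

```python
def config_drift(baseline: dict, deployed: dict) -> dict:
    """Report settings where a deployed CI differs from its baseline.

    Returns a mapping of setting name to (expected, actual) pairs;
    an empty dict means the CI matches its approved configuration.
    """
    return {
        key: (baseline[key], deployed.get(key))
        for key in baseline
        if deployed.get(key) != baseline[key]
    }
```

Run against every workstation or router under configuration management, a check like this turns drift from the approved baseline into an actionable report.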
Penetration testing – sometimes referred to as ethical hacking or an IT health check – is a method of obtaining independent assurance that the security controls implemented in a system are doing their job. The term pen test is often used as an abbreviated way of referring to a penetration test, especially in the security community.
External companies often carry out pen testing because full-time specialists are not available in the organization. Pen test teams perform vulnerability analysis of the networks and system components, looking for flaws that could allow a hacker through the defences, and then exploit those flaws to demonstrate that the weaknesses are real.
Typical questions the pen testing team will ask include:
· Is all the software patched and up to date? Many vulnerabilities are due to software not having the latest patches installed;
· Is everything securely configured? Having default accounts removed and strong passwords enforced is a good idea; and
· Are unnecessary services still running? An open file share on a web server could be misused.
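One building block behind the "unnecessary services" question is a simple TCP port check. This sketch is a minimal illustration of that one step, not a full vulnerability scanner, and should only ever be run against systems that are in scope for an authorized test:

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return which of the given TCP ports accept a connection.

    connect_ex returns 0 on a successful connection; anything else
    (refused, timed out) is treated as closed or filtered.
    """
    found = []
    for port in ports:
        with socket.socket() as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found
```

An unexpected port in the result – say, an open file share on a web server – is exactly the kind of finding a pen test report would flag.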
There are many other issues that pen testing might detect, including susceptibility to denial of service attacks, so thorough scoping of the test is imperative.
Before a pen test is commissioned, the external testing company will need to sign a Non-Disclosure Agreement. They'll be familiar with this process and are likely to have their own documents for both parties to sign. The following items should be defined in the scoping document:
· The systems and applications that are in scope;
· Any areas that are out of scope;
· Permission or explicit banning of attempts to perform Denial of Service attacks;
· Whether the third party is permitted to exploit vulnerabilities;
· Whether social engineering attacks are allowed;
· The tools and techniques to be used; and
· Reporting requirements.
A common approach is to ask the pen test team not to exploit any vulnerabilities without express permission. Although most pen testing tends to be technical, it’s possible to ask the testers to conduct social engineering attacks as part of the exercise, such as phoning the service desk to try to acquire a password.
The content and structure of the testing reports should be agreed, as well as the timescales for delivery. If a critical vulnerability is identified it should be raised immediately rather than waiting for the formal report to be issued.
And finally, documentation. Security documentation refers to policies, standards and procedures, as well as design documentation, audit reports and compliance matrices.
ISO 27002 requires that:
“System documentation should be protected against unauthorised access.”
When implementing secure system documentation, consideration should be given to:
· Secure storage;
· Keeping the access list short and authorized by the application owner; and
· Applying appropriate access protection if the documentation is made available over a public network.
In Government departments this means that design documentation and documentation describing sensitive security mechanisms should be protected to the same level as the system. For example, if the system is classified as 'Secret' then the system documentation should also be marked 'Secret' and handled in the same way.
That’s the end of this video on IT infrastructure security.
Fred is a trainer and consultant specializing in cyber security. His educational background is in physics, having a BSc and a couple of master’s degrees, one in astrophysics and the other in nuclear and particle physics. However, most of his professional life has been spent in IT, covering a broad range of activities including system management, programming (originally in C but more recently Python, Ruby et al), database design and management as well as networking. From networking it was a natural progression to IT security and cyber security more generally. As well as having many professional credentials reflecting the breadth of his experience (including CASP, CISM and CCISO), he is a Certified Ethical Hacker and a GCHQ Certified Trainer for a number of cybersecurity courses, including CISMP, CISSP and GDPR Practitioner.