This section of the SysOps Administrator - Associate learning path introduces you to the core storage concepts and services relevant to the SOA-C02 exam. We start with an introduction to AWS storage services, explore the options available, and learn how to select and apply AWS storage services to meet specific requirements.
- Obtain an in-depth understanding of Amazon S3 management and security features
- Get both a theoretical and practical understanding of EFS
- Learn how to create an EFS file system, manage EFS security, and import data into EFS
- Learn about EC2 storage and Elastic Block Store
Hello, and welcome to this lecture dedicated to the security of the Elastic File System, where I shall be looking at access control and the permissions required to both create and operate your EFS file system. I will also dive into encryption, as this topic is always of importance when storing data, so if your data is of a sensitive nature, I'll explain how EFS manages data encryption for you. Of course, there are other elements of security that were touched on in the previous lecture, where I covered the necessary security groups that need to be in place.
Before you can create and manage your EFS file system, you need to ensure that you have the correct permissions to do so. To initially create your EFS file system, you need to ensure that you have allow access for the following actions:
- elasticfilesystem:CreateFileSystem
- elasticfilesystem:CreateMountTarget
- ec2:DescribeSubnets
- ec2:CreateNetworkInterface
- ec2:DescribeNetworkInterfaces
As you can see, there are five permissions: two relate to EFS, and three relate to EC2. The EFS permissions allow you to create your file system in addition to any mount targets that are required. The EC2 permissions are required because the CreateMountTarget action carries out these EC2 operations on your behalf. When applying these permissions to your policies, the resource for the elastic file system actions will point to the following resource:

arn:aws:elasticfilesystem:region:account-id:file-system/*

where "region" and "account-id" should be replaced with your chosen region and your own AWS account ID. A resource is not required for the EC2 actions and, as a result, the value is represented by a wildcard. The below shows what this policy should look like, with a statement for each:
"Sid" : "PermissionToCreateEFSFileSystem",
"Sid" : "PermissionsRequiredForEC2",
In addition to these policies, you'll also need the following permissions to manage EFS using the AWS management console:
"Sid" : "Stmt1AddtionalEC2PermissionsForConsole",
"Sid" : "Stmt2AdditionalKMSPermissionsForConsole",
These permissions allow the console to view EFS resources, to query EC2 so that it can display your VPCs, availability zones, and security groups, and to perform KMS actions if encryption is enabled on the EFS file system. For more information on creating IAM policies in addition to roles, groups, and users, please see our existing course here.
If your data contains sensitive information, or if your organization has specific data protection policies requiring the implementation of encryption, then you need to be aware of how EFS handles encryption. EFS supports encryption both at rest and in transit. Let's take a look at how each of these is achieved.
Encryption at rest. You may remember, during the demonstration I gave earlier when I created an elastic file system, that there was a checkbox for encrypting the file system. This checkbox enables you to create an EFS file system that maintains encryption at rest. This uses another AWS service, the Key Management Service (KMS), to manage your encryption keys. As you can see in the image, a KMS master key, known as a customer master key (CMK), is required. A customer master key is the main key type within KMS. This key can encrypt data of up to four kilobytes in size; however, it is typically used in relation to your data encryption keys. The CMK can generate, encrypt, and decrypt these data encryption keys, which are then used outside of KMS by other AWS services, such as EFS, to perform encryption against your data.
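For reference, the same encrypted file system can be created from the AWS CLI rather than the console checkbox, using the --encrypted flag (the creation token and tag value here are placeholders; omitting --kms-key-id uses the AWS managed key for EFS):

```shell
# Create an EFS file system with encryption at rest enabled.
# Note: encryption at rest can only be enabled at creation time,
# not added to an existing file system.
aws efs create-file-system \
    --encrypted \
    --kms-key-id alias/aws/elasticfilesystem \
    --creation-token my-encrypted-fs \
    --tags Key=Name,Value=my-encrypted-fs
```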
It's important to understand that there are two types of customer master keys. Firstly, there are those which are managed and created by us as customers of AWS; these can either be created using KMS itself, or by importing key material from an existing key management application into a new CMK. Secondly, there are those that are managed and created by AWS. In the example in the image, the CMK selected is an AWS managed master key.
The CMKs which are managed by AWS are used by other AWS services that have the ability to interact with KMS directly to encrypt data; EFS, for example, uses one to perform encryption at rest across its file systems. These AWS managed keys can only be used by the corresponding AWS service that created them, within that particular region, as KMS is a regional service. The CMKs used by these services are generally created the first time you implement encryption using that particular service. For more information on KMS and how it works, please see our existing course here.
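You can see this for yourself by asking KMS for the AWS managed key that EFS uses; it appears under the alias aws/elasticfilesystem in a region once EFS has used encryption there for the first time:

```shell
# Find the AWS managed key alias that EFS creates and uses for
# encryption at rest in the current region.
aws kms list-aliases \
    --query "Aliases[?AliasName=='alias/aws/elasticfilesystem']"

# Inspect the key itself; AWS managed keys are reported with
# "KeyManager": "AWS" in the key metadata.
aws kms describe-key --key-id alias/aws/elasticfilesystem
```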
Encryption in transit. If you need to ensure your data remains secure between the EFS file system and your end client, then you need to implement encryption in transit. This encryption is enabled through the use of the Transport Layer Security (TLS) protocol when you mount your EFS file system. The best way to do this is to use the EFS mount helper, as I did in a previous demonstration. The command used to implement TLS for in-transit encryption is as follows:
sudo mount -t efs -o tls fs-12345678:/ /mnt/efs
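Once mounted this way, a quick check from the client can confirm the tunnel is in place (assuming the /mnt/efs mount point used above):

```shell
# With the tls option, the mount helper routes NFS through a local
# stunnel listener, so the mount source shows as 127.0.0.1 rather
# than the EFS endpoint's DNS name.
mount | grep /mnt/efs

# The client-side stunnel process started by the mount helper
# should also be visible.
ps -ef | grep '[s]tunnel'
```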
Mounting with the tls option ensures that the mount helper creates a client stunnel process using TLS version 1.2. 'Stunnel is an open-source multi-platform application used to provide a universal TLS/SSL tunneling service. Stunnel can be used to provide secure encrypted connections for clients or servers that do not speak TLS or SSL natively.' (Wikipedia.) The mount helper redirects your client's NFS traffic to this local stunnel process, which encrypts it before forwarding it on to your EFS file system. That brings me to the end of this lecture. Next, I'll be focusing on how to import data into your EFS file system.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to Cloud reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.