In this section of the AWS Certified: SAP on AWS Specialty learning path, we introduce you to the various Storage services currently available in AWS that are relevant to the PAS-C01 exam.
Learning Objectives
- Identify and describe the various Storage services available in AWS
- Understand how AWS Storage services can assist with large-scale data storage, migration, and transfer both into and out of AWS
- Describe hybrid cloud storage services and on-premises data backup solutions using AWS Storage services
- Identify storage options for SAP workloads on AWS
Prerequisites
The AWS Certified: SAP on AWS Specialty certification has been designed for anyone who has experience managing and operating SAP workloads. Ideally you’ll also have some exposure to the design and implementation of SAP workloads on AWS, including migrating these workloads from on-premises environments. A number of exam questions require a solutions-architect level of knowledge of AWS services, including the AWS Storage services. All of the AWS Cloud concepts introduced in this course will be explained and reinforced from the ground up.
Hello, and welcome to this lecture dedicated to the security of the Elastic File System, where I shall be looking at access control and the permissions required to both operate and create your EFS file system. I will also dive into encryption, as this topic is always of importance when storing data, and so if your data is of a sensitive nature, then I'll explain how EFS manages data encryption for you. Of course, there are other elements of security that are touched on in the previous lecture, where I covered the necessary security groups that need to be in place.
Before you can create and manage your EFS file system, you need to ensure that you have the correct permissions to do so. To create your EFS file system, you need allow access for the following actions:
elasticfilesystem:CreateFileSystem
elasticfilesystem:CreateMountTarget
ec2:DescribeSubnets
ec2:CreateNetworkInterface
ec2:DescribeNetworkInterfaces
As you can see, there are five permissions: two relate to EFS, and three relate to EC2. The EFS permissions allow you to create your file system in addition to any mount targets that are required. The EC2 permissions are needed because the CreateMountTarget action performs these EC2 operations on your behalf. When applying these permissions in your policies, the Resource element for the Elastic File System actions should point to the following ARN:
Resource": "arn:aws:elasticfilesystem:us-west-2:account-id:file-system/*
where your own AWS account ID should replace the account-id placeholder (and the region should reflect the region you are working in). The EC2 actions do not support resource-level permissions and, as a result, their Resource value is represented by a wildcard. The example below shows what this policy should look like in full.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PermissionToCreateEFSFileSystem",
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:CreateFileSystem",
                "elasticfilesystem:CreateMountTarget"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-west-2:account-id:file-system/*"
        },
        {
            "Sid": "PermissionsRequiredForEC2",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeSubnets",
                "ec2:CreateNetworkInterface",
                "ec2:DescribeNetworkInterfaces"
            ],
            "Resource": "*"
        }
    ]
}
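As a quick sketch of how this policy could be applied outside of the console, and assuming the JSON above has been saved locally as efs-create-policy.json (a hypothetical file name), you could create it as a customer managed policy and attach it to an IAM user using the AWS CLI. The policy name EFSCreateFileSystem and the user name efs-admin used below are purely illustrative:

# Create the managed policy from the JSON document above (file name is hypothetical)
aws iam create-policy \
    --policy-name EFSCreateFileSystem \
    --policy-document file://efs-create-policy.json

# Attach the new policy to an example IAM user (user name is hypothetical)
aws iam attach-user-policy \
    --user-name efs-admin \
    --policy-arn arn:aws:iam::account-id:policy/EFSCreateFileSystem

The same approach works equally well with attach-group-policy or attach-role-policy if you prefer to grant these permissions to a group or role.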
In addition to these policies, you'll also need the following permissions to manage EFS using the AWS management console:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1AdditionalEC2PermissionsForConsole",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeVpcs",
                "ec2:DescribeVpcAttribute"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Stmt2AdditionalKMSPermissionsForConsole",
            "Effect": "Allow",
            "Action": [
                "kms:ListAliases",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        }
    ]
}
These permissions allow the console to display your EFS resources, to query EC2 so that it can list your VPCs, Availability Zones, and security groups, and to call KMS so that it can display key information when encryption is enabled on the EFS file system. For more information on creating IAM policies, in addition to roles, groups, and users, please see our existing course here.
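If you want to verify that a user has actually been granted these console permissions, one option is to run the actions through the IAM policy simulator from the AWS CLI. The sketch below assumes a hypothetical IAM user named efs-admin; substitute your own account ID and principal:

# Simulate the console-related actions against the efs-admin user (hypothetical principal)
aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::account-id:user/efs-admin \
    --action-names ec2:DescribeAvailabilityZones ec2:DescribeSecurityGroups kms:ListAliases kms:DescribeKey

Each action in the output should come back with an evaluation decision of allowed once the policies above are in place.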
If your data contains sensitive information, or if your organization has specific data-protection policies requiring the implementation of encryption, then you need to be aware of how EFS handles encryption. EFS supports both encryption at rest and encryption in transit. Let's take a look at how both of these are achieved.
Encryption at rest. You may remember during the demonstration I gave earlier, when I created an Elastic File System, that there was a checkbox for encrypting the file system. This checkbox enables you to create an EFS file system that maintains encryption at rest. This uses another AWS service, the Key Management Service, known as KMS, to manage your encryption keys. As you can see in the image, a KMS master key, a CMK, is required. A customer master key is the main key type within KMS. This key can encrypt data of up to four kilobytes in size; however, it is typically used in relation to your data encryption keys. The CMK can generate, encrypt, and decrypt these data encryption keys, which are then used outside of the KMS service by other AWS services, such as EFS, to perform encryption against your data.
It's important to understand that there are two types of customer master keys. Firstly, those which are managed and created by you and me as customers of AWS, which can either be created using KMS or by importing key material from existing key management applications into a new CMK; and secondly, those that are managed and created by AWS themselves. In the example in the image, the CMK selected is an AWS managed master key.
The CMKs which are managed by AWS are used by other AWS services that have the ability to interact with KMS directly to perform encryption against data; EFS, for example, uses one to perform its encryption at rest across its file systems. These AWS managed keys can only be used by the corresponding AWS service that created them, and only within that particular region, as KMS is a regional service. These CMKs are generally created the first time you implement encryption using that particular service. For more information on KMS and how it works, please see our existing course here.
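To make this a little more concrete, the sketch below shows how encryption at rest can also be enabled when creating a file system from the AWS CLI rather than through the console checkbox. Passing --encrypted without --kms-key-id means the AWS managed key for EFS is used; the alias shown for the customer managed key option is purely hypothetical:

# Create an encrypted file system using the AWS managed key for EFS
aws efs create-file-system \
    --creation-token my-encrypted-fs \
    --encrypted

# Or reference a customer managed CMK instead (alias/my-efs-cmk is a hypothetical alias)
aws efs create-file-system \
    --creation-token my-encrypted-fs-cmk \
    --encrypted \
    --kms-key-id alias/my-efs-cmk

Note that encryption at rest can only be enabled at creation time; it cannot be switched on later for an existing, unencrypted file system.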
Encryption in transit. If you need to ensure your data remains secure between the EFS file system and your end client, then you need to implement encryption in transit. This is enabled by using the TLS protocol, Transport Layer Security, when you mount your EFS file system. The best way to do this is to use the EFS mount helper, as I did earlier in a previous demonstration. The command used to enable TLS for in-transit encryption is as follows:
sudo mount -t efs -o tls fs-12345678:/ /mnt/efs
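If you want this mount to persist across reboots, an equivalent entry can also be added to /etc/fstab. This is a sketch that assumes the amazon-efs-utils package (which provides the mount helper) is installed, and it reuses the same example file system ID and mount point:

# /etc/fstab entry for the same TLS-encrypted EFS mount
fs-12345678:/ /mnt/efs efs _netdev,noresvport,tls 0 0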
In either case, the tls mount option ensures that the mount helper creates a client stunnel process using TLS version 1.2. 'Stunnel is an open-source multi-platform application used to provide a universal TLS/SSL tunneling service. Stunnel can be used to provide secure encrypted connections for clients or servers that do not speak TLS or SSL natively.' (Wikipedia.) The mount helper directs the NFS traffic from your client to this local stunnel process, which encrypts it and forwards it on to your EFS file system over TLS. That brings me to the end of this lecture. Next, I'll be focusing on how to import data into your EFS file system.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to Cloud reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016, Stuart was awarded the ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge sharing of cloud services with the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.