
Designing protection of Data at Rest controls

Overview

Difficulty: Advanced
Duration: 1h 15m
Students: 1051

Description


In this course, you'll gain a solid understanding of the key concepts for Domain Six of the AWS Solutions Architect Professional certification: Security.

Course Objectives

By the end of this course, you'll have the tools and knowledge you need to meet the following requirements for this domain:

  • Design information security management systems and compliance controls
  • Design security controls with the AWS shared responsibility model and global infrastructure
  • Design identity and access management controls
  • Design protection of Data at Rest controls
  • Design protection of Data in Flight and Network Perimeter controls

Intended Audience

This course is intended for students seeking to acquire the AWS Solutions Architect Professional certification. It is necessary to have acquired the Associate level of this certification. You should also have at least two years of real-world experience developing AWS architectures.

Prerequisites

As stated previously, you will need to have completed the AWS Solutions Architect Associate certification, and we recommend reviewing the relevant learning path in order to be well-prepared for the material in this one.

This Course Includes

  • Expert-led instruction and exploration of important concepts.
  • Complete coverage of critical Domain Six concepts for the AWS Solutions Architect - Professional certification exam.

What You Will Learn

  • Designing information security management systems (ISMS) and compliance controls
  • Designing security controls
  • Designing identity and access management (IAM) controls
  • Designing protection of Data at Rest controls
  • Designing protection of Data in Flight and Network Perimeter controls

Transcript

Okay, protecting data at rest. Now, AWS provides a number of ways to protect your data at rest. Let's keep in mind that protecting data at rest is generally the responsibility of us, the customer. So, we need to select the method and to actually implement it. That isn't something that's done automatically for us by AWS.

AWS provides three options for encrypting data at rest, and these options are quite different and allow you to choose the approach that works best for you and your organization. The first option is you control the encryption method and the entire key management infrastructure. So, you basically do all of the encryption yourself. The second option is you control the encryption method, and AWS provides the storage component of the key management infrastructure while you continue to provide the management layer of that key management infrastructure. And the third option is that AWS controls the encryption method and the entire key management infrastructure. So, AWS does it all for you, basically.

Let's delve into a couple of these options, the most common ones. The second option is where AWS stores keys using CloudHSM. This model is quite similar to the first option, where you do it all yourself, in that you manage the encryption method. But it differs in that the keys are stored in an AWS CloudHSM appliance rather than in a key storage system you manage on-premises. So, while the keys are stored in the AWS environment, they are inaccessible to any employee at AWS.

When determining whether AWS CloudHSM is appropriate for your deployment, it's important to understand the role that CloudHSM plays in encrypting data. An HSM can be used to generate and store key material, and it can perform encryption and decryption operations. But it does not perform any key life cycle management functions. So, it doesn't look after access control policies, and it doesn't look after key rotation.
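Since the HSM itself won't track key lifecycle, the KMI you pair with it has to. Here's a minimal Python sketch of what that responsibility looks like; the class, policy values, and principal names are all hypothetical, purely to illustrate the duties the transcript says an HSM does not cover:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical sketch: the lifecycle duties an HSM does NOT perform.
# The KMI must track key age, rotation policy, and access control itself.
@dataclass
class ManagedKey:
    key_id: str
    created: datetime
    rotation_period: timedelta = timedelta(days=90)  # example policy, not an AWS default
    allowed_principals: set = field(default_factory=set)

    def needs_rotation(self, now: datetime) -> bool:
        # The HSM stores the key material; deciding *when* to replace
        # it is the KMI's job.
        return now - self.created >= self.rotation_period

    def can_use(self, principal: str) -> bool:
        # Access control policy also lives outside the HSM.
        return principal in self.allowed_principals

key = ManagedKey("data-key-1", datetime(2015, 1, 1),
                 allowed_principals={"app-server"})
print(key.needs_rotation(datetime(2015, 6, 1)))  # True: older than 90 days
print(key.can_use("app-server"))                 # True
print(key.can_use("intern"))                     # False
```

The crypto operations themselves would still go to the HSM; this layer only answers the policy questions around them.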
This means you may also need a compatible KMI in addition to the AWS CloudHSM appliance before deploying your encryption solution. The KMI you provide can be deployed on-premises or within Amazon EC2, and it can communicate with the AWS CloudHSM instance securely over SSL to help protect data and the encryption keys. Because the AWS CloudHSM servers use SafeNet Luna appliances, any key management server that supports the SafeNet Luna platform can also be used with AWS CloudHSM. So, you might use that solution when you need to put in an extra layer of key management and access control, as well as using AWS CloudHSM.

Now, the third option is where AWS controls the encryption method and the entire key management infrastructure. In this model, AWS provides server-side encryption of your data, transparently managing the encryption method and also the keys. To do that, they use envelope encryption, and it works like this: first, a data key is generated by the AWS servers at the time you request your data to be encrypted. Second, that data key is used to encrypt your data. Third, the data key is then encrypted with a key-encrypting key that is unique to the service storing your data. And fourth, the encrypted data key and the encrypted data are stored by the AWS storage servers on your behalf.

So, let's look at the encryption options per AWS storage type. Amazon EBS encryption provides an encryption solution for your EBS volumes without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, you can encrypt data inside the volume and all snapshots created from that volume. Also worth noting here: all traffic in transit between the volume and the instance is encrypted using SSL. That encryption occurs on the servers that host EC2 instances, providing encryption of data in transit from EC2 instances to EBS storage.
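The four envelope-encryption steps above can be sketched in Python. This is a toy illustration only: a SHA-256-derived XOR keystream stands in for the real AES cipher AWS uses, and the key names are invented for the example.

```python
import hashlib
import os

# Toy stream cipher standing in for AES: XOR the data with a
# SHA-256-derived keystream. Illustrative only -- NOT real AWS crypto.
def toy_encrypt(key: bytes, data: bytes) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # an XOR stream cipher is its own inverse

# Envelope encryption, mirroring the four steps in the transcript:
kek = os.urandom(32)                  # key-encrypting key, unique to the service
plaintext = b"customer record"

data_key = os.urandom(32)             # 1. generate a data key at request time
ciphertext = toy_encrypt(data_key, plaintext)   # 2. encrypt the data with it
wrapped_key = toy_encrypt(kek, data_key)        # 3. encrypt the data key with the KEK
stored = (wrapped_key, ciphertext)    # 4. store encrypted key + encrypted data together

# Reading the data back: unwrap the data key first, then decrypt the data.
unwrapped = toy_decrypt(kek, stored[0])
print(toy_decrypt(unwrapped, stored[1]))  # b'customer record'
```

The point of the pattern: only the small data key is ever wrapped and stored alongside the data, and the key-encrypting key never leaves the service.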
Anyway, back to data at rest. Amazon EBS encryption uses the AWS Key Management Service, or KMS, and customer master keys, or CMKs. Encryption is supported with all EBS volume types, which is good to remember, and you can expect the same IOPS performance on encrypted volumes as you would with unencrypted volumes. You can access encrypted volumes the same way that you access existing volumes, and encryption and decryption are handled transparently.

You can also leverage most standard file-system-level or block-level encryption tools. An important point to remember with both block-level and file-system-level encryption tools is that they can only be used to encrypt data volumes that are not Amazon EBS boot volumes. This is because these tools don't allow you to automatically make a trusted key available to the boot volume at startup. Encrypting Amazon EBS volumes attached to Windows instances can be done using BitLocker or Encrypting File System (EFS), as well as open source applications like TrueCrypt.

Okay, for Amazon S3, server-side encryption uses 256-bit Advanced Encryption Standard (AES) keys for both object and master keys. Each object is encrypted with a unique key. For Amazon Glacier, data is always automatically encrypted before it's written to disk using 256-bit AES keys unique to the Amazon Glacier service, and they are securely stored in separate systems under AWS's control. For Redshift, when creating an Amazon Redshift cluster, you can optionally choose to encrypt all data in user-created tables. For Microsoft SQL Server, you can provision transparent data encryption; the SQL Server encryption module creates data and key-encrypting keys to encrypt the database. For running Oracle on RDS, you can enable the Oracle Advanced Security option to leverage the native transparent data encryption and native network encryption features. Okay, let's look at a sample question.
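Before we get to the question, the per-service summary above condenses into a small lookup. The descriptions just paraphrase the transcript; this isn't an exhaustive AWS feature matrix:

```python
# Per-service data-at-rest encryption options, as summarized in the transcript.
AT_REST_ENCRYPTION = {
    "EBS":        "KMS customer master keys; all volume types; same IOPS as unencrypted",
    "S3":         "Server-side encryption with 256-bit AES keys; unique key per object",
    "Glacier":    "Always encrypted with 256-bit AES before write; AWS-managed keys",
    "Redshift":   "Optional cluster-wide encryption of user-created tables",
    "SQL Server": "Transparent data encryption via the SQL Server encryption module",
    "Oracle":     "Oracle Advanced Security: transparent data and network encryption",
}

for service, option in AT_REST_ENCRYPTION.items():
    print(f"{service}: {option}")
```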
Question reads: "You are building a website that will retrieve and display highly sensitive information to users. The amount of traffic the site will receive is known and not expected to fluctuate. The site will leverage SSL to protect the communication between clients and the web servers. Due to the nature of the site, you are very concerned about the security of your SSL private key and want to ensure that the key cannot be accidentally or intentionally moved outside of your environment. Additionally, while the data the site will display is stored on an encrypted EBS volume, you are also concerned that the web servers' logs might contain some sensitive information. The logs must be stored so that they can only be decrypted by employees of your company. Which of these architectures meets all of these requirements?"

Right, so firstly, let's identify the core components of the question. Highly sensitive information distributed over the internet. The SSL key cannot be moved, accidentally or otherwise, out of the environment, so we need a key management system, right. Logs need to be encrypted and only viewed by, reading between the lines, authenticated users or employees. And the solution needs to meet all of these requirements.

Okay, so, second, let's look for patterns in the question. All the options propose using ELBs, so we can take that as a given, I think. So, let's look at option A: "Use Elastic Load Balancing to distribute traffic to a set of web servers. To protect the SSL private key, upload the key to the load balancer and configure the load balancer to offload the SSL traffic. Write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key." Okay, right, so this option doesn't propose any storage for our key other than uploading the key to the load balancer. So, that isn't really managing the key, is it?
This key management would be adequate for 90% of use cases, and here's why. When you upload the private key to the ELB, the key is stored in IAM. As we know, AWS provides a high level of security and compliance, and keys stored in IAM are stored in a managed environment in AWS. So, this protection is adequate for most use cases, but it may not be enough access control for some corporate customers who need to meet auditing and compliance standards, as AWS still has access to, or control of, the root of trust. This is where categorization of data sensitivity can come into play. If we had to provide total control over private keys, then this may not be adequate, and you would need to consider implementing a key management infrastructure like CloudHSM. A middle ground could be using the AWS Key Management Service, or KMS, which can create managed keys, but with Amazon still in control of the root of trust. There's a good case study from re:Invent on how Netflix determined the level of control required for their keys. So, we just have to assume that in this scenario, we've been asked to find that high level of key management.

So anyway, back to our scenario. I don't really think that this option is going to meet our requirements. Writing server logs to ephemeral storage isn't, on the face of it, okay. Ephemeral storage is not persistent, so it's hardly suited to a use case like log file storage. I mean, we can encrypt an ephemeral volume with a KMS-managed key, but here, we are proposing encrypting the volume with a random AES key without any mention of how the encryption key is going to be accessed or stored. So, straight off, I think this option lacks a bit of detail and doesn't really meet the requirements.

So, option B: "Use Elastic Load Balancing to distribute traffic to a set of web servers. Use TCP load balancing on the load balancer, and configure your web servers to retrieve the private key from a private Amazon S3 bucket on boot."
"Write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption."

Okay, so, retrieving the key from S3 is borderline acceptable. But doing that suggests that we would be storing that key in Amazon S3, which doesn't meet the requirements that we've been given. S3 with server-side encryption is adequate for log files, with AWS providing at least some management around the encryption of those logs. But overall, the main problem I've got is with the transport, with how we're accessing the key: there's no actual mention of how the key will be managed. So, while sure, S3 is durable and has an encryption option, I don't think that's enough for the requirements that we've been given.

Okay, so, option C: "Use Elastic Load Balancing to distribute traffic to a set of web servers. Configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the SSL transactions, and write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption."

Okay, so, this is the first option that proposes using some kind of key management service, so, already, that's good. Let's just refresh ourselves on what CloudHSM is, shall we? When you use the AWS CloudHSM service, you receive dedicated, single-tenant access to each HSM appliance. Each appliance appears as a network resource in your VPC. You, and not Amazon, manage the partitions of that HSM. As part of provisioning, you receive administrative credentials for the appliance, and you may create the HSM partition on that appliance. After creating the partition, you can configure a client on your EC2 instance that allows your applications to use the APIs provided by the HSM. Now, that cryptographic partition is a logical and physical security boundary that restricts access to your keys, so only you control your keys and the operations performed by the HSM.
Amazon administrators will manage and monitor the health of the actual appliance, but they don't have access to the cryptographic partition. Your applications use standard cryptographic APIs in conjunction with HSM client software installed on the application instance, which sends cryptographic requests to the HSM. The client software transparently sets up a secure channel to the appliance using credentials that you create, and it sends requests on that channel. The HSM performs the operations and returns the results over that secure channel, and the client returns the results to the application through the cryptographic API.

So, overall, that's going to give us the type of key management that I think the question's asking for. It certainly appears possible to integrate CloudHSM with third-party applications; there's a link in the CloudHSM user guide that describes how to connect the Apache web server to AWS CloudHSM. Now, I don't like the wording "CloudHSM to perform the SSL transactions"; that's a slightly incorrect way of describing what we're going to do here. But on the face of it, this option is looking like the best we've had so far for this part of the solution.

But, anyway, let's have a look at the next part, too. So, they're proposing: "Write your web server logs to a private Amazon S3 bucket." Yep, okay, using Amazon S3 server-side encryption, okay. So, it's encrypted in S3, so yeah, okay. So far that's looking the best of a bad bunch, but let's have a look at option D before we pop open the champagne cork.

So, option D: "Use Elastic Load Balancing to distribute traffic to a set of web servers, configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the SSL transactions, and write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key." Okay, so, you know we need a KMS, and we've got two options that include a KMS.
So, the question now becomes: which option is better for logs, Amazon S3 or an encrypted ephemeral volume? Okay, so we can encrypt an instance store volume, but does that really suit the use case? Let's remember, ephemeral storage is not persistent. And if we remind ourselves of those nuances, local instance store volumes are not really intended to be used as durable disk storage. Unlike Amazon EBS volumes, data on instance store volumes persists only during the life of the associated Amazon EC2 instance. This means that data on instance store volumes is persistent across orderly instance reboots, but if the Amazon EC2 instance is stopped and restarted, or terminates, or fails, all data on the instance store volumes is lost.

Now, a log file use case suits Amazon S3 or Amazon EBS, but I don't think it really suits that ephemeral instance store volume. Another thing to consider here is that access to the EC2 instance is controlled by the guest operating system, and we already outlined that we're concerned about the privacy of sensitive data stored in the log files. If that's going to be stored on an instance store volume, we would have to use our own encryption tools, or a third-party tool, to provide what's described only as random AES encryption. So, that's another thing we have to set up and do. And further, there is no mention of how the encryption key would be accessed or managed. I mean, there may be merit to having the log files stored on the ephemeral volume: if they're only going to be kept for a day or two, and they're used for security purposes only and there's no need for an audit trail, then having an encrypted ephemeral store for those logs may also be adequate.

Option C presents a good option with Amazon S3. But if we think this through a bit more, how are we going to get our log files from our web server to S3?
It's not possible for us to map S3 as a logical drive, for example, so we'd have to write some routine to actually transfer any log file from the ephemeral storage before putting it to S3. That would be slightly messy. So, I think, on the face of it, our best option in this scenario would be option D.

Okay, while we're on the topic, let's just digress and walk through how SSL would handle this request if we did use offloading on the ELB. The ELB manages SSL offloading, which means the ELB terminates the secure connection with the end user: traffic between the client that initiated the SSL session and the load balancer is encrypted, and the load balancer decrypts it before passing it on. To create an HTTPS listener, you deploy your SSL server certificate on the load balancer. The load balancer uses that certificate to terminate the connection and to decrypt requests from clients before sending them on to the target.

Now, the certificate is made up of up to three parts: the public key, which clients use to encrypt traffic that only the private key can decrypt; the private key, which we must keep safe; and a certificate chain, depending on who created the certificate and how. You can request a certificate from one of the CA signing authorities, or you can use AWS Certificate Manager to do that for you. Once you have the certificate created and loaded, you're ready to go.

So, the ELB uses a Secure Sockets Layer security policy to negotiate an SSL connection between the client and the load balancer. That security policy is a combination of SSL protocols, SSL ciphers, and the server order of cipher preference. An SSL cipher is an encryption algorithm that uses an encryption key to create a coded message, and the SSL protocol uses several SSL ciphers to encrypt data over the internet. During the SSL connection negotiation, the client and the load balancer each present a list of ciphers and protocols that they support, in order of preference.
And by default, the first cipher on the server's list that matches any one of the client's ciphers is selected for the SSL connection.
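That server-preference selection rule can be sketched in a few lines of Python. The cipher names here are just sample strings, not an actual ELB security policy:

```python
from typing import Optional

# Server-order cipher negotiation: walk the server's preference list and
# pick the first cipher the client also supports. Names are illustrative.
def negotiate(server_prefs: list, client_ciphers: list) -> Optional[str]:
    client_set = set(client_ciphers)
    for cipher in server_prefs:   # server's preference order wins
        if cipher in client_set:
            return cipher
    return None                   # no overlap: the handshake fails

server = ["ECDHE-RSA-AES128-GCM-SHA256", "AES128-SHA", "AES256-SHA"]
client = ["AES256-SHA", "ECDHE-RSA-AES128-GCM-SHA256"]
print(negotiate(server, client))  # ECDHE-RSA-AES128-GCM-SHA256
```

Note the client's own ordering is ignored here: even though the client lists AES256-SHA first, the server's top preference that the client supports is what gets selected.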

About the Author

Students: 59004
Courses: 73
Learning paths: 23

Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. His passions outside work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.