
Shared Responsibility Model

Duration: 1h 7m


Course Description

This course focuses on the details you need to know for the data security domain, which makes up 20% of the Solutions Architect – Associate for AWS exam. You will learn to recognize and explain platform compliance for AWS, and to recognize and implement secure procedures for optimum cloud deployment and maintenance, including understanding the shared responsibility security model and what it looks like in practice.

Course Objectives

  • Recognize and explain the AWS shared responsibility security model
  • Recognize and implement IAM users, policies, and roles
  • Recognize and explain how AWS enables you to protect data at rest and in transit

Intended Audience

This course is for anyone preparing for the Solutions Architect–Associate for AWS certification exam. We assume you have some existing knowledge and familiarity with AWS, and are specifically looking to get ready to take the certification exam.


Prerequisites

Basic knowledge of core AWS functionality is assumed. If you haven't already completed it, we recommend our Fundamentals of AWS Learning Path.

This Course Includes:

  • 7 Video Lectures
  • Everything you need to know about data security to prepare for the Solutions Architect–Associate for AWS certification exam

What You'll Learn

  • Shared Responsibility Model: what's managed by AWS vs. customers
  • Identity and Access Management: how to use IAM to keep your data secure
  • Platform Compliance: best practices for platform compliance
  • Data at Rest and in Transit: how to secure your data at rest and in transit
  • Identity Federation: web identity federation
  • CloudFront Security: how to secure Amazon CloudFront



The exam guide outlines that data security could comprise 20% of questions, so this is an important domain to understand for the Solutions Architect – Associate exam. Here's our agenda for this module. We'll look at security in the cloud, the AWS shared responsibility security model, and AWS security best practices. Then we'll go into AWS Identity and Access Management, or IAM. We'll look at IAM best practices, and then we'll walk through some common delegation and identity federation use cases that can crop up from time to time. We'll look at protecting data at rest and protecting data in transit, and we'll do a walkthrough of detective controls, such as threat mitigation and DDoS mitigation, and how to ensure we test and implement our security plan. We'll wrap up with a quick walkthrough of some Amazon CloudFront security, which does always seem to crop up.

All right. Security in the cloud is composed of four key areas. Number one, data protection: protecting data in transit and at rest. Number two, privilege management: controlling who has access to what and when. Number three, infrastructure protection: ensuring the network and the base infrastructure are protected from compromise. Number four, detective controls: monitoring what happens at all levels of the environment and being able to detect and inform of any erroneous or unusual activity.

AWS provides a shared responsibility security model for infrastructure services. It's important you recognize and understand the shared responsibility model for the exam. Questions might be around identifying which security tasks would be completed by AWS and which are the responsibility of the customer. A simple way I like to remember who does what is that AWS manages security of the cloud and customers manage security in the cloud. AWS provides a secure infrastructure and foundation for compute, storage, network, and database services.
Regions, availability zones, and endpoints are some of the components of the AWS secure global infrastructure, which includes the facilities, the physical security of the hardware, the network, and the virtualization infrastructure. Everything that runs on top of it is the responsibility of the customer.

Let's break this down a bit. AWS manages the security of facilities, physical security of hardware, network infrastructure, and virtualization infrastructure. If we are defining an information security management plan, for example, we could consider AWS the owner of those assets for the purpose of our ISMS asset definitions. Customers are responsible for the security of Amazon Machine Images, operating systems, applications, data in transit and data at rest, data stores, credentials, policies and, most importantly, configuration.

The shared responsibility model means AWS customers are responsible for protecting the confidentiality, integrity, and availability of their data in the AWS cloud, and for meeting any business requirements for information protection. When we apply the shared responsibility security model to the four areas we looked at for cloud security, three out of the four are tasks customers need to do: data protection, privilege management, and monitoring are the responsibility of the customer, with AWS managing infrastructure protection.

AWS provides a range of security services and tools that customers can use to secure assets within AWS services: server-side encryption, HSM keys, CloudWatch, and CloudTrail, to name a few. Customers retain control of what security they choose to protect their own content, platform, applications, systems, and networks. So it's by choice. AWS manages the regions, availability zones, and edge locations. While AWS manages security of the cloud, security in the cloud is the responsibility of the customer.
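Since the customer owns the integrity of their data, one simple habit is recording a checksum before storing an object and verifying it after retrieval. Here is a minimal standard-library sketch; the payload and the helper name are illustrative, not part of any AWS API:

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Return the hex SHA-256 digest used as an integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()


payload = b"customer-managed data"
digest_at_upload = sha256_hex(payload)

# ... store the object and the digest; later, after retrieval ...
retrieved = payload  # stand-in for the downloaded bytes
assert sha256_hex(retrieved) == digest_at_upload
```

If the digests ever differ, the object was corrupted or tampered with somewhere between upload and download, which is exactly the kind of check AWS will not perform on your behalf.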
So let's go through some common areas where people tend to get tripped up.

Data at rest and in transit is the responsibility of? The customer. Yes. It's easy to assume that because AWS manages the infrastructure, they should surely manage security of your data at rest and in transit, right? Well, customer data is the responsibility of the customer. AWS does not audit or read data volumes. It is our responsibility to ensure any data we store in AWS is encrypted and secure, both in transit and at rest.

Network traffic protection is the responsibility of? The customer. It is up to us to encrypt traffic in and out of our instances. You need to enable Elastic Load Balancing to terminate or pass through SSL connections, for example. Route 53 and Elastic Load Balancers support SSL, so it's not difficult to set up HTTPS communications to protect data in transit. However, you do need to do it. It's not something that's done automatically for you by AWS by default. So it's the responsibility of the customer.

Server-side encryption is the responsibility of? The customer. Yes. AWS encrypts S3 objects as part of providing a managed service with S3; however, you need to implement EBS encryption to protect your data in volumes. Client-side data and data integrity are likewise the responsibility of the customer.

Operating systems are the responsibility of? The customer. Yes. AWS provides machine images and goes to great lengths to ensure that those images have the latest patches and security ciphers, et cetera. But once you provision and start that machine image, it becomes your responsibility to keep it patched and secure. AWS provides services like security groups and network access control lists; however, you also need to consider running firewall appliances to protect those servers from the public domain.

Platform and application management is the responsibility of? The customer. Yes. AWS provides a secure platform.
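Because in-transit encryption is something the customer has to switch on, a common pattern is an S3 bucket policy that rejects any request arriving over plain HTTP, using the `aws:SecureTransport` condition key. A minimal sketch follows; the bucket name is a placeholder:

```python
import json

# Hypothetical bucket name for illustration.
BUCKET = "example-bucket"

# Deny any S3 request that does not arrive over TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

# Serialize for attachment to the bucket.
policy_json = json.dumps(policy, indent=2)
```

You would attach `policy_json` to the bucket (for example via the S3 put-bucket-policy API). HTTPS requests are unaffected, because the deny only matches when `aws:SecureTransport` is false.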
However, it is our responsibility to ensure it stays that way. Any platform patch or update is your responsibility unless you are running RDS, which is a managed service, so AWS maintains things like Oracle and SQL Server patches and versions for you.

For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and the platform, and you access the endpoints to store and retrieve data. Amazon S3 and DynamoDB are tightly integrated with IAM, and you are responsible for managing your data and for using IAM tools to apply access-control-style permissions to individual resources at the platform level, or permissions based on user identity or user responsibility at the IAM user or group level. For Amazon S3, you can also use platform-provided encryption of data at rest, or platform-provided HTTPS encapsulation to protect data in transit to and from the service. Platform compliance, data encryption at rest and in transit, and auditing tools such as Amazon CloudWatch, CloudTrail, and AWS Config enable detective controls inside of that.

Let's have a look at a sample question, just to keep us in practice at answering questions in the correct way. The sample question we've got is: which is an operational process performed by AWS for data security? Very much on topic of what we've been talking about.

Option A is AES-256 encryption of data stored on any shared storage device. What we need to do here is really read these questions carefully, okay? The key words in this question are "operational process". So, who is doing this operational process for data security? We already ascertained that protecting data at rest or in transit is the responsibility of the customer. So providing AES-256 encryption of data stored on any shared storage device is our responsibility, right? Not that of AWS. It's our responsibility to protect our data.
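To make the IAM integration concrete, here is a sketch of an identity-based policy granting read-only access to one S3 prefix and one DynamoDB table. All resource names, account IDs, and SIDs here are hypothetical, purely for illustration:

```python
import json

# Identity-based policy: attach to an IAM user, group, or role.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow reading objects under one S3 prefix only.
            "Sid": "ReadReportsPrefix",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/reports/*",
        },
        {
            # Allow read operations on one DynamoDB table only.
            "Sid": "ReadOrdersTable",
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
        },
    ],
}

policy_document = json.dumps(read_only_policy)
```

Attached to a user or group, this grants only the listed read actions on the listed resources; every other action is implicitly denied, which is the "least privilege" behavior the exam expects you to recognize.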
Option B, decommissioning of storage devices using industry-standard practices. I'm tentatively saying that's correct, because that is exactly what AWS is responsible for, in my view. Let's go through the other options first before we decide that this is the right one.

Option C, background virus scans of EBS volumes and EBS snapshots. This is quite tricky, because you do think, "Well, isn't that the responsibility of AWS, to make sure that the environment is virus-free?" No. It's our responsibility to ensure that the data on those EBS volumes and EBS snapshots is free of any virus. That's the responsibility of the customer. It's our data that we're protecting.

Option D, replication of data across multiple AWS Regions. This is a very interesting option because it's kind of confusing. Just remember that AWS does not replicate data between regions unless you specifically tell it to. Yes, data is replicated between availability zones within a region for Amazon S3; that's part of the 11 nines of durability that AWS is able to provide. But they won't replicate data from one region to another without you specifically telling them to do that. S3 is replicated across availability zones within a region, and if you have an EBS volume that's saved as a snapshot, snapshots are stored in S3. So, because S3 is automatically replicated across availability zones on your behalf, EBS snapshots are more durable by default. EBS volumes themselves are replicated within an availability zone. I'm getting a bit distracted there, but that's a very interesting option they've thrown in.

Option E, secure wiping of EBS data when an EBS volume is unmounted. Again, that's kind of tricky. You think, "Well, shouldn't that be AWS doing that?" But it's wiping our data, all right? Secure wiping of EBS data is the responsibility of us, the customer. If you unmount a volume, it's not deleted. It's kind of a trick question in a way.
It's the wording that you need to pay attention to here. Remember that EBS is persistent storage: an EBS volume is not wiped when it is unmounted or detached from an instance. When you select a volume and click Delete Volume from the AWS console, the EBS volume is wiped. AWS replaces the data on the volume with zeroes, based on those industry best practices we talked about earlier, so that data cannot be recovered or read. You just have to watch the wording on those. Reading back through them, I'm going for option B: decommissioning of storage devices using industry-standard practices is the only answer that's actually correct out of those options.
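As a toy illustration of the zero-fill idea (this is not the actual AWS procedure, which happens at the storage layer when a volume is deleted), overwriting a buffer in place leaves nothing of the original bytes to recover from that buffer:

```python
def zero_wipe(buf: bytearray) -> None:
    """Overwrite every byte of the buffer in place with zeroes."""
    for i in range(len(buf)):
        buf[i] = 0


block = bytearray(b"sensitive customer data")
zero_wipe(block)
# The buffer now holds only zero bytes; the original content is gone.
```

The same principle scales up: once every block of a deleted volume has been rewritten with zeroes, the prior data cannot be read back by the next customer allocated that storage.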

About the Author


Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Outside of work, his passions are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.