
Designing security controls

Duration: 1h 15m


Course Description

In this course, you'll gain a solid understanding of the key concepts for Domain Six of the AWS Solutions Architect Professional certification: Security.

Course Objectives

By the end of this course, you'll have the tools and knowledge you need to meet the following requirements for this domain:

  • Design information security management systems and compliance controls
  • Design security controls with the AWS shared responsibility model and global infrastructure
  • Design identity and access management controls
  • Design protection of Data at Rest controls
  • Design protection of Data in Flight and Network Perimeter controls

Intended Audience

This course is intended for students seeking to acquire the AWS Solutions Architect Professional certification. It is necessary to have acquired the Associate level of this certification. You should also have at least two years of real-world experience developing AWS architectures.


As stated previously, you will need to have completed the AWS Solutions Architect Associate certification, and we recommend reviewing the relevant learning path in order to be well prepared for the material in this course.

This Course Includes

  • Expert-led instruction and exploration of important concepts.
  • Complete coverage of critical Domain Six concepts for the AWS Solutions Architect - Professional certification exam.

What You Will Learn

  • Designing information security management systems (ISMS) and compliance controls
  • Designing security controls
  • Designing identity and access management (IAM) controls
  • Designing protection of Data at Rest controls
  • Designing protection of Data in Flight and Network Perimeter controls


Security in the cloud is composed of four key areas. Number one, data protection: protecting data in transit and at rest. Number two, privilege management: controlling who has access to what, and when. Number three, infrastructure protection: ensuring the network and the base infrastructure are protected from compromise. And number four, detective controls: monitoring what happens at all levels of the environment, and being able to detect and report any erroneous or unusual activity.

AWS provides a shared responsibility security model for infrastructure services. It's important you recognize and understand the shared responsibility model for the exam. Questions might ask you to identify which security tasks would be completed by AWS, and which are the responsibility of the customer. Now, a simple way I like to remember who does what is that AWS manages security OF the cloud, and customers manage security IN the cloud. AWS provides a secure infrastructure and foundation for compute, storage, network, and database services. Regions, Availability Zones, and endpoints are some of the components of the AWS secure global infrastructure. That includes the facilities, the physical security of the hardware, the network, and the virtualization infrastructure. Everything that runs on top of that is the responsibility of the customer.

Let's break this down a bit. AWS manages the security of facilities, the physical security of hardware, the network infrastructure, and the virtualization infrastructure. If we are defining an information security management system, for example, we could consider AWS the owner of those assets for the purpose of our ISMS asset definitions. Customers are responsible for the security of Amazon Machine Images, operating systems, applications, data in transit and data at rest, data stores, credentials and policies, and, most importantly, configuration.
Now, the shared responsibility model means AWS customers are responsible for protecting the confidentiality, integrity, and availability of their data in the AWS cloud, and for meeting any business requirements for information protection. When we apply the shared responsibility security model to those four areas we looked at for cloud security, three out of the four are tasks customers need to do: data protection, privilege management, and detective controls are the responsibility of the customer, with AWS managing infrastructure protection out of that grid. AWS provides a range of security services and tools that customers can use to secure assets within AWS services, such as server-side encryption, HSM keys, CloudWatch, and CloudTrail, to name a few. Customers retain control of what security they choose to protect their own content, platform, applications, systems, and networks, so it's by choice. AWS manages the Regions, Availability Zones, and edge locations.

While AWS manages security of the cloud, security in the cloud is the responsibility of the customer. Let's go through some common areas where people tend to get tripped up. Data at rest and in transit is the responsibility of the customer. It's easy to assume that, because AWS manages the infrastructure, they should surely manage the security of your data at rest and in transit, right? Well, customer data is the responsibility of the customer. AWS does not audit or read data volumes. It is our responsibility to ensure any data we store in AWS is encrypted and secure, both in transit and at rest. Network traffic protection is the responsibility of the customer. It is up to us to encrypt traffic in and out of our instances, so you need to enable Elastic Load Balancing to terminate or pass through SSL connections, for example.
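As a sketch of what terminating SSL on a load balancer involves, here is a minimal example of the request an Application Load Balancer HTTPS listener takes. The ARN values and names are placeholders, and the boto3 call itself appears only as a comment, since it needs real resources and credentials:

```python
def https_listener_params(load_balancer_arn, certificate_arn, target_group_arn):
    """Request parameters for an SSL-terminating HTTPS listener on an ALB."""
    return {
        "LoadBalancerArn": load_balancer_arn,
        "Protocol": "HTTPS",  # the load balancer terminates SSL/TLS here
        "Port": 443,
        "Certificates": [{"CertificateArn": certificate_arn}],
        "DefaultActions": [
            # decrypted traffic is forwarded to the targets after termination
            {"Type": "forward", "TargetGroupArn": target_group_arn}
        ],
    }

# All three ARNs below are hypothetical placeholders.
params = https_listener_params(
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/demo/abc",
    "arn:aws:acm:us-east-1:111122223333:certificate/example",
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/demo/def",
)
# boto3.client("elbv2").create_listener(**params) would create the listener.
```

The point is that terminating SSL at the load balancer is something you configure explicitly; AWS does not do it for you.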
Route 53 and Elastic Load Balancers support SSL, so it's not difficult to set up HTTPS communications to protect data in transit; however, you do need to do it. It's not something that's done automatically for you by AWS by default; it's the responsibility of the customer. Now, server-side encryption is the responsibility of the customer. AWS encrypts S3 objects as part of providing a managed service with S3; however, you need to implement EBS encryption to protect your data in volumes. Client-side data and data integrity are the responsibility of the customer. Operating systems are the responsibility of the customer. AWS provides machine images, and they go to great lengths to ensure that those images have the latest patches and security ciphers, etc., but once you provision and start that machine image, it becomes your responsibility to keep it patched and secure. AWS provides services like security groups and network access control lists; however, you should also consider running firewall appliances to protect those servers from the public domain.

Platform and application management is the responsibility of the customer. AWS provides a secure platform; however, it is our responsibility to ensure it stays that way. Any platform patch or update is your responsibility, unless you are running RDS, which is a managed service: AWS maintains things like Oracle and SQL Server patches and versions for you. For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and the platform, and you access the endpoints to store and retrieve data. So, Amazon S3 and DynamoDB are tightly integrated with IAM, and you are responsible for managing your data, and for using IAM tools to apply access control list (ACL)-type permissions to individual resources at the platform level, with all permissions based on user identity or user responsibility at the IAM user or group level.
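To make the customer's side of this concrete, here is a minimal sketch of the two encryption requests just described: asking S3 to encrypt an object at rest, and creating an encrypted EBS volume. The bucket, key, and Availability Zone are hypothetical, and the actual boto3 calls are shown only as comments:

```python
def sse_put_object_params(bucket, key, body):
    """put_object request asking S3 to server-side encrypt the object (SSE-KMS)."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",  # encrypt at rest with a KMS key
    }

def encrypted_volume_params(availability_zone, size_gib):
    """create_volume request for an encrypted EBS volume."""
    return {
        "AvailabilityZone": availability_zone,
        "Size": size_gib,
        "Encrypted": True,  # EBS encryption is opt-in, per volume
    }

s3_params = sse_put_object_params("my-secure-bucket", "report.csv", b"col1,col2\n")
ebs_params = encrypted_volume_params("us-east-1a", 100)
# boto3.client("s3").put_object(**s3_params)
# boto3.client("ec2").create_volume(**ebs_params)
```

In both cases the encryption flag is something the customer sets; leaving it off leaves the data unencrypted at rest.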
For Amazon S3, you can also use platform-provided encryption of data at rest, or platform-provided HTTPS encapsulation, to protect data in transit to and from the service. Now, platform compliance, data encryption at rest and in transit, and auditing tools such as Amazon CloudWatch, CloudTrail, and AWS Config enable detective controls inside of that.

Imagine you operate a web application in one AWS Region. The application runs on an auto-scaled layer of EC2 instances, and you have an RDS Multi-AZ database. Your IT security compliance officer wants evidence that we have a reliable and durable logging solution to track changes made to our EC2, IAM, and RDS resources. Setting up CloudTrail logging to an Amazon S3 bucket could achieve this level of reporting for us. Amazon CloudTrail can provide deep visibility into API calls, including who, what, when, and from where any calls were made. We can also define log aggregation options to streamline investigations and compliance reporting. We can then configure Amazon CloudWatch to monitor the incoming log entries for any desired symbols or messages, and to surface the results as CloudWatch metrics. Using CloudTrail logs, we could track "Invalid user" messages, so that we could see how often failed login attempts were made against any instance. To do this we could first request the list of log groups, select the stream, and click on "create metric filter". We could then create a filter that looks for the string "Invalid user". We could also monitor our web server log files for 404 errors, to detect bad inbound links, or 503 errors, to detect a possible overload condition. We'll need to create a new bucket for our logs to be stored in.
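The metric filter described above can also be created programmatically. Below is a sketch of the CloudWatch Logs put_metric_filter request; the log group, filter name, metric name, and namespace are all hypothetical, and the boto3 call is shown only as a comment:

```python
def invalid_user_filter_params(log_group_name):
    """put_metric_filter request that counts "Invalid user" log lines."""
    return {
        "logGroupName": log_group_name,
        "filterName": "InvalidUserLogins",      # hypothetical filter name
        "filterPattern": '"Invalid user"',      # match log events containing this term
        "metricTransformations": [{
            "metricName": "InvalidUserCount",   # hypothetical metric name
            "metricNamespace": "LogMetrics",    # hypothetical namespace
            "metricValue": "1",                 # emit 1 per matching log event
        }],
    }

filter_params = invalid_user_filter_params("/var/log/secure")  # hypothetical log group
# boto3.client("logs").put_metric_filter(**filter_params)
# You could then set a CloudWatch alarm on the InvalidUserCount metric.
```

The same pattern applies to the 404 and 503 examples: change the filter pattern and metric name, and alarm on the resulting metric.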
If we want to keep these logs secure, we could use IAM roles, ACLs, or bucket policies, and we could also consider using multi-factor authentication (MFA) Delete on the bucket that stores these logs. That should keep our IT security compliance officer happy, and give them the reporting that they need.

A few points to keep in mind about CloudWatch. CloudWatch Logs will store your log files indefinitely, but CloudWatch alarm history is only stored for 14 days. Now, you can publish your own metrics to CloudWatch using the put-metric-data command, and basically we've got two options for this: you can either aggregate your data before you publish to CloudWatch, or send single metrics. The point to keep in mind is that CloudWatch uses one-minute boundaries when aggregating data points, so although you can publish data points with time stamps as granular as one thousandth of a second, CloudWatch aggregates the data to a minimum granularity of one minute. CloudWatch records the average (the sum of all items divided by the number of items) of the values received for every one-minute period, as well as the number of samples, maximum value, and minimum value for the same time period. For example, a page view count metric might contain three data points with time stamps just seconds apart, but CloudWatch aggregates the three data points because they all fall within the same one-minute period. When you have multiple data points per minute, aggregating data minimizes the number of calls you need to make to put-metric-data. For example, instead of calling put-metric-data multiple times for three data points that are within three seconds of each other, you can aggregate the data into a statistic set that you publish in one call. To publish a single data point for a new or existing metric, use the put-metric-data command with one value and time stamp.
The put-metric-data command publishes one data point per call, and the time stamps you specify can be up to two weeks in the past.
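The aggregation option above can be sketched as follows: pre-compute a statistic set locally and publish it in one call instead of three. The metric name and namespace are hypothetical, and the boto3 call is shown only as a comment:

```python
def statistic_set(values):
    """Pre-aggregate raw samples into the StatisticValues form put-metric-data accepts."""
    return {
        "SampleCount": len(values),
        "Sum": sum(values),
        "Minimum": min(values),
        "Maximum": max(values),
    }

# Three page-view samples taken seconds apart, published as one statistic set:
datum = {
    "MetricName": "PageViewCount",  # hypothetical metric
    "StatisticValues": statistic_set([120, 80, 100]),
    "Unit": "Count",
}
# boto3.client("cloudwatch").put_metric_data(Namespace="MyApp", MetricData=[datum])
# CloudWatch derives the average as Sum / SampleCount (here 300 / 3 = 100),
# matching what it would have computed from the three individual data points.
```

This mirrors the one-minute aggregation CloudWatch performs anyway, so you lose nothing by batching within the minute.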

About the Author


Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe.  His favorite Amazon leadership principle is "Customer Obsession" as everything AWS starts with the customer. Passions around work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start ups.