Updated: September 2017 – Inclusion of additional models
Over my next several posts, I’ll be discussing AWS security best practices from different perspectives and covering different AWS services. The overall goal is to help you improve the security of your cloud environments. We’ll start with the AWS Shared Responsibility Model, which lies at the very foundation of AWS Security.
By the very nature of the phrase “AWS Shared Responsibility Model,” we can see that security implementation on the AWS Cloud is not the sole responsibility of any one player, but is shared between AWS and you, the customer.
The AWS Shared Responsibility Model dictates which security controls are AWS’s responsibility, and which are yours. In short, you decide how you want your resources to sit ‘in’ the cloud (in other words, how much access you choose to give to and from your resources), while AWS guarantees the global security ‘of’ the Cloud (i.e., the underlying network and hardware they provide to host and connect your resources).
We will briefly describe a number of configurable security elements later in this post, but for now we're more interested in understanding our responsibilities than in how to implement them.
In my experience, a solid understanding of the AWS Shared Responsibility Model makes it easier to build and maintain a highly secure and reliable environment. Without knowing where I needed to step in and take control of data security, I was never able to properly define just how secure my environment really was.
Security is AWS’s number-one priority in every sense. It’s an area into which AWS pours huge capital and energy and devotes near-constant attention.
There’s a reason for this. After speaking with my business contacts in various sectors, it seems that security is still one of the main reasons corporations are reluctant to adopt a cloud presence. Overcoming this hesitation requires AWS to be at the very top of security excellence and governance.
AWS has served over a million customers in the past month alone, and its most stringent security standards are already being used for audit purposes by the most security-sensitive customers around. Facing so many requirements, AWS is certified and compliant across a huge range of security standards, including PCI DSS, ISO, and SOC.
AWS Services are deployed and distributed in exactly the same way throughout their entire global infrastructure. This means a single user accessing a simple S3 bucket for document backups is covered by the same strict security standards as the largest and most demanding corporations.
To help provide a clear definition of the boundaries of responsibility, AWS has devised three main models, each representing where AWS and customer responsibilities start and end: the Shared Responsibility Model for Infrastructure Services, for Container Services, and for Abstract Services.
By taking a look at each of these models, we will be able to clearly see the differences. Let’s start by looking at the first model, based on an infrastructure that includes services such as EC2. Then, we’ll look at how the level of responsibility shifts as we move into containers and abstract services.
For more information on the differences between container and abstract services within AWS, please see our course AWS Security Best Practices: Abstract and Container Services.
As we said, AWS is responsible for what is known as Security ‘of’ the cloud. This covers their global infrastructure elements including Regions, Availability Zones, and Edge Locations, and the foundations of their Compute, Storage, Database, and Network services.
AWS owns and controls access to their data centers where your customer data resides. This covers physical access to all hardware and networking components and any additional data center facilities, including generators, uninterruptible power supply (UPS) systems, power distribution units (PDUs), computer room air conditioning (CRAC) units, and fire suppression systems. Some of the compliance controls mentioned previously are based upon this physical access control. Essentially, AWS is responsible for the components that make up the cloud; any data put 'into' the cloud then becomes your responsibility.
With the basic cloud infrastructure secured and maintained by AWS, the responsibility for what goes into the cloud falls on you. This covers client- and server-side encryption, network traffic protection, operating system security, network and firewall configuration, and, above those, application security and identity and access management.
How much of this additional security you wish to implement is entirely your decision. What you choose may depend on the nature of your business or on existing controls that you may already have in place. I recommend tightening security as much as possible to minimize exposure to external threats that could compromise your environment. The important point to remember is that, while AWS provides many powerful security controls, how and when to apply them is not AWS’s responsibility.
Examples of AWS container services include Amazon RDS, Amazon EMR, and AWS Elastic Beanstalk.
Straight away, we can see that platform and application management, along with operating system and network configuration, has shifted to AWS and is no longer ours to manage as the customer. This is a major difference from infrastructure services.
However, not all responsibility has shifted. You should note that firewall configuration remains the responsibility of the end user, which integrates at the platform and application management level. For example, RDS utilizes security groups, which you would be responsible for configuring and implementing.
Examples of abstract services include Amazon S3, Amazon Glacier, Amazon DynamoDB, Amazon SQS, and Amazon SES.
You will notice that even more responsibility has shifted to AWS, specifically network traffic protection: the platform protects all data in transit across AWS's own network. You remain responsible for using IAM tools to apply the correct permissions, both at the platform level (such as S3 bucket policies) and at the IAM user/group level.
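To make the platform-level piece concrete, here is a minimal sketch of an S3 bucket policy granting a single IAM user read-only access to one bucket. The account ID, user name, and bucket name are hypothetical placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyBackupAccess",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:user/backup-reader"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-backup-bucket/*"
    }
  ]
}
```

A policy like this attaches directly to the bucket itself, complementing any identity-based permissions you grant through IAM users and groups.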
As we progress through each of these models, it’s clear that the level of control and responsibility shifts more toward AWS than to the customer.
When you create your AWS account, you will have an AWS administrator account and credentials that will allow you to create other users, groups, and roles within the Identity & Access Management (IAM) service. Right from the get-go, you are in control of who can access your resources, and it's up to you to manage this access properly. IAM is a very powerful tool that you can use to create a very specific set of access permissions and private security keys for the resources you deploy. Within IAM, you can also enable Multi-Factor Authentication (MFA), something I strongly recommend for all of the administrator-level accounts you create, and especially for the root account.
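As one illustrative sketch of tightening IAM access, the following policy pattern denies all actions unless the caller has authenticated with MFA. The `aws:MultiFactorAuthPresent` condition key is standard, but treat the policy itself as an example rather than a production-ready configuration:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllWithoutMFA",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}}
    }
  ]
}
```

Attached to a group of administrator accounts, a policy like this makes MFA a hard requirement rather than a recommendation.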
To learn how to use IAM and understand all of its features, take a look at our course, AWS: Overview of AWS Identity & Access Management (IAM).
Once you launch an EC2 instance, responsibility for properly applying the latest security patches to the operating system is yours, as the infrastructure model shows. AWS will not notify you when a new patch is released for your EC2 instance's OS; you must manage EC2 OS security yourself.
Whether you’re running Windows or some flavor of Linux (like CentOS, Ubuntu, or SUSE), you must manage the operating system’s security settings. Do not assume that the latest AMIs (Amazon Machine Images) have the very latest security patches. Always check for updates, for example using “yum update” (or “aptitude safe-upgrade”) for Linux, and the Windows update program for Windows.
To fully secure your instances, I can't stress enough the importance of configuring your security groups as tightly as possible. Security groups act as an instance-level firewall, with rules that filter traffic into and out of your instance. They work at the protocol and port level, restricting source traffic by IP address or by security group. This lets you grant access to your instances over specified protocols and port numbers, opening access from only a single IP address (x.x.x.x/32), from anywhere in the world (0.0.0.0/0), or from addresses in another, pre-configured security group.
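To illustrate the allow-only, match-any behavior of security group rules, here is a minimal Python sketch. The rule set and IP addresses are hypothetical, and the real evaluation of course happens inside AWS's networking layer:

```python
import ipaddress

# Hypothetical sketch of how security-group-style rules filter traffic:
# each rule allows a protocol, a port range, and a source CIDR block.
# Security groups are allow-only: traffic is permitted iff some rule matches.
RULES = [
    {"protocol": "tcp", "from_port": 22, "to_port": 22, "cidr": "203.0.113.10/32"},  # SSH from one IP
    {"protocol": "tcp", "from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},      # HTTPS from anywhere
]

def is_allowed(protocol, port, source_ip, rules=RULES):
    ip = ipaddress.ip_address(source_ip)
    for rule in rules:
        if (rule["protocol"] == protocol
                and rule["from_port"] <= port <= rule["to_port"]
                and ip in ipaddress.ip_network(rule["cidr"])):
            return True
    return False  # no matching allow rule: traffic is dropped

print(is_allowed("tcp", 443, "198.51.100.7"))  # True  (HTTPS open to the world)
print(is_allowed("tcp", 22, "198.51.100.7"))   # False (SSH locked to one address)
```

The key point the sketch captures is that security groups have no deny rules: anything not explicitly allowed is silently dropped.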
For security within your Virtual Private Cloud (VPC) at the subnet/network level, you can implement Network Access Control Lists (NACLs). A NACL is similar to a security group in that it is composed of rules, but it filters traffic at the subnet level. It's important to note that security groups are stateful, while NACLs are stateless. You must remember this when setting up your NACL, as it means you will need to specify rules for both inbound and outbound traffic. NACLs are associated with specific subnets, and so present a great way to help protect against DDoS attacks. Understanding both mechanisms is crucial to controlling who or what can access the resources within your VPC.
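In contrast to security groups, NACL rules are numbered, evaluated in ascending order, and the first match (allow or deny) wins, with an implicit deny if nothing matches. A small Python sketch of that ordering, with hypothetical rule numbers and ports:

```python
# Hypothetical sketch of stateless NACL evaluation: rules are numbered,
# checked in ascending order, and the FIRST match (allow or deny) wins.
# Because NACLs are stateless, return traffic needs its own outbound rule.
NACL_INBOUND = [
    (100, "allow", "tcp", range(443, 444)),  # rule 100: allow HTTPS in
    (200, "deny",  "tcp", range(0, 65536)),  # rule 200: deny all other TCP
]

def evaluate(rules, protocol, port):
    for _, action, proto, ports in sorted(rules):
        if proto == protocol and port in ports:
            return action
    return "deny"  # implicit deny when no rule matches

print(evaluate(NACL_INBOUND, "tcp", 443))  # allow
print(evaluate(NACL_INBOUND, "tcp", 22))   # deny
```

Statelessness is what the mirrored outbound rules compensate for: a response packet leaving the subnet is evaluated on its own, with no memory of the inbound request that triggered it.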
Should you need greater security for your data at rest, you can implement client- or server-side encryption along with traffic integrity protection. For example, you can use 256-bit AES server-side encryption with S3 buckets, or enable EBS encryption for your EC2 storage volumes.
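If you decide to make server-side encryption the default for a bucket, the encryption configuration is a small JSON document. This sketch shows the AES-256 variant; the shape follows the S3 bucket encryption configuration, but verify it against the current S3 documentation before relying on it:

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}
```

With a default like this in place, objects uploaded without an explicit encryption header are still encrypted at rest.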
This has been the first in a series of articles on AWS security and AWS security best practices. I hope it has given you a better understanding of the division of roles created by the AWS Shared Responsibility Model: what is expected from you, the customer, compared with what is supplied and managed by AWS. With this, you can begin deploying a strong and effective security policy within your environment from the ground up. Next, I will cover security concepts related to Amazon Virtual Private Clouds, including the best-practice use of security groups to achieve the highest possible instance security. I will also discuss the differences between dedicated and multi-tenant instances, and provide an overview of secret and public access keys when using API calls to access EC2 instances.
Got feedback? Please leave a comment below.