When I worked in a data center environment in a previous role, our team knew that, at certain times of the year, external auditors would be coming on site to analyze our environment. This could have been for a number of different compliance controls, such as PCI DSS (Payment Card Industry Data Security Standard). In addition, not all compliance controls were external: we had stringent internal requirements that stipulated specific do's and don'ts when it came to the configuration of hardware resources.
These internal and external compliance requirements meant that there was a huge emphasis on ensuring that all controls were being met and proving that they had been met. This often meant that vast amounts of spreadsheets and other change management systems had to be manually kept up to date for all changes within the data center. This might include installing additional RAM into a server or decommissioning entire storage area networks (SANs).
If these records were incorrect, we risked failing the audit. As a result, many hours had to be invested in resource management every week to ensure that the team was compliant across a range of controls.
Compliance in a cloud environment is different. One of the fundamental elements of cloud computing is that resources can rapidly change, which is very different from a data center environment. A typical cloud environment will scale up and down and in and out depending on demand and other thresholds, which allows it to elastically evolve. Trying to maintain compliance on resources in an environment that is forever changing can be a huge headache.
For the purposes of an audit and other compliance requirements for your resources, at any given time, you will need to know certain information:
Maintaining a record of this information within your AWS environment is achievable, but only at considerable effort. You could run a 'describe' or 'list' operation with the AWS CLI against your resources to find some of this information, but developing a system to output those results in a readable, easy-to-manage format is another matter altogether.
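As a rough illustration of that manual approach, the sketch below flattens a `DescribeInstances` response into audit records. The boto3 call is only shown in a comment for context; `summarize_instances()` itself is a pure function over the response shape, and the sample data here is invented for the example.

```python
# Hypothetical sketch: extract instance type and tags from an EC2
# DescribeInstances response so they can be recorded for an audit.

def summarize_instances(response):
    """Flatten a DescribeInstances response into audit records."""
    records = []
    for reservation in response.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            records.append({
                "InstanceId": instance["InstanceId"],
                "InstanceType": instance["InstanceType"],
                "Tags": tags,
            })
    return records

# In a real session you would fetch the response with boto3:
#   import boto3
#   response = boto3.client("ec2").describe_instances()
sample = {
    "Reservations": [{
        "Instances": [{
            "InstanceId": "i-0abc123",
            "InstanceType": "m1.xlarge",
            "Tags": [{"Key": "ProjectName", "Value": "Alpha"}],
        }]
    }]
}
for rec in summarize_instances(sample):
    print(rec["InstanceId"], rec["InstanceType"], rec["Tags"])
```

Even with a helper like this, you still have to schedule it, store the output, and diff it over time, which is exactly the burden AWS Config removes.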
AWS soon realized this, and to help rectify the problem that many customers were experiencing, AWS introduced a service called AWS Config. AWS Config is a managed service that can do all of this for you, and more, by performing the following tasks on your behalf:
When it comes to resource management, AWS can be a great help. However, in this post, I’d like to focus on the last bullet in the list above: Config Rules.
Config Rules allow you to manage resource compliance by acting as an automatic compliance checker. When a change is made to a resource, AWS Config checks whether the resource falls within the scope of a rule. If it does, AWS Config evaluates the resource's compliance against that rule (with the help of a Lambda function) once the change has been made.
There are two types of Config Rules within AWS Config:
AWS Managed Rules are predefined and cover best practices and common compliance checks. These rules currently operate over the following topic areas:
For many of these Managed Rules, you can alter specific parameters to fit your requirements, as we will see shortly.
Custom Rules allow you to set your own compliance checks with your own Lambda functions, which is where the logic of the rule itself is evaluated. If you can write your own Lambda functions, then you can truly take advantage of these Config Rules. This will allow you to optimize your environment by ensuring that all compliance requirements have been fulfilled, which may not be possible within the limited AWS Managed Rules.
Let’s look at a sample scenario to see how Config Rules can help you meet compliance requirements:
Scenario: You have a number of fleets of EC2 instances with EBS volumes running a number of different applications within auto scaling groups. Internal standards and compliance require that the EC2 instances MUST be either c3.4xlarge or m1.xlarge instance types. In addition, the EBS volumes MUST be EBS optimized for efficient I/O throughput, and ALL EC2 and EBS resources MUST be tagged with an 'ApplicationName' and 'ProjectName'. External compliance controls also dictate that data MUST be encrypted at all times.
This can easily be achieved during the initial deployment as you can ensure that the correct configuration and settings are deployed. However…
Once the initial deployment was carried out, you handed the environment over to Support & Operations to maintain. Over time, the environment would be subject to general maintenance, the removal and addition of resources, and other incidents.
While they would have been aware of the compliance requirements, the Support & Operations team may not have maintained compliance at all times. This could have been due to human error or lack of knowledge. These things happen.
For example, they may have updated the launch configuration of an auto scaling group and selected the incorrect instance type, or they may have forgotten to enable encryption on the EBS volumes or failed to select an optimized volume. As applications were rolled out, they also may have forgotten to tag those instances.
As a result, your environment is now in a state of non-compliance, failing both internal and external requirements and controls.
This situation can easily be avoided with the use of AWS Config Rules. In this example, we could have used a number of AWS Managed Rules to notify us that non-compliant resources were in operation, allowing us to take the necessary action. It’s important to note that non-compliant resources still function as normal; AWS Config simply flags them as non-compliant. These are some of the rules that could have been used:
Let's look at how just a couple of these rules would have been configured, starting with desired-instance-type:
Select the rule from the list of AWS Managed Rules within AWS Config:
This will allow you to edit specific parameters of that rule. In the screenshot below, you will see the ‘Managed rule name.’ This is the name of the AWS Lambda Function that is used to evaluate the compliance of the resource against the rule.
The 'Resources' list shows which resource type I want the rule to be applied against. In this case, it is EC2: Instances.
Finally, the key-value pair allows me to indicate which instance type(s) the resource must adhere to; in our scenario, I have set this to c3.4xlarge and m1.xlarge.
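The same rule can be created outside the console. As a sketch, the function below builds the parameters that could be passed to boto3's `put_config_rule` for the desired-instance-type managed rule. The rule name `require-approved-instance-types` is an example of my own choosing; the source identifier and parameter key follow the managed rule's documented form, but verify them against the current AWS Config documentation.

```python
import json

def build_instance_type_rule(allowed_types):
    """Build a ConfigRule dict for the desired-instance-type managed rule."""
    return {
        "ConfigRuleName": "require-approved-instance-types",  # example name
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "DESIRED_INSTANCE_TYPE",
        },
        # Only evaluate EC2 instances.
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
        # The managed rule takes a comma-separated list of allowed types.
        "InputParameters": json.dumps(
            {"instanceType": ",".join(allowed_types)}
        ),
    }

rule = build_instance_type_rule(["c3.4xlarge", "m1.xlarge"])
# With credentials configured, this would register the rule:
#   boto3.client("config").put_config_rule(ConfigRule=rule)
print(rule["InputParameters"])
```

Keeping rule definitions in code like this also makes them reviewable and repeatable across accounts, rather than living only as console clicks.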
Now, let’s take a look at the Rule used for checking the tagging compliance.
Again, the rule is selected from the list of AWS Managed Rules within AWS Config:
In the screenshot below, we have the AWS Lambda function listed, along with the resource types that the rule should be applied to. I have included EC2: Instances and EC2: Volumes as our requirements indicated that both of these resource types required tagging.
The parameters at the bottom allow me to add both tags required: ‘ApplicationName’ and ‘ProjectName.’ In addition, I can add the values that are available for each of the tags.
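As with the instance-type rule, these settings could also be expressed in code. The sketch below builds parameters for the required-tags managed rule, which takes numbered `tag1Key`, `tag2Key`, ... parameters (with optional comma-separated allowed values). The rule name `require-project-tags` is an example; check the managed rule's current parameter names before relying on this shape.

```python
import json

def build_required_tags_rule(tags):
    """tags: ordered list of (key, allowed_values) pairs.

    Builds a ConfigRule dict for the REQUIRED_TAGS managed rule.
    """
    params = {}
    for i, (key, values) in enumerate(tags, start=1):
        params[f"tag{i}Key"] = key
        if values:  # omit the value parameter to accept any value
            params[f"tag{i}Value"] = ",".join(values)
    return {
        "ConfigRuleName": "require-project-tags",  # example name
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        # Our scenario requires tags on both instances and volumes.
        "Scope": {"ComplianceResourceTypes": [
            "AWS::EC2::Instance", "AWS::EC2::Volume",
        ]},
        "InputParameters": json.dumps(params),
    }

rule = build_required_tags_rule([("ApplicationName", []), ("ProjectName", [])])
print(rule["InputParameters"])
```

Omitting the value lists, as done here, means any value satisfies the rule as long as the tag key is present.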
Once the rules have been configured and are up and running, AWS Config will identify if there are any non-compliant resources against your rules.
The example below shows two instances that are non-compliant with the desired-instance-type rule.
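Those same non-compliant resources can be pulled out programmatically. The filter below is a pure function over the response shape returned by boto3's `get_compliance_details_by_config_rule` (shown in a comment); the sample data is invented for the example.

```python
# Sketch: list the resource IDs a Config rule has flagged.
# A real response would come from:
#   boto3.client("config").get_compliance_details_by_config_rule(
#       ConfigRuleName="desired-instance-type")

def non_compliant_resources(response):
    """Return the resource IDs of NON_COMPLIANT evaluation results."""
    ids = []
    for result in response.get("EvaluationResults", []):
        if result["ComplianceType"] == "NON_COMPLIANT":
            qualifier = result["EvaluationResultIdentifier"][
                "EvaluationResultQualifier"]
            ids.append(qualifier["ResourceId"])
    return ids

sample = {"EvaluationResults": [
    {"ComplianceType": "NON_COMPLIANT",
     "EvaluationResultIdentifier": {
         "EvaluationResultQualifier": {"ResourceId": "i-0aaa111"}}},
    {"ComplianceType": "COMPLIANT",
     "EvaluationResultIdentifier": {
         "EvaluationResultQualifier": {"ResourceId": "i-0bbb222"}}},
]}
print(non_compliant_resources(sample))  # -> ['i-0aaa111']
```

Feeding a list like this into a ticketing system or an SNS notification is a common way to turn Config's findings into remediation work.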
From this simple scenario, you can see the value of AWS Config as a continuous compliance-monitoring solution, which is just one of its many uses.
Maintaining compliance does not have to be a huge manual and resource intensive operation. AWS Config monitors your resources and performs a lot of these evaluations for you. This not only saves you time and money, but it also reduces your risk of non-compliance.
If you would like to learn more, check out our new course, AWS Config: An Introduction. It covers how AWS Config works, how to configure it, and how to put it to use in your environment, and it also looks at the other functions the service provides from a resource management perspective.
Get started with AWS Config: An Introduction today.