
Authorization in AWS


Overview

Difficulty: Intermediate
Duration: 1h 26m

Description

Cloud security is a huge topic, mainly because it has so many different areas of focus. This course focuses on three areas that are fundamental: AWS authentication, authorization, and accounting.

These three topics are all linked, and understanding the different security controls from an authentication and authorization perspective can help you design the correct level of security for your infrastructure. Once an identity has been authenticated and is authorized to perform specific functions, it's important that this access can be tracked with regard to usage and resource consumption, so that it can be audited, accounted, and billed for.

The course will define and discuss each area, and iron out any confusion between the various security terms. Some people are unaware of the differences between authentication, authorization, and access control; this course clearly explains those differences, allowing you to use the correct terms to describe your security solutions.

From an AWS authentication perspective, a number of different mechanisms are explained, such as Multi-Factor Authentication (MFA), federated identity, access keys, and key pairs. With the help of demonstrations, you can learn how to apply access keys to the AWS CLI for programmatic access and understand the differences between Linux and Windows authentication methods using AWS key pairs.

When we dive into authorization, we cover IAM users, groups, roles, and policies, providing examples and demonstrations. Within this section, S3 authorization is also discussed, looking at access control lists (ACLs) and bucket policies. Moving on from S3, we look at network- and instance-level authorization with the help of network access control lists (NACLs) and security groups.

Finally, the Accounting section will guide you through the areas of Billing & Cost Management that you can use to help identify potential security threats. In addition to this, we explain how AWS CloudTrail can be used to track API calls to analyse what users are doing and when. This makes CloudTrail a strong tool in tracking, identifying and monitoring a user's actions within your AWS environment.

Transcript

Hello, and welcome to this lecture, discussing how authorization can be granted within AWS.

In an earlier lecture, I discussed the differences between authentication and authorization, and I just want to reiterate what they were, so I'll go over the definitions of the two again. Authentication is the process of defining an identity and the verification of that identity, for example, a username and password. Authorization determines what an identity can access within a system once it's been authenticated. An example of this would be an identity's permissions to access specific AWS resources. As we have already seen, the main service responsible for managing and maintaining what an AWS identity is authorized to access is IAM, Identity and Access Management.

So let's start with IAM and how these permissions are implemented and associated with different identities, authorizing them to use specific services and carry out certain functions. When an identity is authenticated to AWS, the way in which permissions are given to it varies depending on the identity's own user permissions and its association with other IAM groups and roles.

Let's take a quick recap on users, groups, and roles. IAM users are account objects that allow an individual user to access your AWS environment with a set of credentials. You can issue user accounts to anyone who needs to view or administer objects and resources within your AWS environment. Permissions can be applied individually to a user, but the best practice for permission assignment is to add the user to an IAM group.

IAM groups are objects that have permissions assigned to them via policies, allowing the members of the group access to specific resources. Having users assigned to these groups allows for a uniform approach to access management and control.

IAM roles are objects created within IAM which have policy permissions associated with them. However, instead of just being associated with users as groups are, roles can be assigned to instances at the time of launch. This allows the instance to adopt the permissions given by the role without the need to have access keys stored locally on the instance.

Permissions are granted to users, groups, and roles by means of an AWS IAM policy. This policy takes the form of a JSON script. There are a number of pre-written AWS policies, which are classed as AWS managed policies, and you can also create your own customer managed policies. The AWS managed policies cover a huge range of AWS services at different authorization levels, from read-only to full access, and at the time of this course's production there are currently 218 AWS managed policies in place. If your security requirements fit one of these AWS managed policies, then that's great, and you can start using it right away by associating users, groups, or roles with it. However, it's more than likely that these AWS managed policies are not a perfect match for the permissions you want to assign to an authenticated user. In this instance, you can copy and tweak the policy to make it fit your requirements exactly. When it comes to security, you can't be lazy, as this leads to mistakes and vulnerabilities. You can't afford to take shortcuts, and you need to define your permissions so they only allow authorized access to the services and features that are required.

IAM policies are made up of statements following a set syntax for allowing or denying permissions to an object within AWS. Each policy will have at least one statement, with a structure that resembles the following breakdown; the attributes are pulled together in the example at the end of it. Statement: this defines the main element of a policy and groups together the permissions defined within it via the following attributes.

Effect: this will either be set to Allow or Deny. These are explicit. By default, access to your resources is denied, so if this is set to Allow, it replaces that default Deny. Similarly, if this were configured as Deny, it would override any previous Allow.

Action: this corresponds to API calls to AWS services that authenticate through IAM. For example, s3:DeleteBucket represents an API call to delete a bucket within S3. You are able to list multiple actions if required by using a comma to separate them, and wildcards are also allowed, so, for example, you could specify an action covering all API calls relating to S3.

Resource: this specifies the actual resource you wish the permission to be applied to. AWS uses unique identifiers known as ARNs, Amazon Resource Names, to specify resources. Typically, ARNs follow a consistent syntax, shown in the sketch just after this breakdown of each segment. Partition: this relates to the partition that the resource is found in; for standard AWS regions, this section would be aws. Service: this reflects the specific AWS service, for example, s3 or ec2. Region: this is the region where the resource is located. Now remember, some services do not need a region specified, so this can sometimes be left blank in those circumstances. Account-ID: this is your AWS account ID, without hyphens. Again, there are some services that do not need this information, so it can also be left blank. Resource: the value of this field depends on the AWS service you are using. For example, if I were using the action s3:DeleteBucket, then I could use the name of the bucket I wanted the permission to apply to, and in this example, cloudacademy is the name of the bucket.
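For reference, since the on-screen slide isn't reproduced in this transcript, the general ARN syntax being described is:

```
arn:partition:service:region:account-id:resource
```

So, for the S3 example above, where the region and account ID are left blank, the ARN for the cloudacademy bucket would be arn:aws:s3:::cloudacademy.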

Condition: this element of the IAM policy is optional and allows you to specify when the permissions will be activated based upon set conditions. Conditions use key-value pairs, and all conditions must be met for the permissions to be activated. For example, there may be a condition only permitting requests from a specific source IP address. A full listing of these conditions can be found here.
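Pulling those elements together, a minimal statement built from the pieces described above might look like the following sketch. The Version line is standard boilerplate for policy documents, and the bucket name and source IP range are purely illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:DeleteBucket",
      "Resource": "arn:aws:s3:::cloudacademy",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "10.0.0.0/24"
        }
      }
    }
  ]
}
```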

Now that we have a basic understanding of how these JSON scripts are put together and their general flow, let's see how we can modify existing policies to tweak them to your needs. Copying and editing an existing AWS managed policy is a very simple and easy thing to do, and it can save you a lot of time trying to create your own if you just need a few small tweaks.

So I'm currently within the AWS management console, at the dashboard of IAM. From here, you just need to go down to Policies, and then up to Create Policy. Now you can see you've got three options here: Copy an AWS Managed Policy, Policy Generator, or Create Your Own. For this demonstration, we want to copy an existing AWS managed policy and then customize it to fit our needs, so we can select that. Now, you can filter this policy list to save yourself scrolling through the tens or hundreds that there are, so what I'd be inclined to do is search for roughly what you're looking for. Let's have a look at S3, and let's take a look at the S3ReadOnlyAccess policy. So select that, and now you can see what the policy looks like. This is the JSON document, and you can see that it allows the s3:Get and s3:List actions, which will essentially give you read-only access to S3, on any resource. So let's modify this to include an additional permission, for example, CreateBucket. I can directly edit this policy document and add in our own action, s3:CreateBucket. So now we have read-only access, and we are also allowed to create buckets as well.

If we click on Validate Policy, that will just confirm that the entries we have made are okay, and you can see at the top here, it says this policy is valid. If you did edit it and it wasn't quite correct, then it would let you know. For example, if I removed this comma here and tried Validate Policy again, it would let us know that this policy contains a JSON error on the specified line and tell us what it expected instead of what it actually found. So if we go back to line 8, add our comma back in, and click Validate again, it says this policy is valid.

And then from here, all we need to do is give this a new policy name. We can call it S3-Custom-Policy. And then all we need to do is click on Create Policy. And that's it. Now we can verify that that policy exists. We can click on the filter here and say Customer Managed. Because we've edited the AWS managed policy, it now becomes a customer managed policy. And we can see, down here is our policy, S3-Custom-Policy. We can click on it, we can see the JSON document. And that's it.
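For reference, the resulting S3-Custom-Policy would look something like the sketch below. The exact Get and List entries depend on the version of the managed policy you copied, so treat this as indicative rather than definitive:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:CreateBucket"
      ],
      "Resource": "*"
    }
  ]
}
```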

If you don't feel confident enough to edit existing AWS managed policies, then you could use a tool provided within IAM called the IAM Policy Generator. This allows you to create an IAM policy using a series of dropdown boxes, without the need to edit the JSON document itself. The following demo will quickly show you how to access this policy generator and create an example policy.

Okay, so creating a policy using the AWS Policy Generator is, again, very simple, like we've done previously. I'm starting on the screen within IAM, and I'm under Policies at the moment. From here, all you need to do is click on Create Policy, and again we have the three options, but this time we want to use the policy generator. So click on Select, and we've got a number of dropdown boxes and options here. We've got an effect, which we can have as either Allow or Deny; for this example, we're going to have Allow. We then have a list of AWS services, and as you can see, there's quite a lot in the list. We'll select Amazon S3, and now we can pick from all the actions associated with S3. If we tick this one here, All Actions, then we get everything, or we can just pick specific permissions. Let's go for CreateBucket and DeleteBucket. Then we have to supply the Amazon Resource Name, so for S3, that will be arn:aws:s3::: followed by a wildcard for all resources. Click Add Statement, and you can see here at the bottom we have an Allow effect for the s3:CreateBucket and s3:DeleteBucket actions on all resources within S3. Then we click on Next Step, and we can see that it's created the JSON policy document for us. So, based on those dropdown selections, we now have a full policy document that we can use.

And then we can click on Validate Policy, and as before, you can now see that this policy is valid, so there are no errors in this policy. Now we can give this policy a name; let's call it S3CreateDelete, and then click on Create Policy. Again, we can verify that our policy is there by filtering on Customer Managed, and here we have our S3CreateDelete policy. And there you go, that's how you create a policy using the policy generator.
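The generated S3CreateDelete policy should come out broadly like this sketch; the policy generator also adds an auto-generated statement ID (Sid), which will differ in your output:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1",
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket"
      ],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
```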

So far, we have covered how to create IAM policies both from an AWS managed policy and via the policy generator. However, if you are completely at ease writing your own JSON scripts and want to define your own tight, well-written IAM policies, then you have this option available to you as well. All you need to do is give your policy a name and a description, and then start writing your permission statements, authorizing any associated identities to access, or restricting their access to, AWS resources. Once you get used to the syntax and benefits of writing your own policies, you'll be able to effectively and efficiently lock down access to your resources to ensure they are only accessed by authorized API calls. There are many, many actions that can be applied and controlled through an IAM policy, but they're a bit beyond the scope of this course. However, AWS does provide great API listings for the different services through their extensive documentation for advanced policy writers.

Let's now take a step away from IAM and move our attention to S3, the Simple Storage Service. This is one of AWS' most common storage services and is used by a multitude of other AWS services, so it's worth devoting some time to see how S3 handles its own authorization. There are multiple ways an identity can be authorized to access an object within S3, which overlap with the IAM mechanisms we have already discussed. So how does a user or service get the correct level of authorization? First, let's define the different methods by which permissions can be applied within S3: S3 bucket policies, and S3 ACLs, access control lists.

Bucket policies are similar to IAM policies in that they grant access to resources via a JSON script. However, as the name implies, bucket policies are only applied to buckets within S3, whereas IAM policies can be assigned to users, groups, or roles, as we previously discussed; in addition, IAM policies can govern access to any AWS service, not just S3. When a bucket policy is applied, the permissions assigned apply to all objects within that bucket. This policy introduces a new attribute called principals. These principals can be IAM users, federated users, another AWS account, or even other AWS services, and the attribute defines which principals should be allowed or denied access to various S3 resources. Principals are not used within IAM policies, as the principal element is defined by whoever is associated with that policy via the user, group, or role association. Because bucket policies are assigned to buckets, we need this additional principal parameter within the policy.

As you can see from the example below, a bucket policy is very similar in terms of layout and syntax to an IAM policy. However, we do have the Principal attribute added. This value must be the AWS ARN of the principal, and in this example we can see that cloudacademy, a user within IAM, is allowed to delete objects and put objects within the cloudacademy bucket identified under the Resource parameter. S3 bucket policies also allow you to set conditions within the policy, allowing a fine-grained permission set to be defined. For example, you could allow or deny specific IP subnets access to the bucket, or perhaps even restrict a specific IP address. This is another level of access control, taking place at the network level, that helps to tighten access, ensuring only authorized access is permitted.
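A bucket policy along the lines of the one being described might look like this sketch; the account ID 111122223333 is a placeholder, and the user and bucket names are the ones used in the lecture:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/cloudacademy"
      },
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::cloudacademy/*"
    }
  ]
}
```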

I now want to move on to S3 ACLs to show you how these differ. This access mechanism predates IAM, and so it's quite an old access control system. S3 ACLs allow identities to access specific objects within buckets, a different approach from bucket policies, which are applied at the bucket level only. ACLs allow you to set certain permissions on each individual object within a bucket. These ACLs do not follow the same format as the policies defined by IAM and bucket policies; instead, they are far less granular, and different permissions can be applied depending on whether you are applying the ACL at the bucket or the object level.

The grantee is the recipient of the permissions. The resource owner is likely to have full control over an object, and on new bucket creation this is typically the AWS account owner. Grantees are defined by the following categories. Everyone: this would allow access to the object by anyone, and that doesn't just mean any AWS user, but anyone with access to the internet if the object is public. Any Authenticated AWS User: this option will only allow IAM users or other AWS accounts to access the object via authenticated, signed requests. Log Delivery: this allows logs to be written to the bucket when it is being used to store server access logs. Me: this relates to your current IAM AWS user account. From within S3, via the AWS management console, these permissions can be applied via a series of checkboxes, and if all options are selected, then that grantee is considered to have full control of the object. You can have up to 500 grantees on any object.
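ACL grants aren't edited as JSON in the console, but you can view their structure programmatically; for example, the aws s3api get-bucket-acl command returns something roughly like the sketch below, where the owner names and canonical IDs are placeholders:

```json
{
  "Owner": {
    "DisplayName": "account-owner",
    "ID": "EXAMPLE-CANONICAL-USER-ID"
  },
  "Grants": [
    {
      "Grantee": {
        "Type": "CanonicalUser",
        "DisplayName": "account-owner",
        "ID": "EXAMPLE-CANONICAL-USER-ID"
      },
      "Permission": "FULL_CONTROL"
    },
    {
      "Grantee": {
        "Type": "Group",
        "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
      },
      "Permission": "READ"
    }
  ]
}
```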

We have spoken about a number of ways an identity or principal can be authorized to access a resource or object within AWS, but what happens when a principal who belongs to a group accesses an object in a bucket that has S3 ACLs and bucket policy permissions applied, on top of their own IAM permissions? With all of this authorization applied to the principal, how is access governed if there are conflicting permissions on the object in the bucket they are trying to access?

Well, AWS handles this permission conflict on the basis of least privilege. Essentially, by default, AWS dictates that access to an object is denied, even without an explicit Deny within any policy. To gain access, there has to be an Allow within a policy that the principal is associated with, or that is defined within a bucket policy or ACL. If there are no Denies defined but there is an Allow within a policy, then access will be authorized. However, if there is a single Deny associated with the principal for a specific object, then even if an Allow does exist, that explicit Deny will always take precedence, overruling the Allow, and access will not be authorized.
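As a quick illustration of that precedence, suppose an identity's IAM policy allows s3:* on a bucket, but the bucket policy contains a statement like the sketch below; any s3:DeleteObject request against that bucket will then be refused, because the explicit Deny overrides the IAM Allow (the bucket name is illustrative):

```json
{
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:DeleteObject",
  "Resource": "arn:aws:s3:::cloudacademy/*"
}
```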

I'd now like to give a quick demo of how to create S3 ACLs and S3 bucket policies. Okay, for this demo, I'm going to show you how to look at S3 ACLs and edit them, and also how to create an S3 bucket policy. I've created a bucket here within S3 called cademobucket. Looking at the properties of this bucket, if we go down to Permissions, here you'll see the permissions related to the ACL, the access control list. The grantee is the account owner. If we wanted to add more permissions to this ACL, we can click on Add More Permissions and select another grantee; I'll just select Me, and then we can use the tickboxes to select the permissions that we want, so List and Upload/Delete, and then click on Save. That'll now give my user List and Upload/Delete permissions on this bucket. And for S3 ACLs, it's as simple as that, really.

So, moving on to bucket policies. Let's just delete this, and let's add a bucket policy. Now, you can either write your own policy here if you're confident enough, or you can select a sample bucket policy, or use the AWS Policy Generator. Let's go ahead and use the generator. The type of policy will be an S3 bucket policy, and the effect we'll have is Allow. The principal is going to be an AWS user in this demonstration, so if we go ahead and look at our user, the one we created earlier was CAuser1. Here's the ARN of this user, so we shall copy that. And if you notice the permissions that this user's got, it's only read-only access to S3; it's one of the AWS managed policies that was assigned to that user. So we'll put in the ARN of the principal. The service is Amazon S3 and the action we'll have is PutObject. The ARN of the bucket will be arn:aws:s3:::cademobucket/ followed by a wildcard for any object. Let's add a condition as well: on the condition of an IpAddress match, with the SourceIp being mine, which is 90.198.222.3. So we'll add that condition and add the statement. Here we can see that the principal, CAuser1, is allowed to put objects within the cademobucket on the condition that the source IP address is 90.198.222.3, which is my IP address. Click on Generate Policy, we can then copy that and paste it into our Bucket Policy Editor, and click on Save. And that's it, that's the bucket policy applied.
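For reference, the policy the generator produces in this demo comes out much like the sketch below; the account ID 111122223333 is a placeholder, and the generator will also include its own auto-generated Id and Sid values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/CAuser1"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::cademobucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "90.198.222.3"
        }
      }
    }
  ]
}
```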

So what I'm going to do now is log out of this account, and log in with the CAuser1 account and try and put an object in that bucket. Okay, so I've logged back in as CAuser1. So I want to try and test that bucket policy now by putting an object within this bucket. So as you can see, I'm within the cademobucket. So if I go to Upload, Add Files, pick a random file, and say Start Upload, and there you can see, the object has been uploaded. So with the use of a bucket policy, I was able to grant additional permissions to this user to allow them to add objects to this bucket, with the inclusion of the conditions as well using the source IP address.

Permissions and authorization can exist at multiple layers within the AWS framework. We have looked at specific user and principal permissions, and how the authorization process is managed.

When we discussed S3 bucket policies, we briefly touched on conditions, and how these can be configured to allow or deny access based on IP addresses, for example.

This network-level access control can also be used within your virtual private cloud, VPC, to authorize network traffic in and out of a particular subnet. It's managed differently, and offers greater control, through the use of network access control lists, or NACLs.

At the beginning of this course, we listed AWS NACLs as an access control mechanism, and indeed they are; however, they provide permissions at the network layer. NACLs provide a rule-based security feature for permitting ingress and egress network traffic at the protocol and subnet level. In other words, NACLs monitor and filter traffic moving in and out of your subnet, either allowing or denying access depending on rule permissions. These NACLs are attached to one or more subnets within your virtual private cloud. If you haven't created a custom NACL, then your subnets will automatically be associated with your VPC's default NACL, and in this instance the default allows all traffic to flow in and out of the network, as opposed to denying it.

The rule set itself is very simple, with both an inbound and an outbound list of rules, and these rules are comprised of just six different fields. Rule Number: NACL rules are read in ascending order, and as soon as a network packet is received, each rule is checked in that order until a match is found. For this reason, you'll want to carefully sequence your rules with an organized numbering system; I would suggest that you leave a gap of at least 50 between each of your rules so that you can easily add new rules in sequence later, if it becomes necessary. Type: this dropdown list allows you to select from a list of common protocol types, including SSH, RDP, HTTP, and POP3. You can alternatively specify custom protocols, such as varieties of ICMP. Protocol: based on your choice of type, the protocol option might be grayed out; for custom rules using TCP or UDP, however, you should provide a value. Port Range: if you do create a custom rule, you'll need to specify the port range for the protocol to use. Source: this can be a network or subnet range, a specific IP address, or even left open to traffic from anywhere. Allow/Deny: each rule must include an action specifying whether the traffic it defines is permitted to enter or leave the associated subnet or not. So looking at these rules, authorization is permitted or denied by the associated subnet depending on the verification of the parameters identified in points 2 to 5, and this data is analyzed from within the network packet itself. So we are not authorizing a principal here, like we have been with IAM and S3; instead, we are authorizing the network packet itself.

It's important to note that NACLs are stateless. Therefore, when creating your rules, you'll need to apply an outbound reply rule to permit responses to inbound requests.
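As a sketch of how such a rule pair can be expressed outside the console, here are two hypothetical CloudFormation resources defining an inbound HTTP rule and the corresponding outbound reply rule for ephemeral ports; the MyNACL reference is assumed to be defined elsewhere in the template, and protocol 6 is TCP:

```json
{
  "InboundHttpRule": {
    "Type": "AWS::EC2::NetworkAclEntry",
    "Properties": {
      "NetworkAclId": { "Ref": "MyNACL" },
      "RuleNumber": 100,
      "Protocol": 6,
      "RuleAction": "allow",
      "Egress": false,
      "CidrBlock": "0.0.0.0/0",
      "PortRange": { "From": 80, "To": 80 }
    }
  },
  "OutboundReplyRule": {
    "Type": "AWS::EC2::NetworkAclEntry",
    "Properties": {
      "NetworkAclId": { "Ref": "MyNACL" },
      "RuleNumber": 100,
      "Protocol": 6,
      "RuleAction": "allow",
      "Egress": true,
      "CidrBlock": "0.0.0.0/0",
      "PortRange": { "From": 1024, "To": 65535 }
    }
  }
}
```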

I have seen NACLs used very effectively to prevent DDOS, distributed denial of service, attacks. If traffic somehow manages to get past AWS' own DDOS protection undetected, and you're being attacked from a single IP address, you can create a NACL rule that will deny all traffic from that source right at the subnet level, and the traffic will not be authorized to go any further. Just a small point, and this applies to all the authentication and authorization mechanisms I've mentioned thus far: your NACLs will require updating from time to time, and you should regularly review them to ensure they are still optimized for your environment. Security is an ongoing effort and needs regular attention to ensure its effectiveness.

Having the ability to authorize or deny network packets at the network level is great, but can the same be accomplished at an instance level? The answer is yes, so let's see how this level of authorization works. AWS security groups are associated with instances, and provide security at the protocol and port access level, much like NACLs, and as a result they also work in much the same way, containing a set of rules that filter traffic coming into and out of an EC2 instance. However, unlike NACLs, with security groups there isn't a Deny action for a rule; instead, if there isn't a rule that explicitly permits a particular packet, it will simply be dropped. Again, the configuration is made up of two rule sets, inbound and outbound.

But security groups are stateful, meaning you do not need the same rules for both inbound and outbound traffic, unlike NACLs, which are stateless. Therefore, any rule that allows traffic into an EC2 instance will allow the response to be returned without an explicit rule in the outbound rule set.

Each rule is comprised of four fields: Type, Protocol, Port Range, and Source. Let's take a look. Type: the dropdown list allows you to select common protocols like SSH, RDP, and HTTP, and you can also choose custom protocols. Protocol: this is typically grayed out, as it's covered by most Type choices; however, if you create a custom rule, you can specify your protocol here. Port Range: this value will also usually be pre-filled, reflecting the default port or port range for your chosen protocol; however, there might be times when you prefer to use custom ports. Source: this can be a network or subnet range, a specific IP address, or another AWS security group; you can also leave access open to the entire internet using the Anywhere value. We can clearly see here that authorization to the instance can only be permitted if the packet meets the conditions within these four parameters. Again, we are not authorizing a principal here; it's the network packet itself. Security groups are a great way to authorize the use of particular ports for communication, whilst restricting all other communication over denied ports.
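To make those four fields concrete, here's a hypothetical CloudFormation sketch of a security group that allows SSH from a single administrative address and HTTP from anywhere; the VPC reference and the admin IP address are placeholders:

```json
{
  "WebServerSecurityGroup": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": {
      "GroupDescription": "Allow SSH from one admin address and HTTP from anywhere",
      "VpcId": { "Ref": "MyVPC" },
      "SecurityGroupIngress": [
        { "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "CidrIp": "203.0.113.10/32" },
        { "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "CidrIp": "0.0.0.0/0" }
      ]
    }
  }
}
```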

For example, you could have a number of SQL RDS instances that you want to write to from a group of EC2 instances. In this case, you could create a security group for the SQL RDS instances and another for the EC2 instances. You would then authorize communication over specific permitted ports between the two groups, such as 1433 and 1434 used by SQL Server. All other communication will be dropped and denied, which in turn enhances security across your AWS infrastructure.
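Continuing that example as a sketch, the RDS-side security group could reference the EC2 instances' security group directly as the traffic source, so only members of that group can reach the SQL ports; both the MyVPC and AppInstanceSecurityGroup references are hypothetical:

```json
{
  "SqlRdsSecurityGroup": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": {
      "GroupDescription": "Allow SQL Server traffic only from the application EC2 security group",
      "VpcId": { "Ref": "MyVPC" },
      "SecurityGroupIngress": [
        {
          "IpProtocol": "tcp",
          "FromPort": 1433,
          "ToPort": 1434,
          "SourceSecurityGroupId": { "Ref": "AppInstanceSecurityGroup" }
        }
      ]
    }
  }
}
```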

That brings us to the end of this lecture on authorization within AWS. Coming up next, we'll look at how we can track and audit identities that have been authenticated and authorized to access specific resources.

About the Author


Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data centre and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created over 40 courses relating to cloud computing, most within the AWS category, with a heavy focus on security and compliance.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.