Authentication, Authorization & Accounting
Cloud Security is a huge topic, mainly because it has so many different areas of focus. This course focuses on three areas that are fundamental: AWS Authentication, Authorization, and Accounting.
These three topics are closely linked, and understanding the different security controls from an authentication and authorization perspective can help you design the correct level of security for your infrastructure. Once an identity has been authenticated and authorized to perform specific functions, it's important that this access can be tracked with regard to usage and resource consumption so that it can be audited, accounted for, and billed.
The course will define and discuss each area and iron out any confusion between the various security terms. Many people are unaware of the differences between authentication, authorization, and access control; this course clearly explains those differences, allowing you to use the correct terms to describe your security solutions.
From an AWS authentication perspective, a number of different mechanisms are explained, such as Multi-Factor Authentication (MFA), Federated Identity, Access Keys, and Key Pairs. With the help of demonstrations, you can learn how to apply access keys to your AWS CLI for programmatic access and understand the differences between Linux and Windows authentication methods using AWS Key Pairs.
When we dive into understanding authorization we cover IAM Users, Groups, Roles, and Policies, providing examples and demonstrations. Within this section, S3 authorization is also discussed, looking at access control lists (ACLs) and Bucket Policies. Moving on from S3, we look at network- and instance-level authorization with the help of Network Access Control Lists (NACLs) and Security Groups.
Finally, the Accounting section will guide you through the areas of Billing & Cost Management that you can use to help identify potential security threats. In addition to this, we explain how AWS CloudTrail can be used to track API calls to analyze what users are doing and when. This makes CloudTrail a strong tool in tracking, identifying, and monitoring a user's actions within your AWS environment.
- Obtain a strong grasp of the difference between authentication, authorization, access control, and accounting
- Understand various authentication mechanisms used in AWS such as MFA, Federated Identity, Access Keys, and Key Pairs
- Learn about IAM Users, Groups, Roles, and Policies and how they tie into authorization in AWS
- Learn about billing and cost management, and how to use it to identify potential security threats
- Understand how AWS CloudTrail can be used to track, identify, and monitor users' actions within AWS
This course has been created for anyone with an interest in cloud security, and/or who may hold a position of cloud solutions architect, cloud security specialist, or similar.
To get the most out of this course, you should have a basic understanding of identity and access management (IAM), Amazon EC2, Amazon S3 storage, networking fundamentals, and the virtual private cloud service.
Please note that it is now possible to change the role associated with an EC2 instance at any time, not just at launch. See here for more information
Hello and welcome to this lecture on AWS Authentication Mechanisms. Authentication is the first component of authentication, authorization, and accounting. It's the first process that takes place. Without authentication, authorization and accounting simply can't happen. In the previous lecture, we briefly covered authentication methods. But here, I will drill down into a wider variety and at a deeper level, so you can understand how each of them works. Let's start with AWS Identity and Access Management (IAM) accounts.
IAM is used to securely manage users who require access to your AWS environment. During the creation of user identities, there are components and features you can use that affect how these users authenticate. The most basic is a simple username and password. Usernames need to be unique, as they identify you as an individual and therefore can't be duplicated. Passwords, however, can be duplicated between different users. IAM allows you to specify your own password policy, giving you complete control over how secure you want the passwords in your environment to be. Custom requirements can be enforced within the password policy, such as requiring uppercase, lowercase, numeric, and non-alphanumeric characters. Additional attributes can also be set to ensure users' passwords are changed after a set period of time and to restrict the reuse of previous passwords. A minimum password length can also be set. I would always suggest enforcing a tight password policy and adopting as many of these parameters as possible.
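To make those policy parameters concrete, here's a minimal sketch of the kind of checks such a policy enforces, written in plain Python. The 14-character minimum is an arbitrary example value I've chosen for illustration, not an AWS default.

```python
import re

# Hypothetical policy value for illustration; IAM lets you set your own.
MIN_LENGTH = 14


def meets_policy(password):
    """Check a password against an example IAM-style policy requiring a
    minimum length plus uppercase, lowercase, numeric, and
    non-alphanumeric characters."""
    checks = [
        len(password) >= MIN_LENGTH,            # minimum password length
        re.search(r"[A-Z]", password),          # at least one uppercase letter
        re.search(r"[a-z]", password),          # at least one lowercase letter
        re.search(r"[0-9]", password),          # at least one digit
        re.search(r"[^A-Za-z0-9]", password),   # at least one symbol
    ]
    return all(checks)
```

Note that a password like Password123! satisfies the character-class rules but still fails the length requirement, which is why a minimum length is worth enforcing alongside the other parameters.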
With this authentication mechanism, all that is required to determine the identity is the username, which is then verified by the correct password, which itself must conform to any set password policy. IAM will then verify if the authentication is successful. If the credentials do not match those held by IAM, access will be refused and you'll be asked to re-enter the correct information. If the authentication is successful, the user will be allowed into the AWS Management Console, with authorization to access any resources as specified by the IAM permissions associated with that user.
Usernames and passwords aren't generally considered hugely secure, as users are allowed to set their own passwords, which can often be guessed, and laziness can lead to standard passwords being entered such as "password" in all lowercase, "123456", "qwerty", and "letmein". Although AWS allows you to enforce password policies, weak combinations of the above still make their way in, such as Password123!, which contains uppercase and lowercase letters along with numeric and non-alphanumeric characters.
So with this in mind, tighter security can be achieved with the use of additional authentication methods which require additional credentials. IAM allows for multi-factor authentication (MFA). This means that any user configured with MFA must use an additional level of authentication as well as a password to be authenticated, giving an additional layer of security. This additional authentication uses a random six-digit number that is generated by an MFA device and is only valid for a very short time period before the number changes again. There is no additional charge for this level of authentication; however, you will need your own MFA device, which can be a physical token or a virtual device. AWS provides a summary of all supported devices here. Personally, I use Google Authenticator on my phone because it's simple and easy to set up and configure. Before a user can authenticate with MFA, the device must be configured and associated with the user. As we know, as a part of the authentication process, we need to ensure that the verification part confirms the identity of the user. This configuration and association is done from within IAM, and I will now show you how quick and easy this is to do with a quick demonstration.
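The six-digit codes these virtual MFA devices produce follow the TOTP standard (RFC 6238, built on the HOTP algorithm from RFC 4226). As a rough illustration of how a device and the server can independently agree on the same short-lived number, here's a minimal sketch:

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, now=None, timestep=30, digits=6):
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA-1),
    the scheme used by virtual MFA apps such as Google Authenticator."""
    key = base64.b32decode(secret_b32, casefold=True)
    if now is None:
        now = time.time()
    counter = int(now // timestep)               # number of 30-second steps elapsed
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# Example using the RFC 6238 test secret (base32 of "12345678901234567890"):
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # -> 287082
```

Both sides share the secret (seeded into the app via the QR code) and the current time, so each can compute the code without any further communication, which is why the codes keep working even when your phone is offline.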
Okay, so I've already logged into my AWS account, and the first thing I need to do is go to IAM, which is under Security, Identity, and Compliance. So we need to go into IAM to select the user that we want to give multi-factor authentication to. So if we go into the users, I have a user here, CAuser1, Cloud Academy user one. If we select that user and go across to security credentials, we can see down here that the assigned MFA device is "No". So we want to give this user an associated MFA device. So if we click on the edit button there.
Now, I'm going to be using Google Authenticator on my phone as the MFA device, so that's a virtual MFA. So let's select that and click on next step. Now this is just a quick splash screen explaining that if you use a virtual MFA, you need to install a compatible application on your phone or PC, etc., which I've already done. Like I said, I'll be using Google Authenticator, so we can just skip past that step. Now as a part of the setup of the MFA with this associated user, we can scan that QR code using Google Authenticator. And that will give us two activation codes. The first one will go in this box here, and then after a few seconds, it will give us a second code that we enter here. This is just to synchronize. So if I go ahead and do that. The first number it's given me is 745559. Then in just a few seconds it will present me a second number as a part of the synchronization process, and that's 443393. And then we simply click on activate virtual MFA. And that's given us a message saying that the MFA device was successfully associated.
So if we finish that and try to log in as this user. The username was CAuser1. I'll put in the first authentication method, which is the username and password, and you'll see here that MFA users need to enter their code on the next screen. So by clicking sign in, it should present us with another authentication screen. And here we are. Here, we need to enter our MFA code. So if I go back into my Google Authenticator and take a look at the number issued for this user, 061695. And click on submit. And there we go, we're authenticated into the AWS Management Console, where I can then access any resources that I have permissions to.
Sometimes, logging into the AWS Management Console isn't required by an individual or identity. Instead, access is required from a programmatic perspective for a user or service who may be using the AWS Command Line Interface (CLI), AWS APIs, Tools for Windows PowerShell, or Software Development Kits (SDKs). When using any of these methods to access AWS resources, access is not granted to the AWS Management Console. Instead, you are accessing the resources programmatically, and so you must authenticate differently. This is typical of any application that calls upon functions of other services.
When an identity is making programmatic calls to your AWS resources, it is not prompted to log into the environment first. So how do your applications authenticate to AWS? As we know, this has to happen before being authorized to perform any actions. What credentials are used, and how is this authentication achieved? Identities, human or system, requiring this type of programmatic access to the AWS environment use access keys to authenticate. These access keys are composed of two elements: an access key ID and a secret access key.
The access key ID is made up of 20 random uppercase alphanumeric characters, such as the one displayed on screen. The secret access key is made up of 40 random upper- and lowercase alphanumeric and non-alphanumeric characters, as displayed.
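As a rough illustration of those formats, the following sketch checks whether a pair of strings has the typical shape of an access key ID and secret access key. The patterns are illustrative only, and the keys in the example are AWS's published documentation placeholders, not real credentials.

```python
import re

# Illustrative shape checks only; real validation is done by AWS itself.
# IAM user access key IDs typically begin with the prefix "AKIA".
ACCESS_KEY_ID_RE = re.compile(r"^[A-Z0-9]{20}$")
SECRET_ACCESS_KEY_RE = re.compile(r"^[A-Za-z0-9/+=]{40}$")


def looks_like_access_key_pair(key_id, secret):
    """Return True if the strings match the typical 20- and 40-character
    formats of an access key ID and secret access key."""
    return bool(ACCESS_KEY_ID_RE.match(key_id)
                and SECRET_ACCESS_KEY_RE.match(secret))


# AWS's example credentials from its documentation:
print(looks_like_access_key_pair(
    "AKIAIOSFODNN7EXAMPLE",
    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"))  # -> True
```

A check like this can be handy for catching truncated or mistyped keys before they're ever sent to AWS.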
These access keys can be created for any IAM user who requires authentication from a programmatic perspective. When the keys are created, you are prompted to download and save the details, as the secret access key will only be displayed once; if you lose it, you'll have to delete the associated access key ID and create new keys for the user. It's worth noting that it's not possible to retrieve lost secret access keys, as AWS does not retain copies of these for security reasons in case they were compromised. These access keys must then be applied to and associated with the system or application that requires the relevant access. For example, if you are using the AWS CLI to access AWS resources, you would first have to instruct the AWS CLI to use these access keys to authenticate and provide authorization. The method of performing this association varies based on the application and system you are using. However, once this association has taken place, all API requests made to AWS are signed with a digital signature derived from these credentials.
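That digital signature comes from the Signature Version 4 process, in which the secret access key is never sent over the wire; instead it seeds a chain of HMAC-SHA256 operations that scope a signing key to a particular date, region, and service. Here's a minimal sketch of the key-derivation step, using AWS's example secret key from its documentation:

```python
import hashlib
import hmac


def derive_signing_key(secret_key, date_stamp, region, service):
    """Derive an AWS Signature Version 4 signing key from a secret access
    key. Each HMAC-SHA256 step scopes the key further: date, then region,
    then service, then the fixed terminator string."""
    def _sign(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    return _sign(k_service, "aws4_request")


key = derive_signing_key(
    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",  # AWS documentation example
    "20150830", "us-east-1", "iam")
```

Because the derived key is scoped, a leaked signature can't be replayed against a different service or region, and the secret key itself never leaves your machine.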
Before we move on, I want to give a quick demo on how we apply access keys to AWS command line interface. So let's take a look. Okay, so let's look at how we apply access keys to the AWS CLI. If we go back to our CAuser1, we used earlier for multi-factor authentication, we can look at current permissions which is defined as S3 read-only access. Now to create some access keys for this user, we need to go across to security credentials and go down to access keys and create access key. So here we have the two parts, the access key ID and the secret access key. So now, what we need to do, we need to add these details to AWS CLI.
So if I flip across to the terminal, the command we type in is aws configure. And here it asks for the access key ID. So let's go ahead and copy that. Paste that in, and now the AWS secret access key. And as for the default region name, we'll just keep all the default settings. Same again with the output format. And there we go. We've now configured our AWS CLI to use these new access keys.
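Behind the scenes, aws configure simply writes those values to plain-text profile files in your home directory, roughly like this (the keys shown are AWS's published example values, not real credentials):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = us-east-1
output = json
```

Because these files hold the secret in plain text, they should be protected like any other credential, which is also why the role-based approach we'll see shortly is preferred on EC2 instances.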
So now, let's just clear the screen. We know this user has S3 read-only access, so let's test that out. If we do aws s3 ls, that will list any buckets we have in S3, which should work because we know this user has read-only access. And there you go, you can see the buckets that are currently in S3. Now, if we wanted to make sure this was working correctly, we can try to create a bucket. We know this user hasn't got access to do that; they only have read-only access, so this should fail. So aws s3 mb to make a bucket, and we'll call the bucket stus image files. And there you go, it says access denied. So that shows the access keys have been applied to the AWS CLI correctly.
As we discussed earlier when talking about passwords, it's best practice to change a password after a set period of time. The same principle applies when it comes to access keys. By rotating your access keys, you are decreasing the likelihood of a compromised key being used to gain access to your environment. When rotating these keys, you should follow a simple five-point process to ensure you don't lose access to your resources. Firstly, create a second access key, which includes both an access key ID and a secret access key, for the same user; the user now has two access keys associated with them. Secondly, instruct your application or system to start using the new access keys; how you do this will depend on what you are using. For example, if you are using the AWS CLI, you would issue the command aws configure to add the new key information, as we have just seen in the demonstration. At this point, you need to mark the existing access keys as inactive. Before deleting this now-inactive key, you should test your new access keys and verify that programmatic access is being allowed as expected. Finally, delete the old access keys from the associated user. As an authentication method from a programmatic perspective, access keys are a great solution. However, if you are applying access keys to applications running on EC2 instances to gain access to other AWS resources within the environment, then there is a more efficient and secure solution available.
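The five steps above can be sketched as code. Note this is a toy model: FakeIam is an invented in-memory stand-in, not the real AWS API (with boto3 you would use create_access_key, update_access_key, and delete_access_key instead).

```python
import secrets


class FakeIam:
    """Invented in-memory stand-in for IAM access key management."""

    def __init__(self):
        self.keys = {}  # access key ID -> {"secret": ..., "active": bool}

    def create_access_key(self):
        key_id = "AKIA" + secrets.token_hex(8).upper()  # 20-char example ID
        self.keys[key_id] = {"secret": secrets.token_urlsafe(30),
                             "active": True}
        return key_id

    def deactivate(self, key_id):
        self.keys[key_id]["active"] = False

    def delete(self, key_id):
        del self.keys[key_id]


def rotate(iam, old_key_id, reconfigure_client, verify_access):
    """The five-point rotation process from the lecture."""
    new_key_id = iam.create_access_key()   # 1. create a second access key
    reconfigure_client(new_key_id)         # 2. point the application at it
    iam.deactivate(old_key_id)             # 3. mark the old key inactive
    assert verify_access()                 # 4. confirm the new key works
    iam.delete(old_key_id)                 # 5. delete the old key
    return new_key_id
```

Keeping both keys alive through steps one to four is what makes the rotation safe: at no point does the application lose working credentials.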
IAM roles provide an efficient and secure solution for authenticating and authorizing access, and their usage is considered a best practice. IAM roles are objects created within IAM and have a defined set of permissions associated with them, much like a normal user or group would have. However, they do not represent an identity like users do; they are simply an object with a list of authorized permissions associated. When an EC2 instance is created, you have the opportunity to associate the EC2 instance with an IAM role. Historically, this option was only available during EC2 creation: if you had an existing EC2 instance that you wanted to have an IAM role assigned, you had to create an AMI image of that instance, recreate it from the new AMI, and then select the appropriate role.
Once you have an EC2 instance with an associated IAM role, you can install your application that will need to make API calls within AWS. By having a role associated with the EC2 instance, there's no need to apply access keys as we did previously. Instead, when your application attempts to access an AWS resource, dynamic temporary access keys will be supplied via the role to determine if access is authorized.
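Under the hood, those temporary credentials are served to the instance by the instance metadata service at http://169.254.169.254/latest/meta-data/iam/security-credentials/&lt;role-name&gt;, in a JSON document shaped roughly like this (all values here are hypothetical placeholders):

```json
{
  "Code": "Success",
  "Type": "AWS-HMAC",
  "AccessKeyId": "ASIAEXAMPLEEXAMPLE01",
  "SecretAccessKey": "exampleSecretKeyExampleSecretKeyExample0",
  "Token": "exampleSessionToken",
  "Expiration": "2019-05-17T15:09:54Z"
}
```

The AWS SDKs and CLI query this endpoint automatically and fetch fresh credentials before the Expiration time, which is why no manual key handling is needed.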
Roles are not just for instances; they can also be assumed by a user, allowing them to switch from their current set of permissions to take on the permissions given by the role. It's important to note that the permissions of the user and the IAM role are not amalgamated; the permissions are swapped from the user to the IAM role. To switch to a role, permissions must first be given to the user allowing them to switch to the role, authorizing them to use this new set of temporary permissions.
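The permission that lets a user switch to a role is simply an sts:AssumeRole statement in a policy attached to the user. A minimal example might look like this, where the account ID and role name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::111122223333:role/ExampleRole"
    }
  ]
}
```

The role itself must also trust the user's account in its trust policy; only when both sides agree is the switch authorized.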
Roles also take care of access key rotation, so there is no need to perform your own rotation of access keys as per our previous five-step example. Recall that earlier, I said it's more efficient to use roles for EC2 when requiring programmatic access than IAM access keys. This is because if you have multiple applications running on multiple EC2 instances, then when it comes to key rotation, you'd have a number of applications to associate the new keys with.
Okay, so moving on from IAM roles, but sticking with EC2 instances, I want to talk about how we authenticate to newly created instances. The process is slightly different for Linux and Windows operating systems, but the method is the same: the method uses AWS key pairs. You may think this is similar to what we discussed previously with the use of an access key ID and a secret access key, essentially a pair of keys, but key pairs are something completely different and provide an authentication method for a different purpose.
When you create an EC2 instance, you are asked to select or create an EC2 key pair. So what is it and what does it do? A key pair, as the name implies, is made up of two components: a public key and a private key. These keys are 2048-bit SSH-2 RSA keys. The function of key pairs is to encrypt the login information for Linux and Windows EC2 instances, and then decrypt the same information, allowing you to authenticate onto the instance.
The public key uses public key cryptography to encrypt data such as the username and password. For Windows instances, the private key is used to decrypt this data, allowing you to gain access to the login credentials, including the password. For Linux instances, the private key is used to SSH onto the instance. The public key is held and kept by AWS; the private key is your responsibility to keep and ensure that it is not lost. So going back to when you create your EC2 instance, you are given the opportunity to download the private key in the form of a .pem file. Once you have done this, you must keep that file safe until you are ready to log in to the associated EC2 instance. It's worth noting that you can use the same key pair on multiple instances. As I said previously, the process of how we authenticate to a Linux OS and a Windows OS is slightly different, and the best way to show this is via demonstration. So let's take a look at how I use the pem file containing the private key for each.
Okay, in this demonstration, I'm going to show you how to connect to a Linux box and a Windows box using your key pair, or your pem file, that you would've downloaded on creation. As you can see, I've created two boxes here: one of them is a Linux box, and the other is Windows. So let me show you how to connect to the Linux box first. If we select it there and then click on connect, this screen gives you a couple of bits of information to allow you to connect to your Linux box over SSH. Firstly, you need to ensure that your key is not publicly viewable, and to do that, you need to issue this command against your pem file, which I have already done. And we have the public DNS name of the Linux box there. Then, to connect, we simply run an SSH command, and it gives us the whole command here. So we run SSH with our pem file, which I've named linux.pem for ease, and we'll connect with the username ec2-user, which is the default user for Linux boxes, at the public DNS name. So let's go ahead and copy this command and see if we can connect. And there you go. We are up and running on the Amazon Linux AMI box here. So it's very simple and easy to connect to a Linux box: we simply use the pem file within the connection string of an SSH command.
So let's see how this differs for a Windows box. If we go back to the management console, select Windows, connect. And here you can see you've got a couple of different options. We still have the public DNS name. Our username is Administrator, as opposed to ec2-user for Linux boxes. And then where we use the pem file is to decrypt the password. So if we click on get password, we need to select our key pair, so our pem file, which I've named windows.pem for ease. We then click on decrypt password, and there you can see it's decrypted the password for the Windows box. So now what I'll do is RDP onto the Windows box to make sure we can connect. I'll copy our public DNS name, and I'm using Microsoft Remote Desktop. So create a new session, just call it Windows box. Connect via the public DNS. We know our username is Administrator, and then we copy the password and see if we can connect. Click on continue, negotiating credentials. And here we are, connected to our Windows box. So as you can see, the process was slightly different: for the Windows box, we use the pem file to decrypt the password, and for the Linux box, we use the pem file in a connection string to connect to it.
Once you have authenticated to the EC2 instance for the first time, you can then set up your own local authentication methods, such as local Windows accounts, allowing other users to connect and authenticate, or even use Microsoft Active Directory.
AWS allows you to access and manage AWS resources even if you don't have a user account within IAM, through the use of identity federation, which is the next authentication mechanism I want to discuss. Put simply, identity federation allows users from identity providers (IdPs) external to AWS to access AWS resources securely without having to supply AWS credentials from a valid IAM user account. An example of an identity provider is your own corporate Microsoft Active Directory; federated access would then allow the users within it to access AWS. Other identity providers can be any OpenID Connect web provider, common examples being Facebook, Google, and Amazon. If you want to understand more about OpenID Connect, please see the following website.
So what does this mean? Well, it means that if you need users to access AWS resources and they already have identities that fit into these categories, then you can allow access to your environment using these existing accounts instead of setting each of them up with a new identity within AWS IAM, effectively allowing for a single sign-on solution. As the vast majority of organizations today use Microsoft Active Directory, this is an effective way of granting access to your AWS resources without the additional burden of creating IAM user accounts.
As part of the configuration process to implement federated authentication, a trust relationship must be established between the identity provider and your AWS account. AWS supports two types of identity federation: web identity federation and SAML 2.0-based federation.
Web identity federation allows authentication between AWS resources and any public OpenID Connect provider such as Facebook, Google, or Amazon. When it's set up and configured, and access is made by a user to an AWS resource, perhaps an application, the identity provider will exchange an authentication token for temporary authentication credentials. These credentials are associated with an IAM role with the correct preconfigured permissions, allowing authorized access to the resource as defined by that role. For this example, the process can be managed more effectively with the use of Amazon Cognito, which helps manage user sign-in to mobile and web apps through federated access. For more information on Amazon Cognito, please visit this link.
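The link between the external identity and the IAM role is the role's trust policy. As a hedged example, a trust policy for a role assumed via Google web identity federation might look roughly like this, where the audience (client ID) value is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "accounts.google.com" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "accounts.google.com:aud": "example-client-id.apps.googleusercontent.com"
        }
      }
    }
  ]
}
```

The Condition block ensures only tokens issued for your specific application can assume the role, while the role's separate permissions policy defines what the federated user can then do.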
SAML 2.0-based federation allows your existing Active Directory users to authenticate to your AWS resources, allowing for a single sign-on approach. SAML stands for Security Assertion Markup Language, and it allows for the exchange of security data, including authentication and authorization tokens, between an identity provider and a service provider. In this case, the identity provider is Microsoft Active Directory and the service provider is AWS.
Once configured, let's take a look at how the Active Directory authentication mechanism is established. This example will assume a user within an organization requires API access to S3, EC2, and RDS. This scenario will also include the use of an AWS service called the Security Token Service (STS). The Security Token Service allows you to gain temporary security credentials for federated use via IAM, which are associated with existing IAM roles and policies. More information on STS can be found here.
Let's run through this example. A user within the organization initiates a request to authenticate against the Active Directory Federation Service via their web browser using a single sign-on (SSO) URL. If authentication is successful using their Active Directory credentials, a SAML authentication assertion is sent back to the user's client requesting federated access. The SAML assertion is then sent to the AWS Security Token Service to assume a role within IAM using the AssumeRoleWithSAML API. STS then responds to the user requesting federated access with temporary security credentials for an assumed role and its associated permissions, allowing S3, EC2, and RDS access in our example. The user then has federated access to the necessary AWS services as per the role permissions. This is a simple overview of how federation is instigated by the user for API access to specific AWS services. Corporate identity federation is always authenticated internally first by Active Directory before AWS.
This brings us to the end of this lecture. Next I will touch on how authorization is permitted across different AWS services.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to Cloud reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.