The AWS account - and the shared security responsibility model

Overview
Difficulty: Beginner
Duration: 46m
Students: 17473
Ratings: 4.8/5

Description

The AWS Technical Fundamentals course (AWS 110) is the introduction to Cloud Academy's comprehensive Amazon Web Services learning tracks series. While subsequent courses in this series will explore individual AWS service categories (like networking or data management) and broader skills (like design principles or application deployments), this course offers a brief summary of everything that AWS has to offer. Technical Fundamentals is also the introduction to the 100 level courses (the AWS Technical Foundation Track) which, in turn, lays the groundwork for our 200 series (intermediate level skills) and 300 series (advanced skills).

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Another key concept to understand is the AWS account. You must have an account with AWS to be able to use its services, and you will be billed for the services that you use. When you create a new account, by default, you will have access to the nine core AWS regions. The exceptions are the GovCloud and Beijing AWS regions, as they have special requirements for access. To open an account, you need a valid credit card for billing and verification purposes.

After opening the account, however, you can contact AWS to request invoicing if you prefer. The AWS account is associated with the email address you use when signing up and also requires a password. These account credentials have access to all AWS services and are therefore very important to protect. A good approach is to never use the actual account credentials, but rather to create individual users with appropriate levels of authorization using the AWS Identity and Access Management (IAM) service and, moving forward, only log in using those credentials. For additional protection, you can add multi-factor authentication to your accounts as well.
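As a minimal sketch of that approach, assuming the Python boto3 SDK and credentials that are allowed to administer IAM, you could create an individual user and attach a scoped policy instead of sharing the account credentials. The user name and password placeholder below are purely illustrative.

    import boto3

    # Assumes boto3 is installed and credentials with IAM admin rights are configured.
    iam = boto3.client("iam")

    # Hypothetical user name for illustration only.
    user_name = "deploy-operator"
    iam.create_user(UserName=user_name)

    # Give the user console access with a password; multi-factor authentication
    # can then be added in the console or via the IAM MFA APIs.
    iam.create_login_profile(
        UserName=user_name,
        Password="REPLACE-WITH-A-STRONG-PASSWORD",
        PasswordResetRequired=True,
    )

    # Attach a scoped AWS managed policy rather than full administrator access.
    iam.attach_user_policy(
        UserName=user_name,
        PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
    )

From then on, you would sign in and work as that user, keeping the account credentials locked away.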

You will learn more about this in future courses. It is possible to have multiple AWS accounts for different purposes. For example, some companies may have an AWS account for each team or application, or indeed several for each application.

Accounts for development, testing, and production, for instance. You can combine all of these accounts into a single bill if you like, using the AWS consolidated billing functionality. We will go into greater detail about this in later courses. But suffice it to say at this point that you can have multiple accounts.

There is one, often misunderstood, thing to be aware of when using multiple accounts, however. The designation of availability zones within an AWS region might not be the same for different AWS accounts, even if those AWS accounts are owned by the same company or person. An example will probably help to illustrate this better. Let us imagine a company with two AWS accounts, one per application. Let us call them AWS account one and AWS account two. They are both used to deploy an application in the Singapore AWS region.

Using account one, the company deploys an application to the first availability zone, called ap-southeast-1a. Account two wants to deploy its application in the same availability zone to have the lowest possible latency between the two applications. They therefore assume that this will be ap-southeast-1a for account two as well. However, this may not be the case, as the physical data centers may have different letters designating their availability zone. You actually need to contact AWS to get the mapping between physical data centers and availability zones for each of your accounts. Why would this be the case? Would it not be better to always keep the same mapping between physical data centers and availability zones? The answer lies in the distribution of resources. AWS has hundreds of thousands of customers, and it needs to keep capacity balanced across all of the data centers in its regions. If all customers chose the same data center as the default location for their virtual machines and storage, there would quickly be a disparity between the different locations, and by default, most users would choose availability zone a, simply because it is the top choice in the drop-down list.

This is one of the reasons why AWS balances out the availability zone designations across different accounts.
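To see the zone names each account is given, here is a minimal sketch assuming the Python boto3 SDK and two locally configured CLI profiles, one per account (the profile names are hypothetical):

    import boto3

    # Hypothetical CLI profile names, one per AWS account.
    for profile in ("account-one", "account-two"):
        session = boto3.Session(profile_name=profile, region_name="ap-southeast-1")
        ec2 = session.client("ec2")
        zones = ec2.describe_availability_zones()["AvailabilityZones"]
        # The same letter (for example ap-southeast-1a) may point at a different
        # physical data center in each account.
        print(profile, [zone["ZoneName"] for zone in zones])

Both accounts will typically print the same zone names, which is exactly why the names alone tell you nothing about which physical facility sits behind them.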

When you first start using AWS services, there are, by default, limits to what you can do and how many resources you can use. These AWS service limits cover almost every service. To give a couple of examples, by default, an account is limited to 20 running on-demand EC2 instances in any one region. There may be further restrictions on specific instance types, too. To increase this number, you have to contact AWS.

There are some limits that AWS typically will not be able to increase. The limit of 100 S3 buckets per AWS account is one example.
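For the EC2 instance limit mentioned above, you can read the current value for your account programmatically. This is a minimal sketch assuming the Python boto3 SDK and the EC2 max-instances account attribute:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # 'max-instances' reports the account's limit on running on-demand
    # instances in this region (20 by default for new accounts).
    attrs = ec2.describe_account_attributes(AttributeNames=["max-instances"])
    limit = attrs["AccountAttributes"][0]["AttributeValues"][0]["AttributeValue"]
    print("On-demand instance limit:", limit)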

The AWS Shared Security Responsibility Model. Security is a very large topic, and we will be going into a lot more detail over several courses. But it is important to get a basic understanding of the core security principle that AWS applies to its services, often called the AWS Shared Security Responsibility Model. Quite simply, it explains which parts of security AWS takes responsibility for and which parts AWS expects you to manage.

We'll use an example to clarify. AWS takes responsibility for the physical security of its data centers. Things like physical access controls and monitoring will therefore be AWS' headache. However, if you run a virtual machine, an EC2 instance, inside that data center, you are given tools to filter the network traffic that you allow to and from that virtual machine. By default, no network traffic is allowed.

All ports are blocked. But you have the authority to change those network rules to allow traffic on specific ports. So if you choose to open all TCP ports to be accessible from anywhere in the world, you are responsible for ensuring that your application is adequately protected.
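As a minimal sketch of taking on that responsibility sensibly, assuming the Python boto3 SDK and an existing VPC (the group name, VPC ID, and office CIDR below are hypothetical), you could open a single port to a specific address range rather than to the whole world:

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical VPC ID; a new security group starts with no inbound traffic allowed.
    group = ec2.create_security_group(
        GroupName="web-tier",
        Description="Allow HTTPS from the office network only",
        VpcId="vpc-0123456789abcdef0",
    )

    # Open only TCP 443, and only from one CIDR block instead of 0.0.0.0/0.
    ec2.authorize_security_group_ingress(
        GroupId=group["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
        }],
    )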

The exact delineation of what AWS is responsible for and what you have to be responsible for varies by service. For example, the EC2 service provides you with full administrator rights to your virtual machine operating systems: root on Linux, Administrator on Windows. This provides you with a lot of flexibility, but it also means that you are responsible for ensuring that the operating system and any applications you are running are secured and properly patched. If you choose to install a database on that virtual machine, you can do so. But you will also need to take care of backing up any important data. AWS provides tools to help you do this, but it is your responsibility to ensure that it gets done.
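One of those tools is the EBS snapshot API. Here is a minimal sketch, assuming the Python boto3 SDK and an existing EBS volume (the volume ID is hypothetical), of taking a backup that you would then schedule and verify yourself:

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical EBS volume holding the self-managed database's data.
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Nightly backup of the self-managed database volume",
    )
    print("Started snapshot:", snapshot["SnapshotId"])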

Other services, like the Relational Database Service (RDS), are what could be referred to as AWS managed services. In this case, AWS does the database backup and patches the database, as well as the underlying operating system, for you.
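In day-to-day use, that mostly means pointing your existing database tools at an endpoint AWS manages for you. Here is a minimal sketch, assuming the Python boto3 SDK and an existing RDS instance (the identifier is hypothetical), of looking up that endpoint:

    import boto3

    rds = boto3.client("rds")

    # Hypothetical DB instance identifier for illustration only.
    db = rds.describe_db_instances(DBInstanceIdentifier="orders-db")["DBInstances"][0]
    endpoint = db["Endpoint"]

    # Connect with any standard client (psql, mysql, a JDBC/ODBC tool, etc.)
    # using this host name and port.
    print("Host:", endpoint["Address"], "Port:", endpoint["Port"])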

However, you no longer have operating system admin or root privileges. In other words, you access the RDS database through the AWS Management Console or one of the other ways of accessing the services we talked about earlier in this course. You can also communicate with the database using your traditional database management tools and SQL directly. But you cannot log in as administrator or root to the underlying operating system. As mentioned, this is a large topic, and we recommend you take at least the AWS Security Fundamentals course, AWS 190, to get a basic understanding of the AWS Shared Security Responsibility Model.

About the Author

Students: 37268
Labs: 11
Courses: 4

Antonio is an IT Manager and a software and infrastructure engineer with 15 years of experience in designing, implementing, and deploying complex web applications.

He has a deep knowledge of the IEEE Software and Systems Engineering Standards and of several programming languages (Python, PHP, Java, Scala, JS).

Antonio has also been using and designing cloud infrastructures for five years, using both public and private cloud services (Amazon Web Services, Google Cloud Platform, Azure, OpenStack, and VMware vSphere).

During his past working experience, he designed and managed large web clusters, also developing a service orchestrator providing automatic scaling, self-healing, and a disaster recovery strategy.

Antonio is currently the Labs Product Manager and a Senior DevOps Engineer at Cloud Academy; his main goal is providing the best learn-by-doing experience possible by taking care of the Cloud Academy Labs platform.