





Getting Started with an Amazon Web Services Solution: Real World Practices: In this course, we will untangle the AWS landscape and teach you what you need to know to build applications on AWS. This includes understanding what AWS has to offer, whether those prepackaged services make sense for your use case, and best practices around scaling, monitoring, security, and cost.


Hello and welcome to the Getting Started with an AWS Solution: Real World Practices course from CloudAcademy. My name is Adam Hawkins, and I am your instructor for this lesson. This lesson covers different aspects of security. It's a two-pronged approach. First, we'll cover securing your application itself using AWS services. Second, we'll cover using the all-powerful IAM to control access to your AWS account. The primary objective is to learn best practices around security, or more specifically to learn to secure application resources with security groups, prepare a human access strategy with IAM, and prepare a machine access strategy with IAM. Let's start off by discussing application security practices.

Each application needs different levels of security. Let's discuss some best practices that apply to the majority of applications. First, use private IPs. There's no need to expose your EC2 instances or other AWS resources to the public internet. I recommend you keep your general-purpose application resources private and use a dedicated ingress, such as an ELB, to handle traffic coming from the public internet. This is an easy thing to do and eliminates many concerns. Also, use a Bastion server. This is a follow-up to the previous point. You may also hear this called a jump server. You will need to SSH into EC2 instances at some point. A Bastion server allows you to, quote, jump into a private network from a public one. Your private instances can be configured to only allow SSH traffic from this instance and/or its security group. Also, group your resources with security groups. EC2 security groups are powerful constructs. They control network ingress and egress. They may also be used as sources for ingress and/or egress rules. I recommend you bake security groups into your application design from the start, especially if you're deploying multiple applications into the same AWS account. Here's an example. Say you have application A and application B.

Create a security group for each application and associate it with everything for that application. This way, you can easily say that these resources only accept traffic from application A or application B. Finally, only allow the traffic that you need. Say you're deploying a web application. Add an ingress rule for port 80 and/or port 443, and that's it. Do not open unused ports. This lowers your attack surface. You can go a step further and secure your ingress points with AWS WAF, the Web Application Firewall, for stricter control over HTTP traffic. If that's still not enough, you can go even further with Trusted Advisor to find specific unrestricted ports and check other security best practices.

These practices apply to your EC2 instances, RDS databases, and other resources that you may create on AWS. They cover outside users accessing your application.

Now we need to turn our attention to securing access to AWS itself. IAM, or Identity and Access Management, is the answer to authentication and authorization. In a nutshell, it controls who can access your AWS account and what they can do with it. IAM is, arguably, the most critical service because you'll need it to grant other people access to your account and use it to control what your EC2 instances or other resources may do on your behalf. Let's take a quick walk through what IAM can do before covering the best practices. IAM gives you the capacity to manage individual accounts. These are called IAM users. You can create unique username and password combos and also create AWS keys for each of these unique IAM users. It also gives you role-based access controls.

You may create roles and associated policies to grant or revoke access to specific API calls. You can then associate IAM users with groups to manage what people can do. You also get instance profiles. Instance profiles are IAM roles associated with EC2 instances or other AWS resources. These allow AWS resources to make AWS API calls according to the role without needing AWS API keys. Finally, you have policy management. Policies define the different grant and revoke rules. You may create your own from scratch or use AWS's precreated managed policies.

The managed policies are a great place to start, because they cover common use cases such as global read-only access or read/write access to EC2. That's enough of an IAM overview for now. I recommend you brush up on your IAM skills with the Introduction to IAM course if this is new information to you. It covers all you need to know to get started. I cannot stress these skills enough. You cannot build a robust solution without understanding IAM. So IAM covers human and machine access. Let's turn our attention to the best practices for securing AWS for the most unpredictable of the two: people.
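To make the policy idea concrete, here is a minimal custom policy document in the spirit of AWS's managed read-only policies. The exact action list is illustrative; in practice you would start from a managed policy such as ReadOnlyAccess and trim it down.

```python
import json

# A whitelist-style read-only policy document. The actions listed here are an
# illustrative subset, not a complete read-only grant.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DescribeOnly",
            "Effect": "Allow",
            "Action": ["ec2:Describe*", "rds:Describe*", "cloudwatch:Get*"],
            "Resource": "*",
        }
    ],
}

# IAM consumes the policy as a JSON string, e.g. via create_policy.
policy_json = json.dumps(read_only_policy, indent=2)
```

Note the structure: every policy is a list of statements, each granting (or denying) a set of actions on a set of resources. Everything not explicitly allowed is denied by default, which is what makes the whitelist approach natural in IAM.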

Every application needs people to access and manage it. Different people have different technical skills and different responsibilities. For example, you do not want your front-end engineer deleting DynamoDB tables, nor do you want your finance department terminating instances. The goal is to ensure that each person has the appropriate permissions, nothing more and nothing less. Let's see how that's done. Start by creating an IAM user for each person in your team or organization. This provides each person with unique keys and makes it easy to add or remove people from your AWS account. You should also commit to rotating AWS keys and console passwords. This increases the overall security of your application and ensures that AWS keys are not used for unexpected things.

Most importantly, you need to define different access levels using IAM roles. You can use STS, or the Security Token Service, to programmatically change hats. I recommend you start off by using the managed policies to separate at least read and write levels. This is a minimum recommendation, but it can get you quite far. You can use the managed policies as a starting point and grow out from there. Naturally, your different access levels have different levels of significance. I recommend enforcing MFA for sensitive actions. You may want to allow anyone to log in with just read access, but you may want to enforce MFA on write or destructive actions. Also, prefer whitelisting over blacklisting. Grant exactly what's needed instead of denying what is not allowed. This ensures that people are doing exactly what's required. Requests for new access may be handled appropriately. Finally, do not use the root account. Every AWS account has a root account. This should only be used for first-time setup. Use that root account to create an IAM user for your team and other appropriate roles, and then use those for everything. Also, remember to follow the AWS guide for properly securing the root account.
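The MFA recommendation can be expressed directly in a policy. This is a hedged sketch, not a complete policy: reads are allowed for everyone, but a pair of destructive EC2 actions (an illustrative subset) is denied unless the caller authenticated with MFA. The `aws:MultiFactorAuthPresent` condition key is a real IAM context key.

```python
# Sketch: allow reads unconditionally, deny destructive actions without MFA.
# The denied action list is illustrative, not exhaustive.
mfa_guard_policy = {
    "Version": "2012-10-17",
    "Statement": [
        # Read-only access requires no MFA.
        {"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"},
        # Destructive actions are denied when MFA is absent. BoolIfExists
        # also catches requests where the MFA key is missing entirely.
        {
            "Effect": "Deny",
            "Action": ["ec2:TerminateInstances", "ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        },
    ],
}
```

Because an explicit Deny always wins in IAM policy evaluation, this guard holds even if another attached policy allows those actions.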

Now humans are done and sorted. Time to focus on machines. Machine access generally falls into two different categories. One category runs in your AWS account. This is your EC2 instances, your RDS databases, et cetera. The other category does not. This is a CI service, for example. Let's start with the first category. Use instance profiles for everything. You can be as broad or as granular as you like. Just use instance profiles for everything, and it will make your life easier. If you are passing AWS keys to an EC2 instance, that is a red flag in your design. Also, create IAM user integration accounts for things outside AWS. Not everything runs in your AWS account, meaning you cannot use instance profiles. For this you'll need AWS keys. I recommend you create dedicated IAM users and associated roles for the external systems you'll need to integrate with. Also, prefer whitelisting over blacklisting. The same practice applies here. Grant exactly what is needed instead of denying what is not allowed. This ensures systems are doing exactly what's required. Requests for new access may be handled appropriately.
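Behind every instance profile is a role with a trust policy that lets the EC2 service assume it on the instance's behalf. This document shape is standard for instance profiles; the role name in the comment is a hypothetical example.

```python
import json

# Trust (assume-role) policy for an EC2 instance role: it grants the EC2
# service principal permission to assume the role, which is what lets an
# instance call AWS APIs without any hardcoded keys.
ec2_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# With boto3 this would be used roughly as (role name is hypothetical):
# iam.create_role(RoleName="my-app-role",
#                 AssumeRolePolicyDocument=json.dumps(ec2_trust_policy))
```

Permissions then come from separate policies attached to the role; the trust policy only says who may wear the hat, not what the hat can do.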

That wraps up machines. Let's conclude the lesson with a summary of security and a grab bag of other recommendations. Use private networks with public ingress. It's in your best interest to keep your AWS resources off the public internet. Protect them via a secured ingress point like an ELB or a Bastion server. Also, configure RBAC, or role-based access control, using IAM. Define user groups and their associated policy whitelists, and enforce MFA on more sensitive actions. The AWS managed policies are a great place to start. Also, use instance profiles exclusively inside AWS. If it's running in AWS, then an instance profile may be applied to it. Also remember to exclusively prefer whitelisting over blacklisting. It's always better to declare explicitly what people can do rather than what they cannot.

Finally, a point we've not yet touched on: you may consider multiple AWS accounts. This falls into the grab bag since we've not covered it yet. No one says you must use a single AWS account. You may create multiple accounts and link them together via IAM policies. This enables you to create a dedicated account for your production environment, or even one per application. You can configure different roles in each account, control who can access them, and enforce MFA on different access rights. AWS made this even easier with their AWS Organizations feature.
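Linking accounts works through the same trust-policy mechanism as instance roles. Here is a hypothetical sketch: a role in the production account trusts a separate identity account (the account ID is a made-up placeholder), and requires MFA to assume it.

```python
# Hypothetical cross-account trust policy: users in a separate identity
# account (placeholder ID 111122223333) may assume this role in the
# production account, but only when authenticated with MFA.
cross_account_trust = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}
```

A user in the identity account would then call STS AssumeRole to change hats into the production account, which keeps long-lived credentials out of the production account entirely.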

These recommendations should give you a solid foundation. This stuff is usually really hard to backport into an existing system, so getting it right in the beginning really pays off. We're shifting gears for the next lesson. The next lesson covers metrics and monitoring with CloudWatch. Join me and learn how to keep an eye on your application. Catch you there, cheers.

About the Author

Adam is a backend/service engineer turned deployment and infrastructure engineer. His passion is building rock-solid services and equally powerful deployment pipelines. He has been working with Docker for years and leads the SRE team at Saltside. Outside of work he's a traveler, beach bum, and trance addict.