Please note that this course has been replaced with a new version that can be found here: https://cloudacademy.com/course/networking-saa-c03/networking-saa-c03-introduction/
This section of the Solution Architect Associate learning path introduces you to the core networking concepts and services relevant to the SAA-C02 exam. We start with an introduction to Amazon Virtual Private Cloud (VPC) and the AWS networking services. We then explore the options available and learn how to select and apply AWS networking services to meet specific design scenarios relevant to the Solution Architect Associate exam.
Want more? Try a lab playground or do a Lab Challenge!
Learning Objectives
- Get a foundational understanding of VPCs, their security, and connectivity
- Understand the basics of networking including Elastic IP addresses, Elastic Network Interfaces, networking with EC2, VPC endpoints, and AWS Global Accelerator
- Learn about the DNS and content delivery services Amazon Route 53 and Amazon CloudFront
- [Stuart] So you've now finished the theory section of networking for the AWS SAA. Some people find networking confusing and complicated, but hopefully you now have a much better understanding of how AWS networking works. We covered everything that could potentially come up in the exam, but in this course I want to reiterate some of the main things to remember to ensure that you feel more prepared for any networking questions that could make an appearance. Now remember, I'm here to make sure that you know what you need to know and that you feel confident to pass this certification. So please reach out to me on LinkedIn or Twitter, or drop us an email, and I'll happily discuss any questions you have. Anyway, let's take a look, starting with VPCs. First and foremost, you have to have a solid grasp of VPCs. You'll definitely come across questions covering VPCs and their networking components and how they all fit together. So let's break this down into its simplest form. The VPC is your own networking space within AWS that resides within a single region. Within your VPC you can create both public and private subnets, and each subnet resides in a single availability zone. You control network traffic between subnets using network access control lists, or NACLs, and to control access between resources such as EC2 instances we use security groups. Both of these work at the port and protocol level. If we need to connect our VPC to the outside world we must use an internet gateway, and once it's attached, we can add a route from a subnet to the internet gateway to make that subnet public. If we need private instances to initiate a connection to the internet, then we need to use a NAT gateway, and this resides in the public subnet. If you can grasp those basic principles of network connectivity, it will put you in very good stead when breaking down the many questions that come up relating to VPCs. Half of the battle is remembering which networking component is used for what purpose.
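To make the public-versus-private distinction concrete: a subnet is "public" only if its route table sends default traffic to an internet gateway. Here's a minimal Python sketch of that rule (the data structures and IDs are hypothetical, not the EC2 API):

```python
# Hypothetical model of VPC route tables: a subnet is "public" if its
# route table contains a route whose target is an internet gateway (igw-*).
def is_public_subnet(route_table):
    """Return True if any route targets an internet gateway."""
    return any(target.startswith("igw-") for target in route_table.values())

public_rt = {"10.0.0.0/16": "local", "0.0.0.0/0": "igw-0abc1234"}   # IGW route
private_rt = {"10.0.0.0/16": "local", "0.0.0.0/0": "nat-0def5678"}  # NAT route

print(is_public_subnet(public_rt))   # the IGW route makes it public
print(is_public_subnet(private_rt))  # a NAT gateway route keeps it private
```

Note that a private subnet routing 0.0.0.0/0 to a NAT gateway can still initiate connections out to the internet, but nothing on the internet can initiate a connection in.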
If you get that right, you can usually eliminate at least two wrong answers. I find that most people get confused between NACLs and security groups, and also when to use internet gateways over NAT gateways. So let's look at a couple of networking scenarios where this might come up. The first question: you are asked to add an application to an EC2 instance within your company's VPC. Because it's a Windows 2012 instance, you RDP into it. And when you attempt to connect via RDP you receive the following message: error connecting to your instance, connection timed out. How could you fix the issue? Choose three answers. So here we have a Windows instance residing within a VPC, and when you try to connect to it from outside of your VPC you get a timed-out message.
So we need to think about what elements of networking could be missing, or the elements that we could check to help resolve this. So, A, you could SSH into the instance rather than using RDP. Well, because it's Windows, it uses the RDP protocol; SSH is only used for Linux-based instances. So that's not an option. B, add a security group to allow inbound traffic from your public IPv4 address on port 3389. If we don't have any security groups set up to get to our instance then we might not be able to connect to it, so this could potentially be an issue. And port 3389 is the RDP port, so I would highlight this one. C, disable the source/destination check on your internet gateway. Although we do need an internet gateway, it's not possible to disable a source/destination check on it in any way, so that's not even a thing. D, add a route to your internet gateway attached to the VPC. If we need resources inside our VPC to access the internet and return traffic back out to the internet, then we certainly need to have a route from our subnet that points to that internet gateway. So that is definitely an option as well. And lastly, E, make sure your network ACLs allow inbound and outbound traffic from your local IP address on port 3389. This is very similar to B, where we need to have that security in place. Whereas option B focuses on security at the instance level because it's a security group, here we're looking at the network level. So we need to make sure that the ports aren't blocked at the network level as well. So the answers to this question would be B, D and E: make sure we have our security group rules in place with the right ports, make sure we have a route through our internet gateway so traffic can get out of our VPC, and make sure we have our security controls in place at the network level as well. Let's take a look at one more question.
An application development team is building a multi-tiered web application and has configured an Amazon VPC that includes a public and private subnet. There is a web server running on an EC2 instance in the public subnet and in the private subnet there is an application server running on an EC2 instance. The EC2 instance in the private subnet needs to occasionally connect to the internet to apply software updates to applications running on the instance.
Which of the following solutions can enable an instance in the private subnet access to the internet while preventing connections from the internet? So this is where we really need to understand our networking components and what they are used for. Basically, in this question we need to work out how we can give an instance in a private subnet access to the internet to download new software updates. So let's go through each of our answers to see which would be the most appropriate. A, attach an internet gateway to the VPC and add a route to the private subnet's route table to direct traffic to the internet gateway. Well, we have our private subnet, and if we add a route in that private subnet's route table to the internet gateway, then that private subnet would become a public subnet. We don't want to do that; we need this instance to remain private, so that's not an option. B, create a NAT gateway in the public subnet and update the private subnet's route table to include an entry to direct internet traffic to the NAT gateway. Now, this is, I would say, the correct answer, because if you need to enable your private instances access to the internet then traffic needs to go via a NAT gateway, which resides in the public subnet. Let's read the remaining two answers anyway. C, create a bastion host in the public subnet and update the private subnet's network ACL to allow inbound SSH traffic from the CIDR range of the public subnet on port 22. We don't need to use a bastion host here; a bastion host is not used to allow private instances access to the internet. D, create a gateway endpoint and update the private subnet's route table to include an entry to direct internet traffic to the gateway endpoint. Now, gateway endpoints don't allow traffic out to the internet. Gateway endpoints are used to direct traffic across the internal AWS network to services such as S3 and DynamoDB, so that is not going to help us with our solution here.
So the answer here is B, to create a NAT gateway. So just remember if it's a VPC question relating to security, it would likely be about NACLs or security groups.
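Since the stateful-versus-stateless difference trips so many people up, here's a toy Python model of the two behaviors (purely illustrative — this is not how AWS actually evaluates rules):

```python
# Toy illustration: a stateful firewall (like a security group) automatically
# allows return traffic for a tracked connection; a stateless one (like a NACL)
# evaluates every packet against its rules independently.
class StatefulFirewall:
    def __init__(self, inbound_ports):
        self.inbound_ports = set(inbound_ports)
        self.connections = set()

    def allow_inbound(self, port, conn_id):
        if port in self.inbound_ports:
            self.connections.add(conn_id)  # track the connection
            return True
        return False

    def allow_outbound(self, conn_id):
        # Return traffic for an established connection is always allowed.
        return conn_id in self.connections

class StatelessFirewall:
    def __init__(self, inbound_ports, outbound_ports):
        self.inbound_ports = set(inbound_ports)
        self.outbound_ports = set(outbound_ports)

    def allow_inbound(self, port):
        return port in self.inbound_ports

    def allow_outbound(self, port):
        # No connection tracking: return traffic on ephemeral ports must be
        # explicitly permitted by an outbound rule.
        return port in self.outbound_ports

sg = StatefulFirewall(inbound_ports=[3389])
sg.allow_inbound(3389, conn_id="rdp-1")
print(sg.allow_outbound("rdp-1"))    # reply traffic allowed automatically

nacl = StatelessFirewall(inbound_ports=[3389], outbound_ports=[])
print(nacl.allow_inbound(3389))      # inbound RDP allowed
print(nacl.allow_outbound(49152))    # reply on an ephemeral port is blocked
```

This is why NACLs need explicit outbound rules covering ephemeral ports, while a security group that allows the inbound connection lets the reply back out automatically.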
You would use NACLs to control access at the subnet level and security groups at the instance level. Also remember that security groups are stateful and NACLs are stateless. And again, to reiterate: you need an internet gateway to create public subnets and allow resources in the public subnet to access the internet, and NAT gateways allow instances in your private subnets to access the internet. Okay, so let's now look at some of the connectivity options when working with VPCs, in particular VPN gateways and Direct Connect. Now, both options provide connectivity from your own corporate network to the AWS Cloud, but it's the differences between them that determine when you would use one over the other. You'd use a VPN solution if you were looking for a way to connect your corporate network to your VPC that was relatively easy to implement, where security didn't really require the use of a private network, and so it could be run across the internet instead. With minor configuration of a customer gateway in your network and a virtual private gateway in your VPC, it would be set up and running fairly quickly. However, if you require this connection to be fast, stable and private, then a VPN wouldn't be the right choice. Instead, you'd need to use Direct Connect, which provides a private connection between your data center and an AWS region, not just your VPC. This uses dedicated leased lines with an AWS partner, and you could connect one interface to a virtual private gateway in your VPC and another interface to an AWS region, allowing access to public AWS resources such as Amazon S3. So let's take a look at another example question covering network connectivity with these two options. The question reads: an IT department needs to connect its on-premises data center to the AWS Cloud. The project requires low to moderate bandwidth from the network connection, and they need a fast and easy-to-implement solution.
The project can tolerate varying performance of their internet connection and minimizing cost is very important. Which of these solutions is the most cost-effective for this scenario?
So again, we need to read that last sentence: which of these solutions is the most cost-effective? Let's pick up a couple of key words from the question. We're looking to connect on-prem to our VPC; it only requires low to moderate bandwidth; it needs to be fast and easy to implement; and it can tolerate varied performance. Okay, so with that in mind, we're looking for a solution that fits all those elements and is cost-effective. So let's go through our options. A, use an AWS site-to-site VPN connection to connect AWS with the remote on-premises network. Now, we know VPNs are relatively easy to implement. They use the public internet, so performance can vary and is not guaranteed, and they offer low to moderate bandwidth. So that could be a solution, but let's keep reading. B, use AWS Direct Connect to connect AWS with the remote on-premises network. Now, Direct Connect requires dedicated lines, and this comes at a cost, and it guarantees certain high connection speeds as well. So this would be overkill for this scenario, especially when we're looking for the most cost-effective option, so I'd rule out Direct Connect. C, use a transit gateway to connect AWS with the remote on-premises network. Transit gateways are designed for full multi-site connectivity, and we're just looking to connect one data center to the AWS Cloud, so we don't really need the transit gateway option here. D, use Amazon S3 with Transfer Acceleration. This doesn't actually relate to the question at all, because S3 Transfer Acceleration simply enables you to get data into S3 faster, and that's not what we're talking about in this situation, so we can rule out D. So I would say the most cost-effective solution for this scenario is A, to implement a site-to-site VPN connection. Now, we also covered VPC endpoints in this course.
Again, this is connection-related, but it looks at connectivity between your VPC and other AWS services across a private network, without exposing data to the internet, using AWS PrivateLink. This means that you can connect to those services without configuring an internet gateway or a NAT gateway. For the exam, be aware of interface endpoints and gateway endpoints. Interface endpoints are effectively ENIs with a private IP address within your subnet, and each acts as an entry point to a supported AWS service. A gateway endpoint, on the other hand, is added as a target in the route table of your subnets, and points to either Amazon S3 or DynamoDB. So let's take a look at an example question where this might come up.
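Before the question, it may help to picture what a gateway endpoint actually looks like in a route table: a managed prefix list for S3 as the destination and the endpoint as the target. A rough Python sketch (all of the IDs below are made up):

```python
# Illustrative route table after adding a gateway endpoint for S3.
# "pl-1234abcd" stands in for a managed prefix list of S3's IP ranges,
# and "vpce-0123abcd" for the gateway endpoint itself (both IDs invented).
route_table = {
    "10.0.0.0/16": "local",           # intra-VPC traffic
    "pl-1234abcd": "vpce-0123abcd",   # S3 traffic stays on the AWS network
}

def route_for(destination, table):
    """Pick the route target for a destination key (simplified lookup)."""
    return table.get(destination, "no route: traffic dropped")

print(route_for("pl-1234abcd", route_table))  # goes via the gateway endpoint
print(route_for("0.0.0.0/0", route_table))    # no internet route exists at all
```

The key point: S3-bound traffic is routed privately even though the subnet has no internet gateway or NAT gateway route.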
So the question: a development team is building an application hosted in an Amazon VPC that consists of a public and private subnet. One of the application services runs on an Amazon EC2 instance in the private subnet and needs access to data stored in an S3 bucket associated with the same account and in the same region. Which of the following solutions is the most cost-effective way to access the data stored in S3? So again, we're looking for the most cost-effective way to access data in S3 from an instance within a private subnet. So let's take a look. A, create a NAT gateway in the public subnet and route requests from the private subnet to the S3 bucket through the NAT gateway. Now, a private instance could access the internet via the NAT gateway, but you would incur the cost of the gateway and also the data transfer, so that's not necessarily the most cost-effective solution. B, launch the EC2 instance running the service needing S3 access in a public subnet instead of a private subnet. The question isn't really asking for a redesign of the architecture; it's asking for the most cost-effective solution for the current setup. C, configure a gateway VPC endpoint to route requests from the private subnet to the S3 bucket. Here you'd simply add a route in the private subnet's route table with a gateway endpoint pointing to S3, and any connectivity would use the internal AWS network. The great thing about gateway endpoints is that they are free to use; there's no charge for the endpoint itself. Then lastly, D, create a NAT instance in the public subnet and route requests from the private subnet to the S3 bucket through the NAT instance. Now, you wouldn't really use a NAT instance; you'd generally use a NAT gateway because that's a managed service. And again, you have the cost of running that instance as well, so I'd rule that out.
So we've ruled out A because of the cost of the NAT gateway, and ruled out B because we don't want to start moving instances around, so the gateway endpoint looks to be the best option here. I would go with C, the gateway endpoint. So we've covered the VPC and its components and also network connectivity, but let's now look at some of the smaller networking components, these being ENIs, EIPs and ENAs. They all sound very similar but perform very different functions. You don't need to know the inner workings of each of them, but you do need to know when they might be used and what they are. So you might get asked questions about network latency and how to resolve it, or questions relating to persistent public IP addresses to help mask instance failures, or a requirement to set up a management network between your EC2 instances; for each of these you would use either an ENA, an EIP or an ENI. ENAs are used to provide enhanced networking features at high speeds for your Linux compute instances. So if you receive any questions on enhancing network performance for Linux instances and an ENA is an option, it's certainly worth taking note of. EIPs provide persistent public IP addresses that can be attached to an instance or an elastic network interface, an ENI. These can be detached from one instance and reattached to another, which can mask the failure of a publicly accessible instance. And ENIs are used to give your EC2 instances an additional network interface, allowing the instance to connect to two different subnets at once, with each interface configured with an IP address from each subnet. So this is great if you're creating a management subnet: you can then add management network interfaces to each EC2 instance that you want to be a part of that management network. Okay, so example question time again, so let's take a look.
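That management-network pattern — a second ENI giving an instance a presence in two subnets at once — can be sketched like this (a hypothetical model, not the EC2 API; all names are invented):

```python
# Toy model of an EC2 instance with a primary interface plus an extra ENI,
# placing it in both an application subnet and a management subnet.
class Instance:
    def __init__(self, primary_ip, security_groups):
        # The primary interface is created with the instance.
        self.interfaces = [{"ip": primary_ip, "security_groups": security_groups}]

    def attach_eni(self, ip, security_groups):
        # An additional ENI can carry its own IP and its own security groups.
        self.interfaces.append({"ip": ip, "security_groups": security_groups})

web = Instance(primary_ip="10.0.1.25", security_groups=["app-sg"])  # app subnet
web.attach_eni("10.0.99.25", ["mgmt-sg-8080"])       # management subnet

print(len(web.interfaces))                  # two interfaces, two subnets
print(web.interfaces[1]["security_groups"]) # test/management SG on the new ENI only
```

This mirrors the exam scenario below: the new security group is applied to the attached ENI, so the primary interface and its production traffic are left untouched.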
You need to run tests on a small portion of an existing EC2 fleet that is already supporting a web application. The existing instances are located in multiple subnets within a single VPC and assigned to a single security group. The test requires connecting to the instances on port 8080, and so you have configured a new security group that allows inbound and outbound traffic on that port. What additional changes would allow you to test the new application with minimal additional cost or disruption to your current infrastructure? Choose two answers. So you're looking to run some tests on some EC2 instances over port 8080, and you want to run these tests with minimal cost and disruption to the infrastructure. So let's take a look at our options. A, attach an ENI, an elastic network interface, to the selected test instances with port 8080 opened. Okay, so that would certainly work: we'd have a new interface that we could quickly and easily attach to our instances for traffic over port 8080. B, launch a test instance in the new subnet. Well, we don't really want to launch any more instances, because we're trying to keep our costs and disruption down. C, assign the new security group to the newly attached ENI. So again, if we have our ENI attached from option A, we can then assign the security group to allow that port 8080 connectivity. So that's certainly an option as well. And then D, assign the test instances' primary network interface to the new security group. Now, we don't want to change the primary network interface, because that's the interface being used for normal traffic within the subnet, and we don't want to cause any disruption to our current infrastructure. So the best thing to do here is to attach a new ENI to all our test instances and then assign the new security group to that ENI.
Now, we just spoke about networking performance with the ENA, but you should also take note of AWS Global Accelerator, which effectively allows you to get UDP and TCP traffic from your end-user clients to your applications faster and more reliably by using the AWS global infrastructure. It does this by intelligently routing customer requests across the most optimized path.
So the ENA provides high-speed performance for your instance, whereas the Global Accelerator provides high-speed performance from an end client to your application using the AWS network. You might receive a question like the following that will assess your understanding of this. The question might read: which of the following AWS network components reduces the latency of network traffic between external users and applications hosted on AWS by directing traffic over AWS global infrastructure instead of the public internet? So here we're looking at enhancing performance between an end client in the real world and an application running in our VPC infrastructure. So let's take a look. A, use Elastic Network Adapters. Well, I just explained that ENAs are only really used to enhance performance on a single instance. They don't cover performance between an end user out on the internet and your application, so we can rule that out. B, Elastic IP addresses. Well, an Elastic IP address is just a static public IP address. It doesn't help us with our network performance between two endpoints at all, so we can rule that out. C, AWS Global Accelerator. Well, this is exactly what the Global Accelerator does. It creates enhanced performance between an end user and our application running in AWS using the AWS global infrastructure. So I would say it is most definitely C, but anyway, let's look at D, elastic network interfaces. Again, an ENI is simply an additional network card that can be added to an instance, so this wouldn't really help with increasing performance between two endpoints. The last two services I want to highlight are Route 53 and CloudFront. The key points at a high level for Route 53 include: it's a highly available and scalable DNS service that provides secure and reliable routing of requests.
Now you have public hosted zones, which determine how traffic is routed on the internet, and private hosted zones, which determine how traffic is routed within a VPC. It uses different routing policies to route traffic, and this is important: you need to be aware of those different routing policies. It also supports the most common resource record types. Alias records act like a CNAME record, allowing you to route your traffic to other AWS resources such as ELBs, VPC interface endpoints, et cetera. So make sure you're aware of what an alias record is. So what sort of questions might you see relating to Route 53? Well, I expect you'll see something relating to routing policies; you will be expected to select the most appropriate routing policy given a particular scenario. So know the difference between the following policies: simple, failover, geolocation, geoproximity, latency, multivalue answer, and weighted. As a quick example, let's look at this question. In Amazon Route 53, something lets you use DNS to route end-user requests to the EC2 region that will give your users the fastest response. So here we are expected to know the difference between a number of different routing policies, and we have failover, simple, latency and weighted. So let's go through each of these, first off failover routing. This allows you to route traffic to different resources based upon their health, and we're not really talking about the health of resources here.
So I wouldn't say it's that one. Then we have simple routing, and this is the default routing policy of Route 53. It's good for single resources that perform a given function, so it doesn't really relate to anything to do with the fastest response times. Then we have latency-based routing, and this is really suitable when you have resources in multiple regions and you want to provide the lowest latency. So this could be a really good option to ensure that your users get the fastest response depending on where they are. And then lastly, we have weighted routing, and you use this when you have multiple resource records that perform the same function. You might have one resource record weighted at 80% of traffic and another at 20%, so 80% of the traffic goes to one resource and 20% goes to the other, which helps you with blue-green deployments. It's not really used to work out the fastest response. So the most appropriate option here is latency-based routing. Okay, so moving on to CloudFront. CloudFront is used to speed up the distribution of your static and dynamic content by storing cached data throughout its global network of edge locations. It's fault-tolerant and globally scalable by design, and it's AWS's own content delivery service.
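Going back to latency-based routing for a moment, the selection logic is conceptually just "answer with the record for the region that has the lowest measured latency to this client." A toy Python model (the latency figures and addresses are invented):

```python
# Toy model of Route 53 latency-based routing: answer a client's DNS query
# with the record belonging to the lowest-latency region for that client.
measured_latency_ms = {        # hypothetical measurements for one client
    "us-east-1": 85,
    "eu-west-1": 20,
    "ap-southeast-2": 250,
}
records = {                    # one resource record per region (made-up IPs)
    "us-east-1": "192.0.2.10",
    "eu-west-1": "198.51.100.20",
    "ap-southeast-2": "203.0.113.30",
}

# Pick the region key whose measured latency value is smallest.
best_region = min(measured_latency_ms, key=measured_latency_ms.get)
print(best_region, records[best_region])  # eu-west-1 wins at 20 ms
```

A client in a different location would have different measurements and so get a different answer, which is exactly how the same DNS name serves each user from their fastest region.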
So normally, when a user requests content from a web server that you're hosting without a CDN, the request is routed back to the source web server, which could actually reside in a different country to the user initiating the request. However, if you use CloudFront, the request is routed to the edge location closest to the user's location, which would likely provide the lowest latency and therefore deliver the best performance using cached data. So when you're looking at questions that ask you about distributing traffic or enhancing performance for your end users, perhaps to your website, you need to think about the different networking solutions available to help you do this, and CloudFront will usually be one, or part, of the answers. You should be familiar with the configuration of CloudFront distributions and the information they contain, such as the origin information and what an Origin Access Identity, known as an OAI, is. Also brush up on your cache behavior options as well, which define how you want the data at the edge location to be cached using various methods and policies. Now let's take a look at the final question in this summary. You're building a system to distribute documents to employees across the world using CloudFront. What method could be used to serve content that is stored in S3 but not publicly accessible from S3 directly? So here we're talking about the security of CloudFront: what features could be put into place to protect that data, to ensure that people can only access it by going through CloudFront? So let's take a look. A, add the CloudFront account security group to the appropriate S3 bucket. Now we know that security groups aren't used for S3 buckets, so we can rule this one out. B, create an S3 bucket policy that lists the CloudFront distribution ID as the principal and the target bucket as the Amazon Resource Name. Now, if you remember from the CloudFront course, that is not how you control access.
You wouldn't put a distribution ID as the principal, so let's move away from those bucket policies. C, create an identity and access management (IAM) user for CloudFront and grant access to the objects in your S3 bucket to that IAM user. That doesn't really help us, because anyone that needs access to that bucket would have to inherit the permissions from the IAM user. So lastly, D, create an Origin Access Identity, an OAI, for CloudFront and grant access to the objects in your S3 buckets to the OAI. Now that is the option we want, because when you create your distribution you can create a special CloudFront user, not an IAM user or anything like that. It is associated with the distribution, and it's called an Origin Access Identity. You then configure your S3 bucket permissions to ensure that CloudFront can use the OAI to access files in that bucket and serve them to your users. So the answer here is D. And that now brings me to the end of another section. You should now have a solid understanding of AWS networking components and concepts, so let's crack on and tackle the next steps. Again, if you have any questions about any of this, please do reach out to me and I'll be more than happy to explain any topic further with you.
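As a footnote to that last question: granting the OAI access boils down to an S3 bucket policy whose principal is the OAI's CloudFront user ARN, roughly this shape (the OAI ID and bucket name below are placeholders):

```python
import json

# Approximate shape of an S3 bucket policy granting read access to a
# CloudFront Origin Access Identity. The IDs here are placeholders.
oai_id = "E2EXAMPLE1OAI"
bucket = "my-distribution-docs"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # The OAI appears as a special CloudFront user principal, not an IAM user.
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
print(json.dumps(policy, indent=2))
```

Combined with blocking public access on the bucket itself, this is what ensures objects are reachable only through the CloudFront distribution.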
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to Cloud reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.