Networking on AWS
Monitoring your resources
Practical example: building the infrastructure for a Web App
Content Delivery and DNS Management
In this lesson, you will be introduced to AWS networking, specifically, EC2 networking. You will delve deeper into what EC2 is used for: configuring, provisioning, securing, and managing virtual machines.
We will start an EC2 instance to look at how device identifiers and traffic forwarding policies work. You will select a VPC network rather than EC2-Classic, where available, and select a NAT-enabled subnet.
We will look at the regions available and let AWS make the choice for us. You will also learn the reasons for selecting specific regions over others, and how Instances are assigned private and public IP addresses.
We will explain why network designers developed network address translation (NAT), and how it freed up IP addresses for the masses. We will give our security groups some attention, and limit SSH traffic to our own local IP.
After that, we will review and launch the instance. We will perform several functions from the dashboard including:
- Clicking on elastic IPs to allocate new addresses
- Creating load balancers to automatically share out the load among a number of Instances
- Configuring how the balancer will query Instances
- Selecting a security group for your balancer
- Adding at least two Instances to your balancer
- Assigning an identifying tag to the balancer, reviewing the tag, and starting it up
Then we will cover how to create auto scaling groups, and how they can save your company money over time. We will walk through the steps required to complete our launch configuration, and then finish up by configuring the auto scaling group.
Welcome to the second course in cloudacademy.com's video series on preparing to take and pass the AWS Solutions Architect Associate level certification exam. By joining us for these courses, you'll be introduced to all of the basic skills you'll need to master AWS administration. This course will focus on AWS networking in this video, specifically, on EC2 networking. EC2, of course, is AWS's toolbox for configuring, provisioning, securing, and managing the virtual computers that are the foundation of AWS projects. Networking is about ensuring that the flow of digital traffic between Instances or services is fast and reliable where necessary, and impossible wherever it's dangerous. I suppose it's not too much of an oversimplification to say that, besides the physical infrastructure like cables and switches, networks are made of device identifiers, often IP addresses, and traffic forwarding policies. Let's fire up an EC2 Instance and take a look at how both of these elements work. We'll select a standard AMI and choose a small t2.micro Instance type.
We'll now select the only network currently available to us, the default VPC, or Virtual Private Cloud. By the way, as we mentioned before, people with older AWS accounts will sometimes have to choose between launching a new Instance into an EC2-Classic or EC2-VPC network. While the EC2-Classic option doesn't show up in this account, if there were such an option, it would appear here in the network drop-down menu. EC2-Classic effectively places Instances within the larger AWS network structure, while VPCs are discrete networks all to themselves. It's important to remember that security groups created for one network type cannot be used for the other. EC2-Classic security group usage is also more restricted: you can't change an Instance's group once it's launched, and you can't apply a group from a different region. We can now select the subnet, which is a local NAT network that's designed to pass along Internet-bound requests coming from devices on a private network. NAT stands for Network Address Translation. It's a technique for bridging local devices with resources on public networks like the Internet. You can choose a subnet address range associated with any availability zone within a selected region, which, in our case, is in Virginia, or just leave the choice up to AWS. In our case, if we were to choose availability zone us-east-1b, our Instance would receive a private IP address somewhere between 172.31.32.0 and 172.31.47.254. Since this is a private IP address, it would have no meaning for devices outside our VPC. They will access our Instance using a public IP or endpoint that we'll let Amazon assign. But all Instances running within our VPC will always talk to each other using these private addresses.
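The private range quoted above is easy to sanity-check in code. Here's a short, stdlib-only Python sketch; the /20 prefix length is our assumption, inferred from the quoted 172.31.32.x–172.31.47.254 range (AWS default-VPC subnets are /20s), not something stated in the video:

```python
import ipaddress

# The default-VPC subnet for one availability zone, as described above.
# The /20 prefix is an assumption consistent with the quoted range.
subnet = ipaddress.ip_network("172.31.32.0/20")

addr = ipaddress.ip_address("172.31.47.254")
print(addr in subnet)   # True: the address falls inside the subnet
print(addr.is_private)  # True: 172.16.0.0/12 is a reserved private range
print(subnet.num_addresses)  # 4096 addresses in a /20
```

This also illustrates why the address is meaningless outside the VPC: `is_private` flags it as belonging to one of the reserved ranges discussed next.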
By way of further explanation: when it became apparent that the number of Internet-connected devices was growing so quickly that we were in danger of running out of IPv4 addresses, network designers developed Network Address Translation (NAT) to dynamically re-map IP references as packets move between public and private networks. Using NAT, a private network with hundreds or even tens of thousands of devices can share a single public-facing IP address, relying on the local router to send everything to the right place based on a strictly local addressing scheme. It became accepted practice to restrict all local NAT addresses to three limited ranges, and to avoid using any IP from these ranges in public. The reserved address ranges are 10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, and 192.168.0.0 to 192.168.255.255. This freed up literally billions of IP addresses for public use. The downside, of course, is that local addresses will make no sense in a public network. We'll accept the AWS defaults for storage and choose not to create a tag. But because we're now looking at this from a networking perspective, and since this is what determines the policies controlling inbound and outbound traffic to this Instance, we should give our security groups some attention. As it currently sits, our group allows incoming SSH traffic from anywhere on the Internet. At the very least, we should limit that to our own local IP. Now let's say we want to open up a port to allow MySQL traffic from a customer who needs access to this data. Selecting MySQL from the drop-down will automatically populate the port value with 3306, the MySQL default. But since you don't want to allow just anyone in, you can select custom IP from the source drop-down and then enter, say, 18.104.22.168/32, assuming that that is your customer's IP address. The /32, by the way, will limit access to only this exact address. We can now click review and launch, and then launch, to boot the Instance.
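The security-group rules we just clicked through can also be expressed through the EC2 API. Below is a hedged Python sketch that builds ingress rules in the shape boto3's `authorize_security_group_ingress` expects; the actual API call is left commented out because it needs credentials and a real group ID, and the workstation address 203.0.113.10 is a made-up documentation address, not one from the video:

```python
import ipaddress

def ingress_rule(protocol, port, cidr):
    """Build one IpPermissions entry, the rule format used by EC2's
    authorize_security_group_ingress API action."""
    ipaddress.ip_network(cidr)  # raises ValueError on a malformed CIDR
    return {
        "IpProtocol": protocol,
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": cidr}],
    }

# SSH only from our own workstation (203.0.113.10 is a placeholder)...
ssh_rule = ingress_rule("tcp", 22, "203.0.113.10/32")
# ...and MySQL (port 3306) only from the customer's address.
mysql_rule = ingress_rule("tcp", 3306, "18.104.22.168/32")

# A /32 matches exactly one address, which is why it locks access down:
print(ipaddress.ip_network("18.104.22.168/32").num_addresses)  # 1

# With boto3 (not run here):
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-...", IpPermissions=[ssh_rule, mysql_rule])
```

The `num_addresses` check makes the /32 point concrete: widen the prefix (say, /24) and the rule would admit 256 addresses instead of one.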
Let's now go to the Instance dashboard. Once we're up and running, the public IP address that AWS assigned to this Instance is displayed. However, if we ever shut down or, in some cases, reboot the Instance, this IP will change. If you require a permanent public IP address, you can allocate an elastic IP and associate it with your Instance. From the EC2 dashboard, click on elastic IPs, then allocate new address, then click on associate address. Click once inside the Instance box, and a list of all your current Instances should appear. By clicking on the one you're after, the IP will be assigned and access will now be persistent. If the services your Instances are providing can sometimes be subject to traffic loads heavy enough to bring down a single server, creating a load balancer with elastic load balancing can automatically share out the load among a number of Instances. From the EC2 dashboard, click on load balancers, then create load balancer. Give your balancer a name and choose a VPC, making sure that it's the same VPC where the Instances you want to balance live. And, as part of your listener configuration, select a protocol and port for incoming traffic. The load balancer protocol sets the specific protocol and port that you want your balancer to listen on. In other words, you might want the balancer to simply ignore non-secure HTTP traffic, allowing only HTTPS through.
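The elastic IP steps above can also be scripted. Here's a minimal Python sketch of the same allocate-then-associate sequence against the boto3 EC2 client interface (`allocate_address` and `associate_address` are the real API actions); it's written as a function that takes the client, so nothing here actually contacts AWS, and the instance ID shown in the usage note is hypothetical:

```python
def assign_elastic_ip(ec2, instance_id):
    """Allocate a new Elastic IP and bind it to the given Instance,
    returning the now-persistent public address."""
    allocation = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        InstanceId=instance_id,
        AllocationId=allocation["AllocationId"],
    )
    return allocation["PublicIp"]

# Usage (requires AWS credentials; not run here):
# import boto3
# ip = assign_elastic_ip(boto3.client("ec2"), "i-0123456789abcdef0")
```

Because the client is passed in, the sequencing can be exercised with a stub in place of a live connection, which is how you'd unit-test provisioning code like this.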
Instance protocol tells the balancer how and where you'd like the incoming traffic to be forwarded to your Instances. Your next job is to configure how the balancer will query Instances to determine which ones are fit enough to receive traffic.
There's no point sending your valuable clients to a dead server, right? Next, you select a security group for your balancer because, no less than for your Instances themselves, you want to carefully control what comes through your network's outer wall.
Think of it as a side benefit of balancing: you get an extra layer of protection. Now you'll add at least two Instances to your balancer. You will, of course, have to make sure that the services you want to offer exist on each of these Instances. Otherwise, some of your visitors may be in for a rather unpleasant surprise. Finally, you're going to assign an identifying tag to the balancer, review, and fire it up. The balancer details available from the ELB dashboard will provide you with the balancer's single public IP address, which, if you like, you can associate with either a permanent elastic IP or with a proper domain name.
When your users go to this address, the traffic will be shared between all the Instances you've included in the balancer.
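The listener and health-check settings walked through above map directly onto the classic Elastic Load Balancing API. The sketch below shows them as plain Python dicts in the shape the boto3 `elb` client expects; every name, port, and threshold here is illustrative rather than taken from the video, and the API calls themselves are left commented out since they need credentials:

```python
# One listener: accept HTTPS from clients, forward as HTTP to Instances.
# (A real HTTPS listener also needs an SSLCertificateId.)
listener = {
    "Protocol": "HTTPS",
    "LoadBalancerPort": 443,
    "InstanceProtocol": "HTTP",
    "InstancePort": 80,
}

# How the balancer queries Instances to decide which are fit for traffic.
health_check = {
    "Target": "HTTP:80/index.html",  # what to probe on each Instance
    "Interval": 30,                  # seconds between probes
    "Timeout": 5,
    "UnhealthyThreshold": 2,         # failures before an Instance is pulled
    "HealthyThreshold": 10,          # successes before it is re-added
}

# With boto3 (not run here):
# elb = boto3.client("elb")
# elb.create_load_balancer(LoadBalancerName="my-balancer",
#                          Listeners=[listener],
#                          SecurityGroups=["sg-..."], Subnets=["subnet-..."])
# elb.configure_health_check(LoadBalancerName="my-balancer",
#                            HealthCheck=health_check)
# elb.register_instances_with_load_balancer(
#     LoadBalancerName="my-balancer",
#     Instances=[{"InstanceId": "i-..."}, {"InstanceId": "i-..."}])
```

Note how the two protocol fields separate concerns exactly as described: `Protocol`/`LoadBalancerPort` is what the balancer listens for, `InstanceProtocol`/`InstancePort` is how traffic is forwarded on.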
In many scenarios, the loads on your servers might significantly rise and fall over time. Perhaps you're running an online store which experiences peak demand only a few times a month or year. You certainly don't want to pay for all kinds of Instances that, for most of the year, will just sit idle. AWS allows you to create auto scaling groups that will automatically launch additional copies of a primary Instance when demand increases, and shut down unused Instances when demand falls. This is obviously an important subject, but it's a bit too big to fully cover in a single video. But just to illustrate the concepts, let's explore setting up a very simple auto scaling group. From the EC2 dashboard, click on launch configurations under the auto scaling item in the left menu. Now, click create auto scaling group. You can read through the introductory material if you'd like. But when you're done, click on create launch configuration. Now select the AMI you'd like to use for all the Instances this group will launch. We'll go with Ubuntu Server 14.04. For Instance type, we'll click on t2.micro, probably not the most obvious choice for a high-demand deployment, but it'll do for our purposes in the meantime. And then click on next: configure details. We'll give our group a name, and we won't select request spot Instances, since these aren't available for the t2.micro Instance type we chose. Clicking on advanced details will display some extra options, the most important of which is user data. Here is where you can add customizing scripts to be run on startup for each Instance. We'll leave that empty for now and click next to add storage.
We'll accept the default and click next to configure a security group. Here we'll restrict SSH traffic to My IP and add a new rule that will permit all HTTP traffic on port 80 from anywhere.
That's our launch configuration complete. Now it's time to actually configure the auto scaling group itself.
We'll give our group a name, say, My Scaling Group and leave the group size field at its default value of 1.
That's probably not what you would use for a real project, of course. We can leave network at its default for now. Clicking inside the subnet box, we'll select an availability zone from the options we're given. Let's click next and move on to the configure scaling policies page. We'll select keep this group at its initial size and move on to notifications. We'll leave notifications at their defaults for now, but you can easily see how important it could be to enable notifications for any scaling changes. We can create a tag, and tags certainly can be most helpful when you're looking at menus filled with services, or perhaps an email notification. For now, we'll leave it blank. Now, we'll review and create the group.
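Everything we just clicked through, the launch configuration and then the group itself, can be captured with the Auto Scaling API. A hedged boto3-style sketch follows: the parameter names match the boto3 `autoscaling` client's `create_launch_configuration` and `create_auto_scaling_group` calls, a group size of 1 mirrors the default above, and the AMI, security group, and subnet IDs are placeholders:

```python
# Parameters for create_launch_configuration.
launch_config = {
    "LaunchConfigurationName": "my-launch-config",
    "ImageId": "ami-00000000",        # placeholder for the Ubuntu 14.04 AMI
    "InstanceType": "t2.micro",
    "SecurityGroups": ["sg-00000000"],
    "UserData": "",                   # startup script; left empty, as above
}

# Parameters for create_auto_scaling_group, keeping the group at size 1.
scaling_group = {
    "AutoScalingGroupName": "My Scaling Group",
    "LaunchConfigurationName": launch_config["LaunchConfigurationName"],
    "MinSize": 1,
    "MaxSize": 1,
    "DesiredCapacity": 1,
    "VPCZoneIdentifier": "subnet-00000000",  # the chosen subnet
}

# With boto3 (not run here):
# autoscaling = boto3.client("autoscaling")
# autoscaling.create_launch_configuration(**launch_config)
# autoscaling.create_auto_scaling_group(**scaling_group)
```

For a group that actually scales, you would raise `MaxSize` and attach scaling policies instead of selecting keep this group at its initial size.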
David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.
Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.
Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.
His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.