Monitoring and compliance
The AWS Certified SysOps Administrator – Associate certification requires candidates to be comfortable deploying and managing full production operations on AWS. The certification demands familiarity with the full range of Amazon cloud services, and the ability to choose from among them the most appropriate and cost-effective combination for a given project.
In this exclusive Cloud Academy course, IT Solutions Specialist Eric Magalhães will guide you through an imaginary but realistic scenario that closely reflects many real-world pressures and demands. You'll learn to leverage Amazon's elasticity to effectively and reliably respond to a quickly changing business environment.
The SysOps certification is built on solid competence in pretty much all AWS services. Therefore, before attempting the SysOps exam, you should make sure that, besides this course, you have also worked through the material covered by our three AWS Solutions Architect Associate level courses.
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
Hi and welcome to our fifth lecture. In this lecture, we will use what we learned in the last lecture and start creating our elastic infrastructure. In the last lecture, we reviewed the chart showing what we're planning to do with our infrastructure.
Over the next few minutes, we will deploy an ELB and configure an auto scaling group. We saw in the planning that we need only two subnets. Let's go to the VPC console and make sure that we have only two. In my case, I had three, so I deleted one and created some tags to identify the remaining subnets more easily. Check whether a subnet still contains instances before trying to delete it, because AWS won't let you delete a subnet that is still in use.

Now let's go to the EC2 dashboard and start configuring our infrastructure. We have here the instance created with CloudFormation, and the instance that we created during the last lecture. I will start by creating the elastic load balancer. We need to provide a name for our ELB and specify which VPC it will live in. We could create an internal ELB, which would not be publicly accessible; you can find out more by clicking here. However, we don't want that. In my experience, you almost always want the advanced VPC configuration, where we select the subnets that our ELB will use. Now we're good to click on next.

I strongly advise you to always have a dedicated security group just for your ELB; it is also a best practice. So I will create one and leave only the HTTP port open to the world. Click on next. On this page, we could configure the security settings, which would only apply if we were using HTTPS or SSL. Here we could define how cookies are managed, or install a certificate for our ELB, for instance. Next, we need to configure the health checks for our portal. We need to replace /index with /welcome, which is the Rails route for the index page of our portal. Here, we can configure the way that the ELB will perform the health checks.
We can determine the timeout, the interval, and the thresholds. Notice that you can hover over the question mark icon when you need to know what a particular field does. Here, we can add the instances that will communicate with our ELB. Since we already have two instances, I will add both. Make sure these instances are not already part of any auto scaling group.
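To get a feel for how the interval and thresholds interact, here is a quick local sketch. The numbers below are hypothetical illustration values, not the exact settings chosen in the console:

```shell
# Hypothetical health-check settings -- adjust to match your own console values
interval=30            # seconds between health-check pings
timeout=5              # seconds to wait for each ping response
unhealthy_threshold=2  # consecutive failures before an instance is marked unhealthy
healthy_threshold=10   # consecutive successes before it is marked healthy again

# Worst-case time before the ELB pulls a failing instance out of rotation
echo "marked unhealthy after: $(( interval * unhealthy_threshold ))s"

# Time for a recovered instance to be put back in service
echo "marked healthy after: $(( interval * healthy_threshold ))s"
```

In other words, with these values a failing instance stops receiving traffic after roughly a minute, while a recovered one takes several minutes to rejoin, which is why an aggressive unhealthy threshold pairs well with a conservative healthy one.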
We definitely want to enable cross-zone load balancing, which distributes traffic across Availability Zones and can be quite helpful during an outage. I also want connection draining: with it, the ELB lets in-flight requests to an instance complete before the instance is deregistered or terminated. I will leave the default timeout. Next, I will create a tag here just for clarity. Then we review all the settings, confirm that everything is fine, and click on create. And here is our ELB; we can see an overview of it here. For me, the most useful information is the status. In our case, there is no instance in service, which is okay for now, because we only just created the ELB.
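The same ELB setup can also be scripted with the AWS CLI. This is a hedged sketch rather than a transcript of the console steps: the `portal-elb` name and the subnet, security-group, and instance IDs are placeholders you would replace with your own, and the commands need valid AWS credentials to run.

```shell
# Create a classic ELB in our two subnets (all IDs are placeholders)
aws elb create-load-balancer \
  --load-balancer-name portal-elb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --subnets subnet-11111111 subnet-22222222 \
  --security-groups sg-33333333

# Point the health check at the Rails index route, as in the console walkthrough
aws elb configure-health-check \
  --load-balancer-name portal-elb \
  --health-check "Target=HTTP:80/welcome,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=10"

# Enable cross-zone load balancing and connection draining
aws elb modify-load-balancer-attributes \
  --load-balancer-name portal-elb \
  --load-balancer-attributes "CrossZoneLoadBalancing={Enabled=true},ConnectionDraining={Enabled=true,Timeout=300}"

# Register our two existing instances with the ELB
aws elb register-instances-with-load-balancer \
  --load-balancer-name portal-elb \
  --instances i-aaaaaaaa i-bbbbbbbb
```

Scripting the setup this way makes it repeatable, which matters once you start tearing environments down and rebuilding them.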
Let's go now and configure auto scaling. Auto scaling works with two pieces: a launch configuration and an auto scaling group. The launch configuration is very similar to launching an instance: we need an AMI and an instance type. I will use m3.medium this time, just to be different from the instances that we already have.
But you can choose any type that you want. We have to name our launch configuration. We could work with spot instances as well, although they are not the best choice for our situation. We could define an IAM role and enable detailed monitoring. And here is the user data, which is a must in our case; I'm using the same user data as the other instances we created previously.
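The launch configuration can also be created from the CLI. Again, this is a sketch under assumptions: the `portal-lc` name, the AMI and security-group IDs, and the `user-data.sh` file name are placeholders, and the command requires an AWS account to execute.

```shell
# Create a launch configuration mirroring the console choices:
# m3.medium, public IP, a larger root disk, and our bootstrap user data
aws autoscaling create-launch-configuration \
  --launch-configuration-name portal-lc \
  --image-id ami-00000000 \
  --instance-type m3.medium \
  --security-groups sg-33333333 \
  --associate-public-ip-address \
  --user-data file://user-data.sh \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":20}}]'
```

Note that launch configurations are immutable: to change any of these settings later, you create a new launch configuration and point the auto scaling group at it.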
I will choose to assign a public IP to all my instances. I will change the size of the disk, just because I want to. And here, I will use the same security group that the other instances are using. Now it's time to review and confirm. Next, we move on to the auto scaling group. I will specify a name for this group, select the VPC, and add the subnets. Under advanced, we can associate this group with the ELB that we created before; you have to create the ELB first, otherwise this option will not show up. I will use the ELB health check, but we could also use the EC2 health checks. You can see more by clicking here. This is a tricky setting: the grace period.
The grace period is the window after launch during which the instance cannot be declared unhealthy. Getting this wrong can cause problems: if it's too short, instances may be terminated before they finish bootstrapping. Before clicking next, I will change the group size to start with two instances. Here is where we define policies to grow or shrink our ASG. I will click here, but I will only configure the maximum size of our group and leave the policies as they are; this creates two policies that do nothing. We could configure things here, but I prefer to configure them later. Next, auto scaling notifications, which use SNS to send messages. I will leave them blank and configure notifications later. By now, you probably know that I like to create tags for everything. Go to the review page and create the group. It is created, and it's launching two m3.medium instances as we speak. Let's have a look at our ELB: the two instances that we added before are now in service. Let's go see our ELB in action.
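The steps above can be sketched as a single CLI call. The group name, subnet IDs, and the 300-second grace period are illustrative assumptions (the lecture leaves the exact value open), and the command needs real AWS credentials and the launch configuration and ELB to exist first:

```shell
# Create the ASG: two subnets, ELB-based health checks, and a grace period
# long enough for the user-data bootstrap to finish before checks count
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name portal-asg \
  --launch-configuration-name portal-lc \
  --min-size 2 \
  --max-size 4 \
  --desired-capacity 2 \
  --vpc-zone-identifier "subnet-11111111,subnet-22222222" \
  --load-balancer-names portal-elb \
  --health-check-type ELB \
  --health-check-grace-period 300 \
  --tags "Key=Name,Value=portal-asg,PropagateAtLaunch=true"
```

With `--health-check-type ELB`, an instance that fails the load balancer's /welcome check is replaced by the group, which is exactly the tricky interaction the grace period protects against during launch.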
And it's working as expected. Now we have an ASG and an ELB configured. See you in the next lecture, where we will continue from here.
- AWS SysOps Administrator Roles and Responsibilities
- Initial Application Setup
- Managing instances
- Deploying an Elastic Load Balancer
- Complete the Elastic infrastructure
- Infrastructure monitoring with CloudWatch
- Industry standards
- Application Security
- The AWS Shared Responsibility Model
- Elastic Beanstalk
About the Author
Eric Magalhães has a strong background as a Systems Engineer for both Windows and Linux systems and currently works as a DevOps Consultant for Embratel. Lazy by nature, he is passionate about automation and anything that can make his job painless, so his interest in topics like coding, configuration management, containers, CI/CD, and cloud computing went from a hobby to an obsession. He holds multiple AWS certifications and, as a DevOps Consultant, helps clients understand and implement the DevOps culture in their environments. He also plays a key role in the company developing automation using tools such as Ansible, Chef, Packer, Jenkins, and Docker.