Monitoring and compliance
The AWS Certified SysOps Administrator (associate) certification requires its candidates to be comfortable deploying and managing full production operations on AWS. The certification demands familiarity with the whole range of Amazon cloud services, and the ability to choose from among them the most appropriate and cost-effective combination that best fits a given project.
In this exclusive Cloud Academy course, IT Solutions Specialist Eric Magalhães will guide you through an imaginary but realistic scenario that closely reflects many real-world pressures and demands. You'll learn to leverage Amazon's elasticity to effectively and reliably respond to a quickly changing business environment.
The SysOps certification is built on solid competence in pretty much all AWS services. Therefore, before attempting the SysOps exam, you should make sure that, besides this course, you have also worked through the material covered by our three AWS Solutions Architect Associate level courses.
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Hi and welcome to lecture number 11, our last lecture, by the way. I will dedicate this entire lecture to deploying our application again, but this time using Elastic Beanstalk. As I showed you during the first lecture, you don't need to master Elastic Beanstalk. In fact, the things that I will show you here are probably enough for the exam.
Elastic Beanstalk is a way to deploy an application using AWS services while managing everything in a single place. A bit like CloudFormation, you will find some presets, but it can be easier to get started with. Beanstalk works with applications, and an application can have one or more environments. We first create the application, and then we set up an environment.
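To make the application/environment relationship concrete, here is a minimal sketch of the parameters an Elastic Beanstalk CreateEnvironment call takes. All names below (the application "cloudmotors", the environment names, and the platform string) are illustrative assumptions, not values from the course:

```python
def environment_request(app_name, env_name, solution_stack):
    """Build the parameters for an Elastic Beanstalk CreateEnvironment call.
    Every environment belongs to exactly one application."""
    return {
        "ApplicationName": app_name,
        "EnvironmentName": env_name,          # must be unique, like an S3 bucket name
        "SolutionStackName": solution_stack,  # the platform preset to run on
    }

# One application can carry several environments, e.g. staging and production
# (platform string shortened here for illustration):
requests = [
    environment_request("cloudmotors", "cloudmotors-staging",
                        "64bit Amazon Linux running Ruby (Passenger Standalone)"),
    environment_request("cloudmotors", "cloudmotors-prod",
                        "64bit Amazon Linux running Ruby (Passenger Standalone)"),
]

# With credentials configured, each dict could be passed to
# boto3.client("elasticbeanstalk").create_environment(**request).
```

The boto3 call is left as a comment so the sketch stays runnable without AWS credentials; the point is simply that both environments hang off the same application name.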
Elastic Beanstalk needs an IAM role for EC2 so it can send logs to S3 and manage CloudWatch. We could use the default role or create one with the necessary permissions in the IAM console. You don't need to know this part in depth.
This is where we set the specifics of our environment, which is a Ruby on Rails application. I want to use the Ruby 2.0 Passenger platform, but our app will work with any Passenger version. I definitely want to use ELB and Auto Scaling. Here is where we define our application files. They can either be read from an S3 bucket, or we can upload a file directly. For that, we will need a zip file of the project, which can be downloaded from our app's GitHub page. We could also use AWS's sample app, which is the default app for showing how Beanstalk works. Here we can define the deployment limits, which are a good way to maintain high availability by deploying to only a percentage of our instances at a time, or to a fixed number of instances.
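The deployment limits set in the console correspond to option settings in the aws:elasticbeanstalk:command namespace. As a hedged sketch (the default batch size of 30 percent here is my own choice, not the course's), they can be expressed like this:

```python
def rolling_deployment_settings(batch_type="Percentage", batch_size=30):
    """Build Elastic Beanstalk option settings for a rolling deployment:
    update only a percentage of instances, or a fixed number, at a time."""
    assert batch_type in ("Percentage", "Fixed")
    return [
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "DeploymentPolicy",
         "Value": "Rolling"},
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "BatchSizeType",
         "Value": batch_type},  # "Percentage" or "Fixed"
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "BatchSize",
         "Value": str(batch_size)},
    ]
```

For example, `rolling_deployment_settings("Fixed", 1)` would update one instance at a time, keeping the rest in service, which is what gives you high availability during a deploy.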
Here we can set the name of our environment, which can be the same as the application name, but it has to be unique system-wide, like an S3 bucket name. Here, we choose RDS and VPC. And here, we set some EC2 specifics: the instance type, key pair, the URL for the health checks, and the scaling policy. You probably know what all those things mean after our last lectures, so there's no need to stress these points. I'll just add some tags, because I like using tags.
And here are the RDS settings. You can configure these for your own needs. And please remember the username and password you choose; we will need them later. I will use Cloud Motors for both, but you can use whatever you want. And of course, I want Multi-AZ. Here is a pretty nice way to define your VPC. I will use the default choice to save time, and we can use these checkboxes to select where the services that we're configuring are going to operate. It's awesome, don't you think? Here we review the configuration and click launch. It will now create a whole environment for us, but it will take some time to do that.
So I'll pause the recording. Okay, it took some time, and I had to make some changes to get it working. Let's see the results, and then I'll show you what I did. We have an identical portal here, with only minimal effort compared to last time.
But before we got a green health check, I had to configure some environment variables. The last six variables were created by me. Remember when I said to remember the RDS username and password? Now it's time to use them. You should create variables for this entry and this one; I took the values from RDS, as I did for this one. Just go to RDS to get this information.
The password and username are the values that you created during the RDS configuration, and this secret key base is a Rails setting. You can use any alphanumeric value you want and it will work just fine. Let's now see what's changed in the other services. On EC2, we now have a new Auto Scaling group created by Elastic Beanstalk, with its own scaling policies, a launch configuration, a new ELB, and a t2.micro instance with the name of our environment. Take a look at the tags.
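Instead of typing the variables into the console one by one, they can also be expressed as option settings in the aws:elasticbeanstalk:application:environment namespace. A minimal sketch, with the caveat that the exact variable names (RDS_USERNAME, RDS_PASSWORD, SECRET_KEY_BASE) are assumptions about how this Rails app reads its configuration:

```python
def app_env_settings(variables):
    """Turn a dict of application environment variables into Elastic
    Beanstalk option settings, one per variable."""
    return [
        {"Namespace": "aws:elasticbeanstalk:application:environment",
         "OptionName": name,
         "Value": value}
        for name, value in sorted(variables.items())
    ]

settings = app_env_settings({
    "RDS_USERNAME": "cloudmotors",       # value chosen during RDS setup
    "RDS_PASSWORD": "change-me",         # placeholder; never commit real secrets
    "SECRET_KEY_BASE": "anyalphanumericvalue",  # Rails setting, any value works
})
```

These dicts are the shape an UpdateEnvironment call expects for its option settings, which is how a CLI or script would set the same six variables shown in the console.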
On VPC, we don't have anything new. Since we used the default VPC, everything is the same as it was. On RDS, we have a new DB instance; Elastic Beanstalk has used the parameters we gave it. In case we need to upload a new version of our portal, we could select one here. This is usually done through the command line: you can download the Elastic Beanstalk CLI and work with it a bit like you work with Docker. Taking a last look at the configurations from the previous lectures, I probably don't need to explain what we can achieve here; you're already familiar with these terms. Even the scheduled actions to scale our instances will probably not be unfamiliar to you. I hope you enjoyed this course, and I hope you've gained the knowledge you need to go out and explore more. Thanks and goodbye.
About the Author
Eric Magalhães has a strong background as a Systems Engineer for both Windows and Linux systems and currently works as a DevOps Consultant for Embratel. Lazy by nature, he is passionate about automation and anything that can make his job painless, so his interest in topics like coding, configuration management, containers, CI/CD, and cloud computing went from a hobby to an obsession. He holds multiple AWS certifications and, as a DevOps Consultant, helps clients understand and implement the DevOps culture in their environments. Besides that, he plays a key role in the company developing pieces of automation using tools such as Ansible, Chef, Packer, Jenkins, and Docker.