
Summary

Overview

Difficulty: Intermediate
Duration: 49m
Students: 1,374

Description

This is the final piece of content in our Learning Path. In this video, we will review some points related to the goals of the SysOps certification and take a look at the official sample questions provided by AWS.

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Hi, this is Eric Magalhaes speaking, and this is the summary for this learning path. In this summary we will have a quick review of the goals for the exam. We have already covered almost everything that we need for the exam; I will just add a few points to make sure that you have enough to go and take the exam. I will also review the official sample questions with you: we are going to take a look at them and answer them together. And I will talk about where to go next. I have separated some material for you to read, as well as a few features and pages that you should definitely check out. Finally, I will talk about the final quizzes and the official practice exam.

So, we covered almost all of that during the learning path. We actually covered all of it, directly and also indirectly, because if you take a look at those goals, you'll see that they are very, very broad, and sometimes it's kind of difficult to have a clear idea of what you really need to know for the exam. For some of those goals, the topic itself is just too broad to be contained inside a single certification. So, I want to point out two goals here. I want to quickly talk about identifying performance bottlenecks, and I also want to talk about demonstrating the ability to prepare for a security assessment of your use of AWS. We kind of touched on it, but we haven't really spelled it out, so I will give you an overview just to make sure that you connect the dots and understand what you need to achieve those goals.

So, to identify performance bottlenecks and implement remedies: the first thing is to avoid single points of failure. We know that ELB is highly available by default. We know that we can use auto scaling to increase availability and make sure that we launch instances in other Availability Zones in case we have problems in a particular Availability Zone. We know that, if you are even more concerned about high availability, you can also copy your data into another region and make sure that your application will be available across regions. So, we already know how to avoid single points of failure; Andrew Larkin did a great job covering that in the second course of this learning path.

Also, a great resource for identifying bottlenecks: sometimes you have already avoided single points of failure. You have highly available building blocks in your infrastructure, you're already using ELB, auto scaling, and so on, you already have a multi-AZ database and read replicas, but you haven't provisioned enough resources: not enough CPU power, for example, or maybe not enough memory for your application. So checking the metrics of your app is a great place to start. We saw that in the monitoring sections of the hands-on preparation for the SysOps exam, and we also have a whole lab just about CloudWatch. And we know that we have a way to check the metrics for a specific time period. So, for example, if you had an outage in your application, or you are experiencing an increase in your request times, you can take the time when you identified those issues, go to the CloudWatch console, and check the metrics for that particular time window.
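
If you prefer to script that check, here is a minimal sketch using the AWS SDK for Python (boto3); the region, instance ID, and time window are placeholders, not values from the course.

```python
# Sketch: inspect EC2 CPU utilization for a specific time window with boto3.
# The region, instance ID, and dates below are placeholders.
from datetime import datetime, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime(2023, 5, 1, 14, 0, tzinfo=timezone.utc),  # start of the outage window
    EndTime=datetime(2023, 5, 1, 16, 0, tzinfo=timezone.utc),    # end of the outage window
    Period=300,                       # 5-minute datapoints
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), round(point["Maximum"], 1))
```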

Still talking about performance: there are a lot of things that you could do to improve the Pizza Time application. I already created a fairly performant application. It's a fairly cheap application if you think about it, and it's very scalable, so we don't need to worry too much. But, for example, to get more performance out of our EC2 instances, we could use the ElastiCache service and cache the database responses in it. We could launch ElastiCache nodes in both regions, in the Sao Paulo region and also in the Oregon region. With that, we would get faster response times without needing to go to our databases at all, which is even faster than having read replicas. So, we could definitely use that.
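
To illustrate the caching pattern, here is a minimal read-through cache sketch, assuming a Redis-based ElastiCache cluster; the endpoint and the query_database helper are hypothetical.

```python
# Sketch: read-through caching of database responses, assuming a Redis-based
# ElastiCache cluster. The endpoint and the query_database() helper are hypothetical.
import json
import redis

cache = redis.Redis(host="pizza-cache.example.cache.amazonaws.com", port=6379)

def get_menu(region_id, query_database):
    key = f"menu:{region_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # served from ElastiCache, no database hit
    menu = query_database(region_id)         # fall back to RDS on a cache miss
    cache.setex(key, 300, json.dumps(menu))  # keep the response for 5 minutes
    return menu
```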

And as the Pizza Time app grows, we could change it even more. We could have our Angular app running on an S3 bucket behind a CloudFront distribution, because that is very fast, very scalable, and you don't need to manage much, so that part is great. But our instances, in the current scenario, are handling the orders, and we might end up in a situation where we have so many orders that we can't process simple requests, such as authentication requests. Maybe our authentication requests are timing out because we're already doing a lot of work with a few instances. So we could scale that out. For example, instead of saving the orders directly to the database from our API instances, we could use the API instances only to authenticate our users, and instead of sending the data requests directly to the RDS database, we could send those requests to an SQS queue. From that queue we could have a set of worker instances, or maybe a Lambda function, consume those orders and process them. Maybe we need to validate something, maybe we need to process the credit card information to make sure that we get paid; we can do a lot of work in this step, and only then write the orders to the database.

With that, we don't need beefy API instances, because the API instances will be used just to authenticate the app and do some supporting actions to make sure that our Angular app works as expected. We could offload all those requests to SQS, and SQS is highly available by design, so we would have a more scalable and cost-effective solution. For example, we can configure auto scaling to increase or decrease the number of worker instances depending on the number of messages in the SQS queue: if there are no orders in the queue, we could say that we don't want any worker instances at all. The same thing applies to Lambda: Lambda will only run if there is something to process, and while we are not processing anything, we are not paying for that piece of infrastructure. So, that's great, and that's very scalable.
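
As a rough sketch of that decoupling, assuming a hypothetical queue name and a save_order_to_rds helper, the API tier would only enqueue orders and a worker would drain the queue:

```python
# Sketch: decoupling order processing with SQS. The "pizza-time-orders" queue
# and the save_order_to_rds() helper are hypothetical.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-west-2")
queue_url = sqs.get_queue_url(QueueName="pizza-time-orders")["QueueUrl"]

def enqueue_order(order):
    # The API instance only authenticates and enqueues; it never touches RDS directly.
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(order))

def worker_loop(save_order_to_rds):
    # A worker instance (or a Lambda with an SQS trigger) consumes and persists orders.
    while True:
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)  # long polling
        for msg in resp.get("Messages", []):
            save_order_to_rds(json.loads(msg["Body"]))
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```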

Another almost textbook solution for our Pizza Time application would be using the Simple Workflow Service to manage our entire delivery workflow. We have this example of an e-commerce application, but it would work more or less the same for our pizza delivery system. We could start the order, verify the order, do some processing using Lambda or EC2 instances, charge the credit card using EC2 or Lambda or whatever, and then move to a ship-order step. We could, for example, track our delivery drivers and report the progress of the delivery to our users. We could add steps that say: hey, your pizza is being baked right now, we are putting the sauce on your pizza. We could do all sorts of things here because the Simple Workflow Service also supports manual actions, so we can create actions that depend on manual or human interaction. Then we could record the completion in our RDS database, say that everything is all right and the customer is satisfied, maybe create some review feature, and then finish our workflow. So, bottom line: by knowing the AWS services, by knowing how to identify problems and bottlenecks, and by knowing how to monitor your application, you'll be able to apply that knowledge to identify performance bottlenecks, apply remedies, and suggest solutions to the problem. There isn't a single answer for everybody. The perfect answer for you will depend on your environment, and that's why it's so important for you to understand how to monitor, how to collect information, and how to apply the technical knowledge that you have about AWS to solve the problem.
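
If you wanted to experiment with SWF, a workflow execution could be started with a call along these lines; the domain, workflow type, and task list names are made up for illustration.

```python
# Sketch: kicking off a delivery workflow in Simple Workflow Service (SWF).
# The domain, workflow type, task list, and order data are hypothetical.
import boto3

swf = boto3.client("swf", region_name="us-west-2")

swf.start_workflow_execution(
    domain="pizza-time",
    workflowId="order-12345",
    workflowType={"name": "DeliveryWorkflow", "version": "1.0"},
    taskList={"name": "delivery-deciders"},
    input='{"orderId": "12345", "pizza": "margherita"}',
    executionStartToCloseTimeout="3600",  # one hour, in seconds, as a string
    taskStartToCloseTimeout="300",
)
```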

It's more or less the same thing here talking about security. You need to demonstrate the ability to prepare for a security assessment of your use of AWS. We covered security: we showed you a lot of security best practices, we had an overview of security processes, and so on, and we know about the shared responsibility model. So, when preparing for a security assessment, the first thing that you need to do is collect information about your AWS infrastructure, and by AWS infrastructure I mean your EC2 instances, VPCs, RDS instances, S3 buckets, your disaster recovery strategy, your security groups, network ACLs, and so on. You document that and, if needed, you can ask AWS for the relevant security and compliance documents. AWS has a lot of certifications, ISO certifications and so on, so if you need specific documents to prove that your infrastructure running on AWS is compliant at the hardware level with the certification you need, you can ask AWS to provide those documents for you. Remember, you are not allowed to visit an AWS facility and you can't schedule that, so if needed you can ask for those documents and AWS will gladly share that information with you.
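
A small inventory script is one way to start collecting that information. This sketch only lists a few resource types and uses a placeholder region, so treat it as a starting point, not a complete assessment tool.

```python
# Sketch: collecting a basic inventory of AWS resources for a security
# assessment. The region is a placeholder; extend this to the services you use.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
s3 = boto3.client("s3")

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print("EC2:", instance["InstanceId"], instance["InstanceType"],
              [sg["GroupId"] for sg in instance["SecurityGroups"]])

for group in ec2.describe_security_groups()["SecurityGroups"]:
    print("Security group:", group["GroupId"], group["GroupName"])

for acl in ec2.describe_network_acls()["NetworkAcls"]:
    print("Network ACL:", acl["NetworkAclId"], "VPC:", acl["VpcId"])

for bucket in s3.list_buckets()["Buckets"]:
    print("S3 bucket:", bucket["Name"])
```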

Let's now answer the sample questions available on the SysOps certification page. These are the official sample questions. They are kind of easy, and the exam is actually a bit harder, but these questions are great because you can really get a feel for what you might expect from the exam. So, let's start.

When working with Amazon RDS, by default AWS is responsible for implementing which two management-related activities? So we need to pick two correct answers. This question is more or less related to the AWS shared responsibility model. We know that AWS will handle the hardware level and we are supposed to handle the logical level: we are responsible for our data, for our encryption, for how we configure our security policies and to whom we give access to our account. But we also know that RDS is a managed service, and for managed services AWS will do a few tasks that would be our responsibility for infrastructure services. So let's go through the choices. AWS would be responsible for importing data and optimizing queries: no, AWS will never be responsible for our data. Remember that we are always responsible for our data, so option A is not right. AWS would be responsible for installing and periodically patching the database software: that's true. For RDS, AWS will patch our database software for us; they have maintenance windows and we can configure the period for those maintenance windows. So, option B is one of the right options. AWS would be responsible for creating and maintaining automated database backups with a point-in-time recovery of up to five minutes: we covered that in the hands-on preparation for the SysOps exam, and that's right. AWS will create and maintain automated database backups unless we say otherwise: we can set the retention period for those backups to zero days, and in that case AWS will not maintain them, but we have to explicitly say so. So, AWS will do that for us. And last, creating and maintaining automated database backups in compliance with regulatory long-term retention requirements: AWS doesn't know about our compliance requirements, so they have no way of doing that. Option D is wrong. The right options are B and C.
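
For reference, this is roughly how the retention period and maintenance window mentioned above could be configured with boto3; the instance identifier and window values are placeholders.

```python
# Sketch: configuring the automated backup retention and maintenance window
# that AWS manages for you on RDS. The instance identifier is hypothetical.
import boto3

rds = boto3.client("rds", region_name="us-west-2")

rds.modify_db_instance(
    DBInstanceIdentifier="pizza-time-db",
    BackupRetentionPeriod=7,                           # 0 would disable automated backups
    PreferredBackupWindow="03:00-04:00",
    PreferredMaintenanceWindow="sun:05:00-sun:06:00",  # when AWS applies patches
    ApplyImmediately=False,
)
```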

You maintain an application on AWS to provide development and test platforms for your developers. Currently both environments consist of an m1.small EC2 instance. Your developers notice performance degradation as they increase the network load in the test environment. We talked about that: the smaller the instance, the lower the network performance it will have. So, the question is: how would you mitigate these performance issues in the test environment? Upgrade the m1.small to a larger instance type: that's right, because we saw in the hands-on preparation for the SysOps exam that if we upgrade our instance we get more network performance, and the problem in this case is performance degradation as the network load increases, so that will solve the problem. Add an additional ENI to the test instance: that could help with throughput if you were having problems with EBS volumes, for example, but it doesn't really solve the problem of network performance, because AWS has a limit for each instance type. You can get more throughput by adding ENIs, but you can't increase the megabytes per second by adding ENIs; you need to increase the instance type, so option B is wrong. Use the EBS-optimized option to offload the EBS traffic: that could potentially help, but larger instance types than the m1.small already give you more network performance, so you don't necessarily need the EBS-optimized option. And we haven't even talked about EBS in this question, so we might be dealing with an instance that uses an instance store. So that's really not the right option, but it is kind of tricky. And last, configure Amazon CloudWatch to provision more network bandwidth when network utilization exceeds 80%: we can't provision more network bandwidth, so that option is also wrong. The right option is option A.
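
Resizing the instance could be scripted along these lines (the instance must be stopped first); the instance ID and target type are placeholders.

```python
# Sketch: resizing an EC2 instance to a larger type to get more network
# performance. The instance ID and the target type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
instance_id = "i-0123456789abcdef0"

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "m1.large"})

ec2.start_instances(InstanceIds=[instance_id])
```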

So, per the AWS Acceptable Use Policy, penetration testing of EC2 instances: option A, may be performed by the customer against their own instances, but only if performed from EC2 instances. That is wrong; we can perform penetration testing from both inside and outside AWS. Option B, may be performed by AWS, and is periodically performed by AWS: no, AWS doesn't perform penetration testing on our instances, and since they don't perform it at all, it is not performed periodically either, so both assumptions are wrong. Option C, may be performed by AWS (already wrong) and will be performed by AWS upon customer request: wrong again. Option D, is expressly prohibited under all circumstances: no, we can do penetration testing against RDS and EC2, but we need to request permission for that, so this option is also wrong. That leaves us with option E, which is right.

You have been tasked with identifying an appropriate storage solution for a NoSQL database that requires random I/O reads of greater than 100,000 4 KB IOPS. Which EC2 option will meet this requirement? That's super easy. Remember that if you need more than 48,000 IOPS, you always need to use the instance store. We have SSD instance store as option B, and that's the only available solution for that particular problem.

Instance A and instance B are running in two different subnets, A and B, of a VPC, and instance A is not able to ping instance B. So, what do we know in this case? Both subnets, A and B, are inside the same VPC, and instance A is not able to ping instance B. What are the two possible reasons for this? The routing table of subnet A has no target route to subnet B: that's wrong. When we create a VPC, we also create a default routing table, and even if we create another routing table and associate it with one of these subnets, every routing table has a default route that says traffic within the VPC CIDR block has the target local. So if you have a VPC with a given CIDR block, there is a route saying that all requests to that CIDR block are forwarded locally, and you can't delete this route. So, it's not a problem in the routing table. The security group attached to instance B does not allow inbound ICMP traffic: that might be true. The question doesn't say anything about the security groups, and that could be the problem in this case. The policy linked to the IAM role on instance A is not configured correctly: we don't need an IAM role to ping another instance, so that option is completely wrong. Which leaves us with option D: the network ACL of subnet B does not allow outbound ICMP traffic, and that could also be true, so that option is right. The right options are B and D.
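
Fixing reason B would mean adding an inbound ICMP rule to instance B's security group, roughly like this; the group ID and source CIDR are placeholders.

```python
# Sketch: allowing inbound ICMP (ping) on the security group attached to
# instance B. The group ID and the source CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "icmp",
        "FromPort": -1,          # -1 means all ICMP types
        "ToPort": -1,
        "IpRanges": [{"CidrIp": "10.0.1.0/24", "Description": "subnet A"}],
    }],
)
```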

Your website is hosted on 10 EC2 instances in five regions around the globe, with two instances per region. How could you configure your site to maintain site availability with minimal downtime if one of the five regions were to lose network connectivity for an extended period of time? So, we are talking about a global application where we forward requests to more than one region, and we want to ensure that we maintain availability if a whole region goes down. We covered that, so we should be able to find the right answer; let's go quickly through the options. Option A: create an ELB to place in front of the EC2 instances and set an appropriate health check on each ELB. That sounds almost right, but we have a global application and an ELB doesn't span regions, so we would be creating a highly available application in a single region, not around the globe; option A is wrong. Option B: establish VPN connections between the instances in each region and rely on BGP to fail over in the case of a region-wide connectivity outage. It's a global application, so we need a central point where our users access the application, and that central point must be responsible for forwarding requests across the regions. The truth is we don't even need load balancers or auto scaling groups in those regions; we just want to maintain site availability if a whole region goes down. VPN connections will not solve that. We would connect all the VPCs across the globe, but we couldn't do anything useful with that: there is no way to fail over to another region in case of an outage, because we still haven't created a central point to forward the traffic to the other regions. So, option B doesn't make any sense. Option C: create a Route 53 Latency Based Routing record set that resolves to an ELB in each region, so far, so good, and set an appropriate health check on each ELB. That's almost right. We need health checks, but here we are setting the health check only on each ELB, so if the whole region goes down, there is no health check on the Route 53 side and Route 53 has no way to know that that particular region went down. That's not enough for us, so option C is also wrong. Option D is fairly similar to option C, so let's take a look at it: create a Route 53 Latency Based Routing record set that resolves to Elastic Load Balancers in each region and has the Evaluate Target Health flag set to true. We saw that in the last lecture of the hands-on preparation for the SysOps cert: we need a health check, and we also need to say that we want to evaluate the target health. By doing this, AWS will fail over automatically to the next closest location, since we are using Latency Based Routing. So option D sounds good; it's the right option.
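
A latency-based alias record with Evaluate Target Health enabled could look roughly like this for one of the five regions; the hosted zone, domain name, and ELB values are placeholders.

```python
# Sketch: one latency-based alias record pointing at a regional ELB, with
# EvaluateTargetHealth enabled. Hosted zone, domain, and ELB values are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000000000",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "us-west-2",          # one record set per region
            "Region": "us-west-2",                 # latency-based routing
            "AliasTarget": {
                "HostedZoneId": "Z1H1FL5HABSF5",   # the ELB's hosted zone ID (placeholder)
                "DNSName": "my-elb-1234567890.us-west-2.elb.amazonaws.com",
                "EvaluateTargetHealth": True,      # fail over if the region goes down
            },
        },
    }]},
)
```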

You run a stateless web application with the following components: an ELB, three web application servers on EC2, and one MySQL RDS database with 5,000 provisioned IOPS. Average response time for users is increasing, so we have a problem there. Looking at CloudWatch, you observe 95% CPU usage on the web application servers and 20% CPU usage on the database. The average number of database disk operations varies between 2,000 and 2,500. So, the only problem that I can see here is that we have high CPU usage on the web servers and the response time for users is increasing. Our database seems okay and everything else seems okay, although we don't have an auto scaling group. So, they ask us: which two options could improve response times? Remember, our problem is high CPU usage, and our application is running on EC2. We could choose a different instance type for the web application servers with a more appropriate CPU/memory ratio: that's correct, we could increase our CPU and memory power, and that would decrease the CPU utilization percentage of our instances because we would have more capacity. Use auto scaling to add additional web application servers based on a CPU load threshold: that's also correct. The question doesn't mention auto scaling at all, and instead of scaling up and choosing more powerful web servers, we could scale out and just increase the number of EC2 instances. So options A and B are right. Let's take a look at the other two. Increase the number of open TCP connections allowed per web application EC2 instance: we can't manage the number of open TCP connections from the AWS console. We could do that inside the instance, but our problem is CPU utilization, so option C is wrong. Option D is use auto scaling to add additional web application servers based on a memory usage threshold. Our problem is CPU usage, not memory usage. It might sound plausible, but remember that memory is not a standard metric, so we would have to configure custom scripts and custom dimensions to track memory utilization across all the instances in our auto scaling group, and our real problem is not memory, so that option doesn't really sound right. So options A and B are right.
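
Option B could be implemented with a scaling policy tied to CPU utilization. This sketch uses a target tracking policy with a placeholder Auto Scaling group name, which is one of several ways to do it.

```python
# Sketch: scaling out web servers on a CPU threshold with a target tracking
# scaling policy. The Auto Scaling group name is a placeholder.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-2")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="pizza-time-web-asg",
    PolicyName="scale-on-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,  # add instances when average CPU goes above 60%
    },
)
```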

Which features can be used to restrict access to data in S3? Pick two correct answers. I won't spend too much time here. We have two ways of controlling access on S3: we can use S3 bucket policies, and we can use S3 ACLs on the bucket or on the objects. The other options don't make sense. Create a CloudFront distribution for the bucket: that won't restrict the access. To restrict access through a CloudFront distribution we would have to create an origin access identity on the distribution, but we would still end up setting up an S3 bucket policy to restrict the access, so it's not the CloudFront distribution that restricts the access, it's the S3 bucket policy. This option is wrong. Option C, use S3 Virtual Hosting: that is not an access control feature, so it doesn't help here. And enable IAM Identity Federation: no, that doesn't solve our problem either. We need to define permissions on the bucket; we don't need to add identity federation to our IAM setup.
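
As an example of the first correct answer, a bucket policy can be attached with boto3 roughly like this; the bucket name and principal ARN are placeholders.

```python
# Sketch: restricting access to an S3 bucket with a bucket policy.
# The bucket name and the principal ARN are placeholders.
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyAppUser",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/pizza-app"},
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::corporate-bucket/*",
    }],
}

s3.put_bucket_policy(Bucket="corporate-bucket", Policy=json.dumps(policy))
```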

You need to establish a backup and archiving strategy for your company using AWS. Documents should be immediately accessible for three months and available for five years for compliance reasons. Remember when we talked about lifecycle policies, and that we can use them also for compliance reasons? So, this is a super easy question. Let's take a look at what else it says: which AWS service combination fulfills these requirements in the most cost-effective way? So, we want to be cost effective, great. Why do lifecycle policies help us with that task? Because the documents must be immediately accessible for three months, which means S3, and available for five years: they should be available, but we don't expect to access that data after the three months, so it's a perfect scenario to send the data to Glacier. Let's take a look at the options. Use Storage Gateway to store data to S3 and use life-cycle policies to move the data into Redshift for long-time archiving: first, Redshift doesn't do long-time archiving; Redshift is the petabyte-scale data warehouse from AWS, so it's already wrong. And although Storage Gateway stores data in S3, it stores that data in the form of snapshots, so you can't really apply life-cycle policies to it; that option is wrong. Use Direct Connect to upload data to S3, okay, unnecessary but okay, and use IAM policies to move the data into Glacier for long-time archiving: IAM policies are not meant to move data around, so that part is wrong. The Glacier part could be right, but Direct Connect is not cost effective either, so that option is wrong just for mentioning Direct Connect. Upload the data on EBS, use life-cycle policies to move EBS snapshots into S3 and later into Glacier for long-time archiving: complicated. Why would you really need EBS? Is it really cost effective? I don't think so. And we can't use life-cycle policies to move data from EBS to S3, so that option is also wrong. Which leaves us with option D: upload data to S3 and use life-cycle policies to move data into Glacier for long-time archiving. Great, that's what we want.
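
Option D could be set up with a lifecycle configuration along these lines; the bucket name is a placeholder, and 1,825 days is used as an approximation of five years.

```python
# Sketch: a lifecycle policy that moves documents to Glacier after 90 days
# and expires them after roughly five years. The bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="company-archive-bucket",
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-after-3-months",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},                              # apply to every object
        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 1825},                          # roughly five years
    }]},
)
```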

The last question talks about IAM policies. Remember, we had two whole lectures just about policies, and you should be able to read IAM policies for this certification exam. For me, the easiest way to understand a policy is by breaking it down. First thing, the version. The version is not even mandatory, and we know that we can choose between two versions, 2012-10-17 and 2008-10-17, so it doesn't make a big difference in this case. Then we find our statement list. Remember the square brackets: it's a list, and in here we have our statements. We have one statement and another statement, so we have two statements. Let's focus on that part. The first one is allowing something. What is it allowing? It's allowing all Get calls and all List calls. Notice the star after Get and after List: it is allowing all the Get requests and all the List requests for the S3 service, and for all resources, since we also have a star for the resource. So we are allowing Get and List on all S3 buckets that we have. The Get and List actions include GetObject, GetObjectAcl, GetObjectVersion, ListBucket, ListAllMyBuckets, and so on. And we have another statement here, also allowing something: it allows the PutObject action, so we are allowing someone or something to write data into this specific S3 bucket. So the second statement only allows the user to write into the corporate bucket. So, what does the IAM policy allow? We are supposed to pick three correct answers. First one: the user is allowed to read objects from all S3 buckets owned by the account. That's right; we are allowing that in the first statement. We allow Get and List, which gives read permissions on the resources, meaning all the S3 buckets owned by the account, so that's right. The user is allowed to write objects into the bucket named corporate_bucket: that's also right, that's exactly what the second statement does. We have the PutObject action, and we are allowing it only for the corporate bucket. The user is allowed to change access rights for the bucket named corporate_bucket: that's wrong. We are not allowing PutObjectAcl for this particular bucket, we are only allowing objects to be put there, so the user can't manage its access rights; we also are not allowing PutBucketPolicy, so access management on the corporate bucket is not allowed. The user is allowed to read objects in the bucket named corporate_bucket, so far, so good, but not allowed to list the objects in the bucket: well, that's wrong, because we are allowing all the List actions on our S3 resources, so we are definitely allowed to list objects in the bucket. That option is wrong. Which leaves us with option C: the user is allowed to read objects from the bucket named corporate_bucket. That's the first part of option D, and I said so far, so good, so that is allowed; we allow the user to read objects in the first statement when we allow all the Get actions. Wow, a lot of questions. I hope you are not sleeping right now. So, let's move forward to the next topic.
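
Since the policy itself isn't shown in this transcript, here is a reconstruction of it as described above: one statement allowing all Get and List actions on every S3 resource, and a second statement allowing PutObject only on the corporate bucket.

```python
# Reconstruction of the policy described above (as the transcript reads it):
# statement 1 allows all Get* and List* actions on every S3 resource, and
# statement 2 allows PutObject only on the corporate bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:Get*", "s3:List*"],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::corporate_bucket/*",
        },
    ],
}
```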

Here is a short list of some further material that I want you to take a look at. The first one is a great whitepaper, Overview of Security Processes. This is a very new whitepaper that will give you an overview of the whole AWS infrastructure; it talks about a lot of services and covers very quickly the security processes that you should follow in order to make your apps secure on AWS. It covers the security best practices around EC2, RDS, WorkSpaces, and a lot of other AWS services. So, read this whitepaper.

The second whitepaper is Amazon Virtual Private Cloud Connectivity Options. I was planning to show you how to create a hardware VPN, but you need specific hardware for that and I wasn't able to get my hands on a proper setup to demonstrate it, so you should take a look at this whitepaper and really understand the connectivity options topic. I also have a video here for that, an AWS video where they show you how to do it in the AWS console; they will show you what I was planning to show you, so that's great.

Remember when we talked about security assessments? That whitepaper is more or less 15 pages long and talks about security assessments for security or compliance reasons, so you should definitely take a look at it. The first part of the whitepaper is the most important for you, because you need to be able to demonstrate that you know how to prepare for security assessments; you don't really need to go deeper into the security assessments for each service or each part of AWS. Next is a page, also something new, that shows you troubleshooting techniques for EC2 connectivity issues. You should definitely check out this page; it's not that big. And there is also a video, more or less related to this page, that also talks about troubleshooting connectivity issues for EC2.

The only material that I won't say is mandatory is the general FAQ page. But if you already have some experience with AWS, you know that they like to use FAQs for certifications, so FAQs are a great resource. Sometimes, though, it's a bit boring to go through all the service pages reading the FAQs, because you don't really know which ones are the most important. So what AWS did, and I don't know if they did it for the certification, I certainly don't expect that, but knowing AWS, my gut tells me that they will use the FAQs from this page a lot for the certification, was to take some EC2 FAQs, some RDS FAQs, some general FAQs, and put them all into a single page. I might be wrong, but making an educated guess, I would say that this page will be very useful for the certification. So, at least take a quick look at it, make sure that you know more or less what it's about, and check out some of the FAQs on this page, if not all of them. It's a bit long, but it will be worth it.

After this summary, you will have access to the final quiz of this learning path. This quiz covers all the topics that you need to know for the SysOps Administrator exam. We covered all those topics during this learning path, but we didn't explicitly cover all the questions. What do I mean by that? For some of the sample questions, you might have noticed that I explicitly talked about them during the videos, while for other questions you need to connect some dots, remember a few things, and find the right answer. That will be the case for a few questions in this quiz. The questions will not be that obvious, and that's the point. You can use the explanations at the end of the quiz to learn more about each topic and study a little bit further.

As I mentioned, check out the FAQs. They are very important, and you should check out at least the FAQs for the most important services. Again, AWS loves to take those questions and put them into the certification exam, so they will be very useful for you. AWS whitepapers are very real-world oriented, and AWS loves to put real-world scenario questions in the exam, so whitepapers are a great source of knowledge for certifications as well; at least read the whitepapers that I have listed in our GitHub repository. The AWS documentation is a bible. It's more general, but if you don't find the answer in the FAQs, and you don't find the answer in the whitepapers, you will for sure find it in the AWS documentation. I don't recommend that you read everything there; I guess it's a bit impossible. Maybe you are a bit crazier than me and you can do that, but regular crazy people like me can't handle reading the whole documentation, so use it in moderation. For the certification, I really recommend that you focus more on the FAQs and the whitepapers.

After taking the final quiz, I strongly recommend that you take the official practice exam. It's available on the same website where you will schedule and take the real exam. It is also a great tool for self-assessment: you will get questions that look very close to the real ones, and you'll be assessed more or less in the same way that you would be assessed in the real exam. At the end, for example, you will receive a breakdown of all the domains in the exam and your score in each of those domains. That's why it's so important that you take the practice exam and make sure that you are ready. If you see that you are too weak in one of the domains, go back to this training, take our labs, go directly to the AWS console, and work on your weaknesses, because, as you saw, the exam weights all domains more or less equally, which means that you need to make sure that you know about everything; you certainly can't focus on a single domain. So take the practice exam, make sure that you are ready, and only then schedule the real exam.

Finally, it's time to say goodbye. If you have any questions or issues related to our product, maybe your player is not working or there is an issue with one of our download packages, shoot us an email at support@cloudacademy.com. If you have questions or issues not related to our product but to the topics of the certification, maybe you want to know more about VPC endpoints, Route 53, or cross-region read replicas, we have our community: that's the best place to ask those questions, and one of our trainers or one of the members of our team will reply to you as soon as possible. And I want to hear your feedback. I want to hear what you think about this course and whether you have any suggestions. Whether you liked it or not, please say so in the comments below, and please also say why. Sometimes people just say, oh, this course sucks. I hope you don't say that about this course, but please specify why; we love to work on any issues and we also love to hear your general feedback. So, thank you for watching. I wish you all the success in the certification, and if you pass, please let us know somehow: post it in our community, in the comments, shoot us an email, or shoot me an email; you can find my email on our employees page. Thank you for watching, and good luck.

About the Author

Students: 13,401
Labs: 11
Courses: 6

Eric Magalhães has a strong background as a Systems Engineer for both Windows and Linux systems and currently works as a DevOps Consultant for Embratel. Lazy by nature, he is passionate about automation and anything that can make his job painless; thus his interest in topics like coding, configuration management, containers, CI/CD, and cloud computing went from a hobby to an obsession. He currently holds multiple AWS certifications and, as a DevOps Consultant, helps clients understand and implement the DevOps culture in their environments. Besides that, he plays a key role in the company developing pieces of automation using tools such as Ansible, Chef, Packer, Jenkins, and Docker.
