This course looks at the Ansible Automation Platform: how it works and the components that make it up, such as modules, tasks, and playbooks. We'll demonstrate Ansible commands and show you how to use the command-line tool Ansible Navigator. You'll also learn about variables, templates, and playbook basics.
Then we'll move on to the automation controller, the web UI and API, and you'll learn where it fits in and who would be using it. We'll also take a look at some enterprise features like Role-based Access Control and workflows.
Learning Objectives
- Learn the basics of Ansible, including its components such as modules, tasks, and playbooks
- Understand Ansible commands and how to use Ansible Navigator
- Learn how to use variables and templates
- Use Ansible's web UI and API, known as Automation Controller
- Learn how to build a job template
- Understand how to use Role-based Access Control and workflows
Intended Audience
This course is intended for anyone who wants to learn more about the Ansible Automation Platform in order to operationalize their Ansible workloads and put them into production.
Prerequisites
To get the most out of this course, you should have some knowledge of Linux or Red Hat Enterprise Linux. Existing knowledge of Ansible would be beneficial but not essential.
We're going to move into roles: what they are and what they look like, and then we'll talk about Galaxy and Automation Hub. A role is essentially a disaggregated playbook. When you have two or more people working on the same playbook, you need to break it up into small, easy-to-deploy chunks. What that means for us, if we look back at our handlers example, is that the handlers could actually live in a separate file. That makes sense when you have, say, 12 different handlers; we don't want a 500-line playbook. We might want to separate the handlers out, especially when they only run sometimes, and it's usually apparent what they do if they're just restarting a service or flapping a port or whatever they happen to do.
The example on the screen right now is an nginx role. There's a playbook section with example playbooks, and you can see they use include_role; this is actually a collection containing multiple roles, and we'll elaborate on what a collection is shortly. This role is breaking apart that playbook: defaults are the default variables for that particular role, then there are files, tasks, which is our main playbook, and templates, which is where we would put our Jinja2 templates, and so on. So let's dive into that role structure a bit more. The defaults could be something like a default port. In the httpd web server example we've been showing over and over, we could default to port 80, 8080, or 443, but we can make that configurable.
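As a rough sketch of what calling a role from a collection with include_role can look like in a playbook (the namespace, collection, and role names here are hypothetical, not the exact ones shown on screen):

```yaml
---
- name: Configure a web server by calling a role from a collection
  hosts: webservers
  become: true
  tasks:
    - name: Pull in a role that ships inside a collection (hypothetical names)
      ansible.builtin.include_role:
        name: example_namespace.example_collection.nginx_config
```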
So the playbook always runs and by default it configures a sensible port, but it's now a knob: a variable that we can change on demand in our playbook. Handlers is the directory I used as an example earlier; this is just where we put all the handlers, and they're triggered off the name of that handler. The meta directory holds requirements and dependencies for the role, as a way to package it. Tasks is the heart of the role; this is the playbook. The minimum requirement for a role, and you'll actually see this with some roles, is just a folder with tasks/main.yml. And since I can have one file, I can also have multiple task files and include them: run this task file, then this one, then this one. So that tasks folder can contain multiple YAML files and even subdirectories.
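A minimal sketch of that knob, assuming a hypothetical httpd-style role with a configurable port (the variable name is made up):

```yaml
# roles/apache_vhost/defaults/main.yml (illustrative)
---
apache_listen_port: 80   # sensible default; callers can override it
```

A playbook, a survey, or an extra var on the command line can then override that default per run; the precedence rules for that are covered just below.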
One of the things you'll see in here is that it's common to do includes or conditional statements within the tasks directory based on the operating system. On Linux, for example, we use YUM or DNF with RHEL, but we might want to use APT for something like Ubuntu. Templates is where we can put our Jinja2 templates in a single directory and call them from anywhere in the tasks directory. Tests is used more as you integrate, for running unit tests on your role, so if anyone changes anything you can kick off CI and test that particular role. As roles become more and more production ready, they'll usually include tests, and the roles included with your Red Hat subscription always have tests; it's essentially a requirement for us. And then vars: these are variables that we expect people will change.
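A rough sketch of that OS-based branching inside a role's tasks directory (the included file names are illustrative):

```yaml
# roles/example_role/tasks/main.yml (illustrative)
---
- name: Include RHEL-family tasks (YUM/DNF based)
  ansible.builtin.include_tasks: redhat.yml
  when: ansible_facts['os_family'] == "RedHat"

- name: Include Debian/Ubuntu tasks (APT based)
  ansible.builtin.include_tasks: debian.yml
  when: ansible_facts['os_family'] == "Debian"
```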
Defaults are similar to vars in a way, but vars beat defaults in the order of precedence. This is where we would want people to configure different variables, or you could use a survey, which we'll talk about later, or an extra var on the command line to override any variable in there. So I mentioned collections. What is a collection? It started out as a way to bundle multiple roles. Historically, Ansible had a one-to-one mapping of role to repository in Ansible Galaxy, which was originally a distribution mechanism for roles. But how do we contain, say, 10 roles? If I make a network collection for IOS and I have 10 different roles I want to share, it was annoying to have 10 separate Git repos to share with people. Plus, there was no well-defined way to share content other than roles; for example, how do we share a module?
Say we have a vendor or a company like Cisco or F5 or NGINX. How do they update a module separately from the distribution of Ansible itself? So we designed the concept of collections. A collection allows us to have multiple roles; it allows us to have playbooks; it allows us to have plugins, which are actually Python code embedded in the collection; and modules are a type of plugin in there. So we can update modules asynchronously from the Ansible distribution. If there's a new feature for my network switch, I can just download the new collection, not unlike my iPhone: I don't update iOS on my iPhone to get the newest Calculator app, I just update the Calculator app.
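As a hedged sketch of how a collection is packaged, this is roughly what the galaxy.yml metadata file at the root of a collection looks like (the namespace, name, and other values here are illustrative):

```yaml
# galaxy.yml at the root of a collection (illustrative values)
---
namespace: example_namespace
name: network_utils
version: 1.0.0
readme: README.md
authors:
  - Example Author <author@example.com>
# Alongside this file the collection typically carries roles/, playbooks/,
# plugins/modules/, docs/, and tests/ directories, and the whole thing is
# built into a distributable tarball with `ansible-galaxy collection build`.
```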
Collections also contain documentation and tests. Your role can have tests and your collection can have tests that call the roles; it just depends on how you want to set that up. It's a predetermined file structure for how we contain all of this, and then we distribute it as a tarball. So now that you have the equivalent of an app, we need a distribution mechanism. How do we share these with other folks? What we've done is create Automation Hub. This is the downstream offering, included in your Red Hat subscription, for sharing content between organizations. On console.redhat.com we have our cloud offering with all the supported content from us, Red Hat, as well as our partners: public cloud providers like Microsoft Azure, Google Cloud, and Amazon AWS, and the network vendors I keep mentioning, being a network guy.
There's also content from other parts of Red Hat, like the Red Hat Satellite collection. The cool part is that we also offer private Automation Hub, and I'll show the architecture map in a second. This allows you to create private content that you don't have to put on console.redhat.com and that you can share within your organization. So if you have an automation team that develops content and you want to share it with other automation teams, they can distribute it without having to upstream it and open source it. We at Red Hat open source everything, but that doesn't mean your company wants to put everything out in the open, like a proprietary playbook for your particular use case.
Private Automation Hub allows us to sync content from Automation Hub, which is our downstream distribution mechanism for supported and certified content. We also have Galaxy, which I mentioned before. Ansible Galaxy is the upstream community, so anyone can publish to Ansible Galaxy. That's good, but it can also be bad, because there's no easy way to determine what is community content versus supported content. That was a big ask from our customers: understanding which content they have an SLA on, a hand to shake and a throat to choke, and which content is just developed by the community.
This way, private Automation Hub allows you to sync your custom content, the certified content, and Galaxy content. You might even use Galaxy content to create custom content that you then rubber-stamp internally, or you can work with Red Hat to certify it; we want to certify as many collections as we can. We just want to make sure they meet the stringent tests and requirements we have, so that our support team can take cases on them and make sure everything runs smoothly. Private Automation Hub is something you run self-hosted, whether on-premises in a data center or in your public cloud account; it doesn't matter. It has very low requirements, and it syncs to the Ansible Automation Platform cluster, which would be automation controller. So you sync it to your cluster, you get that content, and it could be pre-baked playbooks that you deploy on whatever you want.
So that's the high-level architecture diagram of how private Automation Hub works. This is how you would distribute these collections, and it goes back to roles: we shove lots of roles into a collection, which gives you pre-done automation. So how can we get these collections? As the middle of that architecture diagram showed, we can pull them via controller. Your project can also have a requirements file, so we'll dynamically grab all the collections you list there, and you can point that at your private Automation Hub versus Galaxy versus console.redhat.com. You can also just pull them on the command line with the ansible-galaxy command. Even though the command name is galaxy, it works for both Automation Hub and Galaxy; you just point it at the server, as technically all of them are considered Galaxy servers.
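A sketch of what such a requirements file might look like in a project; controller, or an `ansible-galaxy collection install -r` run against it, pulls everything listed (the example_namespace collection and the private hub URL are hypothetical):

```yaml
# collections/requirements.yml (illustrative)
---
collections:
  - name: ansible.posix              # pulled from whichever Galaxy server is configured
  - name: community.general
    version: ">=7.0.0"
  - name: example_namespace.network_utils
    source: https://hub.example.com/api/galaxy/   # hypothetical private Automation Hub URL
```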
Automation Hub is just the supported server for the downstream content, where Galaxy is all the upstream community stuff; it's just a different type of Galaxy server. Galaxy has tons and tons of content. It's been in existence about as long as I've used Ansible, and it's where a lot of people get started; you'll see tons of content out there. The background I like to show here is from an AnsibleFest event, back when we got to do them in person. It's a ton of fun if you've never been to one. The point is that sharing content builds community and makes it really easy for people to reuse different Ansible content.
With that, let's show a practical example. I'll look at a role and execute it on the command line with Ansible Navigator, and elaborate on a few of the things we just talked about. I'm back in Visual Studio Code, and we're going to look at a role. This is an exercise from our Ansible workshops; you can see it has been translated into a bunch of languages: Spanish, French, Japanese, Portuguese, Brazilian Portuguese. But we're going to look specifically at the role here. The first thing we need in order to actually use a role is a playbook; everything starts with a playbook, even when we're using roles.
In this case we have a playbook. It runs on one node here, node2, with become: true, and we have a new concept we haven't talked about much: pre_tasks and post_tasks. I don't see them used as much anymore, but they're still completely fine to use. The reason is that I could rewrite this playbook with just tasks: run this task, then do include_role, then run another task, which actually saves me some lines. It's just another way to write it, so this is a perfectly fine way of doing it, not a bad practice by any means; but if you don't see pre_tasks or post_tasks that often, it's because people just use include_role. So in pre_tasks, it's going to print 'Beginning web server configuration'. Under roles, it's going to run a role called apache_vhost. And then in post_tasks, it's going to do a debug statement and print 'Web server has been configured' to the terminal window.
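A sketch of what a playbook like that might look like (reconstructed from the description, so details may differ from the exact workshop file):

```yaml
---
- name: Configure a web server with a role
  hosts: node2
  become: true

  pre_tasks:
    - name: Announce the start of configuration
      ansible.builtin.debug:
        msg: "Beginning web server configuration"

  roles:
    - apache_vhost

  post_tasks:
    - name: Announce that configuration finished
      ansible.builtin.debug:
        msg: "Web server has been configured"
```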
By default, wherever the playbook is, Ansible is going to look in its current working directory first. If there's a directory next to the playbook named roles, and it has to be named roles, that's where it looks. In this roles directory there's only one role, apache_vhost; there could be multiple roles in here. That's how they map together: it looks directly in the local folder. Now, if I'm using a collection I've installed on my machine, it's globally available, and we would call the role by the collection namespace. So if I had my own namespace called sean and my role was called apache_vhost, it would be namespace.collection_name.role, something like sean.<collection_name>.apache_vhost.
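To make that difference concrete, a hedged sketch of both ways of referencing the same role (the namespace and collection name here are made up):

```yaml
---
- name: Two ways to reference the apache_vhost role
  hosts: node2
  become: true
  roles:
    - apache_vhost                            # found in ./roles/ next to the playbook
    - sean.example_collection.apache_vhost    # fully qualified name from an installed collection (hypothetical)
```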
Then you can call it like that when it's installed on your machine, your control node, or packaged inside an execution environment for you. But when you have it in your local folder like this, you don't need the fully qualified collection name; we don't need it because the role is right here. That's just to show you the difference, and then it's going to run the post_tasks. So let's look at that role. In here we have files, handlers, tasks, and templates. You'll see you don't need all of the folders we discussed, just the bare minimum. Let's look at tasks/main.yml, because that's where execution goes.
When we call this role, it goes to tasks and it goes to main.yml every single time; it has to be called main.yml. There are ways around that, but by default you don't want to mess with it, or it becomes difficult for someone to read your playbook if you're just turning knobs to confuse people. So this is going to install httpd. The second task is going to make sure the httpd service is started and enabled, so it will come back up if that box loses power or is rebooted. Then it's going to make sure a file is present, naming it using ansible_hostname, which is a fact, and it's going to copy content over.
Now, the copy module is not the template module, so let's look in files. If you look at the files directory on the left, you'll see web.html is there. The next task uses a template, loading vhost.conf.j2, and under templates we have that vhost.conf.j2. This might not seem like much organization when there's only one file in each directory, but this is how roles are organized. When people know there's something wrong with the template, they know to go to the templates directory for that role; if they know it's a file, they know to look in files. This is just the standard file structure that Ansible role writers have agreed on over however many years Ansible has existed. That's why it's important, and it also makes it easier to work together on the code, because you know exactly where you're supposed to put things.
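Putting the tasks we just walked through together, here's a rough reconstruction of what this role's tasks/main.yml might contain (destination paths and other details are illustrative, not the exact workshop file):

```yaml
# roles/apache_vhost/tasks/main.yml (reconstructed sketch)
---
- name: Install httpd
  ansible.builtin.yum:
    name: httpd
    state: present

- name: Ensure httpd is started and enabled at boot
  ansible.builtin.service:
    name: httpd
    state: started
    enabled: true

- name: Copy a static page from the role's files/ directory
  ansible.builtin.copy:
    src: web.html
    dest: "/var/www/vhosts/{{ ansible_hostname }}/index.html"

- name: Render the vhost config from the role's templates/ directory
  ansible.builtin.template:
    src: vhost.conf.j2
    dest: /etc/httpd/conf.d/vhost.conf
  notify: restart_httpd
```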
So if you want to add a task, you put it in the tasks file. If you want to add a variable, it goes in the vars file. If you're adding a new template, you put it in the templates directory. It also makes things easier for people using Git or some other SCM tool, because there's less chance of collisions. Something I've discovered over time is that breaking the playbook apart into multiple components is a huge advantage: it makes it much, much easier for people to work on it, especially people who aren't great with source control management. A lot of playbook writers aren't programmers, so we don't necessarily expect them to be great with tools like Git.
By breaking this apart, there's less chance of conflicts, merge conflicts, or problems, because we separate the variables that change a lot from the ones that don't. So what else do we have in here? We have a handler for that notify, restart_httpd. In this example, very much like the one I showed earlier, it's the handler, but now it's in another file. You'll see that it is a task file; it's just in the handlers directory, and it also starts with three dashes indicating that it's a valid YAML file. With that, I'm going to run the playbook with ansible-navigator. We're going to use interactive mode because there are a lot more tasks in here, so it'll be interesting. Again, this one is only running on node2, if I remember correctly, and we'll go interact with it; note the --mode option again. I'll raise this window, and you'll see that there's a progress bar.
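For reference while that runs, a minimal sketch of what that handlers file might look like, with the kind of ansible-navigator invocation being described shown as a comment (the playbook file name is assumed):

```yaml
# roles/apache_vhost/handlers/main.yml (illustrative)
---
- name: restart_httpd
  ansible.builtin.service:
    name: httpd
    state: restarted

# Running the playbook in Navigator's interactive mode might look like:
#   ansible-navigator run site.yml --mode interactive
```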
You can see the task count going up. Even though there's only one node here, there are a lot of tasks, so that zoom functionality matters: in about 8 lines we can see all of the output. That's a huge advantage of interactive mode; 'Web server has been configured' shows up and I can still get the output. We can compare this to running the same role in stdout mode. This is a good example of why interactive mode exists when you have a bunch of different tasks running. Standard output mode is awesome, and a lot of people starting their Ansible journey love the ansible-playbook command, but as you can see, especially when I have the font large for recording, it scrolls right off the screen. It's really hard to get that thousand-foot view. That's why I compare Navigator to something like Google Maps: I can zoom in and out, it gives me more context, and I can zoom into a particular task.
You'll also notice when I run Navigator, you might have seen these going by on the machine, that I get these artifact files, and I haven't really dived much into those. These artifact files give me a JSON payload of that entire play, and they can be reopened by someone else with Ansible Navigator on another machine, so they can troubleshoot and see the exact same results that I have, which is pretty cool. With that, that concludes this example. I know it was short, and there's a lot more we could cover on roles, but let's move into automation controller and start talking about how we can manage automation and operationalize Ansible playbooks in an actual enterprise.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).