This course explores the Ansible Automation Platform. We'll cover how it works, examining components like modules, tasks, and playbooks. We'll demonstrate Ansible commands and how to use the command-line tool, Ansible Navigator. You'll also learn about variables, templates, and playbook basics.
Then we'll move on to the automation controller, the web UI and API, and you'll learn where it fits in and who would be using it. We'll also take a look at some enterprise features like Role-based Access Control and workflows.
- Learn the basics of Ansible, including its components: modules, tasks, and playbooks
- Understand Ansible commands and how to use Ansible Navigator
- Learn how to use variables and templates
- Use Ansible's web UI and API, known as Automation Controller
- Learn how to build a job template
- Understand how to use Role-based Access Control and workflows
This course is intended for anyone who wants to learn more about the Ansible Automation Platform in order to operationalize and put their Ansible workloads into production.
To get the most out of this course, you should have some knowledge of Linux or Red Hat Enterprise Linux. Existing knowledge of Ansible would be beneficial but not essential.
For this section, we're going to talk about creating automation. This could be a persona within an organization, the automation creator, or it could be anyone who's creating automation and working with it on the command line. We're going to cover Ansible Core, which acts as the bridge between the upstream community and the downstream product, the Ansible Automation Platform. So when you pip install Ansible and get the community version, you're getting the same Ansible Core you would with the downstream product, plus a bunch of community packages.
We're going to talk about playbook basics, a new technology called execution environments, and finally a new command-line tool included in your Red Hat subscription called Ansible Navigator. So what is Ansible Core? I mentioned it before: it is the command-line tool and framework that underlies all Ansible automation. In the past it's been called ansible-base, and before that it was simply Ansible. Ansible Core refers to just the CLI, the language, the framework, and the functions that make up Ansible.
So why is Ansible Core important? Ansible Core is the focal point between the upstream community Ansible project and the downstream Red Hat Ansible Automation Platform product. When you run ansible-playbook, you're running Ansible Core under the hood. When you run the Ansible Automation Platform, you're interacting with Core just the same to execute playbooks. The difference is that the community project is command-line only.
There's a variety of different projects within the Ansible Automation Platform, and all the product components are open source. But what you mostly see in the upstream community is just Ansible Core plus the collections. We're going to cover all the enterprise components of Red Hat Ansible Automation Platform, which bundles everything together into a feature-rich product. That's the difference between the platform, the product, and the upstream community.
Before we deep dive, I want to zoom out and show a Hello World Ansible playbook. Chances are you've seen this one on our website, in an e-book, or somewhere similar. But this is just how simple a playbook is. I think anyone who's ever worked on a Linux system could look at this and take away what's happening here. It's running on a host named web; become must be some sort of privilege escalation, which we'll talk about later.
And this playbook has three tasks. It's going to install httpd, it's going to template something (which we'll get into), and then it's going to turn on the httpd service on this machine. This is why Ansible is so simple and so powerful: not only is it really easy to understand, but in just a few lines of YAML we can install web servers on who knows how many machines. We haven't covered groups and nodes yet, so this could be one web server or many more. And the real power here is that it's self-documenting: when someone reads your playbook, it's not like reading Python code, C code, or a bash script. It does this, then it does this, then it does that. It's very simple to understand.
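As a sketch, the playbook described here might look like the following (the exact module choices and the template file name are assumptions based on the description, not the slide's exact contents):

```yaml
---
- name: Install and start Apache
  hosts: web
  become: yes
  tasks:
    - name: Install the httpd package
      ansible.builtin.yum:
        name: httpd
        state: present

    - name: Template a file onto the host (index.html.j2 is an assumed name)
      ansible.builtin.template:
        src: index.html.j2
        dest: /var/www/html/index.html

    - name: Start and enable the httpd service
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: yes
```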
So what makes up a playbook? We have plays, we have modules, and we have plugins. These are the components of a playbook, and we'll dive in and show what they are in the example we just looked at. A playbook can actually have multiple plays. The play stanza was at the top; it said install and start Apache, and it told us what hosts it runs on. This could be a host named web, or it could actually be a group of web servers that we run on simultaneously, in parallel. Finally, we can put our privilege escalation here too: become: yes allows us to sudo into those boxes by default.
Modules: each task has a one-to-one correspondence with a module. So when we look at those tasks, this one, the template task, is using the template module. The other thing to mention here is that most modules are written in Python. Ansible Core is actually written in a way where they don't have to be Python; it just happens that Ansible Core is written in Python, so most of the modules are also written in Python. One example of the exception is Windows automation, where we actually use PowerShell.
So when you look at the modules for Windows, they are actually PowerShell modules, and they execute on those Windows boxes in PowerShell. Plugins: technically, modules are a type of plugin; the major difference is that a module is executed as a task. Plugins are all the extra bits that allow us to change Ansible's behavior or run interesting bits of code outside of the tasks themselves. There's a ton of different types of them, and I think the most common is become, which is technically a plugin for privilege escalation. It can be set at the play level or at the task level.
There's also a bunch of filter plugins. Filter plugins allow us to change the variable output of a particular task into JSON or YAML; you can even change it into CSV. If for some reason you just love using Google Sheets or Excel, you can take that data and shove it into a CSV. And there's a ton of other extensible plugins you can use with Ansible. One comment I get a lot is "I like using Python" (or whatever programming language of their choice). What they're missing, that "whoosh" moment, is that you can take code you write, use it as a plugin, and plug it into Ansible. You take advantage of all the other things Ansible does really well, even if you prefer doing a particular function or piece of code your own way.
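For example, a couple of built-in filter plugins in action (the variable contents here are made up purely for illustration):

```yaml
- name: Demonstrate filter plugins
  hosts: localhost
  gather_facts: false
  vars:
    server_info:
      name: webserver1
      port: 80
  tasks:
    - name: Show the variable as pretty-printed JSON
      ansible.builtin.debug:
        msg: "{{ server_info | to_nice_json }}"

    - name: Show the variable as YAML
      ansible.builtin.debug:
        msg: "{{ server_info | to_nice_yaml }}"
```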
So, inventory. When we said web before, you can see that that's a group. Where we have web in brackets, [web], we actually have two web servers: webserver1.example.com and webserver2. We actually have three groups here: web, db, and switches, so we can have one inventory that multiple playbooks are hitting, and you can tell a particular playbook to run only on a particular piece of the inventory. The playbook we just showed was only running on the web servers, where it wouldn't make sense on the switches: if those are Cisco IOS devices, we obviously don't want to install httpd on a Cisco switch.
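An inventory like the one described might look like this in INI format (the db and switch host names are assumptions; the web hosts and group names come from the description above):

```ini
[web]
webserver1.example.com
webserver2.example.com

[db]
db1.example.com

[switches]
switch1.example.com
```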
So, roles. I mentioned this before; we can store them in a collection. Roles are basically reusable playbooks, written in a way where we can group tasks into a reusable structure. With the playbook from before, we could create a web server role so that anyone, instead of having to cut and paste three tasks, could just reference the web server role, and that configures the web servers for them. You'll see this happen when you have more than three or four tasks in a row that are always used the same way: that becomes a good candidate for a role. And you'll see this on a lot of projects; I'm just thinking of examples in my head.
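As a sketch, consuming a role in a playbook could look like this (the role name webservers is an assumption):

```yaml
---
- name: Configure web servers with a reusable role
  hosts: web
  become: yes
  roles:
    # Replaces the three cut-and-paste tasks: install, template, start service
    - webservers
```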
The workshops project we build has, I think, 15-plus roles: a role to set up a control node, a role to set up the hosted nodes, a role to create a little website for people to log in. You can daisy-chain all these roles together, and you can have multiple plays in a playbook. They're just reusable pieces of code. So, collections: this is how we share content for Ansible, and that includes roles. A collection can have one or more roles within it. Previously, multiple years ago, the only way you could share content was one role per Git repo (GitHub, specifically). Collections changed that paradigm: a collection is a tarball where we can put multiple roles, plus playbooks, plugins, docs, and tests. It's basically a file structure that we all agreed to in the community for creating content.
For example, there's a Cisco IOS collection, there's an Arista EOS collection, there's a NetApp collection. There's a collection for all the vendors you know and love, and I'll show Automation Hub later. You can also create collections within your own organization. I have a network toolkit collection that I created, containing only roles and playbooks dealing with network devices. The collection is the standard unit of automation, and the reason collections exist is so that content can be released asynchronously from Ansible Automation Platform releases.
For example, if a new module or feature came out in the old days of Ansible, you'd have to install the newest version of Ansible. Now it's just like an app on your phone: I don't update the calculator or Facebook app by updating my phone, I just update the apps, right? Collections are just that. They make sense; that's how we distribute content, and you can adopt them asynchronously from releases of the Ansible Automation Platform. Again, we're going to show some code, and in fact it's YAML, so don't freak out too much. We're just showing a particular collection.
This is a deploy NGINX collection, and you can tell it's a collection by the folder structure here. It would be hard for you to know unless you've already memorized the file structure, which I'm not expecting any of you to have: it has a folder for playbooks, a folder for plugins, and a folder for roles. When I include the role, I can run different roles that are within there. And the playbook in here is being displayed.
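Roughly, a collection's folder structure looks like this (the names below are illustrative, not the exact contents of the collection on the slide):

```text
deploy_nginx/
├── galaxy.yml          # collection metadata
├── playbooks/
│   └── site.yml
├── plugins/
│   └── filter/
└── roles/
    ├── install_nginx/
    └── configure_nginx/
```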
When we run the playbook, it includes multiple roles, and these roles can be deployed in the single collection, meaning I could hand this collection to someone on my team and they could take that collection and run a role. That collection may have dependencies on other collections; there's a requirements file in there, and it will install those as well if it needs to. The slide says there are over 90 certified platforms; I think there's actually over 100 as of today, and that number keeps growing.
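The requirements file mentioned here is typically a requirements.yml; a sketch (the collection names and the version pin are assumptions for illustration):

```yaml
# collections/requirements.yml
# Install with: ansible-galaxy collection install -r requirements.yml
collections:
  - name: cisco.ios
  - name: ansible.posix
    version: ">=1.3.0"   # optional version pin
```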
Certified specifically means that there is a hand to shake, or a throat to choke, depending on the term you want to use (I like to be more positive). It means that, as part of your Red Hat subscription, you know it's fully supported. We have contracts with these partners and have signed the T's and C's so that, you know, this is the official collection. There are community upstream collections, and this is a way to differentiate between what's fully supported by that particular vendor or Red Hat and what's just community: someone creating a proof of concept or a demo. These certified platforms and certified vendors are all found on console.redhat.com as part of your Red Hat subscription.
So now that you understand what a collection is, I want to talk about a paradigm shift that's happened. We have tons of different collections, as we talked about: over 100 different collections and dozens and dozens of certified partners. Plus, you'll have your own collections that you build and distribute within your organization. Each collection may have different dependencies. These could be Python dependencies or system-level libraries, and they might require different versions of Ansible.
To solve this problem, we've introduced a new technology called automation execution environments, or EEs for short, as you'll hear some of the Ansible engineers refer to them. These are components where we take all of the requirements to execute automation: the collections you need, the playbooks you want to run from those collections, and the libraries you need, whether that's a Python dependency, an API client, or an SDK. You tie this together with the Ansible Core version you want to use, on top of a Universal Base Image, and we package it all into an execution environment. These are available on registry.redhat.io as part of your Red Hat subscription, they're completely bundled within the Ansible Automation Platform, and we provide these execution environments by default.
There's no requirement to build them: you get all of the supported content in one execution environment out of the box. However, we also supply a minimal execution environment that you can build on top of. So if you have certified content from a partner plus your own content, you can bundle that together in an execution environment. This leverages container technology, but we only use Ansible tooling and we fully support it, so there's no need to understand container technology or be a container or Kubernetes expert. We bundle this all within the platform and make it really easy to use. This basically deprecates Python virtual environments and having to replicate those across different environments.
Now, if I build an execution environment and share it with another person on my team, they can get up and running much more quickly than if they had to replicate all the same requirements from my control node on their control node, mimicking the Ansible Core version and so on. A content creator can use another command-line tool that we're not going to cover today. It's a little more advanced (not hard, just a different persona, and we bundle these execution environments for you): the execution environment builder, or Ansible Builder for short. The builder command will bundle all of this up and create an execution environment, which you can then publish to your private automation hub. We will elaborate on private automation hub later in this presentation, but it is basically a local, self-hosted, or on-premises solution for hosting execution environments and/or collections.
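Ansible Builder reads a definition file, commonly named execution-environment.yml; a minimal sketch (the referenced file names are assumptions):

```yaml
# execution-environment.yml -- built with: ansible-builder build -t my_ee:latest
version: 1
dependencies:
  galaxy: requirements.yml   # collections to bake into the image
  python: requirements.txt   # Python libraries (API clients, SDKs)
  system: bindep.txt         # system-level packages
```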
Finally, the content creator can run execution environments using the new command-line tool called Automation content navigator. The actual command-line command is ansible-navigator, and I'll showcase it on the command line in a little bit. This command is very similar to the ansible-playbook command, if you're used to open-source Ansible. However, it has a lot of enhanced features, including the ability to run execution environments. So instead of Ansible executing locally on your Linux machine, on your control node, it will run in the specified execution environment for your development environment.
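Running a playbook inside an execution environment might look like this (the playbook name and the image tag are assumptions for illustration):

```shell
# Run site.yml inside a specific execution environment image,
# with plain stdout output instead of the interactive TUI
ansible-navigator run site.yml \
  --execution-environment-image registry.redhat.io/ansible-automation-platform/ee-supported-rhel8:latest \
  --mode stdout
```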
What this really helps with, as you operationalize your automation and go from the developer stage to putting it into automation controller (previously known as Tower), is a much more seamless experience, because you don't have to replicate requirements when going from command-line Ansible up to automation controller, or across different sites. If you have separate Ansible Automation Platform deployments in Japan, Korea, and so on, it becomes really easy to replicate the automation because you're packaging it all together. You're not just handing someone a playbook and leaving them to figure out all the requirements to run it.
We will cover Automation content navigator, which has a cool new interactive mode that lets you zoom in and out of plays; I'll showcase that today as well. Another thing we'd like to mention in the getting-started part is how Ansible automation works, and it's a little different depending on what we're automating. Consider automating API endpoints, like the AWS S3 API, or network devices like Cisco IOS, where all we have is a command line.
We actually execute that automation locally, running parallel processes on the control node. To get more execution capacity, the Ansible Automation Platform can be installed as a cluster, so you can spread out the compute power if that becomes a problem. I think it scales very well; on my own laptop I can probably do a couple of hundred routers before it starts to struggle. So it does scale, and that's always a question I get: it does really well, and we have a platform solution for scaling this up to the tens of thousands. When we execute against Linux or Windows hosts, they actually have the ability to execute code, so we can scale even further because that code is executed on the end system.
On the RHEL nodes I'll be showing today, Ansible actually copies over the module code, executes it on that Linux device (a Red Hat Enterprise Linux 8 device I have set up), and then sends back basically a JSON blob saying what it did, the before and after, and whether it was successful. Then it deletes itself off the system. So we're agentless, piggybacking on SSH, but we do execute on the remote systems when we can, to help scale even further. Now that you have an overview of Ansible and how it works, let's get into some Ansible basics.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).