
Ansible Galaxy



An overview of Ansible Tower and Ansible Galaxy


Welcome back to Introduction to Managing Ansible Infrastructure. If you've been following along through the Ansible courses, you should be well aware of the importance roles play in Ansible. It is not always necessary to write your own roles from scratch, and many community roles already exist to manage common tasks and services. However, these can be tedious to track down, install, and manage. Ansible's solution for handling these roles is Ansible Galaxy.

Galaxy provides an official hub of community roles, aimed at easy management and installation for maximum reusability. Ansible offers a couple of methods for searching and browsing the roles available on Galaxy. Galaxy comes bundled with Ansible, so no additional installation is required. Aside from downloading roles, Galaxy also supports a host of useful features, such as the ability to specify role names in a text file, so that Galaxy can install all the roles specified within that file. Galaxy can fully manage roles installed on the system, query their state, remove them, and search for new roles.

Galaxy can also manage authentication with remote Galaxy servers for developing and managing your own roles. I'm going to start off by demonstrating Galaxy's ability to install a role on the local system and create a play to use that role.

For this demonstration, I'm going to install Java on the target host. As you can see, Java is not installed on the system. I'll start by making a very basic playbook. Now, I searched the Galaxy site and found a Java role by this name, so I decided to go ahead and use that. So before I can use it, I will use Ansible Galaxy to install the role.

As you can see, it goes out, downloads, and installs the role. I've read through the documentation for this role, and I see that it accepts a couple of options. It accepts a parameter for the version of Java I want to use, as well as one for setting the version we're installing as the default, which I would like to do as well.
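The steps described above look something like the following. The role name and its variables are hypothetical stand-ins for whatever role you find on the Galaxy site; check that role's documentation for the variable names it actually supports.

```yaml
# First, install the role from Galaxy (role name is illustrative):
#   $ ansible-galaxy install example.java
#
# java.yml -- a minimal playbook using the role. The java_version and
# java_set_default variables are placeholders for the options the
# role's documentation describes.
---
- hosts: webservers
  become: true
  roles:
    - role: example.java
      java_version: "1.8.0"
      java_set_default: true
```

Running it with `ansible-playbook java.yml` and then checking `java -version` on the target host should confirm the install.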

Now I can try and run this playbook. As you can see, it is using the role that we just installed, and is in the process of installing Java. So now we should see Java installed.

Now I'm going to expand on this by installing Java build tools and Tomcat, a scenario that is not uncommon for developing and testing a Java application. Again, I'm going to browse the roles on the Galaxy website, and I found a couple of other roles that I believe will be helpful: the build tools and Tomcat. Again, reading the documentation, I found Tomcat has a couple of options that will be helpful to use: specifying port 8081 instead of the default 8080, setting the default host name, and enabling the service.
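The expanded playbook might look like this. The role names and the Tomcat variables are illustrative only, standing in for whatever the roles you found actually document.

```yaml
# site.yml -- Java, build tools, and Tomcat together. All role names
# and variables below are hypothetical examples.
---
- hosts: webservers
  become: true
  roles:
    - role: example.java
    - role: example.build-tools
    - role: example.tomcat
      tomcat_port: 8081            # instead of the default 8080
      tomcat_hostname: devtest01
      tomcat_service_enabled: true
```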

Okay. Now, running this again, you'll see this will fail: it can't find the roles. So instead of installing these one at a time, I'm going to demonstrate how to install them from a single text file and manage their versions. I'll put in each of the roles that we've been using. We can also manage the versions by putting a comma followed by the version number we want installed. That will ensure those versions are installed. Wherever the version is left blank, such as on this line, the latest available version will be installed. Using this method allows us to build a stable and consistent dev and testing environment.

So now I have the three roles I want specified. I can now install from this text file. So as you can see, the roles were installed, along with any dependencies that were required.
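A role file in the comma-separated form described above might look like the following; the role names and version numbers are illustrative.

```
# roles.txt -- one role per line; append ",<version>" to pin a
# version, or leave it off to get the latest release.
example.java,1.0.4
example.build-tools,2.1.0
example.tomcat
```

Installing everything in the file is then a single command: `ansible-galaxy install -r roles.txt`.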

So now I'll take my updated playbook, and I'll run it again. We can now see that it installed Tomcat. And everything is up and running. Additionally, if this was to deploy test systems, the playbook could download the code or any application artifacts for the convenience of the developer or tester.

Galaxy provides easy access to an entire community of supported Ansible roles. But what if you need your custom role to deploy across your environment? Galaxy provides a way to develop and host your own roles. I will demonstrate how to create roles using Galaxy's tools and allow Galaxy to manage it from a repository. I will base this off the very simplistic sudoers role I've been using throughout this course.

Galaxy has the capability to create the proper file system structure for a role. I'll start by creating that structure. As you can see, the entire directory structure was created and placeholder files are in place. With the structure complete, I need to start populating it with the playbook I wrote earlier. I'll move all my YAML files into the matching structure of the new role.
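Scaffolding the structure is a single `ansible-galaxy init` command. The layout below is abridged; exact contents vary slightly by Ansible version.

```
$ ansible-galaxy init sudoers
- sudoers was created successfully

sudoers/
├── defaults/main.yml
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml
├── templates/
└── vars/main.yml
```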

This next part varies based on the intention of the role. I'm going to bundle my role into a TAR file and place it on a web server. GitHub can also be used if you intend to share your role with the community. It can be submitted to Galaxy, but that's beyond the scope of this example.
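Bundling and publishing the role could be as simple as the following; the paths and web server location are hypothetical.

```
$ tar czf nick.sudoers.tar.gz sudoers/
$ cp nick.sudoers.tar.gz /var/www/html/roles/
```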

Now I will move to a fresh system that does not have my role on it. I'll write up a simple playbook that utilizes my role and run it. Roles being distributed in this manner need a name consisting of a username, separated by a period from the role name. In this case, I called my role nick.sudoers, so I will attempt to run this playbook.

As you can see, it failed as expected, since my role is not on this system. With the role on my web server, I can specify the web server's address and the location of my role in my role file. Now, I can use Galaxy to install all the roles from my role file. As you can see, it pulls my role down from the web server and installs it. Now I can run my playbook to success.
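Pointing the role file at the web server could be done with the YAML form of the role file, as sketched below; the hostname and path are hypothetical.

```yaml
# requirements.yml -- src points at the tarball on the internal web
# server; name tells Galaxy what to call the installed role.
- src: https://repo.example.com/roles/nick.sudoers.tar.gz
  name: nick.sudoers
```

Then `ansible-galaxy install -r requirements.yml` fetches and installs it like any Galaxy role.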

It is worth noting that if your role exists on Galaxy's servers, you do not need to specify the entire URL as I did, only the username.role_name. There may be times when the remote host you're running your plays on needs access to the internet and is behind a proxy. The easiest method of handling this is using the environment argument to set the proxy, as shown in the example. This can also be set at the playbook level, as shown in the second example.
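The transcript's on-screen examples aren't reproduced here, but the `environment` keyword works along these lines; the proxy address and URL are hypothetical.

```yaml
# Proxy settings via the environment keyword.
---
- hosts: all
  environment:                      # play level: applies to every task
    http_proxy: http://proxy.example.com:3128
    https_proxy: http://proxy.example.com:3128
  tasks:
    - name: Download an artifact through the proxy
      get_url:
        url: https://example.com/app.war
        dest: /tmp/app.war
      environment:                  # task level: overrides for this task only
        http_proxy: http://proxy.example.com:8080
```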

There are times when you may need to run only portions of a larger playbook without running the entire playbook. A play and a task can be marked with a tag. Tags are specified on the command line with --tags or --skip-tags, and multiple tags can be specified, separated with commas. Tags can also be applied to roles and include statements. Lastly, there's a special tag called always. If this is specified, then the play or task it's associated with will always be run, regardless of what tags are specified. However, if always is passed to --skip-tags, it will be skipped.
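A sketch of tagging in practice; the task names and modules are illustrative.

```yaml
# tagged.yml -- run subsets with:
#   $ ansible-playbook tagged.yml --tags packages,config
#   $ ansible-playbook tagged.yml --skip-tags config
---
- hosts: webservers
  become: true
  tasks:
    - name: Install web server package
      yum:
        name: httpd
        state: present
      tags:
        - packages

    - name: Push web server configuration
      template:
        src: httpd.conf.j2
        dest: /etc/httpd/conf/httpd.conf
      tags:
        - config

    - name: Report run
      debug:
        msg: "Runs on every invocation unless --skip-tags always"
      tags:
        - always
```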

Infrastructure as code is the concept that servers can be provisioned and deployed and applications installed and configured from code. The concept of configuration management covers a good portion of infrastructure as code, as you've seen demonstrated throughout this course. I want to delve deeper into this concept, with a focus on cloud architectures and how Ansible plays an important role in this.

Infrastructure as code centers around the automation of building, maintaining, and destroying infrastructure, which forces the breakdown of infrastructure into modular services and ties them together. This goes beyond what is manageable with shell scripts and some fancy Perl, and opens the door for a tool such as Ansible. Ansible allows system architecture to be written in a high-level language and encourages defining infrastructure this way. With the ability to manage this infrastructure in the cloud, complete infrastructure can be built from the ground up from a codebase. Developers can build out infrastructure for testing without the need to involve operations. Systems can be replaced or upgraded simply by changing requirements in the code. Infrastructure revisions can be managed in a source control management system.

The ability to provision, manage, and destroy infrastructure from code in a fully-automated manner opens the door for integration in a continuous integration and delivery environment. CI/CD methodology is the idea that every time there is a change to the code, infrastructure, or configuration, everything gets rebuilt. With a traditional infrastructure environment, it is possible to manage a continuous integration pipeline for your application, but continuous delivery is almost impossible. With Ansible and a cloud architecture, infrastructure can be brought up to test code changes in an application with each commit of the code. No extra work is required by the developer. Upon successfully testing this code, it can be automatically moved to production. This is generally referred to as a continuous delivery pipeline.

Thank you for completing Introduction to Managing Ansible Infrastructure. I hope this course was helpful in getting you started with tools such as Tower and Galaxy. If you have any questions, feel free to ask them on the community forums. Thanks again for watching, and good luck with your configuration management.

About the Author
A DevOps Automation Company

Stelligent's entire focus is DevOps automation and Continuous Delivery in the AWS cloud. Founded in 2007, Stelligent is an AWS Advanced Consulting Partner with the DevOps Competency. For more information please visit https://stelligent.com/