Introduction to Ansible

Author: Ben Lambert

This webinar focused on an introduction to Ansible with our DevOps trainer and expert, Ben Lambert. Below is the transcription for your convenience.

Ansible, what is it? I think most of you probably have some idea or you wouldn’t have shown up. We’ll talk at a high level about how it works. We’re not going to get into the really low-level details of the operations; I’ll save that for a future course. We have a course in the works for Ansible, so I’ll save it for that. Then we’ll do a live code demo, because if I’ve learned anything over the years, a live code demo is always a good idea. Inevitably something goes awry, so I’m going to count on you guys to try things out and help me find any typos or anything that might be amiss. This will be an interactive thing we’ve got going here.

 

Let’s dive in. What is Ansible? Depending on who you ask, you’ll hear a lot of things: it’s about configuration management, it’s about provisioning. The simplest explanation I can give is that Ansible is an automation engine. Because of that, it has a lot of power. It can do all of these things: configuration management, deployments, provisioning, security, and anything else that you want to automate. Ansible at its core is about automating all the things, and that’s what I’ll keep coming back to.

 

 

How it works: Ansible needs to know which servers you want to do things with. It does this through the concept of inventory files. An inventory file, at its core, is a text file that lists a bunch of servers. Each entry could be an IP address, a wildcard that encompasses a range of IP addresses, or a hostname. Like you see here, it has a grouping with a name. Can you guys see my mouse? Hopefully I’m hovering over where it says webservers. We can define our webservers as www1.example.com and www2.example.com. Anytime we reference the word webservers, that’s a group name that means do something to both of those servers.
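
As a rough sketch, an inventory file with that webservers group might look something like this (the hostnames come from the slide; the dbservers group is just an extra illustration):

    [webservers]
    www1.example.com
    www2.example.com

    [dbservers]
    db1.example.com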

 

You can have as many of them as you want. You can break them out logically so that if you were to run a script against, say, your database servers to back things up, you don’t have to worry about which servers it’s going to run against. It’s always going to run against the ones that you specify.

 

I’m just going to scroll down the questions real quick. Inventory files are basically static files. However, you can also have dynamic inventory. You could use something like a Python script to dynamically generate your inventory. This is useful for things like cloud providers, because you may spin up anywhere from 10 to a million virtual machines in the cloud. You’re not necessarily going to want to add those all manually to a file, so you can use a script that interacts with your cloud provider and generates that list for you. We’ll get into an example of that: we’re going to use EC2 and dynamically create some EC2 instances, so we’ll talk about that. For the broad strokes, it’s about having all of your inventory so Ansible knows what to do.
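
As a minimal sketch of how dynamic inventory works (this is a hypothetical script, not the ec2.py used later): Ansible calls the script with --list and expects JSON describing groups and their hosts.

    #!/usr/bin/env python
    # Hypothetical minimal dynamic inventory script.
    # Ansible invokes it with --list and reads a JSON map of groups to hosts.
    import json
    import sys

    inventory = {
        "webservers": {"hosts": ["www1.example.com", "www2.example.com"]},
        "_meta": {"hostvars": {}},
    }

    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(inventory))
    else:
        # --host <name>: per-host variables (none in this sketch)
        print(json.dumps({}))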

 

Now, here we have a couple of ad hoc commands. In this case, the top line is ansible, then all, then -m, and then the word ping. What that’s doing is saying, “Tell Ansible to use all of the servers in our inventory and ping them.” Now, you don’t see all defined in the inventory, and that’s because it’s a reserved word that basically means all of your inventory. If you see that, that’s what’s happening: it runs the ping module against every server that you have in your inventory.
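
That first ad hoc command looks like this:

    ansible all -m ping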

 

Now, that may or may not be exceptionally useful, but it’s at least good to know that you can run these ad hoc commands to do one-off things. The next line, line 2, says ansible webservers -m yum -a, and then a string wrapping some properties. We’re saying we want to execute on all of our webservers. Remember we have our webservers group here, so anytime we reference webservers, it knows that it should be these two. It’s going to run the yum package manager, install the HTTP daemon, and make sure it’s installed.

 

We have state=installed. Now, whether you run this against one server or a million servers, it’s going to make sure that that package is installed. If it is already installed, it will let you know it’s already done and you don’t need to worry about it. You can also use this to specify versions, or you could say, “I always want the latest.” If you want to update all of your webservers with one command, you can do that here. Then the last line targets webservers again and runs the reboot command, so this is going to reboot all of your webservers.
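
Reconstructed from the description above (the exact package name and reboot path are my best guess at what was on the slide), those two commands would look roughly like this:

    ansible webservers -m yum -a "name=httpd state=installed"
    ansible webservers -a "/sbin/reboot"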

 

What if you need more structure? 

Now, these ad hoc commands are great. You run them on the command line, they reach out to some set of your servers from your inventory, and they execute some command. What if you need more structure? That’s where playbooks come in. In a playbook you can define a series of tasks to run. Then you can call that playbook at any time and run it in a reproducible way. There’s a lot I like about this. You can write a playbook and share it with your coworkers. You can keep it under version control, which means if something goes wrong with one of the changes, you can roll it back and rerun the last version.

 

This one on the screen is going to create a VM on Google Compute Engine. If you look under tasks, we have launch instances and then you can see gce. That’s the Google Compute Engine module. You can use these different modules that are built into Ansible to do an awful lot of different things.
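
A very rough sketch of that kind of playbook, with placeholder values since the slide’s parameters aren’t spelled out here:

    ---
    - name: Create a VM on Google Compute Engine
      hosts: localhost
      connection: local
      tasks:
        - name: Launch instances
          gce:
            name: demo-instance          # placeholder values
            machine_type: n1-standard-1
            image: debian-8
            zone: us-central1-a
            project_id: my-demo-project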

 

 

That’s a high-level view of Ansible. Ansible uses a set of inventory files, which can be dynamically generated through code or be static files, and it uses that to determine which servers you want to run different commands against. It does all of this over SSH, which means you don’t need to have some sort of agent installed on all of the servers that you’re managing. They just have to support SSH and they also have to have Python installed. For Linux servers that’s mostly not a big deal, but for some of them that might turn into an issue.

 

I’m just looking to see if there are any questions at this point. Let’s end this and get into the live code, because that’s where the real fun is, right? Can you guys all see my screen? I just want to make sure you can see my text editor. I’m getting yeses. Great. When I was thinking about creating an Ansible webinar, I was thinking back to my development work and what I used Ansible for the most, and really it was things like handling backups. It was things like deploying. Configuring development environments was a big part of it.

 

So, I thought what I would like to do is take this simple HTML file and back it up to S3. I want to take this, dump it out to S3, and then I thought it’d be cool if we could take that file and deploy it to a staging server and make sure that Apache is installed and serving it up. Once we’re happy with it, we’re going to deploy it to a production server. These are just the names of the EC2 servers; there’s no real value to them other than being labels. If we have time, we’re going to maybe create a module, because I think that’s a lot of fun. Ansible has, I think, 400 or 500 built-in modules, so you have a lot of power to start with. But what if you need to do something a little different? If we have time, let’s create a module and show that process because, with a little bit of code, you can do a lot of really cool stuff.

 

Before we do that, I want to show you some additional ad hoc commands, because I think seeing it on the PowerPoint doesn’t convey the real value of it. I’m just going to copy the command I have in a different window. I’m not sure if you can see it. I’m cheating and just pasting in this command, but what it’s doing is saying we’re going to run the Ansible executable, where this -i is the inventory file. Remember I said we can do dynamic inventory? You don’t necessarily have to have it in a static file.

 

Ansible provides this ec2.py file. What it does is dynamically go out to your AWS environment and find all the info you need to know about your AWS environment. There’s a similar one for Google Cloud and for other cloud providers, and this is really cool. You can mix and match these. I know it’s probably outside of the scope of what we’ll cover here, but you could have some stuff using EC2, some stuff using Google Cloud, and some with just local host files.

 

This uses boto, for any of you Python developers out there; you’re probably familiar with boto. Let’s go back to our command. We say our inventory should come from our Amazon EC2 instances. We’re going to use the user ubuntu, because these particular servers are just Ubuntu servers and the ubuntu user is the one we have SSH keys for. This is cool: this is where we specify what we want to actually call.

 

 

We could say only show our production servers, or only show the servers that are in a specific region. You can target just the servers you want to run a command against. Let’s see if we can’t do an example of that. We’ll do... what did I do? I just didn’t delete enough. We’re going to do tag_Environment_staging. Now, if I have my typing correct... okay. Now, this isn’t going to mean much until we go in and see.
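
Put together, the command being run here looks roughly like this (ec2.py turns the Environment=staging tag into a group named tag_Environment_staging):

    ansible -i ec2.py -u ubuntu tag_Environment_staging -m ping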

 

EC2 Instances 

 

On the screen here, what we have is two EC2 instances. We have one named webinar and one named webinar2. Let’s look at the tags. Environment: staging. You can see where I’m highlighting; it’s at the bottom of the screen if you can’t see it. We have a couple of tags, and the key here is Environment and the value is staging. On the other instance, we have Environment: production. Let’s go back. We said we wanted to ping this server and we should have gotten back 52.42.122.48. Let’s go back: 52.42.122.48.

 

 

We can target just what we want. We can target these tags with this underscore syntax. We’re saying, “Look at all the tags and find the ones that have the key Environment, and we want all of them to have the value staging.” We can do the Name tag too, right? We have a couple of tags, so let’s do this one. If we ask for a tag of Name with a value of webinar, this is the IP address we should see: 52.41.81.165. We’re saying tag_Name_webinar and we’re going to ping it. There we go, we got 52.41.81... sorry. I assume you know where I’m looking, but I’m going to highlight it so it’s a little easier to read. If we go back: 52.41.81.165.

 

We can tell Ansible, “This is the pool of our inventory. It comes from Amazon, but specifically we want to hone in on just a certain set of servers.” The value here is that you can run your commands on very specific groups of servers. Maybe you have something like a blue-green deployment. In a blue-green deployment you have a group of servers tagged with the color blue and a group tagged with the color green. If you wanted to update your environment, you would update the one that’s not live and then flip the switch to tell your router to point to those servers. You can target just those, update them with the latest code, and then switch over the router to look at just those. If everything is good, you’re all set. If not, you switch the router back to the other color environment and you’ve basically just rolled back any broken changes.

 

Let’s clear the screen. This is the basic syntax of an ad hoc command. You can run different modules. Here we’re just running pwd (print working directory) against everything in us-west-2. This is going to run in a few seconds. There: our default working directory is /home/ubuntu for both of those servers. Ad hoc commands are great. You can run stuff dynamically. I don’t like to use them to make a lot of manipulation changes; I don’t like to use them for patching and stuff like that necessarily, because I like to have all that in a playbook, which allows me to roll back. What I do like them for is any sort of monitoring.
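
Something along these lines, assuming the region group name that ec2.py generates:

    ansible -i ec2.py -u ubuntu us-west-2 -a "pwd"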

 

 

 

If I want to know, like, “Are all my servers actually up and running?”, we can go back to our ping and we can determine, “Yep, okay. Everything’s working.” Here it says changed: false, which means this was up and running before and it’s still up and running now. We can use these values in some sort of dashboard or some other bit of code to ensure that everything is behaving the way we want, or we could check versions of things. You could run an audit to make sure everything is on a particular version of, let’s say, OpenSSL. That’s where I see the value in ad hoc stuff.

 

Now, we talked about playbooks as being a mechanism for when you need more structure, or when you want to share the work with your fellow developers or operations folks. You want to create a set of tasks that you can reproduce over and over again. That’s where playbooks come into play.

 

Let’s write a quick playbook. Its goal is going to be to take this index.html and upload it to S3. Now, I want to show you S3. We’re going to dump it into this Ansible webinar bucket. Nothing in the bucket, no tricks up my sleeve, I’m not trying to pull a fast one. Let’s write this playbook now; it’s not too long, so we can do that. I’ve got some code here. Let’s see: upload to S3, and it’s going to be a YAML file. Ansible uses YAML, which is Yet Another Markup Language. That’s not my name for it; that’s the name for it.

 

Let’s write this up. We’re going to start with three dashes, which just tells the parser that this is a YAML file. We’re going to say hosts: localhost. I’ve already said the host is local, but I like to add connection: local as well, just as a kind of sanity check. If somebody comes in and edits this later, it’s still going to say connection: local, and then we should be all set. We don’t have to worry that it’s going to get run on an environment it’s not supposed to. It’s less of an issue here; this is a pretty benign thing to be running, so it’s not too bad.

 

S3 Module

Now we have our tasks section. The tasks section is where we define all of those different tasks. We ran them as ad hoc things before, but here’s where we break them up. Let’s do s3. This is a module that is provided by Ansible. I want to show you the documentation. They have an awful lot of modules here. We have s3. It gives you all of the parameters, so we know there is a bucket parameter, a destination, encryption, and so on. Their documentation is really good and it’s getting better all the time.

 

 

Here are some examples. If you’re ever curious about what needs to happen, you can just grab one of these, paste it in, and test it out. I think having examples in the documentation provides a lot of value. Let’s look at all these modules. I don’t know how well it comes across scrolling on the screen, but if you go there yourself you’ll see it’s quite a list. We’re specifically going to target the S3 module. Let’s jump back. We’re going to say our bucket equals... this is a bucket in our AWS console, so we’re going to grab it, copy it, and paste that in.

 

Our object is going to keep the same name, so we’re going to say index.html. Then our source: that’s the source file, the file we want to upload. We’re going to say /index.html, and the mode is going to be put. Now, you saw before we ran some Ansible with ansible -i ec2.py. Running playbooks is slightly different: it’s ansible-playbook. Actually, no, we don’t need an inventory file because we’re running on localhost. Let’s just specify the playbook. We called it upload to s3. Let’s see what happens.
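
Putting the pieces together, the playbook sketched so far looks roughly like this (the bucket name is my stand-in for the demo’s Ansible webinar bucket):

    ---
    - hosts: localhost
      connection: local
      tasks:
        - name: Upload to S3
          s3:
            bucket: ansible-webinar   # stand-in for the demo bucket name
            object: index.html
            src: index.html
            mode: put

And it’s run with something like ansible-playbook upload_to_s3.yaml.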

 

It’s complaining. The offending line appears to be here. Let’s see... the hosts line... let’s see... who sees it? Let’s see... s3, paste. There we go: a slight typo in my code there. What happened is I just ran this and our tasks ran. It does a setup step and then runs the task, upload to S3. It says it changed. If we go back and refresh this page, we should have our index.html. This is great. Now we want to see it, so we click here. Some of you probably know what happened: we never specified what the permissions should be. We said upload this to this bucket, but we didn’t tell it that it should actually be publicly readable.

 

Let’s look at our permissions. Our own account gets to do whatever it wants, but we don’t have everyone with the open/download permission. How do we fix that? Let’s go back and we can say permission equals public-read. If we go back to our modules and filter for s3, we can find the permission parameter. We can set different permissions here. This was added in 2.0, and it lets us choose private, public-read, public-read-write, etc. Now we’re going to send it back up. If we run this, it takes a second to upload. Close this. In theory, if we refresh, everything should look good. Click here.
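
The fix is one more parameter on that s3 task:

    - name: Upload to S3
      s3:
        bucket: ansible-webinar   # stand-in bucket name, as before
        object: index.html
        src: index.html
        mode: put
        permission: public-read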

 

Notice we have this additional permission: everyone with open/download. Let’s click on this: “Ansible: Automate all the things and look cool doing it.” Now if we make a change, any change we want, this is live. I’m going to save that, rerun the upload, and refresh here. There we have it. Having this in a playbook gives us a level of consistency. It allows us to share it with our coworkers, with other ops people, or with other departments.

 

Once you’ve written a playbook and shared it amongst your teams, they can all expand on it. They can all make it better, make it more parameterized. You can build things really rapidly that become highly useful. Now, this one admittedly is less useful than the stuff you’ll probably be building, but it gives you the framework, and this sort of collaboration is going to be fairly easy.

 

We have it up on S3, and you saw we had two live servers: our production server and our server that we call staging. Let’s take a look at these. What’s on production? Nothing. We have nothing on production; it doesn’t even look like it’s configured to run Apache. Our production environment is pretty generic, right? Let’s look at our staging environment. It looks like it has an outdated version. Remember what we have now: “and look cool doing it.” That’s the live copy; this one’s outdated. Let’s create another file. We’re going to call it webserver. Its job is going to be to make sure that our webserver is installed and that the latest version of the file is loaded.

 

I’m going to cheat and just do a little copy-paste here, because I want to get to the modules at some point and I don’t want us to run out of time. Let’s do a new file. We’re going to call it webserver.yaml. Like I said, I’m going to cheat just a little bit and copy and paste, but we’ll go through it so that it all makes sense. Do you remember in the ad hoc stuff I showed that we can do tag underscore, then the key name, and then the value? Let’s jump back to that command just so you see what I’m talking about.

 

If you look at this command, we’re targeting everything that is in EC2, everything in Amazon that’s an EC2 server, and we’re saying specifically tag_Environment_production. We want to ping just the production servers. We showed that syntax before. This is us narrowing in on the hosts that we want to identify, and we can do the same thing here. We can specify tag underscore and then use Jinja2 syntax to reference a tag variable and a value variable. We have our variables defined here; you can see them on the screen. Let me make that a little bigger, I don’t know if you guys can see it. We have tag: Environment, value: staging.

 

Now, if we didn’t pass anything into this playbook, this is what it would automatically fill in. We’d look for all of our staging servers, and what we’d do is run the apt package manager on them to install apache2. We’re saying update the cache: we want to make sure apt-get is using the latest information, so we update the cache before we do it, and we’re saying the state should be latest. What that means is it’s going to go out to the apt package manager and get the latest version of Apache.

 

You can specify a specific version instead, and that’s fine. That lets you pin a version and ensure that you’re only running the version you expect. If you run latest on all your production stuff, you could end up with some issues. This is good for development, but may not be what you want for production.

 

Then we’re going to use get_url. We’re saying we have a URL. We just uploaded this file to S3, from our Ansible webinar bucket: it’s our index.html, and we want to put it in /var/www/html/index.html. Now, this next one is important: force: yes. If we go back, when we installed Apache, Apache dumped a generic HTML file in that location, so when this get_url runs, it’s going to look and say, “Does that file already exist? Is this destination something that already exists?” If it does, and it does because Apache put a default one in, it’s going to skip this. We’re telling it force: every time, go out, get the latest version, and update it no matter what is already there.

 

Here’s the interesting thing. We get to say, “Once you’re done with this, I want you to notify this handler.” See this handler here? It says the service named apache2 should be restarted whenever we notify it. When we notify this, it runs this Apache restart. What we can do with this is edit the files underneath /var/www and then ensure that Apache gets restarted afterwards, so we’re not working off of cached information.
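
A rough reconstruction of the webserver.yaml being described (the S3 URL is a placeholder, become: yes is my assumption since installing packages needs root, and exact task and handler names may differ):

    ---
    - hosts: "tag_{{ tag }}_{{ value }}"
      become: yes                       # assumption: package installs need sudo
      vars:
        tag: Environment
        value: staging
      tasks:
        - name: Install the Apache web server
          apt:
            name: apache2
            update_cache: yes
            state: latest
        - name: Download HTML file
          get_url:
            url: https://s3-us-west-2.amazonaws.com/ansible-webinar/index.html   # placeholder URL
            dest: /var/www/html/index.html
            force: yes
          notify: restart apache
      handlers:
        - name: restart apache
          service:
            name: apache2
            state: restarted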

 

Let’s save this. I’m going to see if I have a cheat to run this one. Nope, I don’t. What we’ll do is run ansible-playbook, and we need to use the dynamic inventory. If we go back, our hosts are looked up by tags, and those tags need to come from AWS. We don’t have them specified anywhere in a hosts file, so we need to tell it specifically to use the ec2.py inventory. Remember, this dynamically goes out and grabs all of the inventory from AWS for us; it’s kind of a helper that makes it pretty dynamic. We’ll go back. We’re saying go out to EC2, grab all of our servers, connecting with the ubuntu user, and then run that playbook, webserver. You can see it automatically filled this in: tag_Environment_staging.
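
The command being run is along these lines:

    ansible-playbook -i ec2.py -u ubuntu webserver.yaml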

 

As soon as this is done, I’ll show you why. I bet most of you are already saying, “It’s the variables,” and you’re right. “Install the Apache web server” shows changed; there must have been an update since it last ran. Then it downloads the file and runs the restart for Apache. It just runs through our tasks one at a time, in order. These variables come from the vars section: Environment and staging. We could override that, and we’ll do that in just a second.

 

Let’s go out and make sure. This was our previous version. That’s production. This is the file that’s out on S3. I want to make sure you know I’m not cheating. Let’s go to our first staging server, go back to our description, copy that, close out this, and paste it in. We told Ansible what we wanted: make sure we have the Apache web server on that environment, on whatever is tagged with Environment and the value staging, and make sure it’s the latest version. Then download our HTML file and restart Apache. You could do all of this manually, but what happens when you have hundreds of servers? This sort of deployment doesn’t scale well with manual steps. That’s why this becomes useful.

 

Now, if you want to do something else, let’s say you kick this over to a coworker and they say, “Oh, I know a trick to lock down Apache and get it super secure.” You hand this off to them and they can add their tasks to run afterwards. You can collaborate and build on these things very quickly. Now we have two cool pieces of functionality: we can upload to S3 from our local host and we can deploy to a server. Let’s deploy this to production. I want to show you that we can override these variables; we’re not locked in. Remember, our production server has nothing on it. It doesn’t even have Apache installed.

 

Let’s test it out. I mean, I haven’t tested this, but it should work. The way we’ll do it is we’re going to pass in --extra-vars and we’ll say, what did we call it, tag equals... actually, we don’t have to do that. We just do the value: value=production. Let’s look back and make sure everything’s legit. We called this tag and this value. Now, if I pass in these extra variables, I can override what’s set here by default. I’m saying now, “value should be production.” Again, that comes from our tags: Environment, production. Let’s see if this works. --extra-vars... let me just make the screen a little wider for myself here and shrink the text so I can see.
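
So the run, with the override, looks something like:

    ansible-playbook -i ec2.py -u ubuntu webserver.yaml --extra-vars "value=production"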

 

Let’s run that and see what happens. See, it automatically filled in tag_Environment_production; we told it to override that value with something different. This is going to take a little bit longer, because last time, on the staging server, Apache was already installed but just wasn’t the latest version. This one has to go out, update the apt cache, and actually install the server. Now, in theory, if everything worked as it should, and we all know that never happens in a live code demo, we should be able to go to that production environment and browse to it. There it is. We can target just the servers we want and tell it to hit just that targeted environment inside of our AWS console.

 

What’s next? We have the ability to upload our HTML, so we can make changes. Let’s make a quick change: “This is live,” and then we’ll add a smiley face because we’re happy, and we’re going to deploy that just to our production server. No matter what, it’s going to ensure that Apache is there. It says okay... maybe it didn’t go and deploy. Let’s find out. You know why? Because I was being a dope: what I didn’t do is upload it. I think a few of you caught that for me, so thank you. Thank you to those of you who said, “Hurry. Make sure you upload it first.”

 

Now we’re going to redeploy. Again, our Apache web server is already installed, unless there’s a new version in the last few seconds. We have “download HTML file”: it changed. The reason it changed is we uploaded our latest file, with the smiley face, to S3, which is where this webserver playbook is getting its information from. Let’s go back, refresh, and see. The font doesn’t really do well to convey that it’s a smiley face, but that’s supposed to be a big old smiley face, because we’re all happy playing with Ansible.

 

What’s next? You know what would be fun? I think it’d be fun if we didn’t have to run two separate commands, the upload to S3 and then the webserver, because that’s tedious. Here’s what I’d like to do: create a playbook that just pulls in those other playbooks. We don’t need to duplicate any effort here. We’ll create a new file and call it upload and deploy.yaml. Ansible gives us this really cool functionality to include additional playbooks. If you’re a developer, you’re probably familiar with this type of include. All it’s going to do is pull in those playbooks before it actually processes this one.

 

We have upload to s3.yaml, that’s this file here, so we’re going to say, “First, I want you to run this upload.” We’re just going to do some copy-paste because I’m lazy, I admit it. We’ll call the second one webserver; let’s make sure it’s named webserver. In theory, we should be able to call this, have it upload our index.html, and then deploy it to whatever server we say. Let’s make a change, otherwise we won’t know that it’s working. Let’s give this guy a body; he needs a body. We’ll give him some arms and we’ll give him some legs. There.
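
The combined playbook is basically just two includes (filenames assumed to match the files created earlier):

    ---
    - include: upload_to_s3.yaml
    - include: webserver.yaml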

 

Our little guy here, if you can see him on the screen, I know the text is a bit small, but this little guy is going to go live out in EC2. Let’s go ahead and run it. We’ll send this to production, still passing the extra variables, because we can pass them in and Ansible will know that it needs to distribute them to the playbook. What we’ll do is call upload and deploy. Let’s try this and see what happens.

 

Great. It’s uploaded. It’s targeting our environment production. Let’s see... we should see changed. There it is. We should see changed for the download. That’s right, my apologies. Now it’s going to restart Apache. Great. Let’s copy this; we already have it here, but we’ll just copy it in anyway. There it is: our guy. He’s so happy. That’s awesome. Look at him.

 

There’s a lot of power in this syntax. You can take existing playbooks that are created by different departments and combine them into your workflow to make something more robust. This allows you to treat your playbooks kind of like Legos. If you played with Legos as a kid, or as an adult in my case, you can take these little blocks, stack them on top of each other, and they interlock and form something even cooler than the individual components. This kind of Lego-ability has a lot of power. What comes next?

 

You know what? It occurred to me, we have two servers. We have our production and our staging. Let’s go back and look at those. Now what happens if we want a new server? That’s something that could happen.

 

I want to show that module. I’m getting a little antsy; I’m watching the time here and that module is calling to me. What I’m going to do is copy and paste: we’re going to create a playbook that creates a new EC2 instance. Then we’re going to provision it and treat it just like the other ones. Here’s what we’ll do: we’re going to create a new file and call it new_server.yaml, and I’m just going to paste this in. It’s similar to what we had before. We’re running this locally; we don’t need to connect out to find anything in Amazon, because the code runs locally. It doesn’t have to run on another instance, but that code is going to create a new server.

 

Here’s what it does. We’re saying we have an AWS environment called staging and a server name of WebinarVM. Then we’re going to run the ec2 module and pass in some variables: instance type t2.micro; this AMI, which is just a base Ubuntu image; and wait for it. I already had a security group that I was using for something else, so I’m reusing it here. That’s bad form in a real environment, but for a demo it’s okay. We’re putting it in the us-west-2 region, so this is going out to Oregon. Then we have our instance tags. Remember we saw those here: tags for Environment and Name.

 

We have nothing running here that has a name of WebinarVM, so we’ll see it up there in just a bit. We’re assigning it to a subnet and we want it to have a public IP address. Now, here’s a piece of really cool functionality. We’re going to register the output to a variable called ec2_info, and we can debug with it for local testing. We can say debug and spit out a variable, and this is the variable. This is very powerful, because you can use the output from one thing as the input for something else later. If you use register with some module, it’s going to take that output, hold on to it for your playbook, and then you can use it in another task. We could have another task here named, I’ll just say, pretend. We’re not really doing anything with it, but if we had a module, we could do something with it.

 

I’m just going to do something that’s not real, for the sake of the demo; this isn’t real, but you get the point. We can pass in info from this variable. We could say instances; it’s going to be a dictionary and you can treat it as such. Then you can use whatever data it collects from that output to feed information into other tasks. Let’s save that and run it, and see what happens. I’ll go ansible-playbook, and we’ll say new server. We’ll see what happens.
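
A sketch of the new_server.yaml described here, with placeholders where the demo used real AMI, security group, and subnet IDs:

    ---
    - hosts: localhost
      connection: local
      vars:
        aws_environment: staging           # hypothetical variable names
        server_name: WebinarVM
      tasks:
        - name: Launch a new EC2 instance
          ec2:
            instance_type: t2.micro
            image: ami-xxxxxxxx            # base Ubuntu AMI (placeholder)
            wait: yes
            group: webinar-sg              # existing security group (placeholder)
            region: us-west-2
            instance_tags:
              Environment: "{{ aws_environment }}"
              Name: "{{ server_name }}"
            vpc_subnet_id: subnet-xxxxxxxx # placeholder
            assign_public_ip: yes
          register: ec2_info
        - name: Show what the ec2 module returned
          debug:
            var: ec2_info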

 

Remember I said that we can debug. We have this one here, debug, and it spits out the results that the ec2 module produces. That’s all this information here; we can use it later on. You can see all sorts of fun information here. More importantly, we should now have a new server, and here he is. He seems really happy running in his new little environment, but there’s nothing on it. We’ll go back to make sure it’s running. It’s labeled as Environment staging. Now, if we run the upload and deploy command we ran just a bit ago, but this time we do staging instead of production, it’s going to send our latest version to both staging servers. Let’s look at that. This is our other staging server. Let’s look at it and see that it shouldn’t have...

 

Remember our latest version has a little guy on it? This one does not, so we have two servers, our new staging and our old staging, and they’re both on an older version. Let’s see if we can update those. Notice it picked up our new server. It doesn’t need to install Apache on the previous staging server, but it does need to for the new one, because we just provisioned a brand new instance and it has nothing on it. That’s what’s happening there. It’s going to restart Apache. Great.

 

Refresh this: our first staging server, up and running. We refresh this one too. Whether you have one or a million servers, it’s okay. You tell Ansible how to identify those servers and what to run, and it’s going to go out and run it. Let’s close it up here, close it here, and go all the way back. Now, playbooks: what have we covered? We’ve covered how to create a basic playbook to upload this file to S3, then we provisioned our webserver to use it, and then we created our upload and deploy. And if you wanted to always be dynamic, you could do that.

 

Let’s copy and paste this. There we go. You could include new server here: new_server. If we had another playbook that would kill the servers when we’re done, what we’d have is the ability to create a new server dynamically, a staging server, upload our code to it, configure it, and actually set up the environment the way we would in production. Then, if we had another playbook to run some tests, we could run those tests and verify that everything’s the way we expect. If everything is good, we can send it over to production. You can mix and match, include stuff, dynamically create servers and destroy them when you’re done, and it makes life a lot easier. These are now reproducible little Lego bits of code that we can reuse as we see fit. No matter how many times we run new server, we’ll always get the same consistent results.
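
In other words, the top-level playbook could grow to something like this (filenames assumed):

    ---
    - include: new_server.yaml
    - include: upload_to_s3.yaml
    - include: webserver.yaml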

 

Let’s back up here. Now I want to show you how to create a module. We’ve used the ec2 module, you can see it here, and we used the S3 module, apt, and get_url. These are all built-in modules; there are hundreds of them, an awful lot. What happens when you want to do something custom? We’re going to create our own. Let’s call it demo.py. I’m going to steal some code that I had from a previous demo, copy that, and paste it in here.

 

Here’s what we’re doing with this. Ansible gives us a framework for the basic stuff that most, if not all, modules will do: you’re going to parse some command line arguments and you’re going to spit out some output in JSON format. To make it easier, Ansible gives us the ability to import these helpers. What we’re doing here is creating an AnsibleModule and passing in our expected command line arguments. We’re saying show_name is required: false, so you can pass it in optionally, and it’s of type Boolean. This will cast it to a Boolean when we use it later on.

 

We’re going to grab the IP address from the machine, and then, depending on whether we say show name, we’ll either show the operating system name or we won’t. Now, this is not particularly useful; however, it’s just a skeleton so that you can see that with just a little bit of code, you can grab some information from the command line, do something with it, and return something valuable.
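
A rough sketch of what that demo.py skeleton looks like; the argument name, return fields, and the way the IP address is looked up are my reconstruction rather than the exact webinar code:

    #!/usr/bin/python
    # Minimal custom module sketch: parse arguments, gather a couple of facts,
    # and return JSON via exit_json.
    import platform
    import socket

    from ansible.module_utils.basic import AnsibleModule


    def main():
        module = AnsibleModule(
            argument_spec=dict(
                show_name=dict(required=False, type='bool', default=False),
            )
        )

        # Grab an IP address for the machine the module is running on
        ip_address = socket.gethostbyname(socket.gethostname())

        result = dict(changed=False, ip_address=ip_address)
        if module.params['show_name']:
            result['os_name'] = platform.system()

        # Exit 0 and hand the result back to Ansible as JSON
        module.exit_json(**result)


    if __name__ == '__main__':
        main()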

 

Here’s what we’re going to do: we’re going to try to run our command. Let’s see. Save this; I’ve got a handy little cheat sheet here. Let’s say show_name=yes. What’s happening here? This is not unexpected. We’ve created a module and we’ve told Ansible to go ahead and run it. The problem is Ansible doesn’t know that we have an additional modules directory, so it’s looking in its default directory for all of the modules and saying, “I don’t know what you’re asking for, because this isn’t in the default location.”

 

We solve that by making sure that... you could put it in the Ansible config, or we can make sure an environment variable is set. Let’s do that: I’m just going to set an environment variable saying that the Ansible library path should also include this location, which is where we currently are. Let’s run that again. Now we have three results. What this is doing is connecting to all of our servers in us-west-2. Go back here: those are the servers.
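
Something along these lines (the module and group names follow the demo; the current directory is added to the library path):

    export ANSIBLE_LIBRARY=$ANSIBLE_LIBRARY:.
    ansible -i ec2.py -u ubuntu us-west-2 -m demo -a "show_name=yes"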

 

Let’s see. Let’s get this side by side: 172.31.16.216. If you look here at the private IP addresses: 172.31.16.216. If we went through the list, we have 41.11 and 41.11, and 25.53 and 25.53. What this does is say, “Connect to all of these servers, take this code, take our demo module, and run it on those servers.” That’s why Python needs to be installed on the servers: it’s going to execute this code on those machines. That’s how it gets this information.

 

With just a few lines of code, we were able to create a very simple module. You see we have this module.exit_json. If everything is good, it’s just going to exit with a zero to say that there were no errors, and it’s going to return this as a JSON object. That’s pretty much what we saw here: a JSON object.

 

If, for some reason, there isn’t a sufficient module to do what you need, you can create your own. Like I said, the module index is pretty involved; they have an awful lot. Before you go and create something on your own, you should check. If you can’t build what you need with some module, or a combination of a couple of modules, then go ahead and create your own. And if it’s something generic enough that you can share with the community, that’s great too.

 

We won’t talk about Ansible Galaxy... let me just click. I know I just said we won’t talk about it, but now I’m feeling bad. Ansible Galaxy is a repository of the work people have done solving the problems they hit every day. There are different kinds of playbooks and roles: maybe it’s configuring a Python server under Apache with mod_wsgi. People solve these problems all the time; you don’t need to reinvent the wheel. You can go to Galaxy, see if the problem you’re trying to solve has already been solved, and then download what people have already written. It just means less time for you. If you can find something already written that does at least 90% of what you need, then it makes sense to use something like that.

 

This is Ansible at a high level. It’s an automation engine that allows you to either statically or dynamically specify which servers you want to target and then run commands across those servers. It also allows you to easily write your own modules if the built-in ones don’t do what you need. That’s the demo as I prepared it.

 

We have just a couple of minutes, so I’m scrolling through the questions. Let me just make the screen a little bigger. Let’s see. Someone’s asking about using ranges; their question was specifically about the example here. If you see here, we have our tag, Environment production, on the screen. They want to know about using some mechanism, like a regex, to specify production, maybe a production1, production2. Yeah, to my knowledge that is a thing. I know it exists in static hosts files, where you can use patterns to specify ranges. I’ve never used it on the command line. I’ll get back to you on the community forums to say for sure whether it works on the command line, but it will work with the hosts file.

 

“Is Ansible Tower basically a GUI for the command line, or is it different?” This is a really good question about Ansible Tower. Yes, it’s a GUI for running your playbooks, but it’s a little bit more than that. It keeps a history, it gives you some level of auditability so you can see who did what, and you can share specific playbooks with people. So yes, at a high level, it lets you treat the command line as a GUI, but it gives you a lot more than that too. I’ve heard rumors, and this is a rumor, please don’t quote me, that Red Hat may eventually make it open source, if it hasn’t happened already; I haven’t kept up on that. If that’s the case, that’ll be really cool.

 

Let’s see. “Where are the AWS connection settings?” That’s a really great question. You can use a couple of places. Now, I’m not going to drill into this file because it’s where my private key is stored for my particular user, and I’m still using that key. You see here there’s an EC2 file. Can I make this bigger? I can’t. On the left-hand side of the screen, just above where it says ec2.py, we have an ec2.ini. You can specify your AWS settings there, in the INI file. You can also export them as environment variables; these are just the standard ones that boto uses. You can export these for your terminal session.

 

You can also have them in your RC files so that they’re set every time you log in to the terminal, or you can specify the path to the INI file and that’s where they’ll be. I just flashed them on the screen; hopefully you guys won’t do anything nefarious with my secret keys.
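
For example, the standard boto environment variables can be exported in the shell or an RC file (the values are obviously placeholders):

    export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
    export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY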

 

Let’s see. “How do you compare it with Chef?” That’s a great question. I’ve used both in the past and I really like them both. Here’s what I like about Chef: I’m familiar with Ruby, I did a lot of Ruby dev, so I felt really comfortable jumping in and using the syntax of their DSL. Not everyone does; I’m told the learning curve can be steep, and I imagine that’s true. If you’re a programmer, you’ll feel really at home with Ruby and Chef, because you can do really complex things in code. With Ansible, you’ve seen you have these playbook files. A lot of the stuff you’d want to do is easily encapsulated in one or two lines, but you also have the ability to use modules.

 

Your module can be anything. It doesn’t have to be Python; Python is what Ansible is written in, so they have some nice helper libraries, but anything that can handle standard I/O can be used as a module. There’s a lot of flexibility there. I also like that I don’t need to have any sort of agent on a particular server, and I think that has a lot of power.

 

Let’s see. Somebody asked, “How does Ansible know which OS command to use when installing something?” In our example, we used apt to install Apache, but if we were on a Red Hat system we would use yum. You can handle this with the when keyword: there’s a syntax that allows you to say, if the OS is of this flavor, then run this command or that one. You’d say, “When it’s Red Hat, do the same thing with yum, and if it’s Ubuntu or Debian, use apt.”
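
A sketch of what that conditional looks like, keyed off the ansible_os_family fact:

    - name: Install Apache on Red Hat family systems
      yum: name=httpd state=present
      when: ansible_os_family == "RedHat"

    - name: Install Apache on Debian/Ubuntu systems
      apt: name=apache2 state=present
      when: ansible_os_family == "Debian"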

 

Let’s see if we have time for one more. Someone was kind enough to point out that they can’t guarantee others won’t muck about with my private keys, and they suggest deleting them. I think that’s probably a good idea. If you’re trashing my environment as we speak, it’s going to end in about 40 seconds. I apologize; your fun will be over.

 

Do we have time? Yeah, we’ve got time for one more. “Do the commands in Ansible execute serially through the inventory? For example, when you install Apache, does it need to wait?” You can specify how many hosts it will run against simultaneously, so you can run them in parallel or serially. That’s a great feature if you’re going to do a batch of, say, 10 different servers: you could run them all at once or have it go through one at a time. There are pros and cons to each. If you go one at a time and there’s a failure, you’ll see it before it completes them all; all at once is obviously faster if you’re trying to patch something like the Heartbleed bug we had ages ago. Remember that one? You’d test it locally and then you’d want to be able to deploy it as quickly as possible.
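
For example, an ad hoc run can control parallelism with -f, and a playbook can use serial to roll through hosts a few at a time:

    ansible webservers -m ping -f 10

    # In a playbook:
    - hosts: webservers
      serial: 1        # one host at a time; raise or omit to go wider
      tasks:
        - name: Ping
          ping: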

 

I think that’s about all the time we have. You guys have a lot more questions, I see. Here’s what I want to do: I’m going to try to distill the questions that are left, because there are a lot of really great ones here, and work with colleagues to get them onto the community forums. Check out the DevOps community forum over the coming days; it’ll probably be a few days before I start posting and answering these questions. Then you’ll have that as a resource to come back to. Thanks everyone for coming. It’s been a lot of fun, and I hope it was useful for you. It was a lot of fun for me. I look forward to talking to you at the next webinar. Thanks everyone.