Ansible and AWS: cloud IT automation management

With things moving a bit more slowly through the holiday season, we’re going to re-run some of our most popular posts from 2015. Enjoy!

The kinds of virtual infrastructures that define the cloud computing ecosystem demand a high level of automation. As the number of virtual servers used for individual deployments grows, so does the complexity of the widely distributed automation that manages them. Ansible, a relative newcomer to the IT automation and orchestration market, offers some unique and compelling features.

The goal, as with other open source tools like CFEngine, Chef, and Puppet, is straightforward: to simplify IT infrastructure management and make organizations comfortable with their ever-growing IT portfolios. Chef and Puppet have already established themselves, managing infrastructure at giants like Facebook, Google, and eBay. Ansible, especially considering its recent acquisition by Red Hat, is increasingly seen as a major player, too.

Besides resource provisioning and configuration management, Ansible can also orchestrate complex sequences of events like rolling upgrades and zero-downtime provisioning in a single- or multi-tier application environment. The power of Ansible is not limited to managing servers: it can also manage network switches, firewalls, and load balancers. Ansible has been designed to work seamlessly with cloud environments like AWS, VMware, and Microsoft Azure.

Ansible’s unique feature set:

  1. Based on an agent-less architecture (unlike Chef or Puppet).
  2. Accessed mostly through SSH (it also has local and paramiko modes).
  3. No custom security infrastructure is required.
  4. Configurations (playbooks, modules, etc.) are written in the easy-to-use YAML format.
  5. Shipped with more than 250 built-in modules.
  6. Full configuration management, orchestration, and deployment capability.
  7. Ansible interacts with its clients either through playbooks or a command-line tool.

Ansible Architecture


Ansible runs as a server on just about anything: even a humble PC or laptop. It has an inventory of hosts, modules, and playbooks that define various automation tasks. Individual modules are pushed to managed nodes over SSH and, once a result is returned, the modules are removed – which is why agent installations on the managed servers are unnecessary.

Installation

There are multiple ways to install Ansible. You can use yum or apt-get from a Linux repository, a git checkout, or pip. Here is a sample installation, run through yum:

# yum install ansible
===========================================================
Package                    Arch            Version                Repository                  Size
===========================================================
Installing:
ansible                    noarch          1.9.2-1.el6            epel                       1.7 M
Installing for dependencies:
PyYAML                     x86_64          3.10-3.1.el6           public_ol6_latest          157 k
libyaml                    x86_64          0.1.3-4.el6_6          public_ol6_latest           51 k
python-babel               noarch          0.9.4-5.1.el6          public_ol6_latest          1.4 M
python-crypto2.6           x86_64          2.6.1-2.el6            epel                       513 k
python-httplib2            noarch          0.7.7-1.el6            epel                        70 k
python-jinja2              x86_64          2.2.1-2.el6_5          public_ol6_latest          465 k
python-keyczar             noarch          0.71c-1.el6            epel                       219 k
python-pyasn1              noarch          0.0.12a-1.el6          public_ol6_latest           70 k
python-simplejson          x86_64          2.0.9-3.1.el6          public_ol6_latest          126 k
Transaction Summary
===========================================================
Install      10 Package(s)

As you will notice, this installed Ansible 1.9.2 on the server. The default host inventory file is located at /etc/ansible/hosts. You can configure your managed servers in this file. For example, a group called test-hosts might be configured as follows:

[test-hosts]
3.3.86.253
3.3.86.254

Ansible assumes you have SSH access from the Ansible server to the machines behind the above two IP addresses. To run a sample Ansible command, you can run the following:

[root@ansible opt]# ansible -m ping test-hosts --ask-pass -u "ansibleadmin"
SSH password:
3.3.86.254 | success >> {
"changed": false,
"ping": "pong"
}
3.3.86.253 | success >> {
"changed": false,
"ping": "pong"
}
[root@ansible opt]#

As you can see, Ansible is able to communicate with the listed servers without installing any agents. Assuming you have generated SSH keys and placed them in the required locations, everything works over SSH.

Ansible components

Inventory

The “inventory” is a configuration file where you define the host information. In the above /etc/ansible/hosts example, we declared two servers under test-hosts.
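For reference, a slightly richer inventory might group hosts and nest groups. This is only a sketch – the web-servers group and hostnames below are hypothetical:

```ini
# /etc/ansible/hosts
[test-hosts]
3.3.86.253
3.3.86.254

[web-servers]
web1.example.com

; a group of groups, addressable as "all-servers" in plays
[all-servers:children]
test-hosts
web-servers
```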

Playbooks

In most cases – especially in enterprise environments – you should use Ansible playbooks. A playbook is where you define how to apply policies, declare configurations, orchestrate steps, and launch tasks either synchronously or asynchronously on your servers. Each playbook is composed of one or more “plays”. Playbooks are normally maintained and managed in a version control system like Git. They are expressed in YAML (“YAML Ain’t Markup Language”).

Plays

Playbooks contain plays. Plays are essentially groups of tasks that are performed on defined hosts to enforce your desired configuration. Each play must specify a host or group of hosts. For example, using:

 - hosts: all

…we specify all hosts. Note that YAML files are very sensitive to whitespace, so be careful!

Tasks

Tasks are actions carried out by playbooks. One example of a task in an Apache playbook is:

- name: Install Apache httpd

A task definition can contain modules such as yum, git, service, and copy.
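For example, a hypothetical sequence of tasks using the yum, copy, and service modules might look like this (the file paths are illustrative):

```yaml
tasks:
  - name: Install Apache httpd
    yum: pkg=httpd state=installed

  - name: Deploy a configuration file
    copy: src=files/httpd.conf dest=/etc/httpd/conf/httpd.conf

  - name: Ensure httpd is running
    service: name=httpd state=started
```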

Roles

A role is the Ansible way of bundling automation content and making it reusable. Roles are organizational components that can be assigned to a set of hosts to organize tasks. Therefore, instead of creating a monolithic playbook, we can create multiple roles, with each role assigned to complete a unit of work. For example: a webserver role can be defined to install Apache and Varnish on a specified group of servers.
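A role follows a conventional directory layout. A hypothetical webserver role, for instance, might be organized like this:

```
webserver/
├── tasks/main.yml      # tasks the role performs
├── handlers/main.yml   # handlers notified by the tasks
├── templates/          # Jinja2 (.j2) templates
├── files/              # static files to copy to hosts
├── vars/main.yml       # role variables
└── defaults/main.yml   # default, easily overridden variables
```

A play then applies the role to a host group by listing it under the play's roles: keyword, instead of spelling out every task inline.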

Handlers

Handlers are similar to tasks except that a handler is executed only when it is called by an event. For example, a handler might start the httpd service after a task installs httpd. A handler is triggered by the notify directive. Important: the name given in the notify directive and the handler’s name must match exactly.
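A minimal sketch of the pattern (the file path is hypothetical) – the task notifies the handler by name, and the handler only fires when the task actually changes something:

```yaml
tasks:
  - name: Deploy httpd configuration
    copy: src=files/httpd.conf dest=/etc/httpd/conf/httpd.conf
    notify:
      - Restart httpd

handlers:
  - name: Restart httpd
    service: name=httpd state=restarted
```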

Templates

Template files are based on Python’s Jinja2 template engine and have a .j2 extension. You can, if you need to, place the contents of your index.html file into a template file. But the real power of these files comes when you use variables. You can use Ansible’s facts and even call custom variables in these template files.
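As an illustration (the http_port variable here is hypothetical), a templates/index.html.j2 file could mix static HTML with an Ansible fact and a custom variable:

```
<!-- templates/index.html.j2 -->
<html>
  <body>
    <h1>Welcome to {{ ansible_hostname }}</h1>
    <p>Serving on port {{ http_port }}</p>
  </body>
</html>
```

A task then renders it on the target host with the template module, e.g. template: src=index.html.j2 dest=/var/www/html/index.html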

Variables

As the name suggests, you can include custom-made variables in your playbooks. Variables can be defined in five different ways:

1. Variables defined in the play under the vars_files attribute:

vars_files:
- "/path/to/var/file"

2. Variables defined in a role’s vars file, e.g. <role>/vars/main.yml
3. Variables passed through the command line:

# ansible-playbook apache-install.yml -e "http_port=80"

4. Variables defined in the play under vars:

vars:
  http_port: 80

5. Variables defined in group_vars/ directory
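Tying these together, a minimal hypothetical play shows a vars-defined variable being used in a task:

```yaml
- hosts: all
  vars:
    http_port: 80
  tasks:
    - name: Show the configured port
      debug: msg="Apache will listen on port {{ http_port }}"
```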

Sample Playbook:

## Playbook to install and configure Apache httpd on servers
- hosts: all
  tasks:
    - name: Install Apache httpd
      yum: pkg=httpd state=installed
      notify:
        - Start httpd
  handlers:
    - name: Start httpd
      service: name=httpd state=started

You run the playbook from your Ansible server. The following command is run from within the directory where our apache-install.yml playbook is stored:

# ansible-playbook apache-install.yml --ask-pass -u "ansibleadmin"

Ansible and AWS: Provisioning and Installation

Let’s try to provision an AWS EC2 instance using both the Ansible ec2 module and a playbook. A full example is beyond the scope of this document; however, there’s plenty of great documentation available, including the official ec2 module documentation.

Step-1: Install python-boto on your Ansible host:

# yum install python-boto

Step-2: Install argparse (in case you need it):

# yum install python-argparse.noarch

Create the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables (either export them in a shell or place them in your ~/.bashrc file).
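For example (the values below are placeholders – substitute your own IAM access keys):

```shell
# Placeholder credentials -- replace with your own IAM access keys.
export AWS_ACCESS_KEY_ID='your-access-key'
export AWS_SECRET_ACCESS_KEY='your-secret-key'

# boto (used by the Ansible ec2 module) picks these up at runtime.
echo "Access key set: ${AWS_ACCESS_KEY_ID}"
```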

Step-3: Add a local inventory to the /etc/ansible/hosts file:

[local]
localhost

Step-4: Run the following command (if you are running in ad hoc mode):

# ansible localhost -m ec2 -a "image=ami-d44b4286 ec2_region=ap-southeast-1 instance_type=m3.medium count=1 keypair=ansible-key group=ansible-ws wait=yes" -c local

As I mentioned, you can also do the same thing through a playbook:

- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        key_name: ansible-key
        group: ansible-ws
        instance_type: m3.medium
        ec2_region: ap-southeast-1
        image: "ami-d44b4286"
        wait: true
        count: 1
        instance_tags:
          Name: Demo
      register: ec2
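The register keyword stores the module's return data in the ec2 variable. A common follow-up – sketched here as an assumption, not part of the original run – is to add the new instances to an in-memory host group and wait for SSH to become available before configuring them:

```yaml
    - name: Add new instances to a host group
      add_host: hostname={{ item.public_ip }} groupname=launched
      with_items: ec2.instances

    - name: Wait for SSH to come up
      wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.instances
```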

Conclusion

If you have stayed with me so far, you should now have a fair idea of how Ansible and AWS can work together, and especially how well Ansible integrates with Amazon’s EC2. Ansible is a great IT automation and orchestration tool for cloud environments, and with its simple, consistent syntax, it’s easy to get productive with either playbooks or the out-of-the-box modules.

Written by

Cloud Computing and Big Data professional with 10 years of experience in pre-sales, architecture, design, build, and troubleshooting with best engineering practices. Specialities: Cloud Computing – AWS, DevOps (Chef), Hadoop Ecosystem, Storm & Kafka, ELK Stack, NoSQL, Java, Spring, Hibernate, Web Services.
