In a previous blogpost, we looked at the five best tools for AWS deployment. One of the tools we covered was Ansible. In this blogpost, we will see how to install Ansible and learn the basics to help you get started.
Get Started with Ansible: Introduction
Ansible is one of the youngest and fastest-growing configuration management, deployment, and orchestration engines. Released in 2012, it is already one of the most popular projects on GitHub.
Some of the biggest pros of using Ansible are its agent-less architecture, its use of the SSH protocol for communication, and its use of YAML syntax for configuration files. It only requires Python to be installed on client nodes. The agent-less architecture removes the burden of upgrading an agent package at each new release, while the SSH protocol keeps communication between server and clients secure. Furthermore, YAML is very easy to read and understand, making Ansible a lot simpler to use.
Ansible is available in two versions: Ansible Tower (paid, free for up to 10 nodes) and Ansible Open Source (free).
Basic System Requirements
- Control Machine – This acts as the Ansible server, where all the playbooks and configuration files are located. The control machine can run on most common Linux distributions, OS X, or any of the BSDs, and requires Python 2.6 or greater. Windows is not supported as a control machine.
- Managed Node – These are the nodes managed by the control machine. In a server-client architecture, managed nodes are the clients where configuration or applications are deployed. Any operating system (Linux, Windows, or Mac) with Python 2.4 or greater installed can act as a managed node.
- Inventory – Ansible holds the information about the nodes and groups of nodes to be managed in a simple INI-format inventory file. The default Ansible host inventory file is located at /etc/ansible/hosts. A sample inventory file looks like this:
staging.example.com

[webservers]
prod-web01.example.com
prod-web02.example.com

[databaseservers]
prod-db01.example.com
prod-db02.example.com
Apart from information about nodes and groups of nodes, the inventory file can also hold host-specific variables (e.g. SSH ports, DB parameters), group variables (e.g. system-level parameters or the default interpreter), and groups of groups.
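As a sketch, host variables, group variables, and a group of groups can all live in the same INI inventory (the hostnames and values below are illustrative, not from the demo environment):

```ini
[webservers]
prod-web01.example.com ansible_ssh_port=2222
prod-web02.example.com

[databaseservers]
prod-db01.example.com

; group variables applied to every host in [webservers]
[webservers:vars]
ansible_ssh_user=ec2-user

; "production" is a group of groups, built with the :children suffix
[production:children]
webservers
databaseservers
```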
You can also pull information about a dynamic inventory from an external inventory system. Plugins are available to fetch inventory from your cloud provider (AWS, GCE, Rackspace, OpenStack, etc.), LDAP, or Cobbler.
- Modules – Ansible modules are independent pieces of code which can be used to alter and manage your infrastructure. These modules can be executed independently (in an ad-hoc way) or with Ansible playbooks (described below). At a beginner level, modules can help you install, start, stop, and restart services, execute commands, copy files, and so on, on your hosts. There are 250+ modules available, helping you perform a wide range of tasks on your infrastructure. As per the Ansible documentation, modules are idempotent, that is: they will only execute if a state change is required.
Example: the service module is the easiest way to ensure Apache is running on your web servers:
ansible webservers -m service -a "name=httpd state=started"
In this case, if the service is already running, Ansible won't restart it. This is what module idempotence means.
- Playbooks – Playbooks are the heart of Ansible. As mentioned above, modules can run in an ad-hoc way, but when you want to orchestrate your configuration and execute a series of complex commands in order, playbooks come into the picture. Playbooks are written in YAML. In a playbook, you can include multiple modules and perform tasks synchronously or asynchronously.
A playbook is broken into multiple parts:
- hosts: where you want to deploy your configuration
- remote_user: the user to execute the steps as
- tasks: execute modules with specific variables
- handlers: trigger a specific execution only once, even if notified by multiple tasks or system state changes. Handlers depend on the notify block under tasks.
A sample playbook is here below:
---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: pkg=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running
    service: name=httpd state=started
  handlers:
    - name: restart apache
      service: name=httpd state=restarted
We will dig deeper into Ansible playbooks in the next posts of this series.
Remote Connection Methods
As discussed above, the beauty of Ansible is that it is agent-less and relies on the SSH protocol to communicate with hosts. However, there are multiple ways you can connect to hosts or execute Ansible playbooks:
- OpenSSH – This is the recommended way to connect to hosts. If your control machine has a recent version of OpenSSH, it supports ControlPersist, Kerberos, and other options in the SSH config file. It has been the default since Ansible 1.3.
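For reference, ControlPersist is an SSH client option that keeps a master connection open for reuse, which speeds up repeated Ansible runs. A minimal ~/.ssh/config sketch (the values shown are illustrative, not requirements):

```ini
Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```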
- Paramiko – Paramiko is a Python implementation of the SSH protocol. It is used on older EL6 operating systems, where the installed OpenSSH does not support the ControlPersist feature.
- Local – Local mode is used if you wish to execute playbooks on your control machine itself. For example, if you want to make an API call to AWS or Rackspace, there is no need to run the commands on a remote host: you can run ansible-playbook in local mode.
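As a sketch, a playbook targeting the control machine itself sets the connection to local (the task shown is purely illustrative):

```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: confirm we are running on the control machine
      command: uname -a
```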
Another feature that makes Ansible very easy to use is the variety of installation procedures:
- OS Package Manager – This is the recommended way to install Ansible on your control machine. Ansible is packaged and available in the repositories of most major distributions, so check your distro's documentation to find out how to install it. If you are using RHEL, CentOS, or Amazon Linux, you will need to enable EPEL first:
- Amazon Linux – Enable the EPEL repository: go to your /etc/yum.repos.d folder and edit epel.repo (change enabled=0 to enabled=1), then install Ansible:
# yum install ansible
- RHEL/CentOS – Install the EPEL repository, enable the RHEL Optional repository (RHEL only), and then install Ansible:
# yum install ansible
- PIP – You can also install Ansible using pip, the Python package manager. Before installing Ansible with pip, ensure pip is installed on your system; if not, install it using your distro's package manager or easy_install. Once done, you can use pip to install Ansible:
# pip install ansible
- GIT – You can also clone the Ansible repository and install it from source. Before installing Ansible from Git, make sure pip and the additional Python dependencies are installed. Then, install Ansible from source:
# git clone git://github.com/ansible/ansible.git --recursive
# cd ./ansible
# source ./hacking/env-setup
To build the inventory, put your managed nodes' information in your inventory file. For demonstration purposes, we launched two fresh Amazon Linux EC2 instances and put their private DNS names in the inventory file (/etc/ansible/hosts).
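The resulting /etc/ansible/hosts entry would look something like the following (the private DNS names below are hypothetical placeholders, not the instances from the demo):

```ini
[webservers]
ip-10-0-1-10.ec2.internal
ip-10-0-1-11.ec2.internal
```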
Using Ansible in Ad-Hoc Mode
As discussed above, Ansible can be used in ad-hoc mode or playbook mode. For this blogpost, we will demonstrate using Ansible in ad-hoc mode to ping the web servers from the inventory section, install Apache on them, and start the Apache service. To connect to the hosts, you should use ssh-agent.
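Loading your key into ssh-agent can be done along these lines (the .pem path is a hypothetical placeholder; substitute your own EC2 key pair):

```shell
# Start an ssh-agent for this session and export its environment variables
eval "$(ssh-agent -s)"

# Add the private key used for the EC2 instances, if it exists
# (the path is a placeholder, not a real key from this post)
[ -f ~/.ssh/my-ec2-key.pem ] && ssh-add ~/.ssh/my-ec2-key.pem || true
```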
To ping the webservers group:
# ansible webservers -m ping -u ec2-user
To install Apache on the webservers group:
# ansible webservers -m yum -u ec2-user --sudo -a "name=httpd state=present"
Now start the apache service:
# ansible webservers -m service -u ec2-user --sudo -a "name=httpd state=started"
That's it: Apache is now installed and running on the webservers group defined in the inventory file.
In our next blogpost, we will have a close look at Ansible playbooks.