

AWS Puppet for streamlined deployments

If you’re interested in automating and simplifying control over your application deployments, then Puppet – in any of its iterations – is probably a good place to start. And an AWS Puppet project can present a particularly good match.
Don’t think of this as an all-in quick start guide. The complexity of the kinds of network infrastructure that AWS Puppet deployments are designed to manage makes the word “quick” pretty much incompatible with the word “guide.” However, I believe this post should be useful as an orientation, introducing you to the kinds of things Puppet can do and some of the ways to do them. By the way, even though Puppet is obviously not an AWS product (it’s actually the brainchild of Puppet Labs), I’m going to use the phrase “AWS Puppet” in this post as a convenient way to describe the ways they can work together.

How Puppet works

Let’s say you’re running ten or fifteen Ubuntu web servers within an AWS VPC. The odds are that you will need to make configuration and content changes to each of these machines fairly often, but the process – especially as the number of servers grows – can be tedious.
Using Puppet, you can set up each server as an Agent that will, every thirty minutes or so, query a file called a manifest that lives on a separate AWS Puppet Master machine. If the manifest, as it relates to a specific Agent, has changed since the last query, its current instructions will be executed on that Agent.
This way, you can precisely and automatically control the ongoing configuration of any number of virtual (or physical) machines – running Linux/Unix or Windows – from a single file. It couldn’t be simpler. But that doesn’t mean it can’t get complicated.
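The polling interval itself is configurable. As a sketch – assuming the Ubuntu package layout used later in this post – the interval can be adjusted in the agent section of /etc/puppet/puppet.conf:

```ini
[agent]
# How often the agent checks in with the master (the default is 30 minutes)
runinterval = 30m
```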

Packages – AWS Puppet Master

Once you’ve created the EC2 instance that will serve as your Puppet master, you’ll have to install a Puppet package. Even if you’re limited to the open source version, there is still quite a selection of packages to choose from.
We’ll assume that we’re working with an AWS Ubuntu 14.04 AMI. For simplicity, I’ll use the puppetmaster-passenger package that’s found in the apt-get repositories. This package comes out of the box with everything you’ll need for connectivity, including Apache.

sudo apt-get update
sudo apt-get install puppetmaster-passenger
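Once the install finishes, you can quickly confirm that everything is in place. These checks assume the Ubuntu 14.04 package described above, where Passenger runs the master under Apache:

```shell
# Confirm the Puppet version that was installed
puppet --version

# The passenger package runs the master under Apache; check that it's up
sudo service apache2 status

# The master listens on TCP port 8140 by default
sudo netstat -tlnp | grep 8140
```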


Puppet Agent

You’ll also have to install the Puppet Agent on each server – or Node – you’d like to be automatically controlled by your AWS Puppet Master. Once again, there is a choice of packages, but we’ll go with the apt-get repository:

sudo apt-get update
sudo apt-get install puppet

You’ll have to enable the Puppet Agent by editing its defaults file:

sudo nano /etc/default/puppet

…so that the START line looks like this:

START=yes
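With START enabled, you can launch the agent. Before an agent and master can talk, the master must also sign the agent’s SSL certificate. The certname web-server1.example.com below is just a placeholder – substitute your node’s actual name:

```shell
# On the agent: start the service (it will submit a certificate
# signing request the first time it contacts the master)
sudo service puppet start

# On the master: list pending certificate requests...
sudo puppet cert list

# ...and sign the agent's request
sudo puppet cert sign web-server1.example.com
```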

Configuration

We’re not even going to begin discussing many of the basic configuration elements you would need for a production deployment. Therefore, take it as given that you’ll have to properly configure your DNS, NTP, and SSL certificates. Here’s a great resource that addresses these issues in some detail.
You could, by the way, set up your DNS services using an old and reliable open source package like Unbound but, since we are working with AWS infrastructure, why ignore the obvious integrated solution: Route53?
You will probably also want to lock the release version of each installed AWS Puppet package – both on the Master and on each Agent – to prevent unexpected version updates…which can have unpredictable results.
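On Ubuntu, one simple way to pin the installed versions is apt-mark; the package names here assume the apt packages installed earlier:

```shell
# On the master
sudo apt-mark hold puppetmaster-passenger

# On each agent
sudo apt-mark hold puppet
```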

AWS Puppet Files

The file that matters the most for Puppet (on an Ubuntu installation) is /etc/puppet/puppet.conf. Here’s an example of what it might look like:

[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
templatedir=$confdir/templates
prerun_command=/etc/puppet/etckeeper-commit-pre
postrun_command=/etc/puppet/etckeeper-commit-post
[master]
# These are needed when the puppetmaster is run by passenger
# and can safely be removed if webrick is used.
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY

This config file tells Puppet (and you) where to find key system resources, including Puppet’s log files and SSL certificates.
The files that do all the heavy lifting for Puppet are called Manifests, and use a .pp extension. Here’s a simple example:

file {'/tmp/local-ip':
  ensure  => present,
  mode    => '0644',
  content => "My Public IP Address is: ${ipaddress_eth0}.\n",
}

This manifest will create a file named local-ip in the /tmp directory, set its permissions to 0644 (read/write for the owner, read-only for everyone else), and populate it with a line of text containing the IP address of the eth0 interface – drawn from the ipaddress_eth0 fact – on whichever machine the manifest is applied.

Common Puppet Commands

Apply a manifest file:

puppet apply web-servers1-2.pp

Create a new group:

sudo puppet resource group puppet ensure=present

Create a new user:

sudo puppet resource user puppet ensure=present gid=puppet shell='/sbin/nologin'

Specify a single Node (or group of Nodes)

You can apply particular configuration details to specific Nodes within a general manifest file using the “node” declaration:

node 'web-server1' {
  include apache
  class { 'ntp':
    enable => false,
  }
  apache::vhost { 'example.com':
    port    => '80',
    docroot => '/var/www/example',
    options => 'Indexes MultiViews',
  }
}

(The above example also shows how to deploy pre-baked modules provided by either the Puppet team or the user community.)
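For the example above to work, the referenced modules must be installed on the master first. The Puppet Labs apache and ntp modules, for instance, can be pulled from the Puppet Forge:

```shell
sudo puppet module install puppetlabs-apache
sudo puppet module install puppetlabs-ntp
```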

Launching manifests

Finally, you can create and run manifests directly from the command line. You might, for instance, want to create a file with local IP information (using our example from above). Assuming that we named that file local-info.pp, and saved it to the AWS Puppet Manifests directory, you could run it using:

sudo puppet apply /etc/puppet/manifests/local-info.pp
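For manifests served by the master, you also don’t have to wait for the next scheduled poll – you can trigger a run on any agent manually:

```shell
# Perform a single on-demand agent run with verbose output, then exit
sudo puppet agent --test
```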

 

Written by

A Linux system administrator with twenty years' experience as a high school teacher, David has been around the industry long enough to have witnessed decades of technology trend predictions; most of them turning out to be dead wrong.
