The 5 Best Tools for AWS Deployment

While we build scalable, highly available, and fault tolerant systems on Amazon Web Services, it is important to be aware of AWS deployment tools that can handle system- and application-level deployments and ensure consistency, predictability, and integrity across multiple environments. This leads to continuous and more rapid deployment, lower failure and error rates, and faster recovery.
In this post, we will highlight the 5 best AWS deployment tools that offer solid integration with the Amazon cloud or are part of the AWS ecosystem.


Chef

Chef is one of the most popular configuration management and deployment tools, widely used across enterprises. It was launched in 2009, is developed in Ruby, and is licensed under the Apache open source license. Chef is available in three versions: Hosted Chef (a SaaS solution), Private Chef (enterprise Chef behind the firewall), and the open source version.

The Chef infrastructure comprises three components:
Master Server: The server acts as a hub that is available to every node. All Chef client nodes are registered with the server, which holds all the cookbooks, recipes, and policies. Clients communicate with the server to get the right configuration elements and apply them to their nodes. The Chef server supports all of the major Linux distributions.

Workstation: The workstation is the development machine on which configuration elements like cookbooks, recipes, and policies are defined. Configuration elements are synchronized with the chef-repo and uploaded to the server with the knife command (a tool for managing cookbooks, nodes, roles, and more). The workstation is supported on Linux, Windows, and Mac OS X.

Client Nodes: Nodes are the systems managed by the chef-client, which performs all the infrastructure automation. The chef-client is an agent that runs continuously on each node and interacts with the Chef server using its own public/private key pair. The chef-client fetches instructions from the Chef server and executes them on that node. The chef-client can be installed on all of the major operating systems.

To develop your own cookbooks, you will need a general understanding of Ruby. Chef is heavily used by large organizations like Facebook, Target, Bloomberg, GE Capital, and Airbnb. AWS OpsWorks internally relies on Chef recipes to install and manage packages, manage services, and deploy apps.
Today, most of the major cloud computing players provide an easy-to-use UI on which to build your IT infrastructure in the cloud. However, unlike provisioning an on-premise infrastructure, you may have to dynamically provision (or de-provision) dozens of virtual machine (VM) instances, a few instances of dynamic storage, and some SaaS-based services. In addition, software releases need to be pushed regularly (weekly, daily, or even hourly in some cases).

One way to go about it is to create a VM image for every change and launch a new VM instance to push it. However, this is laborious and error-prone, especially if different instances hold different application data. What about the storage? Databases? Network configuration? What about the architecture? As your use of cloud infrastructure for Dev/QA/Production environments grows, managing the entire infrastructure becomes an operational challenge. Operational tasks such as the ones listed below become a nightmare for a system admin:
• Creating instances
• Configuring instances with storage, services, firewall, software
• Monitoring and deleting instances
• Ensuring all instances in a layer (web/app) are in the same state.
This is when you would need a configuration management system that gives you the ability to deploy, update, and repair your entire application infrastructure using nothing but pre-defined, automated procedures. Ideally, you want to automatically provision your entire environment from bare-metal, all the way up to running business services completely from a pre-defined specification, including the network configuration.
Chef Configuration Management

Enter Chef.

Chef is an infrastructure automation framework that makes it easy to set up, configure, deploy, and manage servers and applications on any environment (physical/virtual/cloud).
With Chef, you describe your infrastructure as code (in "recipes") and use those recipes to set up the infrastructure.
Once automated, you hold a blueprint for your infrastructure, which enables you to build (or rebuild) automatically in minutes or hours, not weeks or months. Better still, in the event of a disaster (network, hardware, or geographical), Chef makes the recovery process easier.
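To make the "infrastructure as code" idea concrete, here is a minimal sketch of what a Chef recipe might look like (the package, template, and service names are illustrative, and the template file is assumed to exist in a hypothetical cookbook):

```ruby
# Sketch of a Chef recipe: install Apache, manage its config file,
# and keep the service enabled and running.
package 'httpd'

template '/etc/httpd/conf/httpd.conf' do
  source 'httpd.conf.erb'                # assumed to ship with this cookbook
  owner 'root'
  group 'root'
  mode '0644'
  notifies :restart, 'service[httpd]'    # restart Apache when the file changes
end

service 'httpd' do
  action [:enable, :start]
end
```

Running chef-client on a node with this recipe in its run_list converges the node to that state, no matter what state it started in.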
Chef has become one of the most widely used tools for configuration management. Apart from Chef, other tools that support cloud environments include Puppet, Ansible, and Salt. AWS OpsWorks is an application management service that makes it easy for DevOps teams to model and manage an entire application, from load balancers to databases, and it supports Chef.
With Chef you will be able to:
• Manage servers by writing recipes.
• Integrate tightly with applications, databases, and more.
• Configure applications that require knowledge about your entire infrastructure.
• Create perfect clones of QA environments, pre-production environments, partner preview environments and more.
Before we get started working with Chef, let’s take a look at some of its most frequently used terms:

• recipe: A configuration element within an organization. Recipes are used to install and configure software and to deploy applications.
• cookbook: A fundamental unit of configuration and policy distribution. Each cookbook defines a scenario, such as everything needed to install and configure MySQL.
• knife: A command-line tool that provides an interface between a local chef-repo and the Chef server. Knife helps provision resources, manage recipes/cookbooks, nodes, and more.
• chef-repo: Located on the workstation; it contains cookbooks, recipes, and roles. Knife is used to upload data from the chef-repo to the Chef server.
• workstation: A computer configured to run Knife, synchronize with the chef-repo, and interact with a single Chef server. The workstation is the location from which most users do most of their work.
• node: Any physical, virtual, or cloud machine that is configured to be maintained by a chef-client.
• run_list: An ordered list of roles and/or recipes that are run in an exact order.
• chef-client: An agent that runs locally on every node.
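To show how these pieces fit together, a node's run_list might look like the following JSON fragment (the role and recipe names are hypothetical):

```json
{
  "run_list": [
    "role[webserver]",
    "recipe[apache]",
    "recipe[mysql::client]"
  ]
}
```

When the chef-client runs on that node, it expands the role into its own run_list and applies every recipe in this exact order.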

There are three types of Chef server:

1. Hosted Chef: Hosted Enterprise Chef is a version of the Chef server that is hosted by Chef. It is a cloud-based, scalable, and highly available service with resource-based access control. It makes life easier because you do not have to run and manage an additional server.
2. Enterprise Chef: Similar to Hosted Chef, but the Chef server is located on-premise.
3. Open Source Chef: A free version of the Chef server.
In the next post, we will get started with Open source Chef on Amazon Web Services.

Puppet

Along with Chef, Puppet is another deployment and configuration management tool widely used in organizations of all sizes. Puppet Labs released the initial version in 2005. Puppet was initially licensed as free software under the GPL until version 2.7, after which it switched to Apache 2.0.
Puppet comes in two variants: Puppet Enterprise (free for up to 10 nodes) and Puppet Open Source (completely free). Like Chef, Puppet is written in Ruby.
At an abstraction level, Puppet is similar to Chef. It works on a server-client model: Puppet agents are installed on managed nodes, and centralized administration happens on the Puppet master. Agents contact the master periodically (say, every 15 minutes) to fetch the latest configuration, execute it on the node, and send the results back to the master.
Puppet modules are used to configure your Puppet clients with the relevant resources and bring them to a desired state. Modules are written either in Puppet's own declarative, Ruby-like language or in Ruby itself, and are stored on the Puppet master. Each module has its own purpose, such as configuring NTP, MySQL, or Tomcat.
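For comparison with Chef, here is a minimal sketch of a Puppet manifest in Puppet's declarative language (the NTP resources are illustrative, and the source file is assumed to live in a hypothetical ntp module):

```puppet
# Sketch of a Puppet manifest: install ntp, manage its config file,
# and keep the service running.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/ntp/ntp.conf',  # assumed module file
  require => Package['ntp'],
  notify  => Service['ntp'],                    # restart ntp on change
}

service { 'ntp':
  ensure => running,
  enable => true,
}
```

Note that, as with a Chef recipe, this describes the desired end state rather than the steps to reach it; Puppet works out the ordering from the require/notify relationships.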
The Puppet master is supported only on Linux distributions, while Puppet clients can run on Linux, Windows, and Mac OS X.

Ansible

Released in 2012, Ansible is one of the youngest and fastest-growing open source tools for deployment, configuration management, and orchestration. Unlike Chef and Puppet, Ansible has an agentless architecture: it does not require any client package on managed nodes beyond a regular Python installation. With Ansible, client nodes are managed over the SSH protocol. This agentless architecture makes the upgrade process simple and easy to implement.
Ansible is available in two versions: Ansible Tower (the paid version) and Ansible Open Source (the free version). Ansible is written in Python and is licensed under the General Public License (GPL). One of the advantages of using Ansible is that its configuration files, known as playbooks, use YAML syntax. This is a nice choice, given that YAML is easy to read and write and avoids the unneeded complexity of a full programming language.
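To give a feel for the YAML playbook syntax, here is a minimal sketch (the host group and package names are illustrative):

```yaml
# Sketch of an Ansible playbook: install nginx on the "web" host group
# and make sure the service is started and enabled.
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled
      service:
        name: nginx
        state: started
        enabled: yes
```

Running this with ansible-playbook applies the tasks over SSH to every host in the web group, with no agent installed on the targets.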
With Ansible, there are two types of machines: the control machine and managed nodes. The control machine has Ansible installed; it supports most Linux distributions and requires Python 2.6+. Managed nodes require Python 2.4+ and can run Linux, Windows, or Mac OS X.
You can refer to Cloud Academy Ansible courses to learn more about the tool.

AWS Elastic Beanstalk

If you are looking for the fastest, simplest, and most maintenance-free way to deploy your application on AWS, AWS Elastic Beanstalk is definitely worth considering. Elastic Beanstalk is a free service: you pay only for the resources provisioned by the Beanstalk environment.
Elastic Beanstalk supports applications written in many different languages, including PHP, .NET, Ruby, Java, Node.js, and Python, offers native Docker support, and works with various web and application servers such as Apache, Tomcat, IIS, and Nginx. Its features include:
• Quick deployment: Uploading application files to Beanstalk initiates the deployment process on your EC2 instances. In case of a failure, you can roll back to the previous version.
• Integration with other AWS services such as Auto Scaling, Elastic Load Balancing, SNS, CloudWatch, and RDS.
• Application health monitoring using CloudWatch, with SNS notifications in case of any issues.
• Easy access to application and system logs, even without logging in to instances.
• Customized software and applications by passing configuration files to the Beanstalk environment. These configuration files are written in YAML or JSON format.
As a PaaS service managed by AWS, it frees the organization from the burden of deployment and configuration management.
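As a sketch of the configuration files mentioned above, an .ebextensions file (for example, .ebextensions/options.config, a hypothetical file name) might look like this:

```yaml
# Sketch of an Elastic Beanstalk .ebextensions configuration file:
# install a package via yum and set an application environment variable.
packages:
  yum:
    git: []

option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_ENV: production    # illustrative variable name and value
```

Beanstalk applies any .config files found in the .ebextensions directory of your application bundle during environment provisioning.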

AWS CodeDeploy

If you are looking for a simple code deployment service, you should definitely look into AWS CodeDeploy, a new service launched during AWS re:Invent 2014 in Las Vegas. AWS CodeDeploy provides several features that simplify the deployment process:
• Minimized downtime: CodeDeploy tracks application health and performs rolling updates across deployment targets. You can redeploy the previous revision in case of any failure.
• Automatic deployment: CodeDeploy enables deployment across different environments and thousands of deployment targets.
• Integration with existing third-party tools: CodeDeploy works with existing configuration management tools (such as Chef, Puppet, and Ansible), version control tools (GitHub, AWS CodeCommit, etc.), and continuous integration tools (Bamboo, Jenkins, CircleCI, etc.).
• Centralized management: You can execute and monitor the deployment process from a single place, and CodeDeploy also provides reporting for your deployments.
• Integration with other AWS services: CodeDeploy works with AWS CloudFormation, AWS OpsWorks, AWS Elastic Beanstalk, Auto Scaling, etc.

Apart from deploying code, AWS CodeDeploy can also run scripts and set up permissions during lifecycle events such as ApplicationStop, BeforeInstall, Install, AfterInstall, and ApplicationStart. These lifecycle hooks are defined in a YAML-formatted AppSpec (application specification) file, similar in spirit to an Ansible playbook.
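A minimal appspec.yml for an EC2 deployment might look like the following sketch (the destination path and script names are hypothetical):

```yaml
# Sketch of a CodeDeploy AppSpec file: copy the revision into place and
# run scripts at the BeforeInstall and ApplicationStart lifecycle events.
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp          # hypothetical install path
hooks:
  BeforeInstall:
    - location: scripts/install_deps.sh  # hypothetical script in the revision
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh  # hypothetical script
      timeout: 300
      runas: root
```

The appspec.yml file lives at the root of the application revision, and the CodeDeploy agent on each instance reads it to decide what to copy where and which scripts to run at each event.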

To take advantage of AWS CodeDeploy, you need to install the CodeDeploy agent on your Linux and Windows instances. Tested agents are available for Amazon Linux, Ubuntu, and Windows; for other operating systems, an open source version of the CodeDeploy agent is available. Currently, the service is available only in the AWS N. Virginia and Oregon regions, at no additional charge.

 


Written by

Cloud Academy Team

