When we build scalable, highly available, and fault-tolerant systems on Amazon Web Services, it is important to be aware of the AWS deployment tools that can handle system- and application-level deployments and ensure consistency, predictability, and integrity across multiple environments. This leads to continuous and more rapid deployment, lower failure and error rates, and faster recovery.
In this post, we will highlight the 5 best AWS deployment tools that offer solid integration with the Amazon cloud or are part of the AWS ecosystem.
The 5 Best Tools for AWS Deployment
Chef
Chef is one of the most popular configuration management and deployment tools, widely used across enterprises. It was launched in 2009, is developed in Ruby, and is licensed under the Apache open source license. Chef is available in three versions: Hosted Chef (a SaaS solution), Private Chef (enterprise Chef behind your firewall), and the open source version.
The Chef infrastructure comprises three components:
• Chef Server: The server acts as a hub that is available to every node. All chef-client nodes are registered with the server, which holds all the cookbooks, recipes, and policies. Clients communicate with the server to get the right configuration elements and apply them to their nodes. The Chef server supports all of the major Linux distributions.
• Workstation: The workstation is the development machine from which configuration elements like cookbooks, recipes, and policies are defined. Configuration elements are synchronized with the chef-repo and uploaded to the server with the knife command (a tool for managing cookbooks, nodes, roles, etc.). Workstation is supported on Linux, Windows, and Mac OS X.
• Client Nodes: Nodes are the systems that are managed by the chef-client, which performs all the infrastructure automation. The chef-client is an agent that continuously runs on the nodes and interacts with chef-server using its own combination of public-private key pairs. Chef-clients fetch the instructions from chef-server and execute them on that node. Chef-clients can be installed on all of the major operating systems.
To develop your own cookbooks, you will need a general understanding of Ruby. Chef is heavily used by large organizations like Facebook, Target, Bloomberg, GE Capital, and Airbnb. AWS OpsWorks internally relies on Chef recipes to install and manage packages, manage services, and deploy apps.
Today, most of the major cloud computing players provide an easy-to-use UI on which to build your IT infrastructure in the cloud. However, unlike provisioning an on-premise infrastructure, you may have to dynamically provision (or de-provision) dozens of virtual machine (VM) instances, a few instances of dynamic storage, and some SaaS-based services. In addition, software releases need to be pushed regularly (weekly, daily, or even hourly in some cases).
One way to go about it is to create a VM image for every change and launch a new VM instance from it. However, this is laborious and prone to errors, especially if different instances have different application data. What about the storage? Databases? Network configuration? What about the architecture? As your usage of cloud infrastructure for Dev/QA/Production environments grows, managing the entire infrastructure becomes an operational challenge. Operational tasks such as the ones listed below become a nightmare for a system admin:
• Creating instances
• Configuring instances with storage, services, firewall, software
• Monitoring and deleting instances
• Ensuring all instances in a layer (web/app) are in the same state
This is when you would need a configuration management system that gives you the ability to deploy, update, and repair your entire application infrastructure using nothing but pre-defined, automated procedures. Ideally, you want to automatically provision your entire environment from bare-metal, all the way up to running business services completely from a pre-defined specification, including the network configuration.
Chef is an infrastructure automation framework that makes it easy to set up, configure, deploy, and manage servers and applications on any environment (physical/virtual/cloud).
With Chef, you can express your infrastructure as code (in units called “recipes”) and use those recipes to set up the infrastructure.
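To make this concrete, here is a minimal, hypothetical recipe sketch that installs and runs Apache. The package and service name (`httpd`) assume a Red Hat-style system, and the template file is illustrative:

```ruby
# Hypothetical Chef recipe: install Apache and ensure it runs at boot.
package 'httpd' do
  action :install
end

service 'httpd' do
  action [:enable, :start]
end

# Render the default site from a template shipped in this cookbook.
template '/var/www/html/index.html' do
  source 'index.html.erb'
  mode '0644'
end
```

A recipe like this lives in a cookbook, is uploaded to the Chef server, and is applied by the chef-client on every node whose run_list includes it.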
Once automated, you hold a blueprint for your infrastructure, which enables you to build (or rebuild) automatically in minutes or hours, not weeks or months. Better still, in the event of a disaster (network, hardware, or geographical), Chef makes the recovery process easier.
Chef has become one of the most widely used tools for configuration management. Apart from Chef, other tools that support cloud environments are Puppet, Ansible, and Salt. AWS OpsWorks is an application management service that makes it easy for DevOps teams to model and manage an entire application, from load balancers to databases, and it supports Chef.
With Chef you will be able to:
• Manage servers by writing recipes.
• Integrate tightly with applications, databases, and more.
• Configure applications that require knowledge about your entire infrastructure.
• Create perfect clones of QA environments, pre-production environments, partner preview environments, and more.
Before we get started working with Chef, let’s take a look at some of its most frequently used terms:
|recipe||A configuration element within an organization. Recipes are used to install and configure software and deploy applications.|
|cookbook||A fundamental unit of configuration and policy distribution. Each cookbook defines a scenario, such as everything needed to install and configure MySQL.|
|knife||Knife is a command-line tool that provides an interface between a local chef-repo and the Chef server. Knife helps you provision resources and manage cookbooks, recipes, nodes, and more.|
|chef-repo||Chef-repo is located on the workstation and contains cookbooks, recipes, roles. Knife is used to upload data to the chef server from the chef-repo.|
|workstation||A workstation is a computer that is configured to run Knife, to synchronize with the chef-repo, and interact with a single server. The workstation is the location from which most users will do most of their work.|
|node||A node is any physical, virtual, or cloud machine that is configured to be maintained by a chef-client.|
|run_list||A run_list is an ordered list of roles and/or recipes that are run in an exact order.|
|chef-client||A chef-client is an agent that runs locally on every node.|
There are three types of Chef servers:
1. Hosted Chef: Hosted Enterprise Chef is a version of the Chef server that is hosted by Chef. It is a cloud-based, scalable, and highly available service with resource-based access control. It also makes life easier: you do not have to run and manage an additional server yourself.
2. Enterprise Chef: Similar to Hosted Chef, but the Chef server is located on premises.
3. Open Source Chef: A free, open source version of the Chef server.
In the next post, we will get started with Open source Chef on Amazon Web Services.
Puppet
Along with Chef, Puppet is another deployment and configuration management tool widely used in organizations of all sizes. The initial version from Puppet Labs was released in 2005. Puppet was initially distributed as free software under the GPL license until version 2.7, after which it switched to the Apache 2.0 license.
Puppet comes in two variants: Puppet Enterprise (free for up to 10 nodes) and Puppet Open Source (completely free). Like Chef, Puppet is written in Ruby.
At an abstraction level, Puppet is similar to Chef. It works on a server-client model: Puppet agents are installed on the managed nodes, and centralized administration happens on the Puppet Master/Server. The agents contact the Puppet Master periodically (e.g., every 15 minutes) to fetch the latest configuration. Once fetched, the configuration is applied on the Puppet clients and the results are reported back to the Puppet Master.
Puppet modules are used to bring your Puppet clients to a desired state with the relevant resources. These modules are written either in Puppet's own Ruby-based declarative language or in Ruby itself, and are stored on the Puppet Master/Server. Each Puppet module has its own purpose, such as configuring NTP, MySQL, or Tomcat.
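As a rough illustration of Puppet's declarative language, the following hypothetical manifest sketch keeps the NTP service installed, configured, and running (package, file, and service names are illustrative and vary by distribution):

```puppet
# Hypothetical Puppet manifest: keep NTP installed, configured, and running.
class ntp {
  package { 'ntp':
    ensure => installed,
  }

  # Ship the config file from this module; restart the service on change.
  file { '/etc/ntp.conf':
    ensure  => file,
    source  => 'puppet:///modules/ntp/ntp.conf',
    require => Package['ntp'],
    notify  => Service['ntp'],
  }

  service { 'ntp':
    ensure => running,
    enable => true,
  }
}
```

Note that the manifest describes the desired end state rather than the steps to reach it; the Puppet agent works out what to change on each run.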
Puppet Master is only supported on Linux distributions, while Puppet clients can be run on Linux, Windows, and Mac OS X.
Ansible
Released in 2012, Ansible is one of the youngest and fastest-growing open source tools for deployment, configuration management, and orchestration. Unlike Chef and Puppet, Ansible relies on an agentless architecture: it does not require any client package on the managed nodes apart from a standard Python installation. With Ansible, client nodes are managed over the SSH protocol. Ansible's agentless architecture makes the upgrade process simple and easy to implement.
Ansible is available in two versions: Ansible Tower (the paid version) and Ansible Open Source (the free version). Ansible is written in Python and is licensed under the GNU General Public License (GPL). One of the advantages of using Ansible is that its configuration files, known as playbooks, use YAML syntax. This is a very nice choice: YAML is easy to read and write, and it avoids the complexity of a full programming language.
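A minimal, hypothetical playbook sketch gives a feel for the YAML syntax. It installs and starts Nginx on the hosts in an assumed "web" inventory group (the group name and the use of the `yum` module are illustrative assumptions):

```yaml
# Hypothetical Ansible playbook: install and start Nginx on the
# hosts in the "web" inventory group, over plain SSH.
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      yum:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes
```

You would run this with `ansible-playbook` against an inventory file; no agent has to be installed on the web hosts beforehand.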
With Ansible, there are two types of nodes: the Control Machine and Managed Nodes. The Control Machine has Ansible installed; it supports most Linux distributions and requires Python 2.6+. Managed Nodes require Python 2.4+ and can run Linux, Windows, or Mac OS X.
You can refer to Cloud Academy Ansible courses to learn more about the tool.
AWS Elastic Beanstalk
If you are looking for the fastest, simplest, and most maintenance-free way to deploy your application on AWS, AWS Elastic Beanstalk is definitely worth considering. Elastic Beanstalk is a free service: you only pay for the resources provisioned by the Beanstalk environment.
AWS Elastic Beanstalk allows deployment of applications written in many different languages, including PHP, .NET, Ruby, Java, Node.js, and Python, offers native Docker support, and works with various web and application servers such as Apache, Tomcat, IIS, and Nginx. Its features include:
• Quick deployment: Uploading your application files to Beanstalk initiates the deployment process on your EC2 instances. In case of a failure, you can roll back to the previous version.
• Integration with other AWS services like Autoscaling, Elastic Load Balancer, SNS, CloudWatch, RDS, etc.
• Application health monitoring using CloudWatch and SNS notifications in case of any issues.
• Easy access to application and system logs, even without logging into instances.
• Customization of software and applications by passing configuration files to the Elastic Beanstalk environment. These configuration files are written in YAML or JSON format.
As a PaaS service managed by AWS, it frees the organization from the burden of deployment and configuration management.
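As an example of such customization, here is a hypothetical `.ebextensions` configuration file sketch. The file name, package, and environment property are illustrative assumptions; Beanstalk applies any `.config` files found in the application bundle's `.ebextensions` directory:

```yaml
# Hypothetical .ebextensions/01-app.config: install a package and set an
# environment property when the environment is created or updated.
packages:
  yum:
    git: []

option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: APP_ENV
    value: production
```

Because these files ship with the application source bundle, environment customization is versioned alongside the code itself.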
AWS CodeDeploy
If you are looking for a simple code deployment service, you should definitely look into AWS CodeDeploy, a new service launched a few weeks ago at AWS re:Invent 2014 in Las Vegas. AWS CodeDeploy provides several features that simplify the deployment process:
• Minimized downtime: CodeDeploy tracks application health and performs rolling updates across deployment targets. You can redeploy the previous revision in case of a failure.
• Automated deployment: enables deployment across different environments and to thousands of deployment targets.
• Integration with existing third-party tools: works with existing configuration management tools (Chef, Puppet, Ansible), version control tools (GitHub, AWS CodeCommit, etc.), and continuous integration tools (Bamboo, Jenkins, CircleCI, etc.).
• Centralized management: you can execute and monitor the deployment process from one place, and CodeDeploy also provides reporting on your deployments.
• Integration with other AWS services: works with AWS CloudFormation, AWS OpsWorks, AWS Elastic Beanstalk, Auto Scaling, etc.
Apart from deploying code, AWS CodeDeploy also lets you run scripts and set permissions during lifecycle events such as ApplicationStop, BeforeInstall, Install, AfterInstall, and ApplicationStart. These lifecycle hooks are declared in a YAML-formatted AppSpec (application specification) file, similar in spirit to an Ansible playbook.
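A minimal, hypothetical `appspec.yml` sketch shows the shape of such a file. The destination path and the hook script names are illustrative assumptions:

```yaml
# Hypothetical appspec.yml: copy the revision to the web root and
# run lifecycle hook scripts around the install step.
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/stop_server.sh
      timeout: 60
  AfterInstall:
    - location: scripts/start_server.sh
      timeout: 60
      runas: root
```

The AppSpec file lives in the root of the application revision, and the CodeDeploy agent on each instance uses it to drive the deployment.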
To take advantage of AWS CodeDeploy, you need to install the CodeDeploy agent on your Linux and Windows instances. Tested agents are available for Amazon Linux, Ubuntu, and Windows; for other operating systems, open source versions of the CodeDeploy agent are available. Currently, the service is only available in the AWS N. Virginia and Oregon regions, at no additional charge.