DevOps in AWS: CloudFormation, CodeCommit, CodePipeline, and CodeDeploy

We are entering a new era of technology, and it is causing a cultural shift in the way software projects are built. The waterfall model paved the way for agile development a few years back. Today, a need exists for better collaboration between developers, IT operations, and IT infrastructure engineers. This need brought about the new culture that we call DevOps.
DevOps is fundamentally about improved collaboration, integration, and automation, leading to reliable and consistent IT operations. DevOps is an efficient journey from development to production, and it requires the overlapping and intersecting responsibilities of developers and IT operations.
DevOps means:

  • Infrastructure as code
  • Continuous Deployment
  • Continuous Integration
  • Version Control Integration
  • Test Automation
  • Monitoring

The goal of this post is to introduce you to CloudFormation, CodeCommit, CodePipeline, and CodeDeploy, each of which fits into a different part of the DevOps pipeline. I also include other important tools and technologies such as OpsWorks, CloudWatch, and Elastic Beanstalk, because without them the discussion would be incomplete.

Infrastructure as code with CloudFormation:

Treating infrastructure as code offers many benefits. It means creating, deploying, and maintaining infrastructure in a programmatic, descriptive, and declarative way. Keeping the infrastructure versioned, in a defined format, with consistent syntax and semantics helps an IT operations engineer in the long run. When infrastructure code is maintained and deployed according to these practices, the expected result is a consistent and reliable environment.
Maintaining infrastructure as code is nothing new for engineers who have worked with Opscode Chef, Puppet, Ansible, or Salt, among others. A good example of this is a cookbook in Chef. This is what we like to call infrastructure as code. See below:

package 'nginx' do
  action :install
end

service 'nginx' do
  action [ :enable, :start ]
end

cookbook_file "/usr/share/nginx/www/index.html" do
  source "index.html"
  mode "0644"
end

Take a look at how easily an nginx server is installed, started, and set to display a homepage, all in a defined code format. This is a Chef recipe, written in Ruby and maintained as if it were regular code. You can use version control software like git or svn to store it and make improvements.
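For example, the recipe above can be stored and versioned with git like any other source file. A minimal sketch (the directory name, commit message, and git identity are all hypothetical):

```shell
# Put the nginx recipe under git version control, just like application code.
mkdir -p nginx-cookbook/recipes && cd nginx-cookbook
cat > recipes/default.rb <<'EOF'
package 'nginx' do
  action :install
end
EOF

git init -q
git add recipes/default.rb
git -c user.email=ops@example.com -c user.name=ops \
    commit -q -m "Add nginx install recipe"
git log --oneline   # shows the recipe's commit history
```

From here, every change to the infrastructure goes through the same review-and-commit workflow as application code.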
Those who are not new to AWS will know that, from very early in cloud computing, AWS has provided infrastructure as code in the form of CloudFormation. By using a CloudFormation template (in JSON format), you can define and model AWS resources. Templates follow a defined syntax and are maintained just the way the DevOps principle demands. Take a look:

  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "AWS CloudFormation Sample Template Sample template EIP_With_Association: This template shows how to associate an Elastic IP address with an Amazon EC2 instance",
  "Parameters" : {
    "InstanceType" : {
      "Description" : "WebServer EC2 instance type",
      "Type" : "String",
      "Default" : " m2.xlarge",
      "AllowedValues" : ["m2.xlarge", "m2.2xlarge", "m3.xlarge", "m3.2xlarge"]
      "ConstraintDescription" : "must be a valid EC2 instance type."
    "KeyName" : {
      "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instances",
      "Type" : "AWS::EC2::KeyPair::KeyName",
      "ConstraintDescription" : "must be the name of an existing EC2 KeyPair."
    "SSHLocation" : {
      "Description" : "The IP address range that can be used to SSH to the EC2 instances",
      "Type": "String",
      "MinLength": "9",
      "MaxLength": "18",
      "Default": "",
      "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})",
      "ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
  "Mappings" : {
    "AWSInstanceType2Arch" : {
      "m2.xlarge"   : { "Arch" : "PV64"   },
      "m2.2xlarge"  : { "Arch" : "PV64"   },
      "m3.large"    : { "Arch" : "HVM64"  },
      "m3.2xlarge"  : { "Arch" : "HVM64"  }
    "AWSInstanceType2NATArch" : {
      "m2.xlarge"   : { "Arch" : "NATPV64"   },
      "m2.2xlarge"  : { "Arch" : "NATPV64"   },
      "m3.xlarge"   : { "Arch" : "NATHVM64"  },
      "m3.2xlarge"  : { "Arch" : "NATHVM64"  }
    "AWSRegionArch2AMI" : {
      "us-east-1"        : {"PV64" : "ami-5fb8c835", "HVM64" : "ami-60b6c60a", "HVMG2" : "ami-e998ea83"},
      "us-west-1"        : {"PV64" : "ami-56ea8636", "HVM64" : "ami-d5ea86b5", "HVMG2" : "ami-943956f4"}
  "Resources" : {
    "EC2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [ "IPAddress=", {"Ref" : "IPAddress"}]]}},
        "InstanceType" : { "Ref" : "InstanceType" },
        "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
        "KeyName" : { "Ref" : "KeyName" },
        "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
                          { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] }
    "InstanceSecurityGroup" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "GroupDescription" : "Enable SSH access",
        "SecurityGroupIngress" :
          [ { "IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : { "Ref" : "SSHLocation"} }]
    "IPAddress" : {
      "Type" : "AWS::EC2::EIP"
    "IPAssoc" : {
      "Type" : "AWS::EC2::EIPAssociation",
      "Properties" : {
        "InstanceId" : { "Ref" : "EC2Instance" },
        "EIP" : { "Ref" : "IPAddress" }
  "Outputs" : {
    "InstanceId" : {
      "Description" : "InstanceId of the newly created EC2 instance",
      "Value" : { "Ref" : "EC2Instance" }
    "InstanceIPAddress" : {
      "Description" : "IP address of the newly created EC2 instance",
      "Value" : { "Ref" : "IPAddress" }

Here are all the details required to launch an EC2 instance, coded and maintained. At first glance, it may not appear useful. But upon closer inspection, the code has every detail: the size of the instance, the architecture we are looking for, the security group, the region we want to deploy to, and so on. It is a very basic template. But imagine when you create a complete environment and want to maintain consistency and reliability; this kind of template will help you immensely at a later point in time. No wonder a PaaS platform like PCF on AWS can be deployed entirely from a CloudFormation template.
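To see the workflow end to end, here is a minimal sketch of saving a tiny template locally, checking its syntax, and launching it as a stack. The template, file name, and stack name ("eip-demo") are hypothetical, and the `aws` commands are shown commented because they require configured AWS credentials:

```shell
# Save a minimal CloudFormation template that allocates one Elastic IP.
cat > eip-demo.json <<'EOF'
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Resources" : {
    "DemoIPAddress" : { "Type" : "AWS::EC2::EIP" }
  }
}
EOF

# Catch JSON syntax errors locally before CloudFormation ever sees the file:
python3 -m json.tool eip-demo.json > /dev/null && echo "JSON OK"

# With AWS credentials configured, these commands validate and launch the stack:
# aws cloudformation validate-template --template-body file://eip-demo.json
# aws cloudformation create-stack --stack-name eip-demo \
#     --template-body file://eip-demo.json
```

Because the template is just a file, it slots straight into the same version-control workflow as the rest of your code.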
You can also check the Cloud Academy blogs on:
1. CloudFormation Deployment Automation
2. CloudFormation Deployment Tool
3. Writing your first CloudFormation template
4. Understanding nested CloudFormation stacks
for a better and more detailed understanding of the topic.

Version Control Integration with CodeCommit:

AWS CodeCommit is Git-as-a-Service in AWS. We have an introductory blog on CodeCommit which will be a useful starting point. It is a cloud-based source control system that is fully managed, highly available, and secure, and it can store anything. It offers virtually limitless storage with no repository size limit, access is secured with AWS IAM, and data is replicated across Availability Zones for durability.
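In practice, a CodeCommit repository behaves like any other Git remote. A minimal sketch, assuming a hypothetical repository named demo-repo in us-east-1 (the credential-dependent commands are shown commented):

```shell
# Create the repository (requires configured AWS credentials):
# aws codecommit create-repository --repository-name demo-repo \
#     --repository-description "Demo repository"

# Locally, wire it up like any ordinary Git remote:
git init -q demo-repo && cd demo-repo
git remote add origin \
    https://git-codecommit.us-east-1.amazonaws.com/v1/repos/demo-repo
git remote -v    # shows the CodeCommit HTTPS endpoint as origin

# With credentials and the repository in place, push as usual:
# git push origin master
```

From your team's point of view, nothing about the day-to-day Git workflow changes; only the remote endpoint and the IAM-based authentication are AWS-specific.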

Continuous Deployment with CodeDeploy:

Continuous Deployment is another principle of DevOps, where production-ready code is automatically deployed from the version control repository. AWS CodeDeploy helps deploy code to AWS EC2 with minimal downtime, with centralized deployment control and monitoring. Using CodeDeploy is an easy, few-step process:

  1. Create an AppSpec file and package your application. The AppSpec file describes the series of steps that CodeDeploy will use to deploy.
  2. Set up your deployment environment. In this step, you define your deployment group (such as an Auto Scaling group) and install the CodeDeploy agent on the EC2 instances.
  3. Deploy.
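A minimal AppSpec sketch for step 1, assuming an EC2 deployment on Linux; the destination path and hook script name are hypothetical:

```shell
# Write a minimal appspec.yml: copy the bundle's files into place,
# then run a start script at the ApplicationStart lifecycle event.
cat > appspec.yml <<'EOF'
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/test-app
hooks:
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
EOF
echo "appspec.yml written"
```

This file sits at the root of the application bundle, and CodeDeploy reads it on each instance to drive the deployment.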

A sample CLI command is as follows:

$ aws deploy create-deployment --application-name test-app \
      --deployment-group-name test-dg \
      --s3-location bucket=test-bucket,key=test-app.zip,bundleType=zip

You can read our blog posts on CodeDeploy here.

Continuous Delivery and automation with CodePipeline:

AWS CodePipeline is Amazon's continuous delivery and release automation service. It automates the release process: building the code, deploying to pre-production environments, testing your application, and releasing it to production. Every time there is a code change, AWS CodePipeline builds, tests, and deploys the application according to the defined workflow.

(Image Courtesy: Amazon)

CodePipeline is flexible, allowing you to integrate partner tools and your own custom tools into any stage of the release process. Partner tools include GitHub for source control; Solano Labs, Jenkins, and CloudBees for build, continuous delivery, and integration; Apica, BlazeMeter, and Ghost Inspector for testing; and XebiaLabs for deployment.
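To make the workflow concrete, here is a sketch of defining a two-stage pipeline (Source from CodeCommit, Deploy via CodeDeploy) from the CLI. The pipeline name, role ARN, bucket, repository, and application names are all hypothetical placeholders, and the credential-dependent command is shown commented:

```shell
# Define a two-stage pipeline as JSON: pull from CodeCommit, deploy via CodeDeploy.
cat > pipeline.json <<'EOF'
{
  "pipeline": {
    "name": "test-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/CodePipelineServiceRole",
    "artifactStore": { "type": "S3", "location": "test-bucket" },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "FetchSource",
            "actionTypeId": { "category": "Source", "owner": "AWS",
                              "provider": "CodeCommit", "version": "1" },
            "configuration": { "RepositoryName": "demo-repo",
                               "BranchName": "master" },
            "outputArtifacts": [ { "name": "SourceOutput" } ]
          }
        ]
      },
      {
        "name": "Deploy",
        "actions": [
          {
            "name": "DeployToEC2",
            "actionTypeId": { "category": "Deploy", "owner": "AWS",
                              "provider": "CodeDeploy", "version": "1" },
            "configuration": { "ApplicationName": "test-app",
                               "DeploymentGroupName": "test-dg" },
            "inputArtifacts": [ { "name": "SourceOutput" } ]
          }
        ]
      }
    ]
  }
}
EOF

# Check the JSON locally, then (with credentials configured) create the pipeline:
python3 -m json.tool pipeline.json > /dev/null && echo "pipeline.json OK"
# aws codepipeline create-pipeline --cli-input-json file://pipeline.json
```

Notice that the pipeline itself is just more infrastructure as code: the stages, their order, and the artifacts that flow between them are all declared in one versionable file.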


AWS has always put its best effort into providing users with the best tools and technologies, and into making them simple, effortless, and without a steep learning curve. With every new announcement, we see that AWS has made the cloud journey smooth, agile, and efficient by embracing DevOps technologies and principles in its platform.

Written by

Chandan Patra

Cloud Computing and Big Data professional with 10 years of experience in pre-sales, architecture, design, build, and troubleshooting with best engineering practices. Specialities: Cloud Computing - AWS, DevOps (Chef), Hadoop Ecosystem, Storm & Kafka, ELK Stack, NoSQL, Java, Spring, Hibernate, Web Services
