DevOps in AWS: CloudFormation, CodeCommit, CodePipeline, and CodeDeploy
If you are like me (and I know that many of you are), then I am sure this has happened to you. You get your hands on a new toy (like CloudFormation) and just want to get going. Rather than read through some of the excellent online help documentation provided by an army of technical writers, you just ‘borrow’ some Google-friendly scripts (or templates) and tweak them for your project.
Until you hit a dead end.
Something doesn’t work the way you thought it would, and you’ve got to go back and debug everything you’ve done.
Sound familiar? Are there times you wished you had a better understanding of a tool’s inner workings?
I’ve been using CloudFormation for quite some time now. Looking back at my path of enlightenment, I do remember things I wish I had understood better or at least given more attention at the time.
So let me share a few nuggets with you in a way that will hopefully flatten your learning curve just a bit. From there I hope to discuss some of the key opportunities and challenges that will likely follow your adventures into coded infrastructure deployments.
Newcomers to AWS cloud services may be tempted to dismiss CloudFormation templates as time-wasting “paperwork.” Why automate the provisioning of a vanilla EC2 instance with CloudFormation (or a similar tool) if you can just as easily order up a brand new clean instance from the AWS Console?
Careful. Don’t forget that launching a new server is never just about the operating system. Back in the ‘old’ days of hot, noisy server rooms, did you ever have to provision a bare metal server running nothing more than a pristine, untouched operating system? I doubt it! What about your application stack, database environment, and network configuration? Do you remember how many days and weeks it took to get a new server running the way you like (even if we ignore the ordering and delivery process)?
And when was the last time that you needed only one of each type? Once upon a time, “one process” meant one server. Ha! I also remember those good old days, when development, test, and production were all based on a single code base running on a server hidden somewhere in the attic. Today you tend to require at least four environments to facilitate the software development life cycle. For good measure, you’ll probably also want to add one more for a Blue/Green deployment.
So even virtual servers take time to build, and even (especially) virtual servers need to be spun up multiple times. This sounds like an argument for managing your infrastructure as code. Sounds like automation just became king of the jungle.
AWS provides great helper scripts that come pre-installed with all Amazon-provided machine images, or as executables for installation on your own images. In combination with the instructions you provide within the stack templates, those automation scripts enable you to deploy an entire infrastructure stack with just a few clicks (unless, of course, you choose to automate even this through the CloudFormation API).
Understanding the interdependencies and different roles played by the various sections of the template and automation scripts will help you successfully develop your stack.
The most commonly used CloudFormation script (besides cloud-init – more on that in a later post) is arguably cfn-init (“CloudFormation init”). cfn-init reads and processes the instructions provided within the template metadata. To run cfn-init, you need to call it from within the user data instructions or as part of any of your image’s start-up processes.
You might like to know that the user data instructions are ‘magically’ executed by cloud-init. Cloud-init is an open source package that is widely used for bootstrapping cloud instances. More details on this, too, will have to wait for another post.
cfn-init accepts a number of command-line options. At a minimum, you need to provide the name of the CloudFormation stack and the logical name of the resource that contains the instance metadata instructions.
/opt/aws/bin/cfn-init -v --stack YourStackName --resource YourResourceName
This could either be the launch configuration or an EC2 instance definition inside the CloudFormation template.
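To make those moving parts concrete, here is a minimal sketch of how the pieces fit together. The resource name, AMI ID, and package choice are placeholders, not prescriptions; note how the `--resource` argument in the user data matches the logical resource name carrying the metadata:

```yaml
Resources:
  WebServerInstance:                      # the "YourResourceName" in the cfn-init call
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              httpd: []                   # install Apache via the package manager
          services:
            sysvinit:
              httpd:
                enabled: true             # start at boot
                ensureRunning: true       # and keep it running after cfn-init
    Properties:
      ImageId: ami-0123456789abcdef0      # placeholder AMI ID
      InstanceType: t3.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} \
            --resource WebServerInstance --region ${AWS::Region}
```

The user data only bootstraps cfn-init; everything else lives declaratively in the metadata.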
It is important for you to understand that the instance itself isn’t ‘seeded’ with the template instructions as part of the launch. In fact, the instance itself has no knowledge of the fact that its launch was initiated by CloudFormation. Instead, the cfn-init script reaches out to the public CloudFormation API endpoint to retrieve the template instructions. This is important to remember if you’re going to launch your instance from inside a VPC that has no Internet connectivity or that gets connectivity via a proxy server that needs special configuration.
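For the no-Internet-connectivity case, one option is an interface VPC endpoint for the CloudFormation service, which gives cfn-init a private path to the API. This sketch assumes a VPC, subnet, and security group defined elsewhere in the template under the hypothetical names MyVpc, MyPrivateSubnet, and EndpointSecurityGroup:

```yaml
Resources:
  CloudFormationEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcEndpointType: Interface
      ServiceName: !Sub com.amazonaws.${AWS::Region}.cloudformation
      VpcId: !Ref MyVpc                   # assumed: VPC defined elsewhere
      SubnetIds:
        - !Ref MyPrivateSubnet            # assumed: private subnet for the ENI
      PrivateDnsEnabled: true             # resolve the public API name privately
      SecurityGroupIds:
        - !Ref EndpointSecurityGroup      # must allow HTTPS (443) from instances
```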
CloudFormation init instructions can be grouped into multiple configuration sets. I strongly suggest you take advantage of this to allow greater resource isolation and more modular configurations (i.e., template fragments that can more easily be reused). With its procedural template instructions, CloudFormation doesn’t necessarily support DRY coding practices, nor does it need to.
However, if your setup requires you to install a common set of applications or configurations on each instance (think: anti-virus, regulatory compliance, or log forwarding agents), you will be well served by separating each element into its own configuration set. Combine this with a centralized source control system or an advanced text editor like Sublime or Notepad++, and you can easily maintain and re-use common stack elements.
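As a sketch of that separation, the hypothetical `baseline` config below carries the shared agent while `application` holds the stack-specific steps; cfn-init is then pointed at the set with `--configsets default`:

```yaml
Metadata:
  AWS::CloudFormation::Init:
    configSets:
      default:
        - baseline          # shared fragment, re-used across stacks
        - application       # stack-specific fragment
    baseline:               # illustrative common config: log forwarding agent
      packages:
        yum:
          awslogs: []
      services:
        sysvinit:
          awslogs:
            enabled: true
            ensureRunning: true
    application:
      commands:
        01_install_app:
          command: /usr/local/bin/install-myapp.sh   # placeholder install script
```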
Note: this isn’t the only way to ensure common components are always rolled into the stack. In a previous post on architecting on AWS, I wrote about the advantages and trade-offs for scripted launches vs. the use of pre-baked, customised machine images.
However, configuration sets don’t necessarily scale well for larger environments. If you want to automate your infrastructure across tens or hundreds of templates, you will soon hit limits. As your environment requires patching, and you start refactoring your code fragments, you need to manually ensure that every stack in your environment is kept up-to-date.
Once you reach that point, you should start to explore the use of Continuous Deployment solutions that can hook into AWS for a more automated management of stacks across multiple environments.
Which leads nicely to my closing words. I am sure you’ve all heard the popular saying:
‘If all you have is a hammer, everything looks like a nail‘.
Rest assured that your infrastructure and deployment solution is subject to the same paradigm. When I started using scripted deployments on AWS, I made good use of the user data scripts. I split everything up into individual bash or PowerShell scripts that I deployed to the instances, and called them from within the user data or cascaded them amongst each other. And I felt very clever!
At least, until my fleet of instances started to grow. Then I discovered that a lot of that effort could be avoided by using CloudFormation. So my instance definitions moved to CloudFormation Init metadata, which gave me additional flexibility. CloudFormation Init allowed me to define in a declarative way what actions I wanted to perform on an instance, and in which order – much like a YAML-based cloud-init configuration, but on a full-stack scale rather than a single instance. No longer did I have to navigate to a specific directory, download an RPM package using wget or curl, install it using the package manager, ensure the application started at boot time, and so on. Instead, I can just provide declarative instructions inside one or more of the seven supported configuration keys.
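For instance, that whole manual download-and-install dance collapses into the `packages` and `services` keys (the agent name and RPM URL here are purely illustrative):

```yaml
AWS::CloudFormation::Init:
  config:
    packages:
      rpm:
        myagent: https://example.com/packages/myagent.rpm   # hypothetical package URL
    services:
      sysvinit:
        myagent:
          enabled: true         # equivalent of wiring up the boot-time start
          ensureRunning: true   # cfn-init verifies the service is up
```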
Once again, I began to feel very smart. I started to organize my individual declarative instructions in configuration sets, managed them in a central repository for re-use, and so on. Until – well, you can probably already guess it by now – until I discovered that it was time to consider the use of AWS OpsWorks and Elastic Beanstalk resources inside my CloudFormation stack.
AWS OpsWorks abstracts your configuration instructions further away from the declarative configuration in the init metadata. Using a managed Chef service, you have access to a large variety of pre-defined recipes for the installation and configuration of additional system components. Since those recipes are continuously maintained and updated by the wider community, you don’t need to re-invent the wheel over and over again.
Not that re-inventing the wheel has no benefits; imagine if we still used stone wheels. But it’s obvious that the wisdom and throughput of a whole community can be much higher than the capability of any individual.
The same can be said for Elastic Beanstalk. Where OpsWorks helps you to accelerate the deployment of common components, Elastic Beanstalk lets you automate the resilient and scalable deployment of your application into the stack without you even having to describe or configure the details for load balancing and scaling.
The point I would like to make is that in a world where “the fast eat the slow”, we can never settle permanently for any given solution. The whole technology community, including AWS, is constantly evolving to allow organizations to innovate, develop, and ship features at an ever-increasing rate. This is achieved partly through continuous abstraction away from the core underlying infrastructure and services, and partly through combining traditional features with new functionality and innovation.
To stay on top of the game as an IT professional, you will need to constantly challenge the status quo and, where applicable, make the leap of faith to investigate and learn new ways of doing business.