
Custom Resources



The course is part of these learning paths

DevOps Engineer – Professional Certification Preparation for AWS
Solutions Architect – Professional Certification Preparation for AWS
AWS Cloud Management Tools
Duration: 2h 2m


As AWS-based cloud environments grow in complexity, DevOps Professionals need to adopt more powerful tools and techniques to manage complex deployments. In the AWS ecosystem, CloudFormation is the most powerful and sophisticated automation tool available to developers. In this course, we explore some of the most advanced CloudFormation skills AWS engineers can learn.

In addition to normal templated resource generation for single stacks using CloudFormation, engineers will learn how to:

  • Develop continuous integration and continuous deployment on CloudFormation
  • Tie into CloudFormation system events via SNS for operational tasks
  • Nest multiple levels of CloudFormation stacks to build out massive cloud systems
  • Author CloudFormation Custom Resources to add additional functionality and resource types to stacks

This course is best taken after reviewing the basics of CloudFormation with CloudAcademy's starter course How To Use AWS CloudFormation.

Demonstration Assets

The AWS CloudFormation templates and related scripts as demonstrated within this course can be found here:



Welcome back to CloudAcademy's Advanced Amazon Web Services CloudFormation course. Today we're going to talk about custom resources and, more generally, resource life cycles.

Before we get into custom resources, we should talk about how resource life cycles work in general. This diagram aims to explain how a resource is actually managed when you're working with a CloudFormation template or stack. The key takeaway here is that CloudFormation does not actually go and create resources itself; it delegates resource creation to wrappers around normal Amazon Web Services service endpoints.

So let's let that sink in for a moment. All that CloudFormation itself is doing is:

  1. Checking for JSON syntax issues during a request
  2. Validating templates against the CloudFormation JSON schema
  3. Resolving DependsOn and implicit dependencies, and figuring out the order in which resources need to be created
  4. Interpreting Fn:: intrinsic function syntax
  5. Providing pseudo parameters like AWS::Region or AWS::AccountId
  6. Delegating service call logic to service wrappers
  7. Tracking stack statuses and emitting events to SNS

What we're most concerned with right now is number six. If you look over here on the left, we see that the stack we've built is creating three different resources, and there are explicit or implicit DependsOn relationships between the VPC, the instance, and the route table. For example, if you're creating a VPC with a NAT box in it, the instance would depend directly or indirectly on the VPC, and the route table would depend directly or indirectly on the instance if you're trying to make it a NAT box.

Now we can pass Ref or Fn::GetAtt values to other resources that depend on the VPC, and we can also send values into things that depend on the VPC, like the instance or, indirectly, the route table. So we have this idea of parameters that are sent in on the input of the resource, and return values that come out of the resource.
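As a rough sketch of the dependency chain and Ref plumbing just described, consider this minimal template fragment, written as a Python dict for readability. The resource names, AMI ID, and CIDR block are all hypothetical placeholders, not values from the course:

```python
import json

# Hypothetical template fragment: VPC -> instance -> route table dependency chain.
template = {
    "Resources": {
        "MyVPC": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/16"},
        },
        "NatInstance": {
            "Type": "AWS::EC2::Instance",
            "DependsOn": "MyVPC",  # explicit dependency on the VPC
            "Properties": {"ImageId": "ami-12345678"},  # placeholder AMI ID
        },
        "NatRouteTable": {
            "Type": "AWS::EC2::RouteTable",
            "DependsOn": "NatInstance",  # explicit dependency on the instance
            "Properties": {
                # Ref creates an implicit dependency on MyVPC as well.
                "VpcId": {"Ref": "MyVPC"},
            },
        },
    }
}

print(json.dumps(template, indent=2))
```

Here the `Ref` on `VpcId` is a value flowing out of the VPC resource and into the route table's input properties, which is exactly the parameters-in, return-values-out picture above.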

We can see how this happens when the Properties JSON comes out of the template and goes into the CloudFormation service, which then supplies the Ref and Fn::GetAtt hashes back after running the resource through the life cycle on the right.

When we talk about the resource life cycle, we're talking about how CloudFormation delegates the actual management of each resource to service wrappers. We see here that CloudFormation actually sends a request to a resource service provider associated with CloudFormation. That provider can accept a create, update, or delete request object, and understands how to use the JSON Properties object it receives to manipulate the resource and fulfill the promise set forth by the API, or the interface, for those properties.

For instance, we could receive a create, an update, or a delete signal for an EC2 instance from a stack. This resource service wrapper would understand how to translate the properties inside the object it receives into an API call such as RunInstances, or, in the case of an update, the wrapper would understand how to take the present value of the resource and update it to fulfill the new property values by communicating back and forth with EC2. So there's some logic here that understands how to fulfill the promise set forth by the properties API.

Whenever one of these resource providers finishes doing its job, it does not respond directly to CloudFormation, since some of these resources may be long-running. For instance, when creating a large Redshift cluster, we don't want to leave that HTTP request open the entire time. So CloudFormation fires and forgets a create, update, or delete action, and then expects a response back via a signed put URL to an S3 bucket associated with CloudFormation. Once the resource provider puts something into the resource response bucket, its job is done. Meanwhile, CloudFormation is constantly polling for changes on the bucket. This may sound complicated, but the key takeaway for us is that CloudFormation itself, the main service that we are used to interfacing with, is actually only a small subset of what the CloudFormation team does. That core module just delegates requests to these CloudFormation service wrappers around EC2 instances, RDS instances, etc., and fulfills those promises. This is the second responsibility of the CloudFormation team, and it's actually a pretty large swathe of things to work on, because they need to implement every single resource type that CloudFormation supports.

The reason we're talking about this in the context of a custom resources lecture is that this architecture is what makes custom resources so simple to implement in CloudFormation. Given that CloudFormation is already set up to delegate requests and receive responses via S3 buckets, adding custom logic alongside the built-in CloudFormation service wrappers is fairly simple.

Now let's take a look more specifically at how CloudFormation custom resources work, now that we've had a glance at how resources work in general. This should look very familiar from the previous slide: request objects are still delegated from CloudFormation to some custom logic, then on to AWS service endpoints or, in the case of custom resources, third-party services. When the execution logic finishes, the response goes to an S3 bucket, which CloudFormation polls for changes, and CloudFormation talks with the actual stack template instance. There's still a developer writing the template in this case, but there's another party here writing the custom resource logic inside this custom wrapper, rather than relying on the CloudFormation team.

The template developer and the resource developer can often be the same person. Most resource developers write the resources because they want to be a template developer using the resource. These are not necessarily formal roles, but simply two behaviors that need to happen for this entire ecosystem to work.

We have some further footnotes here that correspond to other things we need to be aware of. When we see a custom resource in a template, we need to define it a certain way. We use DependsOn for other resources like normal, but we need to use a Custom:: prefixed type name to define the custom resource. Furthermore, there's a special property that must be defined on all custom resources, called ServiceToken. It's the ARN of the custom resource provider, that is, where CloudFormation should emit an event to delegate an action to the resource.
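As a sketch of such a declaration, here is a hypothetical custom resource, again written as a Python dict. The type name `Custom::AmiLookup`, the function name, and the account ID are all made up for illustration:

```python
import json

# Hypothetical custom resource declaration. Everything after "Custom::" is
# chosen by the resource developer; ServiceToken is the one required property.
custom_resource = {
    "MyLookup": {
        "Type": "Custom::AmiLookup",  # hypothetical custom type name
        "Properties": {
            # Required: ARN of the SNS topic or Lambda that provides the resource.
            "ServiceToken": "arn:aws:lambda:us-east-1:123456789012:function:ami-lookup",
            # Remaining properties are whatever the resource developer defines,
            # and may be compound JSON like any other resource.
            "Region": {"Ref": "AWS::Region"},
            "Architecture": "x86_64",
        },
    }
}

print(json.dumps(custom_resource, indent=2))
```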

This is implemented internally as an SNS publish, and when it's used with a Lambda (that is, you can write a custom resource using a Lambda), what's really going on is that the event is delivered to the Lambda as its invocation payload. The rest of the properties can be any keys the resource developer wants to define. They do not need to be simple string values; they can be compound JSON, like any other resource.

Metadata, DeletionPolicy, and CreationPolicy can be defined like usual. After CloudFormation sees this custom resource in the template and interprets the values like we just talked about, it forwards a request object to the custom resource logic in our resource provider. That request object, shown here on the left, contains:

  • RequestType: create, update, or delete, which we should already be familiar with
  • ResponseURL: the signed S3 put URL that should be used when the resource provider is done creating the resource
  • StackId: the ARN of the stack that requested the resource be created
  • RequestId: a string that uniquely identifies the custom resource creation request
  • ResourceType: the Custom:: type name used in the stack template
  • LogicalResourceId: the key of the resource's block in the template, as you wrote it
  • PhysicalResourceId: present during update and delete, the system ID of the resource
  • ResourceProperties: the JSON of the Properties on the resource inside the template
  • OldResourceProperties: in the case of an update, the resource properties from the previous iteration, i.e. the pre-update properties

The request object is sent to the service token defined inside of the custom resource template, which can either be an SNS topic or a Lambda. In the case of an SNS topic, the notification is fanned out and the resource can be created by any subscriber on the SNS topic. For Lambda, the Lambda is invoked and you can do anything inside of the Lambda code body to fulfill the resource promise and provide the resource.

After the custom logic is finished, it needs to send a response object via the signed put URL back to the S3 bucket. This can be done through any HTTP client. The signed put should have a specific format, as in one of these three denoted response objects:

  • Status: either SUCCESS or FAILED
  • Reason: an optional string, plain-text debugging information when the status is FAILED
  • PhysicalResourceId: used as the Ref value in the stack
  • StackId: copied from the request
  • RequestId: also copied from the request
  • LogicalResourceId: also copied from the request

The last three properties must be copied exactly for the request to work. This is simply a security function, beyond the S3 signed URL, so that you don't accidentally overlap requests between different stacks.
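A small helper can make the copy-these-exactly rule hard to get wrong. This is a sketch, not part of any AWS SDK; the function name and signature are my own:

```python
def build_response(request, status, physical_resource_id, data=None, reason=None):
    """Build a custom resource response object from the incoming request.

    StackId, RequestId, and LogicalResourceId must be copied verbatim from
    the request for CloudFormation to accept the response.
    """
    response = {
        "Status": status,                            # "SUCCESS" or "FAILED"
        "PhysicalResourceId": physical_resource_id,  # becomes the Ref value
        "StackId": request["StackId"],
        "RequestId": request["RequestId"],
        "LogicalResourceId": request["LogicalResourceId"],
    }
    if reason is not None:
        response["Reason"] = reason  # plain-text debugging info for failures
    if data is not None:
        response["Data"] = data      # flat string-to-string hash for Fn::GetAtt
    return response
```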

The Data field is an attribute hash. It must be a key-value hash of string to string, not compound resource properties, and is then used in Fn::GetAtt lookups inside the stack. As of this recording, the most common implementation of a custom resource provider is a Lambda, because of its ease of use.

You can execute arbitrary code inside the Lambda, and during creation you can give the Lambda roles that allow it to create resources. Lambdas have a full TCP/IP stack and can call out beyond just HTTPS API or SDK calls to Amazon. This means you can make custom resources that interface with other clouds or on-site servers, or with custom endpoints that you create within your Amazon cloud; i.e., you could create your own application or your own compound resource providers.

When we write a custom resource, we'll simply be writing Lambda code that accepts events that look like these request objects, does some logic to fulfill the promises defined by the Properties object, then returns a response object by putting the object into S3.
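The overall shape of that Lambda can be sketched as follows. This is a minimal illustration assuming the request and response shapes described above; the fulfillment logic and physical ID are placeholders, and a production handler would also report FAILED on unexpected errors during the put itself:

```python
import json
import urllib.request

def handler(event, context):
    """Sketch of a Lambda-backed custom resource handler."""
    physical_id = "my-resource-id"  # placeholder; use a real system ID in practice
    try:
        if event["RequestType"] in ("Create", "Update"):
            # Fulfill the promise defined by the Properties object here.
            # As a stand-in, we just echo back one of the incoming properties.
            data = {"Result": event["ResourceProperties"].get("Architecture", "")}
        else:  # Delete: tear the resource down, return no attributes
            data = {}
        response = {
            "Status": "SUCCESS",
            "PhysicalResourceId": physical_id,
            "StackId": event["StackId"],
            "RequestId": event["RequestId"],
            "LogicalResourceId": event["LogicalResourceId"],
            "Data": data,
        }
    except Exception as exc:
        response = {
            "Status": "FAILED",
            "Reason": str(exc),
            "PhysicalResourceId": physical_id,
            "StackId": event["StackId"],
            "RequestId": event["RequestId"],
            "LogicalResourceId": event["LogicalResourceId"],
        }
    # PUT the response object to the signed S3 URL CloudFormation gave us.
    body = json.dumps(response).encode()
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT")
    urllib.request.urlopen(req)
```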

That's it for this lecture. During our next lecture, we'll see an actual CloudFormation custom resource in action implemented via a Lambda.

About the Author

Nothing gets me more excited than the AWS Cloud platform! Teaching cloud skills has become a passion of mine. I have been a software and AWS cloud consultant for several years. I hold all 5 possible AWS Certifications: Developer Associate, SysOps Administrator Associate, Solutions Architect Associate, Solutions Architect Professional, and DevOps Engineer Professional. I live in Austin, Texas, USA, and work as development lead at my consulting firm, Tuple Labs.