Custom Resource Demos

Contents

  • DOP-C02 Introduction
  • Amazon CloudWatch
  • Anomaly Detection
  • Advanced CloudFormation Skills
  • State Machines
  • Data Flow
  • AWS OpsWorks
  • Parameter Store vs. Secrets Manager
  • AWS Service Catalog
  • AWS Control Tower
  • Managing Product Licenses
  • Amazon Managed Grafana
  • Amazon Managed Service for Prometheus
  • AWS Proton
  • AWS Resilience Hub

Difficulty: Intermediate
Duration: 7h 24m
Students: 80
Rating: 4.3/5
Description

This course provides detail on the AWS Management & Governance services relevant to the AWS Certified DevOps Engineer - Professional exam.

Want more? Try a lab playground or do a Lab Challenge!

Learning Objectives

  • Learn how AWS AppConfig can reduce errors in configuration changes and prevent application downtime
  • Understand how the AWS Cloud Development Kit (CDK) can be used to model and provision application resources using common programming languages
  • Get a high-level understanding of Amazon CloudWatch
  • Learn about the features and use cases of the service
  • Create your own CloudWatch dashboard to monitor the items that are important to you
  • Understand how CloudWatch dashboards can be shared across accounts
  • Understand the cost structure of CloudWatch dashboards and the limitations of the service
  • Review how monitored metrics go into an ALARM state
  • Learn about the challenges of creating CloudWatch Alarms and the benefits of using machine learning in alarm management
  • Know how to create a CloudWatch Alarm using Anomaly Detection
  • Learn what types of metrics are suitable for use with Anomaly Detection
  • Create your own CloudWatch log subscription
  • Learn how AWS CloudTrail enables auditing and governance of your AWS account
  • Understand how Amazon CloudWatch Logs enables you to monitor and store your system, application, and custom log files
  • Explain what AWS CloudFormation is and what it’s used for
  • Determine the benefits of AWS CloudFormation
  • Understand what the core components are and what they are used for
  • Create a CloudFormation Stack using an existing AWS template
  • Learn what VPC flow logs are and what they are used for
  • Determine options for operating programmatically with AWS, including the AWS CLI, APIs, and SDKs
  • Learn about the capabilities of AWS Systems Manager for managing applications and infrastructure
  • Understand how AWS Secrets Manager can be used to securely encrypt application secrets
Transcript

Welcome again to CloudAcademy's course on Advanced AWS CloudFormation. Today, we'll be talking about a practical example of CloudFormation Custom Resources.

Briefly, this is the resource life cycle slide again from the previous lecture. This time around, we'll be calling out to CloudFormation to perform an implicit reference to another CloudFormation stack, and we'll be implementing our custom resource logic, footnote six on the slide, using an AWS Lambda function that we author inline inside the master CloudFormation stack.

So, moving over to our code, let's briefly go over what we're doing in our templates. In our previous demos, we had the wait demonstration, where we talked about life cycles and had the system email us and run a poll function to integrate with a continuous integration-style testing system, and then the nested stacks demo. This custom resource demo combines those two techniques with a third element, the Custom Resource itself, so the stack creation more closely models our actual application.

So we have two stacks, just like in the nested demo. They're segregated into the database stack, which is a simple DynamoDB table, and the API stack, which includes several more resources this time since we're doing the Custom Resource. Most importantly, it still contains the resources we've been getting used to seeing, including the "SampleLambdaExecutionRole", which gives the Lambda permission to execute queries, including puts, against the DynamoDB table.
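
For reference, a minimal sketch of what the database.json stack might contain; the table and output names here are illustrative assumptions, not the course's exact template:

```json
{
  "Resources": {
    "SampleTable": {
      "Type": "AWS::DynamoDB::Table",
      "Properties": {
        "AttributeDefinitions": [{ "AttributeName": "Id", "AttributeType": "S" }],
        "KeySchema": [{ "AttributeName": "Id", "KeyType": "HASH" }],
        "ProvisionedThroughput": { "ReadCapacityUnits": 1, "WriteCapacityUnits": 1 }
      }
    }
  },
  "Outputs": {
    "TableName": {
      "Description": "Exposed so other stacks can look this value up",
      "Value": { "Ref": "SampleTable" }
    }
  }
}
```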

Again, the Lambda function is exactly the same. However, this time we have three new resources: our Custom Resource and the resources associated with it. We have a "LambdaExecutionRole", which allows us to reference the database.json stack, and a "LookupStackOutputs" Custom Resource Lambda implementation, which is used to look up the outputs of another stack based on its stack name.

Then we have a "DBStackReference", which actually uses the "LookupStackOutputs" Custom Resource Lambda that we've created. As we can see here, it has a custom type, "ExternalStackReference". We provide a "ServiceToken" equal to the ARN of the Lambda we just created to perform the stack lookup, obtained by doing a "Fn::GetAtt" on the Lambda and grabbing its "Arn" attribute. Then we provide the only other property it needs, the "StackName" it should look up outputs from. This "DBStackReference" will then return a key-value hash of the outputs that come out of that other stack, the "DBStackName", which comes in as a parameter.
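
Based on that description, the resource declaration in api.json would look roughly like this; the logical names follow the transcript, and the "Custom::" prefix is how CloudFormation namespaces custom resource types:

```json
"DBStackReference": {
  "Type": "Custom::ExternalStackReference",
  "Properties": {
    "ServiceToken": { "Fn::GetAtt": ["LookupStackOutputs", "Arn"] },
    "StackName": { "Ref": "DBStackName" }
  }
}
```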

Now, why might we want to do this? Well, in many applications you'll want to break up your complex stack into multiple stacks, but you may want to control the life cycles of two layers of the stack differently. For instance, controlling CloudFormation stacks that have databases inside of them can be a little scary since you may accidentally delete a database and its content. If there's important user data inside of this database, it may make sense to create a separate CloudFormation template that you can version-control and manage separately.

You can restrict which IAM users can manipulate it, or you can simply rely on the fact that once it's segregated, we can manage its life cycle separately. Another reason this is beneficial is that I may need to A/B test my API, do a blue-green deployment of the API layer only while sharing a database, or do a canary build where I deploy one-tenth of my capacity with a separate API but retain the same shared database and customer information as I do a graduated rollout.

To do this, I need a separate stack to create the DynamoDB tables holding the application data. If you were to use a MySQL database, you could also do that; DynamoDB just happens to be fast for the demo. I could create two api.json versions, launch them both pointed at the same database stack, and have three stacks running, where both API layers reference the same database, so I can gradually roll out or do a test rollout of my API.

Now, the actual implementation of this Custom Resource is fairly straightforward. It uses the cfn-response module, which is available to me since I'm using the ZipFile property in CloudFormation. It logs the request as it's received. Because we're doing a lookup function, there's no Delete work that needs to occur here, so we inspect the request to see if it's a Delete and, if so, just return a SUCCESS. Otherwise, we need to find the correct stack to look up. I set up an empty hash that will be used as the key-value hash I return. I make sure the "StackName" is actually defined, and if it isn't, I throw an error, because this stack-lookup resource type must have a stack name or it won't function. Then I load in the aws-sdk and create the AWS CloudFormation client. I run the "describeStacks" operation, which we saw in another lecture, over the "StackName". If I get an error, I send a "FAILED" back to CloudFormation along with the reason for the failure. Otherwise, on "SUCCESS", I return the outputs after building a hash out of them: I iterate over the outputs array, setting the "responseData" hash keys to the output keys and values. Then I send it back.
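
As a sketch, the inline Node.js implementation described above would look something like this, using the cfn-response module that CloudFormation provides when a function body is supplied via the ZipFile property, and the v2 aws-sdk bundled into the Lambda runtimes of that era; variable names are assumptions:

```javascript
var response = require('cfn-response'); // provided when using the ZipFile property
var AWS = require('aws-sdk');           // v2 SDK bundled into older Node.js runtimes

exports.handler = function (event, context) {
  console.log('REQUEST RECEIVED:\n', JSON.stringify(event));

  // A lookup has nothing to clean up, so Deletes succeed immediately.
  if (event.RequestType === 'Delete') {
    return response.send(event, context, response.SUCCESS);
  }

  var responseData = {}; // the key-value hash we will return
  var stackName = event.ResourceProperties.StackName;

  if (!stackName) {
    responseData.Error = 'StackName not specified';
    console.log(responseData.Error);
    return response.send(event, context, response.FAILED, responseData);
  }

  var cfn = new AWS.CloudFormation();
  cfn.describeStacks({ StackName: stackName }, function (err, data) {
    if (err) {
      responseData.Error = 'DescribeStacks call failed';
      console.log(responseData.Error + ':\n', err);
      return response.send(event, context, response.FAILED, responseData);
    }
    // Turn the outputs array into a hash so Fn::GetAtt can read each key.
    data.Stacks[0].Outputs.forEach(function (output) {
      responseData[output.OutputKey] = output.OutputValue;
    });
    response.send(event, context, response.SUCCESS, responseData);
  });
};
```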

This "responseData" that we see we're setting the keys on will be the attribute hash that is referenceable in the outside templates. We can see where we do a reference when we go and look at, for instance, the role where we're allowing rights to that table. I get the attribute off of the "DBStackReference" and send it to the Dynamo table.

In addition to creating the Lambda that does the lookup, I also need to give the Lambda permission to call "DescribeStacks", as I did there. Then I need to actually create the "DBStackReference" resource using the stack name parameter of the database I want it to point to, so I can use it later.
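
A minimal sketch of that execution role, granting cloudformation:DescribeStacks plus the usual CloudWatch Logs permissions; the policy name is illustrative:

```json
"LambdaExecutionRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "lambda.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }]
    },
    "Policies": [{
      "PolicyName": "AllowStackLookup",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [
          { "Effect": "Allow", "Action": "cloudformation:DescribeStacks", "Resource": "*" },
          { "Effect": "Allow",
            "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:*:*:*" }
        ]
      }
    }]
  }
}
```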

What this lets me accomplish is running multiple versions of the API referencing the same database. Now that we've had a look at both the API and the DB stacks, we can run the test. Note that our test uses slightly different code than the nested stack or the simple wait stack demos, because we need to launch two different stacks that do not have an explicit dependency.

All we need to do is create the database stack first, since the API stack depends on it, run the manual wait using the wait function we've already seen, then create the other stack and run a manual wait again. We're also going to borrow the same test script we've already seen in this example.json, so we can prove that this is yet another technique to create the same resources but with a different purpose. Here, we're allowing multiple APIs to touch the same database, so we want to drive that and make sure we can run the same kind of tests against a different implementation. Now, without further ado, let's see if this test passes after creating this sequence of stacks.
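
Sketched as a shell session, the sequence looks something like this; wait_for_stack stands in for the polling helper from the earlier demos, and the stack and script names are illustrative:

```bash
# Create the database stack first, since the API stack depends on it.
aws cloudformation create-stack --stack-name custom-resource-db \
  --template-body file://database.json

# Poll describe-stacks until the status reaches CREATE_COMPLETE.
wait_for_stack custom-resource-db

# Now create the API stack, passing the database stack's name as a parameter.
aws cloudformation create-stack --stack-name custom-resource-api \
  --template-body file://api.json \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=DBStackName,ParameterValue=custom-resource-db

wait_for_stack custom-resource-api

# Reuse the test script from the earlier example against the new stack.
./test.sh custom-resource-api
```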

As we can see, we get the "StackId" back because the first stack initialized its creation correctly. Now the poll-and-wait cycle has begun for the first stack. We should be able to see the custom resource CADB stack in the Events view, because we're currently creating the database table that the other API stack will depend on. Once this stack is complete, the Bash script will soon detect that it's complete. As we receive a "CREATE_COMPLETE" signal, we begin creating the other stack. So now we should be able to see two stacks in the console, and we can see that we passed the parameter for the created stack name.

Now, as we watch the events go through, we should start seeing stack resources being created. This "LambdaExecutionRole" is what we need to allow our Custom Resource Lambda to look up the outputs of the other stack. Now we can see that our "LookupStackOutputs" Custom Resource Lambda has finished creating, and the Custom Resource instance has finished creating as well. This means we should be able to see the outputs of that function inside CloudWatch, which is where we inspect logs for Lambda.

So, as we can see, a little less than a minute ago, we created logs. We see that we have a matching request signature that looks very similar to the request signature we were looking at before, over here in the second footnote on the left. We have a "RequestType", "ResponseURL", "StackId", "RequestId", "ResourceType", "LogicalResourceId", "PhysicalResourceId", and "ResourceProperties". Here, we don't have a "PhysicalResourceId" yet because we're in a create operation; those only come through for updates and deletes, where we've actually created an ID.
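
For reference, a Create request event for this custom resource looks roughly like this; all values are placeholders:

```json
{
  "RequestType": "Create",
  "ResponseURL": "https://cloudformation-custom-resource-response-useast1.s3.amazonaws.com/...",
  "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/custom-resource-api/guid",
  "RequestId": "unique-request-id",
  "ResourceType": "Custom::ExternalStackReference",
  "LogicalResourceId": "DBStackReference",
  "ResourceProperties": {
    "ServiceToken": "arn:aws:lambda:us-east-1:123456789012:function:LookupStackOutputs",
    "StackName": "custom-resource-db"
  }
}
```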

We respond with a "PhysicalResourceId" equal to the log stream of this request, because we don't really care what the "PhysicalResourceId" is for lookups, since there's no real resource. So we have a "DBStackReference", and since we had one output for the DynamoDB table, it returns the table name.

As we can see, the API stack created successfully, and we also saw that the test passed because we got the correct outputs back from the implicit reference stack that uses this Custom Resource to call out. With that, we finished the life cycle.

So, again, we created another relatively complicated system where we're now able to mount multiple API layers on top of a single database layer. As a demonstration, we can prove that we're actually able to launch multiple API layers on top by providing the same template to CloudFormation with another stack name.
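
Concretely, the second launch is the same create-stack call with a new stack name but the same DBStackName parameter; names are again illustrative:

```bash
# Second API stack: same template, same database stack behind it.
aws cloudformation create-stack --stack-name custom-resource-api-b \
  --template-body file://api.json \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=DBStackName,ParameterValue=custom-resource-db
```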

What we're doing here is creating two separate APIs mounted on top of the database. This is a common use case for this kind of stack reference: we need these CloudFormation stacks, the API stacks, to properly attach themselves to a database, but we do not want to include the database and the API in the same stack, because we want to be able to reuse the database across multiple API stacks.

Once the IAM role completes, we should see the Lambda start creating, and then the "DBStackReference" itself, which is also relatively fast because it's only doing a lookup. We just need to wait for the execution role to complete first.

Now that the stack is complete, we should also verify that this stack works by taking the command that we ran against the original API stack and verifying that it still works for the new stack name. That way, we can test both APIs on top of the same database and still get a full-stack test even though we have a shared resource. So we plug in our stack name. There we go.

So now, these could potentially be two slightly different versions of the API, or different versions of the tests. We used the same ones just for the sake of quickness in the demo, but this could be inside of your continuous integration system, where perhaps you don't want to alter your database because it takes a long time to create or is expensive, but you want to test two different versions of the API automatically and headlessly. By using all of these techniques together, working with multiple stacks, stack events, and Custom Resources, we can enable a rapid testing environment that suits our own needs. Thanks for watching, and I hope to see you again.

About the Author
Students: 229,443
Labs: 1
Courses: 216
Learning Paths: 173

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to Cloud reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016, Stuart was awarded the 'Expert of the Year Award 2015' by Experts Exchange for sharing his knowledge of cloud services with the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.