Nested Stacks Demos
Difficulty: Advanced
Duration: 2h 2m
Students: 7287
Ratings: 4/5
Description

As AWS-based cloud environments grow in complexity, DevOps Professionals need to adopt more powerful tools and techniques to manage complex deployments. In the AWS ecosystem, CloudFormation is the most powerful and sophisticated automation tool available to developers. In this course, we explore some of the most advanced CloudFormation skills AWS engineers can learn.

In addition to normal templated resource generation for single stacks using CloudFormation, engineers will learn how to:

  • Develop continuous integration and continuous deployment pipelines for CloudFormation stacks
  • Tie into CloudFormation system events via SNS for operational tasks
  • Nest multiple levels of CloudFormation stacks to build out massive cloud systems
  • Author CloudFormation Custom Resources to add additional functionality and resource types to stacks

This course is best taken after reviewing the basics of CloudFormation with CloudAcademy's starter course How To Use AWS CloudFormation.

Demonstration Assets

The AWS CloudFormation templates and related scripts as demonstrated within this course can be found here:

https://github.com/cloudacademy/advanced-use-of-cloudformation

Transcript

Welcome again to CloudAcademy's Advanced Amazon Web Services CloudFormation course. Today we'll be moving out of the slides and into a hands-on demonstration of nested stacks. Let's hop out of the slides and take a look.

Our goal today is to reproduce the demonstration that we did when we were looking at stack events, that is, when we used a wait script and ran tests against the stacks. We want to reproduce the same test using nested stacks to illustrate how they work.

So in our old stack, recall that we had three resources: a Lambda function, a DynamoDB table, and the role that allowed the Lambda to write to the DynamoDB table. We are going to create the same thing inside of this nested demo.

So the first thing that we should see is that we actually have four templates going on here. There's a parent.json, and this is the one that we'll actually be interacting with when we submit it to the cloud. That is because, as we were discussing before, in a nested stack arrangement we have a pointer only to the master stack template, which we can see as the middle resource being submitted by the user on the left-hand side of the diagram. The child stack in the top right is an AWS::CloudFormation::Stack resource inside of the master stack that acts as a pointer to another stack. Note that we must upload our templates to S3, since the master stack references the bucket and key of the child stack template.
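As a rough sketch of that shape, a master stack's pointer to a child stack looks something like the following. The bucket name and key here are hypothetical placeholders, not the paths from the demo repository:

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Resources": {
        "ChildStack": {
          "Type": "AWS::CloudFormation::Stack",
          "Properties": {
            "TemplateURL": "https://s3.amazonaws.com/my-template-bucket/child.json"
          }
        }
      }
    }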

Okay, so now that we know we're going to create an example of one of these CloudFormation nested stacks, let's take a look at what it actually looks like. In our original example, we had three resources that all depended on each other, and we made two outputs that we could use for testing. Our parent stack should also have two outputs, the table name and the Lambda name, so our tests still run, but we construct the way that we actually return those values quite differently. Rather than declaring the concrete resources directly, note that we have only two resources in the parent.json file: child stack A and child stack B. These are two actual CloudFormation stacks that we will be using to create the other resources.

The TemplateURL points us to the template for the child stack, and the Parameters key inside of the Properties block is the key-value hash of parameters that should be passed into the child stack. We can also use DependsOn just like on any other resource, since CloudFormation treats this AWS::CloudFormation::Stack resource type as just another resource.

Note here that we have no parameters, because child stack A is going to create the DynamoDB table, which does not require any parameters in our demo. But child stack B depends on child stack A, because child stack B will be creating both the Lambda and the Lambda role, and both of those require the name of the DynamoDB table: in the case of the role, to restrict the read and write permissions to that specific table in the resources block of the policy, and in the case of the Lambda, to be able to reference the correct table inside of the Lambda's inline code.

So we see that we can create more than one child stack inside of a stack. We can also pass parameters into the child stack inside of the Properties block, and then we have the pointer to the template. So each nested stack resource effectively takes two high-level properties: a template URL and a set of parameters.
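Putting those pieces together, here is a sketch of what the resources block of a parent.json like this could look like. The logical IDs, bucket path, and output name are illustrative assumptions; the exact templates are in the demo repository linked above:

    "Resources": {
      "ChildStackA": {
        "Type": "AWS::CloudFormation::Stack",
        "Properties": {
          "TemplateURL": "https://s3.amazonaws.com/my-template-bucket/child-a.json"
        }
      },
      "ChildStackB": {
        "Type": "AWS::CloudFormation::Stack",
        "DependsOn": "ChildStackA",
        "Properties": {
          "TemplateURL": "https://s3.amazonaws.com/my-template-bucket/child-b.json",
          "Parameters": {
            "DynamoTableName": { "Fn::GetAtt": ["ChildStackA", "Outputs.TableName"] }
          }
        }
      }
    }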

So let's take a look inside of child A. Note that there is no parameters block, since it's optional; all I've included are resources and outputs. The outputs in this case are what get returned to the parent stack via Fn::GetAtt. We reference the stack, then Outputs, and then the output name, which is the correct namespace for referencing outputs on a nested CloudFormation stack. This is the same DynamoDB table definition that was in the other example; however, the rest of the stack is different in that the table is the only resource we're creating, and then we simply return the table name. This is, of course, a contrived example just to illustrate the level of complexity that we can get with stack dependencies, and you would probably not create a single resource inside of a nested stack pattern. However, this is a great way to illustrate that we can indeed pass outputs from one stack to another.
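A minimal sketch of what child stack A's template could look like, assuming a simple key schema (the demo's actual attribute names and throughput settings may differ):

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Resources": {
        "DynamoTable": {
          "Type": "AWS::DynamoDB::Table",
          "Properties": {
            "AttributeDefinitions": [{ "AttributeName": "Id", "AttributeType": "S" }],
            "KeySchema": [{ "AttributeName": "Id", "KeyType": "HASH" }],
            "ProvisionedThroughput": { "ReadCapacityUnits": 1, "WriteCapacityUnits": 1 }
          }
        }
      },
      "Outputs": {
        "TableName": { "Value": { "Ref": "DynamoTable" } }
      }
    }

Ref on a DynamoDB table resource returns the table name, which is exactly what the parent needs to forward to child stack B.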

Now looking back at the parent: once we've created child stack A, which included the DynamoDB table, we have the ability to run child stack B and depend on child stack A for the attributes we need. So here we're actually passing the output of the first stack into the input, or parameters, of the second stack. I still have a pointer, which is just a normal S3 URL that I must have access to. Then if we go and look inside child stack B, we see a couple of resources. We see that there's one parameter, just like we were passing in from the other stack. Note that the parent is passing in the DynamoDB table name, and that the child is receiving a DynamoDB table name. All I've done is set the type to string, and we have two resources here.
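On the receiving side, child stack B simply declares an ordinary parameters block; a sketch, with the parameter name assumed to match the parent sketch above:

    "Parameters": {
      "DynamoTableName": {
        "Type": "String"
      }
    }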

Now, to illustrate again the level of complexity that we're capable of achieving using nested stacks, I've created a grandchild stack so we can see that a child stack can have a child stack of its own. The grandchild stack has a pointer to yet another stack template. It also has a parameter: we are passing into the grandchild stack the same parameter that was passed to this stack itself, under the same name. If we go and look at the grandchild stack, we see that the grandchild receives the DynamoDB table name and produces the Lambda role.
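Inside child stack B, that grandchild pointer is just another AWS::CloudFormation::Stack resource that forwards the parameter it received; a sketch, with an assumed bucket path:

    "GrandchildStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/my-template-bucket/grandchild.json",
        "Parameters": {
          "DynamoTableName": { "Ref": "DynamoTableName" }
        }
      }
    }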

So this looks, again, exactly the same as it did in the other example; all we've done is move it to a separate stack. Again, there's no advantage here because it's a single resource, but in reality we would make this a logical grouping of resources, perhaps the application layer or the network layer.

We also have to include an outputs block so that we can use an Fn::GetAtt on the grandchild inside of child stack B. So note that RoleArn is set to the Lambda role's ARN. Then inside of child B we have a DependsOn GrandchildStack block, which means that the grandchild stack will be created first, and thus the Lambda role will be created first. Once the grandchild stack has finished, the Lambda role's ARN is available to us as an output of that stack: again, the Outputs namespace, and then the actual output name that we want to acquire. Again, this is extremely similar to the other demonstration that we did; we're just getting the properties from different places and composing them in the slightly more complex fashion of these nested stacks.
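Sketching both halves of that handoff, with assumed logical IDs: the grandchild exports the role's ARN in its outputs block,

    "Outputs": {
      "RoleArn": { "Value": { "Fn::GetAtt": ["LambdaRole", "Arn"] } }
    }

and child stack B consumes it in its Lambda function's Role property:

    "Role": { "Fn::GetAtt": ["GrandchildStack", "Outputs.RoleArn"] }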

Finally, we need to output the Lambda name, because the parent stack needs to be able to pass the table and Lambda names back to our test script. So, as we can see, we roll up both child stack A and child stack B in our outputs block: we take the output of the DynamoDB table from child stack A, since child stack A is the branch of the tree in which we actually created the DynamoDB table, and we take the output of the Lambda from child stack B, since that is where we created the Lambda.
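A sketch of how that roll-up could look in parent.json's outputs block, assuming the child output names used in the sketches above:

    "Outputs": {
      "TableName": { "Value": { "Fn::GetAtt": ["ChildStackA", "Outputs.TableName"] } },
      "LambdaName": { "Value": { "Fn::GetAtt": ["ChildStackB", "Outputs.LambdaName"] } }
    }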

So the combination of these four stacks achieves the same thing as the example.template from our previous example, when we ran the CFN wait script, which means we should be able to run the exact same tests on this nested stack as we did with the waiting stack. Note that all I've done is change the relative paths; my commands are all the same as they were before, with the exception that I use a template URL rather than directly uploading the parent.json template.

So just like our other stack, we should expect to see the same waiting life cycle, since we're only interfacing with the parent.json and using pointers. Note here that I have four uploads. This is simply me uploading the latest version of each template to S3 so we can make sure that we're deploying up-to-date versions. You would want a similar step inside of your continuous integration and continuous deployment system if you're going to use this nested stack technique in conjunction with the automation techniques that we already learned for events and polling for completion.

If we go to the CloudFormation console, we should see that nested one, which is our parent stack, is in create in progress. We can see that it has a child resource, which is this other stack that we see creating; this is simply child stack A. If we look inside of child stack A, we should see that child stack A is actually creating a concrete resource, this DynamoDB table, and the console hasn't quite acknowledged that the stack is finished. Now it has. It takes a moment for nested stack one, the parent stack, to register that the child stack is finished; there's a slight delay since it's long-running.

Once the parent stack has recognized that child stack A is done, child stack B can begin launching, because its dependency on child stack A is now resolved. Child stack B recognizes that there's a grandchild stack it needs to create because, if we recall, the grandchild stack creates the Lambda role, which the other resource inside of child stack B depends on. Note that our logical ID has an even longer name. We're now waiting for the role, the concrete resource, to complete.

As we can see, the ping cycle is still continuing and working as expected: it recognizes that the create in progress status is still on the parent stack here. We have create complete from the grandchild, so the grandchild should register as complete within its parent quite soon. There we go. The grandchild stack registers as complete, then child stacks A and B both complete, which are all of the resources in the parent, and then the parent stack registers itself as completely done. We still get the correct outputs, and if we go back and look, we see that our continuous integration system noticed that the create complete event was emitted, then got the correct outputs and ran our same test again.

So we just walked through a fairly complex nested stack that was broken down into four separate pieces with parent/child, and even child/grandchild, relationships. We saw that the dependencies all worked together well once we constructed the parameter pass-arounds correctly. And then we saw that we can use the techniques we learned earlier in the course, the continuous integration code and techniques, in conjunction with advanced techniques like nested stacks; they do not make each other more complicated, and they just work together.

Anyway, that's the end of this demonstration. Next up, we will be talking about CloudFormation custom resources.

About the Author
Students: 15766
Labs: 2
Courses: 3

Nothing gets me more excited than the AWS Cloud platform! Teaching cloud skills has become a passion of mine. I have been a software and AWS cloud consultant for several years. I hold all 5 possible AWS Certifications: Developer Associate, SysOps Administrator Associate, Solutions Architect Associate, Solutions Architect Professional, and DevOps Engineer Professional. I live in Austin, Texas, USA, and work as development lead at my consulting firm, Tuple Labs.