As AWS-based cloud environments grow in complexity, DevOps professionals need to adopt more powerful tools and techniques to manage their deployments. In the AWS ecosystem, CloudFormation is the most powerful and sophisticated automation tool available to developers. In this course, we explore some of the most advanced CloudFormation skills AWS engineers can learn.
In addition to normal templated resource generation for single stacks using CloudFormation, engineers will learn how to:
- Integrate CloudFormation into continuous integration and continuous deployment workflows
- Tie into CloudFormation system events via SNS for operational tasks
- Nest multiple levels of CloudFormation stacks to build out massive cloud systems
- Author CloudFormation Custom Resources to add functionality and new resource types to stacks
This course is best taken after reviewing the basics of CloudFormation with CloudAcademy's starter course How To Use AWS CloudFormation.
Demonstration Assets
The AWS CloudFormation templates and related scripts as demonstrated within this course can be found here:
https://github.com/cloudacademy/advanced-use-of-cloudformation
Welcome to CloudAcademy's Advanced Amazon Web Services CloudFormation course. Today we'll be doing a demonstration of how we can tie some of the techniques we've already learned into a continuous integration system with the aim of improving our DevOps environment.
Since we're doing a demo today, we won't be in the slideshow very much, so let me hop out and go over to an image so we can explain what we're looking to do today.
So if we recall, when we were looking at our DevOps maturity plan, we had this region where we're looking at advanced infrastructure, advanced build and deploy, and advanced testing. Now if we look, we have a couple of goals that seem like they would work well for CloudFormation, like "test systems are current," "I don't have to think about builds," "our deploys always work," and "I'm comfortable with major cloud-wide changes." Now to enable this, we need to install a test module where we're able to launch a full CloudFormation stack and run unit or integration tests on the new stack before we promote the stack to a staging or a production environment.
This way, whenever a developer is making changes to code, they also have the opportunity to make changes to the infrastructure file, or files, in CloudFormation, and have the environment update and be current whenever the builds start running, then run their end-to-end tests inside the continuous integration system before they even have to promote the environment to one shared with other developers.
So what we're looking to do here is mostly this end-to-end testing code portion here: full automatic architecture tests and fully automatic environment creation. These three are what we're looking to do today. And what the solution looks like, if we zoom into our advanced area here, is that we're using advanced CloudFormation. We're setting up the portion here that will work as a build or a deploy script. But most importantly, we're working on this portion here where we automatically go from a CloudFormation update next to code into a unit and integration test suite, where we can automatically run tests after the CloudFormation script is run.
The primary challenge is that it's difficult to detect when CloudFormation has finished, because it's not a request-response model and stack operations may be long-running. So we need to be able to look for failures during that long-running process, but also detect when the stack is finished building so we can run our tests.
Now if we look at our stack status life cycle, which we went over in one of the previous videos, we see that the create and update life cycles, which are the ones we'll be focusing on primarily today, have these terminal success states, some intermediary not-failed-yet states, and then a lot of failure states where we know that the stack is not going to be able to create correctly. So when we're looking to integrate CloudFormation into our continuous integration workflow, we need to be aware of these environment life cycle phases and be able to detect them, as well as act whenever one of them changes or we see the stack in a certain state.
So today we'll be building out a system that can tell when a stack is in the create-in-progress or create-complete phase, or if it's already failed, as well as when it's undergoing updates. I'll primarily be demoing the create cycle because it's the fastest one to demonstrate, but the same techniques can be applied to update, since we're mostly focusing on how to integrate CloudFormation into an advanced environment.
So looking over at the other diagram from the earlier slideshow, we can see the user automation portion here. It's drawn as a person in this diagram, but today we're going to be creating a system such that this could easily be another computer, i.e. our CI/CD machine.
So for this first demonstration, we're actually going to be doing two separate notification or status change integrations. For the primary one, we'll be polling for the status so that the continuous integration machine can just execute a normal-looking BASH script with a CloudFormation wait poller in it, where we use this number five here to detect when the stack is done or has failed. And then we'll be using this number 10 here, where we have CloudFormation publish to an SNS topic, to demonstrate sending an email to operations personnel whenever a stack finishes creating.
So what I have prepared for us today is pretty much all of the demonstration, which includes the script that a piece of automation would use, the stack template we need to write to have the resources be produced, and the tests that we'll be running on the line after this polling for the status change finishes.
So without further ado, let's take a look at the stack that we are creating first so we can get an idea of what we're trying to test. I've folded the code here. This is a technique you can use in the Sublime Text editor to make these little ellipses, where I have 15 lines of code hidden here, and it's useful for getting an overview of a full template. If you're using Sublime Text, the quick way to do this on a Macintosh is Cmd + K followed by Cmd + 3. On any other platform, you would use the Ctrl key instead.
So as we can see, I have a pretty simple sample stack here with three different resources: a sample Dynamo table, a sample Lambda execution role, and a sample Lambda. So the stack that I'm building is fairly simplistic. We have a Dynamo table that will act as our database persistence layer, a role that allows us to write against the Dynamo table, and then a Lambda that actually has code inside of it that lets us do that write to the Dynamo table. So this is a serverless API of sorts, where we can invoke the Lambda function in order to treat it like an API. That might be slightly foreign to you, but it's not the important part of this entire system. The important part is that we're going to be able to run an integration test. So when we finish executing this stack, we'll have a finished Dynamo table with a Lambda trying to write to it, plus an execution role that allows the Lambda to write into the Dynamo table.
Our test will be a very simple one. It will be us invoking a Lambda in a request-response model so that the write should finish before we get a result back from Lambda. Then we will verify that that record shows up in the table by doing a direct read off of the database table from the machine running the test. Then we'll clean up the table by deleting the item out of DynamoDB, which will have proved that we were able to use the Lambda to write into Dynamo and that we were able to access DynamoDB properly.
So if you can imagine, that's a pretty simple test, but the goal is that it should reflect pretty closely how you might want to do a very full integration test on a normal system, where you might be using RDS for a SQL-style database rather than Dynamo. I only chose this approach rather than EC2 and RDS and server-full things because Dynamo tables and Lambdas create and launch a lot faster, so we're not bored out of our minds waiting for this thing to finish.
So I have my template format version, which is optional. I have my description, which is optional, where I've just put a sample stack for you guys. Let's take a look at the resources that we're actually creating, so you'll be able to verify that it's working if you're following along here. This is simple enough for you to try on your own as well. It's a sample DynamoDB table where I've set the type correctly to a DynamoDB table, and I've provided the minimum possible attributes for me to create a table. That is, I've defined the hash key, a simple primary key; the key schema, which again just says the ID is the hash key; and then I gave it three read and three write capacity units, which is really, really cheap.
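As a sketch, that table resource looks something like the following. The logical name SampleTable and the key name id are illustrative, not necessarily what's in the demo repo, but the property shapes are standard CloudFormation:

```json
"SampleTable": {
  "Type": "AWS::DynamoDB::Table",
  "Properties": {
    "AttributeDefinitions": [
      { "AttributeName": "id", "AttributeType": "S" }
    ],
    "KeySchema": [
      { "AttributeName": "id", "KeyType": "HASH" }
    ],
    "ProvisionedThroughput": {
      "ReadCapacityUnits": 3,
      "WriteCapacityUnits": 3
    }
  }
}
```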
After we finish the sample Dynamo table, we create a role that allows a Lambda function to write to the Dynamo table so we can actually put some logic inside of a Lambda. I've set the type to an IAM role. I've made sure that the role depends on the table itself, because we'll be using the value of the table's name to assign a policy that restricts DynamoDB write actions to the specific table from the stack.
So we can see that we have this assume role policy document. This is important so the Lambda can actually assume the role and operate using this policy. If I roll that up, path is optional and defaults to this value, but we see the policies here. I've named the policy something like "Lambda execution role" and just stuck the stack name in there so we don't have namespace collisions on the policies. I've also written a policy document here with the standard Lambda permissions that just let me write logs and such. This should be part of your Lambda knowledge if you've ever done that before; all it's letting me do is put things into CloudWatch Logs whenever I need to from the Lambda. But most importantly, I'm adding the ability to do CRUD operations, create, read, update, and delete, on the DynamoDB table from the Lambda that will be assuming this role. So I'm allowing that, and then I'm affixing the permission to a specific resource. Because the Ref value for a DynamoDB table does not return the ARN, which is what we need inside the resource definition for an IAM policy, we take the table name, which is what Ref does return, and construct the ARN in the correct format for a DynamoDB table. That is, giving it the DynamoDB namespace, then providing the region, delineated again by a colon, then providing the account ID before saying it is a table resource type inside the DynamoDB namespace, with a path of the table's name. Okay, so that's the end of my execution role.
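To make that ARN construction concrete, here is a sketch of the policy's Resource value, reusing the illustrative SampleTable logical name from above. Ref on a table returns its name, and the region and account ID come from CloudFormation's pseudo parameters:

```json
"Resource": {
  "Fn::Join": ["", [
    "arn:aws:dynamodb:",
    { "Ref": "AWS::Region" },
    ":",
    { "Ref": "AWS::AccountId" },
    ":table/",
    { "Ref": "SampleTable" }
  ]]
}
```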
My sample Lambda, which will actually be doing the writing: we make sure that it depends on both the role and the table, because we need the table name for in-lining into the code in the Lambda, and we need the execution role so the Lambda can assume it. So here in our properties, we have a code property, which is a zip file. I've actually in-lined this so I don't have to download anything else or provide an S3 key to point to the code files. But pretty simply: we log some data out so we can do some debugging, we require the Amazon Web Services SDK, and we make sure that we have a DynamoDB document client to do the write. Then we export this function here, which has some code. First, we log the event that we receive, then we set the table variable's value to the table name of the DynamoDB table and close off the string inside node, then we perform the DynamoDB put. So we can see that we provide the item, which is the event, i.e. what we invoke the Lambda with. Then we provide the table name, which we have computed based off of this join function here, and we either error handle if there's an error doing the put, or we just return with success.
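Here's a sketch of what that inline handler amounts to in node.js. In the real template the table name is spliced into the code string with Fn::Join; the placeholder below marks where that happens:

```javascript
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = function (event, context) {
  console.log('Received event:', JSON.stringify(event)); // debugging aid
  // Placeholder: the template's Fn::Join in-lines the Ref'd table name here.
  var table = 'TABLE_NAME_INLINED_BY_FN_JOIN';
  // Write the invocation payload straight into the table.
  docClient.put({ TableName: table, Item: event }, function (err) {
    if (err) { context.fail(err); }       // error handle the put
    else { context.succeed('success'); }  // or just return with success
  });
};
```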
Of course, we need to have a description for the Lambda. We define index.handler as the handler because that's exports.handler on line 29 here. Then we provide the minimum memory size possible so that we're not spending too much money. Then we give the Lambda the appropriate role that we just created. After specifying the correct runtime and a generous timeout, we have finished the resources block of our template. None of that should have been super unfamiliar. You may be learning new things about the specific resource types for Dynamo and Lambda, but you should already understand what I'm doing with the Ref function, Fn::GetAtt, and those kinds of things.
Then for my outputs, I need my test system to be able to easily grab the ARNs or the names for the table and the Lambda so I can run my tests. I need the Lambda name so I can invoke the Lambda to test it, and then I need the table name so I can test my assertion that the Lambda actually wrote to the table.
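In template terms, the Outputs block is roughly the following, again with illustrative logical names. Conveniently, Ref on a Lambda function returns the function name and Ref on a DynamoDB table returns the table name, which is exactly what the test harness needs:

```json
"Outputs": {
  "LambdaName": { "Value": { "Ref": "SampleLambda" } },
  "TableName": { "Value": { "Ref": "SampleTable" } }
}
```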
So again, we have three resources and two outputs, and we're going to integrate a continuous integration system to test that this stack works, full stack, that we can actually run all the way through the compute layer and into the database with the correct permissions.
Okay, so first and foremost, what we're going to create is something that looks like this in our test environment, which can either be local, when I'm running tests and have appropriate permissions for Amazon, or inside of your continuous integration system. This could be a snippet of code that you run in your CI. Because it's BASH, it's extremely portable, and you can run it either locally or in your CI. We're actually just going to run it locally today, because I'm not going to set up a continuous integration system in front of the class; I don't want to pick a continuous integration system and stick with it, that is, I don't want to make it seem like I'm advocating one in particular. But this will work in any system you want, like CircleCI, Wercker, Jenkins, or anything else.
So first, definitely use set -e, so this script will error out if there's an error anywhere, which is exactly what we want: if the tests fail, we want the script to exit. Next is just a BASH trick to figure out where the script is executing from, so I can use relative file paths in the rest of this script. Then comes the standard AWS CLI command, which you may have learned from the basic CloudFormation course: aws cloudformation create-stack. We'll just be doing create today, like I said earlier. We provide the stack name, the human-readable name that will show up in the console line item whenever I do my list stack operations. Then I provide the template body with a fileb:// path, meaning a binary file, which just tells the CLI to stream the content off of disk and pipe it to Amazon as the template body. Then I provide the IAM capability, and this is only important if you're writing a CloudFormation stack that needs the IAM capability. That would be any stack that creates other stacks, or creates or manipulates IAM entities.
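Put together, the driver script is only a few lines of BASH, along these lines. The stack and file names are illustrative, and the two node invocations are the wait poller and test script we walk through below:

```bash
#!/bin/bash
set -e  # abort on the first failing command, so a failed wait or test fails the build

# BASH trick: resolve the directory this script lives in,
# so relative paths work no matter where it's invoked from.
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

aws cloudformation create-stack \
  --stack-name ca-demo-1 \
  --template-body fileb://$DIR/template.json \
  --capabilities CAPABILITY_IAM

# Poll until the stack reaches a terminal state (source shown below).
node $DIR/cfn-wait.js --stack ca-demo-1 --interval 15 --max 60

# Only reached if the stack landed in a positive end state.
node $DIR/test.js --stack ca-demo-1
```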
So now we have the important piece here, the advanced technique: we are creating a polling function. Here the relative path points at an index file we're calling cfn-wait, for CloudFormation wait. This could be your own module, or it could be a package off of npm or a Python package, or anything. All you really need is some line of BASH that invokes some scripting that waits until the stack finishes completing. Because we're doing a walkthrough today, I'll actually show you what the code looks like in node.js, but you can write this in any language you want. I'm just trying to teach you the conceptual way you would go about doing this.
So here my interval is the number of seconds that I should wait between polls. And by polls, I mean the polls that we were looking at here: I'm defining the duration between each of these polls and how many of them I should do, right here. And I'm providing the stack name that I created during this phase here. So these actions occur on lines 13 through 19, then lines 21 through 22 are us polling and waiting for steps six, seven, eight, and nine to finish.
Once those are done and I've gotten an acknowledgement back from the poll for status, I will continue to the next line of BASH. So here, if I exceed my interval or my max, my script, which I'll show you the source code for in a moment, will time out. And if it doesn't time out and finishes completely in a good state, that is, since we're doing a create, if we land in the create-complete state while this line here, our wait function, is running and we detect that the stack has entered a positive status, then we will execute our tests.
So again, this could be any line of BASH that just runs your tests, whether you're working in Python or Ruby or anything else. The idea here is that we do a creation, or an update, or even a delete, then we have a wait function that executes this polling behavior that we're talking about while the rest of this region of the flowchart finishes. Then we run the tests after we've closed this full loop. So after this entire life cycle here finishes and we've detected that it finished by doing a final poll, then we run this. We don't want to run this unless we're already in create complete.
So what does the source code actually look like for this wait function? Let's take a peek. Again, like I said, this is node.js, so I'll explain it symbolically. First, in whatever language you're working in, import or require the Amazon Web Services SDK; if you're working in a language that does not have a supported SDK, you can just use the HTTP API to do this with the equivalent actions. The SDK just makes the code a little cleaner and easier to read. Next is a little helper function that I wrote to get a key-value hash of the argument values on the command line, so the script can accept the stack, interval, and max values in this format. Then we have the ping interval, stack, and max variables that just interpret the arguments, like you saw me invoking in the test script. I configure or set up the CloudFormation namespace off of the AWS SDK; this is just what it looks like in node, but you can do the equivalent in your language. I detect if I have a null or an empty string for the stack key off of the command line. Again, you can do this in any scripting language, or any language that BASH can invoke; this is just what it looks like in node. The Amazon Web Services node.js SDK for CloudFormation requires a hash, a key-value object, with only one key-value pair inside of it; this is extremely similar to the post body for the HTTP API if you want to go look that up as well. I just provide the Pascal-cased StackName as the key, then the actual stack name as the value. So this stack name here in the ping request parameters comes from invoking the script with the double-dash stack parameter, which we can see defined here on this line.
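As a sketch, that setup section looks something like this in node.js, assuming small parseArgs and toInt helpers like the ones just described (all names here are illustrative):

```javascript
var AWS = require('aws-sdk');

var args = parseArgs(process.argv);            // e.g. { stack: 'ca-demo-1', interval: '15', max: '60' }
var stack = args.stack;
var pingInterval = toInt(args.interval) || 15; // seconds between polls
var maxPings = toInt(args.max) || 60;          // give up after this many polls
var pings = 0;                                 // ping counter, used for the timeout

var cloudformation = new AWS.CloudFormation();

if (!stack) {
  console.error('A --stack name is required.'); // null or empty string check
  process.exit(1);
}

// One PascalCase key, mirroring the HTTP API's post body.
var pingParams = { StackName: stack };
```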
So once I've done that housekeeping, prepared my function invocation, pulled all the arguments off of the command line, and initialized my ping counter to zero, I do a little console.log to provide feedback to the user, and I immediately run a ping. A ping here doesn't mean a network ping where you're timing a request-response; what we are doing is checking the state of the CloudFormation stack that we just created here on line 15, where we have defined the stack name. Similarly, I'm also passing the same stack on line 22. So when we're thinking about index.js, we should be thinking about a stack that's in the middle of being created.
Now, the way that we detect whether a stack is created or not using the SDKs is an operation called describe stacks. Unfortunately, there's not a single-stack get operation that lets us check the state of one stack, so we need to use this array-based method, but we can still accomplish everything that we want. Again, this is the node.js way, using asynchronous programming and a function callback, but really all we're saying is: whatever SDK we're using, invoke the CloudFormation describe stacks action with the correct parameters, that is, give it the stack name. Hell, you could even do this in BASH. Then in node, we have this callback where the continuation happens, and we're passed either the error or the data of the successful request.
Long story short, if you were working in a language where this is represented as synchronous code rather than a continuation callback, you might wrap this in a try/catch to catch the error, with the healthy path being you storing the value of the describe stacks operation as data. Here, if there was an error, this error argument will be non-null and have some value that I can console.log. If it's fine, then the data will be an object in JavaScript.
So I need to do that quick check, and I can just process.exit(1) so we fail our test script if we get an error when we're trying to ping CloudFormation. Otherwise, I need to do a quick check on the data. Because we're doing this nasty array-based operation, there is not a single get, I need to check the data property and make sure we got a well-formed stacks object back from describe stacks. Off of the root of the object, which looks like a JSON object, one of the first properties is stacks, and stacks should be an array of the stacks we're describing. Because we provided the stack name, we should expect data.stacks to have a length of one. So here in node.js, we can just do this check, because zero is falsy: if data.stacks.length returns zero, this value will be false, and then we'll get a true because we've negated the entire compound statement here. So basically, if we don't find a stack element in the data.stacks property, we'll also fail out. That is, we didn't find the stack even though we were able to correctly ping CloudFormation.
So our else clause here means we were able to correctly ping and get some information back from CloudFormation. First, to make it a little more readable, I just access the actual stack that we're looking for. This data.stacks first element in the array will be a JSON object representing the stack itself, and the stack has a top-level property called stack status, which directly maps to this stack status that we're talking about here.
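Putting those checks together, the ping function is roughly the sketch below; handleStatus is an assumed name for the switch statement we look at next:

```javascript
function ping() {
  cloudformation.describeStacks(pingParams, function (err, data) {
    if (err) {
      // Couldn't even reach CloudFormation; fail the test script.
      console.error(err);
      process.exit(1);
    }
    if (!data.Stacks || !data.Stacks.length) {
      // CloudFormation answered, but our stack wasn't in the response.
      console.error('Stack not found: ' + stack);
      process.exit(1);
    }
    // First (and only) element, since we asked for one stack by name.
    handleStatus(data.Stacks[0].StackStatus);
  });
}
```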
So when we're running our ping cycle, we're trying to see if this stack status here on line 76 ever becomes create complete, in which case we succeed. If it ever becomes anything in this error category, then we fail. If it's still create in progress, then we need to run another ping, and we do a little bit of waiting there where we effectively sleep the function.
So let's run through our little switch statement. JavaScript allows you to do string-based switch statements, which is what I'm doing here. So I'm saying: based on the value of the stack status here, which should be something on this entire grid here, do some operations. I have included the update and delete life cycle states because this script needs to not explode if somebody, for instance between this line of BASH and this line of BASH, manually goes and alters something. I need to be able to handle the edge cases where we're no longer just doing a create; perhaps somebody has canceled the operation and commenced a delete. So we'll see all of the possible statuses, not just the ones that we would expect to see during a create operation.
So first, I enumerate or list out all of the positive dynamic states, meaning the "I'm not done yet and nothing is wrong" states. The one that we would expect during a create cycle would be create in progress, but I'm also including update in progress and update cleanup in progress because those are the positive ones in the update cycle. If we get an "oh gosh, we're not done yet" back, then we should provide some feedback to the user and increment our ping counter. Then, if we are at the top of our counter value, we need to error out because we've hit a timeout; otherwise, we should just do another request later.
So again, JavaScript is asynchronous. This is the equivalent of doing a sleep operation, where the ping interval is in seconds and the 1,000 converts the units to milliseconds, since JavaScript operates over milliseconds. All I'm doing here is effectively saying: in ping interval times 1,000 milliseconds, run the ping function again, which makes perfect sense given that the ping function is the thing that checks the status, i.e. runs our little periodic poll.
Okay, so if we're not still waiting, we could also be done already, in which case we need to have our stable positive end states listed out. Those stable positive end states are just create complete, and update complete for when we're doing an update. Both of these states mean that we are now able to actually run our tests and do things like, in our case, try invoking the Lambda function.
We also need to be able to handle failure cases. We effectively list out every state that isn't a positive one or a continuation one, all of the other possible cases, and then we just exit with a non-zero error code so BASH knows to trip that set -e error and not run the next line of BASH with the tests. That is, we don't want to try running the tests against the CloudFormation stack if it was unable to create the things inside the stack. This default is just a catch-all so we don't get a bunch of uncaught exceptions in node if somebody tries to do something silly, but we should never get any unrecognized states back, because AWS CloudFormation should not reply with any strings other than the ones I have listed here. Finally, this is that command line arguments function I was talking about earlier, and this other helper just coerces strings into integer values so I can do things like setting the interval and the max inline in a BASH statement.
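Here is a condensed sketch of that switch. For brevity I've collapsed the explicitly enumerated failure states into the default case; the script from the demo repo lists them out individually:

```javascript
function handleStatus(status) {
  switch (status) {
    // Positive, still-in-flight states: wait, then poll again.
    case 'CREATE_IN_PROGRESS':
    case 'UPDATE_IN_PROGRESS':
    case 'UPDATE_COMPLETE_CLEANUP_IN_PROGRESS':
      pings += 1;
      console.log('Status ' + status + '; ping ' + pings + ' of ' + maxPings);
      if (pings >= maxPings) {
        console.error('Timed out waiting on stack ' + stack);
        process.exit(1); // hit our timeout
      }
      // Async "sleep": the interval is in seconds, setTimeout wants milliseconds.
      setTimeout(ping, pingInterval * 1000);
      break;

    // Stable positive end states: exit 0 so BASH moves on to the tests.
    case 'CREATE_COMPLETE':
    case 'UPDATE_COMPLETE':
      console.log('Stack reached ' + status);
      process.exit(0);
      break;

    // Everything else (the ROLLBACK_*, *_FAILED, and DELETE_* states):
    // exit non-zero so set -e stops the script before the tests run.
    default:
      console.error('Stack entered ' + status + '; aborting.');
      process.exit(1);
  }
}

ping(); // kick off the first poll immediately
```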
Okay, so long story short: I check if the stack is done; I recurse if it's not but there are no problems yet; I exit with a success if it's done; and I exit with an error if it's anything besides a continuation or a success. So we've gone over the template, and we've gone over what the invocation should look like and why we want to do it, because we want to be able to run end-to-end tests inside of our continuous integration system, which likely speaks BASH. Then I've shown you an example and gone over, a little bit symbolically, what is happening in the logic for a poll function.
Let's look at what our test script is actually going to do. First, it's going to describe the stacks based on the stack name that's passed in here. Then, if there's an error, our test will of course just exit out. It will check that the stack was found, similar to what we just saw in the index file where we had to access data.stacks zero. Next is a trick to transform the outputs array that the CloudFormation describe stacks API returns: it returns outputs as an array of key-value hashes where the output key and the output value are defined independently, and I want this in hash format rather than array format so we can do this kind of access here. So I just convert everything into hash format. I then try to invoke the Lambda function using the Lambda name output that I should be getting from my stack here. I do a request-response style invocation so that the execution of the Lambda will finish before it replies back. Then I say log type none because I don't want to manually inspect the logs that come back from the invocation. And I send it the stringified version of the test object payload, which is just this foobar object with a testable prop of "Hello world," so we can check that the same piece of data is written back to the table.
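That array-to-hash conversion is just a few lines, something like:

```javascript
// CloudFormation returns [{ OutputKey: ..., OutputValue: ... }, ...];
// flatten it so we can write outputs.LambdaName and outputs.TableName.
var outputs = {};
data.Stacks[0].Outputs.forEach(function (o) {
  outputs[o.OutputKey] = o.OutputValue;
});
```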
Now, given that we looked at our Lambda function code inside of the CloudFormation template, we know that the Lambda function is just going to execute a put against the DynamoDB table. So this error here will be non-null and have some content if we have a problem doing that put inside of the Lambda function. Otherwise, it means that the Lambda believes it succeeded, so we can test. If the Lambda actually did succeed, we should be able to get the record from DynamoDB, which is exactly what I'm doing here: I just go and run a get function. In my continuation, I check whether I got an error. If I'm unable to see the object that I'm looking for and I get some sort of error, then, of course, I'm going to exit non-zero and my test script fails. Otherwise, if I do see that I got my object back, then I want to run a test on it and make sure that the testable prop, which was equal to "Hello world," is still equal to "Hello world" when I get the data back. If it's not, I should exit again. And if it is, I should clean up the record so I can use the created stack and actually continue operating on things. For instance, if I need to move this into production, or if I need to immediately start using this as a staging environment, I don't want a dummy record in there, where we have nonsense for our ID and nonsense for the defined property with "Hello world." So I then delete the record and make sure that I'm able to delete it without an error. If I'm able to delete it, then I have fully done an end-to-end test of my entire database system plus API, locally, without having had any of the architecture created beforehand.
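The core of the test script, as a node.js sketch: it continues from the outputs hash above, and the output key names assume the Outputs sketch from earlier:

```javascript
var lambda = new AWS.Lambda();
var docClient = new AWS.DynamoDB.DocumentClient();

var payload = { id: 'foobar', testableProp: 'Hello world' };

lambda.invoke({
  FunctionName: outputs.LambdaName,
  InvocationType: 'RequestResponse', // block until the Lambda's write has finished
  LogType: 'None',                   // we don't need execution logs back
  Payload: JSON.stringify(payload)
}, function (err) {
  if (err) { console.error(err); process.exit(1); }

  // The Lambda claims it wrote the item; verify with a direct read.
  var key = { TableName: outputs.TableName, Key: { id: payload.id } };
  docClient.get(key, function (err, data) {
    if (err || !data.Item || data.Item.testableProp !== payload.testableProp) {
      console.error('Record missing or malformed');
      process.exit(1);
    }
    // Clean up the dummy record so the stack is left pristine.
    docClient.delete(key, function (err) {
      if (err) { console.error(err); process.exit(1); }
      console.log('End-to-end test passed.');
    });
  });
});
```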
So to give you an idea of how cool this is: any time a developer makes a commit, if we include this inside of our testing script and we give our CI the ability to create these CloudFormation templates, then the CI on every single commit can spin up a completely new stack and run a very extensive test suite that fully tests everything.
Another way that you can use this: instead of running on every single commit, when it's time to promote from master to production, or from a staging environment to production, you may want to run final verification tests to make sure that your staging code can handle the load and passes certain failover testing. You could also do that automatically every time you commit, but perhaps in your use case that would be prohibitively expensive or slow.
What you can do with this system is make a one-off script where, before you do a promotion, you set up your own testing suite that runs a load test. Rather than the end-to-end code or functionality test we have here, we could put some sort of load testing script here on line 26.
So our capabilities are pretty much unlimited once we realize that we can, completely headlessly and without observing it at all, let an entire cloud infrastructure build itself out and test itself. And if we're just doing testing, we can add a line down here where CloudFormation goes and kills the stack too. So we can run the full test and then destroy it, and incur almost no cost, because we're deleting the stack within a couple of minutes after we've created it.
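That teardown is one more line at the end of the script, with the illustrative stack name again:

```bash
# Destroy the stack once tests pass, so we only pay for a few minutes of resources.
aws cloudformation delete-stack --stack-name ca-demo-1
```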
So let's go see what this actually looks like when we run it. First, I'm going to manually set a few things so I can make sure this runs correctly during the demonstration. The first thing is to set the region to us-west-2, since my default region on this local machine is not that value. Then I'm also going to set my profile. Okay, so now that we can actually make sure this will work, let's try running it.
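One way to set those for the shell session (the profile name here is illustrative):

```bash
export AWS_DEFAULT_REGION=us-west-2  # region for the AWS CLI
export AWS_REGION=us-west-2          # region for the node.js SDK
export AWS_PROFILE=my-demo-profile   # named credentials profile
```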
Okay, so the first thing we should notice is that our execution, where we did our request-response and ran this operation, actually worked immediately, because we got the stack ARN back with the correct region. We should be able to tab over to the console here and see that there's actually a create in progress, and view the stack events where we're actually creating the resources associated with the stack that we looked at a minute ago.
So we can see that the resources are creating, and we can go and audit the template to make sure it's working correctly. We didn't use any parameters here, but during your CI/CD deploy this view might be helpful for seeing what the system is automatically creating, since most more complicated stacks will actually have parameters. We were just keeping it simple.
While we're doing that, we can also do another demonstration where we tie into this SNS event that we were talking about. Rather than waiting and effectively polling, which is what we're doing while we're refreshing, we can also tie in and get an email when the stack is finished. So I'm going to create this while we go and review. Now, in the advanced options, I'll put my work email, we can create a topic, and we should expect that stack events are emailed to me. We could also use a Lambda function to listen to this. I don't need to set any of the other advanced options, but I do need to grant that IAM capability again and hit create.
So now we're waiting for me to get that email. We saw that we got the stack create complete. We started at 20:59:52 here, and it took a little more than a minute to finish creating the stack. Let's go look at the output of our BASH function. We saw the beginning of the stack ping wait cycle, where every 15 seconds we were running a check, right? We saw that we got the create in progress status back because we had this ping looping cycle with the asynchronous requests. Then finally, after four of these pings, which we should expect given the delta between when the stack was created and when it finished, we see that it took about a minute to detect via this polling method that the stack had finished; the poller exited with a success and then started running an integration test, which we also reviewed, where we found the outputs. That is, we found the name of the table and the name of the Lambda that were generated inside of the stack. We invoked the Lambda using our request-response invocation model, where we actually put the object into the DynamoDB table. Then we got the object back with a get function, checked that the properties matched, then correctly were able to delete it out of the table. So this would be analogous to you creating a Rails application, putting an RDS database behind it, and then actually going and checking your controllers after the full stack had finished creating.
So let's go and look inside of the DynamoDB table and just make sure that that has been deleted. That is, let's make sure that our delete function worked like we expected. So we have our CA sample one that we just created. If we go look inside of items, we can see that there are actually no more items, which is exactly what we wanted. So our test passed and then it cleaned up after itself, which is great.
So if we look back in the CloudFormation management console, we see that CA2 is finished. We should expect that our SNS integration that takes advantage of this broadcast should have broadcasted back to me, the user, via an email. So we can actually see that we were able to subscribe to the SNS topic here. We got "undefined" because I left the display name blank when we defined the topic. This is actually completely valid; it's just weird looking. You would normally name this with a semantically meaningful string. I could put this as "My CA demo stack" or something.
I'm actually not going to confirm the subscription here. I'm going to do the cleanup and just delete both stacks, since we went through the entire demonstration. So what we found is that we now have the ability to create these CloudAcademy 1 stacks extremely quickly and run an integration test on them. So I can actually do another one of these, right? Say, for instance, I want to create a whole bunch of these at once: I can just do a copy here, change this to CloudAcademy 2, create another tab, and do the same thing.
I skipped ahead so the CloudAcademy 2 and 3 are done. We can actually see that we were able to create two additional stacks. This would be like using concurrency inside of your continuous integration or continuous deployment system, and it should be pretty interesting to you that you can create three of these stacks in parallel and test them independently. These could be different versions of commits. I've used the same ones just because of the restrictions on complexity for doing this screen recording here, but this could be a very sophisticated testing system where you're doing full-stack testing automatically on entire infrastructures at once.
So thus ends our demonstration of how we can integrate continuous integration systems with CloudFormation. Hopefully, you learned a thing or two about techniques for both polling and building event-driven notification systems.
For our next video in the course, we'll be going over nested stacks, a technique that you can use to create increasingly complex models, but still have a one-step deploy.
Nothing gets me more excited than the AWS Cloud platform! Teaching cloud skills has become a passion of mine. I have been a software and AWS cloud consultant for several years. I hold all 5 possible AWS Certifications: Developer Associate, SysOps Administrator Associate, Solutions Architect Associate, Solutions Architect Professional, and DevOps Engineer Professional. I live in Austin, Texas, USA, and work as development lead at my consulting firm, Tuple Labs.