Deployment on GCP
Computing services such as virtual machine instances, container orchestration systems, and serverless platforms get a lot of attention in the tech world, but storage and networking are also essential to almost all applications. Data storage is a broad topic covering a wide variety of storage mechanisms for different use cases. Networking is vital for service communication, and security is always important, though it is often treated as an afterthought.
As the technologies used to build distributed systems keep improving, data storage offerings continue to grow, evolve, and inspire new services. Having a better understanding of these different services can help us build better applications.
This course will help prepare you for the Google Professional Cloud Developer Certification exam, which requires a working knowledge of building cloud-native systems on GCP, and covers a wide variety of topics, from designing distributed systems to knowing how to create different storage resources.
This course focuses on the third section of the exam overview, concentrating specifically on the last four points, which cover data storage creation, networking, and security services.
- Create data storage resources
- Deploy and implement networking resources
- Automate resource provisioning with Deployment Manager
- Manage service accounts
Intended audience:
- IT professionals who want to become cloud-native developers
- IT professionals preparing for Google’s Professional Cloud Developer exam
Prerequisites:
- Software development experience
- Proficient with at least one programming language
- SQL and NoSQL experience
- Networking experience (subnets, CIDR notation, and firewalls)
- Familiarity with infrastructure-as-code concepts
Hello and welcome. In this lesson, we're going to cover the automated provisioning of cloud resources with Deployment Manager.
Reproducibility is an important concept for systems engineers. Custom-built systems that are manually configured become snowflakes. They're difficult to operate, they're even more difficult to reproduce, and typically, the entropy of these systems increases over time. Now, there are a lot of reasons why. However, if a system is reproducible, that does help to counter some of the entropy, because at a minimum you have an established starting point.
Deployment Manager allows infrastructure to be defined in configuration files. The industry term most often used to describe this is infrastructure as code. By having the infrastructure defined in code, we gain the ability to configure consistent environments. If you're familiar with Terraform, then Deployment Manager is fairly similar. The way it works is that we describe the desired state of our infrastructure in a YAML file. These files define which resources should exist and the values for each of their properties. Notably, the configuration specifies the desired state of our infrastructure, and it's up to Deployment Manager to figure out how to make that happen.
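As a minimal sketch of what such a configuration might look like, here's a YAML file declaring a single Compute Engine instance; the resource name, zone, and machine type are placeholder choices:

```yaml
# config.yaml - a minimal Deployment Manager configuration (sketch).
resources:
- name: demo-instance          # arbitrary name, used within this deployment
  type: compute.v1.instance    # the GCP resource type to create
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/f1-micro
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
```

Notice there's no imperative logic here: the file only declares what should exist, and Deployment Manager works out the API calls needed to make it so.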
If we were to script out creating resources, we might add code that checks to see if the resource already exists and then figures out what to do if that's the case. With Deployment Manager, we tell it which resources should exist: if they don't, it goes and creates them; if they do, it checks to see if there are any changes that it needs to apply. So, Deployment Manager allows us to specify the desired state of our infrastructure and it will build it. To make configurations more dynamic, Deployment Manager supports Python and Jinja-based templates. Jinja is a template engine with its own syntax for conditionals, loops, displaying dynamic data, etc. Python templates require a GenerateConfig function that accepts a context object containing metadata and returns a dictionary describing the resources.
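As a minimal sketch, a Python template might look like the following. Deployment Manager imports the file and calls GenerateConfig, passing a context whose properties and env attributes carry the template's property values and deployment metadata; the specific property names used here are illustrative:

```python
# instance_template.py - a minimal Deployment Manager Python template (sketch).
# Deployment Manager calls GenerateConfig, passing a context object whose
# .properties dict holds values supplied to the template and whose .env dict
# holds deployment metadata (such as the deployment's name).

def GenerateConfig(context):
    """Return a dict describing the resources this template creates."""
    zone = context.properties['zone']  # illustrative property name
    resources = [{
        # Derive the resource name from the deployment name in the metadata.
        'name': context.env['deployment'] + '-instance',
        'type': 'compute.v1.instance',
        'properties': {
            'zone': zone,
            'machineType': 'zones/{}/machineTypes/f1-micro'.format(zone),
        },
    }]
    return {'resources': resources}
```

The returned dictionary has the same shape as a YAML configuration's resources section, which is what makes templates composable with plain configurations.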
Because the template is processed on Google's servers, it runs in a sandbox, meaning that there's a fixed list of built-in libraries that we can use in our templates, and system calls aren't supported. If you need an external library, it has to be fully included in the template. When creating multiple resources, we often run into dependencies: certain resources require another resource to exist before they can be created. For example, if we create a Compute Engine instance, we can specify its network. If we're creating both the network and the instance in the same configuration, then we need a mechanism that allows them to refer to each other.
Deployment Manager uses a special syntax called references that works with both YAML configurations and templates. References allow us to access information about other resources in the configuration. A reference is written as $(ref.resource-name.property), using dot notation to drill down; we can reference nested properties and even specific entries in a list. Once you have a configuration that you want to create, you need to create a deployment. Using the SDK, deployments are created with gcloud deployment-manager deployments create. If your configuration specifies a resource that already exists, by default, Deployment Manager will acquire that resource so that it can manage it via the configuration. However, if you need to, you can disable that behavior. Once you have a deployment, the resources can be updated by making changes to the configuration and calling gcloud deployment-manager deployments update.
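The reference syntax above can be sketched in a configuration that creates a network and an instance together; the resource names and zone are placeholders:

```yaml
# A configuration where one resource references another (sketch).
resources:
- name: demo-network
  type: compute.v1.network
  properties:
    autoCreateSubnetworks: true
- name: demo-instance
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/f1-micro
    networkInterfaces:
    # $(ref.demo-network.selfLink) resolves to the network's URL once the
    # network exists. The reference also tells Deployment Manager to create
    # demo-network before demo-instance, handling the dependency ordering.
    - network: $(ref.demo-network.selfLink)
```

Besides wiring values between resources, references are how Deployment Manager infers creation order, so you rarely need to declare dependencies explicitly.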
Having your infrastructure in code means that code changes can have a heavy impact. I suspect most of us have made a system change or some code change that had an unexpected consequence. As an example, early on in my career, I ran a delete statement on a SQL database and, while distracted, I forgot the WHERE clause. Changes introduce risk, which is why infrastructure-as-code tools tend to provide a way to review the effects of processing a configuration. Deployment Manager provides a preview option that shows what changes the resources will undergo, without actually applying those changes. So, by creating a configuration that defines all of the required resources, we're able to turn our infrastructure into code, and we can even put it under version control. Once infrastructure is in code, it becomes much easier to automate changes and ensure that environments are consistent.
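As a sketch, the create, preview, and update workflow looks like this; the deployment name and configuration file name are placeholders:

```shell
# Create a deployment from a configuration file.
gcloud deployment-manager deployments create my-deployment \
    --config config.yaml

# After editing config.yaml, preview the update without applying it;
# the output lists which resources would be added, changed, or removed.
gcloud deployment-manager deployments update my-deployment \
    --config config.yaml --preview

# Apply the previewed changes (or run `gcloud deployment-manager
# deployments cancel-preview my-deployment` to abandon them).
gcloud deployment-manager deployments update my-deployment
```

Running update without a configuration after a preview commits the previewed changes, which gives you a natural review step before anything is modified.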
All right, that is going to do it for this lesson. Thank you so much for watching and I will see you in another lesson.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.