Deploying and Implementing Networking Resources
Deploying and Implementing Compute Engine Resources
This course has been designed to teach you how to deploy network and compute resources on Google Cloud Platform. The content in this course will help prepare you for the Associate Cloud Engineer exam.
Learning Objectives
- Understand key networking and compute resources on Google Cloud Platform
- Explain the networking and compute features commonly used on GCP
- Deploy key networking and compute resources on Google Cloud Platform
Intended Audience
- Those who are preparing for the Associate Cloud Engineer exam
- Those looking to learn more about GCP networking and compute features
Prerequisites
To get the most from this course, you should have some exposure to GCP resources, such as VPCs and Compute Engine instances. However, this is not essential.
Welcome back. In this lesson, what we're going to do is launch a Compute Engine instance using a custom network configuration. We're going to connect this Compute Engine instance to the virtual network that we deployed earlier. To begin the deployment, what we need to do is browse to the VM instances page under Compute Engine, over here on the left.
Now, from here, what we want to do is click the Create button. I'll call my instance vm1. And you'll notice, if I try to use capital letters here, it's going to tell me that the name must use only lowercase letters, numbers, and hyphens. Now over here on the right side, I can see the monthly estimate for my VM instance as configured, and then in these two dropdowns, I can specify the region, which is the geographical location where the VM is going to run, and the zone, which is an isolated location within that region. Now, what the zone does, as you can see here, is determine what computing resources are available and where data is stored and used. Now, in the machine type box, here, I can customize my VM. I can select the CPUs and memory that my VM should use.
What I'm going to do here is select two CPUs and 7.5 gig of memory. This is the n1-standard-2 machine type. And when I make that change, I can see my monthly estimate changes as well. Now, if I hover over the icon next to container, I can see that I can deploy a container to this VM instance by using a Container-Optimized OS image. I'm not going to do that for this lesson. What I am going to do, under boot disk, is change this boot disk. We can see right now that the current boot disk is a 10 gig standard persistent disk running the Debian GNU/Linux 9 image. And for this exercise, we're going to deploy a Windows Server 2016 Datacenter image instead. So, we'll go ahead and scroll down here, and we can see Windows Server 2016 Datacenter. So, we'll go ahead and select it here. And then down here, we can select the boot disk type and the size. If we select the dropdown here, we can select a standard persistent disk or an SSD. I'll leave this as standard persistent, and 50 gig will suffice for this exercise. So, we'll click Select.
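For reference, the choices made so far map fairly directly onto a gcloud command. This is a sketch, not the method shown in the lesson, and the zone is an assumption — substitute your own:

```shell
# Create vm1 with an n1-standard-2 machine type and a 50 GB standard
# persistent boot disk running Windows Server 2016 Datacenter.
# The zone (us-central1-a) is an assumed value for this sketch.
gcloud compute instances create vm1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-2 \
    --image-family=windows-2016 \
    --image-project=windows-cloud \
    --boot-disk-size=50GB \
    --boot-disk-type=pd-standard
```

Using `--image-family` rather than a specific image name pulls the latest image in the Windows Server 2016 family, which is what the console's image picker does as well.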
Now, as we scroll down the page here, we have an option here for identity and API access. If we hover over the icon next to identity and API, we can see that any applications that run on the VM, they use the service account to call Google Cloud APIs. Now, from here, we can select the service account that we want to use, along with the level of API access that we want to allow. For this demonstration, we're going to leave this at its default setting. We don't need to do anything special here. In the firewall section, what we can do is add tags and rules to allow specific network traffic to and from the internet. What I'm going to do is allow HTTP traffic in case I want to do some kind of demonstration later on, using this VM. Now, to customize the configuration of the VM, regarding the network we're going to connect to, any additional NICs, any additional disks, et cetera, what we can do here is select the management, security, disks, network, and sole tenancy dropdown here.
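Checking "Allow HTTP traffic" in the console is shorthand for two things: tagging the instance and having a firewall rule that targets that tag. A rough gcloud equivalent, assuming the lab's test network and the conventional `http-server` tag, might look like this:

```shell
# Create a firewall rule on the test network that allows inbound HTTP
# to any instance carrying the http-server tag.
# The rule name and network name are assumptions from this lab.
gcloud compute firewall-rules create allow-http \
    --network=test-network \
    --allow=tcp:80 \
    --target-tags=http-server

# Apply the matching tag to an existing instance.
gcloud compute instances add-tags vm1 \
    --zone=us-central1-a \
    --tags=http-server
```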
Now, from here, under Management, we can provide a description for our VM, and any labels for our VM, and if we hover over the icon for labels, we can see that you use them to organize projects and to kind of group resources together. We're not going to do any labeling or description here, and then we can also specify deletion protection. What deletion protection does is ensure that a VM can't be deleted. You can see here if we check the box, nothing really changes. This is just something that happens on the underside, or under the covers, so to speak. So, we'll uncheck this box here. We can also specify metadata for our VM, and then under availability policy, we can specify preemptibility, on-host maintenance, and automatic restart. Now, that being said, a preemptible VM, although it's cheaper, will run for at most 24 hours, and it can be terminated earlier if system demand requires it. This option might be good for development environments or quick testing. And then of course, the on-host maintenance and automatic restart options relate to infrastructure maintenance that's performed within the Google platform. I'm going to leave these options at their defaults. But, if we select the dropdown, you can see we can either migrate VM instances or terminate them in the event that they need to go down for any kind of infrastructure maintenance. The automatic restart is either on or off. Essentially, automatic restart tells Compute Engine to automatically restart any VM instances if they're terminated for non-user-initiated reasons. Typically, this means maintenance events or hardware failures. We'll leave these options at their defaults and then switch over to security, here.
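The availability-policy options discussed above each have a corresponding gcloud flag. As a hedged sketch (instance name and zone are assumptions), a preemptible VM would be created like this:

```shell
# A preemptible VM must use the TERMINATE maintenance policy and
# cannot automatically restart -- these flags mirror the console's
# availability policy section.
gcloud compute instances create vm1-preemptible \
    --zone=us-central1-a \
    --preemptible \
    --maintenance-policy=TERMINATE \
    --no-restart-on-failure

# Deletion protection, discussed under Management, is its own flag:
#   --deletion-protection
```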
We can see, here, we have options for shielded VM. Now, you can see that they're turned off here; they're not even enabled. Now, if we hover over shielded VM here, we can see that these features include trusted UEFI firmware, along with options for secure boot, vTPM, and integrity monitoring. We can also specify SSH keys, which we're not going to do here. We'll do this at a later time in another lesson. And then, under disks, we can specify deletion rules for the boot disk. Essentially, we can tell Google Cloud Platform to delete the boot disk when the instance is deleted. This is on by default, which makes sense.
Typically, you don't want the boot disk to sit around if you've killed off the instance. And then we can also specify encryption options. We can allow Google to manage our keys. We can select customer-managed keys, or we can select customer-supplied keys. We can also configure additional disks here. If we have an existing disk we want to attach, we can attach it here, or we can add a new disk. So, what I'll do here is add a new disk here, and I'll just leave the naming set to its default disk1. And then, under the type, here, we have a couple different options. We can use a local SSD scratch disk, a persistent SSD disk, or a standard persistent disk.
If we hover over type, here, we can see that what this tells us is that storage space is obviously less expensive for a standard persistent disk. SSD persistent disks are better for random IOPs or for streaming throughput, with lower latencies. What we'll do is, we'll leave this set to standard persistent disk, and then, obviously, for source type, we can use either an image or we can use a blank disk. We're going to use a blank disk here, and for mode, we'll leave the option at read/write. We're not going to make a read-only disk. Now, the deletion rule, here, is a little different than the boot disk. By default, the deletion rule, here, says to keep the disk when you delete the instance. What I want to do here is delete my disk if we delete the instance from my lab environment.
Now, a good reason to leave this at the default of keep disk is that, for example, if you have a file server you've deployed, you want to make sure that when you delete the instance, you don't delete any data that may be critical. So, if you have a separate disk that you've provisioned to store shares and whatnot, you may want to keep that data when you delete the instance. If you leave this deletion rule set to keep disk, you don't have to worry about losing that data if you delete the instance as part of a migration or whatever kind of maintenance you're going to do. Of course, here, for size, we can specify the size of the disk.
I'm going to change this to 10 gigabytes. And when I do that, it tells me that this may result in reduced performance because I've entered a volume of less than 200 gig. This is a lab environment, and a demonstration, so I'm not really worried about performance here. And then as you can see here, we also have the encryption options that we were offered for our boot disk. We'll leave this at the default Google Managed Key.
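The additional-disk settings chosen above can also be expressed at create time with the `--create-disk` flag. A sketch, with the zone assumed, and with `auto-delete=yes` matching the lab choice to delete the disk with the instance (the console default for additional disks is to keep them):

```shell
# Create the instance with an extra 10 GB blank standard persistent
# disk (disk1) that is deleted along with the instance.
gcloud compute instances create vm1 \
    --zone=us-central1-a \
    --create-disk=name=disk1,size=10GB,type=pd-standard,auto-delete=yes
```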
And then what we can do is click Done to provision the disk here. Now, under networking, we can assign network tags, or we can set a custom host name for this instance. We'll leave the default here, but more importantly, this is where we can either add new network interfaces or edit the existing ones. So, for example, the default network interface is called default-default, with an address range of 10.128.0.0.
What I'm going to do is connect this to my test network, which is a 192 network. So, I select the dropdown here, and then select the test network. And then we can see it's automatically assigned to the default subnet of that network, and then from here, I have an option to specify a primary internal IP address. An ephemeral IP address won't change when you restart the instance. However, deleting and recreating an instance will change its internal IP.
Now, if we select the dropdown, here, we have a couple different options. We have an ephemeral automatic, an ephemeral custom, or we can reserve static internal addresses. If we hover over the icon, here, we can see that if we select the ephemeral automatic, what Google is going to do is assign an address from the subnetwork range, or, if we select ephemeral custom, we can manually enter an address. Now, if we select the third option, which is a static internal IP address, what this will do is allow our instance to keep its IP even when it's deleted and recreated. So, we'll go ahead and we'll reserve a static internal IP address. And then, what we have to do, when we do this, is give the IP address a name.
So, what I'm going to do is call this privateIPVM1. So, that tells me that it's a private IP for my VM1 virtual machine. We can see it's already associated with the default subnet and then we can either assign it automatically or let me choose. So, let's let me choose, here, and what I'll do is, I'll give it an address of 192.168.1.25. That falls within the range of my subnet. So, we'll go ahead, and we'll reserve that IP.
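The same reservation can be done from the command line. This is a sketch, with the region assumed, and note that gcloud resource names must be lowercase, so the name becomes privateipvm1 here:

```shell
# Reserve the static internal address 192.168.1.25 in the default
# subnet of the test network. Region is an assumed value.
gcloud compute addresses create privateipvm1 \
    --region=us-central1 \
    --subnet=default \
    --addresses=192.168.1.25

# Then reference the reserved address when creating the instance.
gcloud compute instances create vm1 \
    --zone=us-central1-a \
    --network=test-network \
    --subnet=default \
    --private-network-ip=192.168.1.25
```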
Now, for the external IP, what we can do is create an external IP address that's associated with my instance. What we can do is either use an ephemeral address, or we can select none. We can also select an unused static address. Now, if we select none, we're not going to have external internet access. So, what I can do here, is I'll just choose ephemeral.
Now, the network service tier gives us a couple different options. We can choose premium or standard. What this tier does is allow you to optimize network quality and performance. I don't need premium performance here for my lab environment, so I'll select standard. We don't need to do any IP forwarding, nor do we need to create a public DNS pointer record, so we'll just click Done, here.
If I wanted to create a new network interface, I could click add, here, and create a second NIC for my VM. I don't need that for this lab environment, so I'll cancel here. Now, if we select sole tenancy, we can see, here, we can specify node affinity labels. We don't need to do that here. So, what we can do now, with our VM configured, is look at the monthly estimate here, and the cost of my VM has gotten a little higher than it was when we started. But, to provision our VM, we now scroll down and click Create. What this is going to do is create a VM called vm1 on my test network, with the options that I've chosen throughout this configuration process.
So, with that said, you can see that lots of configuration can be done right from within the VM deployment wizard. You can pretty much deploy your VM with any combination of configuration options that you need.
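Pulling the whole walkthrough together, everything configured in the wizard could be approximated in a single gcloud command. This is a hedged sketch: the zone and network names are assumptions from this lab, and the reserved internal address from the earlier step is referenced by its IP:

```shell
# One-shot equivalent of the wizard: machine type, Windows boot disk,
# extra auto-deleting data disk, custom network with a reserved
# internal IP, standard network tier, and the HTTP firewall tag.
gcloud compute instances create vm1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-2 \
    --image-family=windows-2016 \
    --image-project=windows-cloud \
    --boot-disk-size=50GB \
    --boot-disk-type=pd-standard \
    --create-disk=name=disk1,size=10GB,type=pd-standard,auto-delete=yes \
    --network=test-network \
    --subnet=default \
    --private-network-ip=192.168.1.25 \
    --network-tier=STANDARD \
    --tags=http-server
```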
Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skill set that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.
In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.
In his spare time, Tom enjoys camping, fishing, and playing poker.