This course has been designed to teach you how to manage networking and compute resources on Google Cloud Platform. The content in this course will help prepare you for the Associate Cloud Engineer exam.
The topics covered within this course include:
- Adding subnets to a VPC
- Expanding existing subnets
- Reserving static addresses via the console and Cloud Shell
- Managing, configuring, and connecting to VM instances
- Adding GPUs and installing CUDA libraries
- Creating and deploying from snapshots and images
- Working with instance groups
Learning Objectives
- Learn how to manage networking and compute resources on Google Cloud Platform
- Prepare for the Google Associate Cloud Engineer exam
Intended Audience
- Those who are preparing for the Associate Cloud Engineer exam
- Those looking to learn more about managing GCP networking and compute features
To get the most from this course, you should have some exposure to GCP resources, such as VPCs, VM instances, Cloud Console, and Cloud Shell. However, this is not essential.
Hi, everyone. Welcome back. In this lesson, I'm going to demonstrate how to add a GPU to an existing VM instance. So, let's get started. What I'm going to do here is browse to my VM instance called MyInstance. Now, notice here that my VM instance is currently stopped. I can't add a GPU to a running instance. Also, before I add a GPU, I need to change its host maintenance setting so that it terminates rather than live migrates. So what I need to do here is click Edit at the top and then scroll down here and set the "on host maintenance" option to "terminate".
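The same preparation can be done from Cloud Shell instead of the console. A sketch with gcloud, assuming an instance named my-instance in zone us-central1-a (substitute your own instance name and zone):

```shell
# Stop the instance first - a GPU can't be added to a running instance
gcloud compute instances stop my-instance --zone=us-central1-a

# Set host maintenance to terminate, since GPU instances can't live migrate
gcloud compute instances set-scheduling my-instance \
    --zone=us-central1-a \
    --maintenance-policy=TERMINATE
```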
In the Machine configuration section up top here, I need to click on the CPU platform and GPU link. Clicking Add GPU here displays a list of available GPUs. As you can see here, clicking the GPU Type dropdown displays four different types of GPU that are available. I can also select the number of GPUs in the Number of GPUs dropdown. What I'm going to do here is choose the Tesla K80 and select one as the number of GPUs that we're going to install. At the bottom of the instance page here, I can click Save to apply my changes. What this does is attach the GPU to my VM. I can now start my instance with my GPU attached.
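If you're building a GPU instance from scratch rather than editing an existing one, the gcloud equivalent looks roughly like this. The instance name, zone, and machine type are placeholders, and GPU availability varies by zone, so treat this as illustrative:

```shell
# Create a new Windows Server instance with one Tesla K80 attached.
# GPUs require a zone where that GPU type is offered, and the
# maintenance policy must be TERMINATE, as discussed above.
gcloud compute instances create gpu-instance \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-k80,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=windows-2016 \
    --image-project=windows-cloud
```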
Once this VM comes up, I can RDP to it and install the GPU driver so Windows can use it. To RDP to my VM, I just have to download the RDP file here. I can then use it to connect to my instance. At this point, I'm logged in to my VM. Now, for most driver installs on Windows machines, you'll obtain drivers by installing the NVIDIA CUDA Toolkit. I should also note that on Windows Server instances like this one, you must install the driver manually. I say this because on other instances you can also script the install. So, to manually install the drivers here, I'm going to download the NVIDIA CUDA Toolkit on this instance where I've attached the new GPU. To download this toolkit, I need to open my browser here and browse to the NVIDIA developer site. So, it's developer.nvidia.com/cuda-toolkit. And then from here, I'll download the toolkit, selecting my OS, the architecture, and the OS version.
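If you don't already have login credentials for the Windows instance, gcloud can generate them before you connect over RDP. A sketch, assuming the same instance name and zone as before and a made-up username:

```shell
# Generate (or reset) a Windows username and password for RDP login.
# The command prints the new password to use with the downloaded RDP file.
gcloud compute reset-windows-password my-instance \
    --zone=us-central1-a \
    --user=demo-user
```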
Now, I can also choose between a network and a local installer. The local option downloads the entire package, whereas the network option is a smaller installer that pulls down the bits it needs from the internet as the install runs. So we'll go ahead and use the network option here, and then we'll download it. Now, with the installer downloaded, I have to launch it just like any other installer and work through the wizard here. What the CUDA toolkit is doing here is checking my system for compatibility with the toolkit and the drivers included with it. So what I'll do here is agree to the license agreement, and then we can choose an express or a custom installation. I'm just going to do the express here so it overwrites any current display drivers and installs the correct driver for my GPU.
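As noted earlier, on non-Windows-Server instances the driver install can be scripted instead of clicked through a wizard. A rough sketch for a Debian/Ubuntu Linux instance follows; the exact repository setup changes between CUDA releases, so check developer.nvidia.com/cuda-toolkit for the current steps rather than treating this as definitive:

```shell
# Illustrative sketch only - repository setup varies by CUDA release.
# Kernel headers are needed so the driver kernel module can build.
sudo apt-get update
sudo apt-get install -y linux-headers-$(uname -r)

# After adding NVIDIA's CUDA apt repository for your distribution
# (per the instructions on the CUDA download page), the toolkit and
# its bundled driver install in one step:
sudo apt-get install -y cuda
```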
Now, what this CUDA Visual Studio integration screen is telling me is that there are no supported versions of Visual Studio installed on my VM. Some components of the CUDA toolkit require it, so before I continue my installation, I'm going to click on the Visual Studio link here. We'll cancel out of this installer, download Visual Studio, and save it. Let's install this first. We'll just work through the wizard here; this part doesn't take too long. We're only concerned with the core editor, so we'll click Install - we don't need to install any workloads. This part of the Visual Studio installation takes a few minutes to complete. It's not too terrible. We're going to uncheck the start-after-installation option here. And we now have Visual Studio installed, so we'll close this out, go back into our downloads, and relaunch our CUDA install.
And again we'll go through the system compatibility check. With our system check complete, we can agree to the license agreement again, choose the express option, and the installer will prepare for the installation. Once it finishes preparing, it begins downloading the rest of the installation packages that are needed. If you remember, back when we downloaded the installer we chose the network-based installer, so it's now pulling down the packages it needs. At this point, it begins the installation, and we can see here it's now installing the graphics driver, which is what we're looking for. As you can see, the installation of these CUDA libraries can take quite a while, especially with the smaller network installer. You can speed up this end of the install by doing the full, local download up front, but we chose the network version instead.
So we'll let this run, and when it completes we'll confirm that our graphics driver has been installed correctly, and then we can go ahead and click Next to finish the installation. After the installer finishes, I just need to verify that the GPU driver installed correctly. I can do this by looking at the GPU in Device Manager, so let's pull Device Manager up. Let's go over to Display adapters here, and we can see the NVIDIA Tesla K80 is listed. If we look at the properties here, we can see the driver provider is in fact NVIDIA. As long as there are no error icons floating around, I can confirm that everything installed correctly, and with that, my GPU is now functioning on my VM instance.
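Besides Device Manager, the NVIDIA driver ships with a command-line utility, nvidia-smi, that confirms the GPU is visible to the OS. On Linux it's typically on the PATH; on Windows the location depends on the driver version, so the path shown in the comment is an example, not a guarantee:

```shell
# Prints the driver version, each attached GPU, and its utilization.
# On Windows, run it from the driver's install directory, e.g.
# "C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"
nvidia-smi
```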
Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skillset that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.
In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.
In his spare time, Tom enjoys camping, fishing, and playing poker.