Managing Network Resources
Managing Compute Engine Resources
This course has been designed to teach you how to manage networking and compute resources on Google Cloud Platform. The content in this course will help prepare you for the Associate Cloud Engineer exam.
The topics covered within this course include:
- Adding subnets to a VPC
- Expanding existing subnets
- Reserving static addresses via the console and Cloud Shell
- Managing, configuring, and connecting to VM instances
- Adding GPUs and installing CUDA libraries
- Creating and deploying from snapshots and images
- Working with instance groups
Learning Objectives
- Learn how to manage networking and compute resources on Google Cloud Platform
- Prepare for the Google Associate Cloud Engineer exam
Intended Audience
- Those who are preparing for the Associate Cloud Engineer exam
- Those looking to learn more about managing GCP networking and compute features
To get the most from this course, you should have some exposure to GCP resources, such as VPCs, VM instances, the Cloud Console, and Cloud Shell. However, this is not essential.
Hi, everyone. Welcome back. In this lecture, I'm going to talk a little bit about graphics processing units, or GPUs. Google Compute Engine provides GPUs that you can add to VM instances. These GPUs are typically used to accelerate certain workloads that run on compute instances. Workloads like data processing and machine learning benefit most from the use of GPUs. If you plan to create a compute instance that will leverage a GPU, you need to be careful about which boot disk image you choose for that instance. You also need to install the appropriate GPU driver on the instance. Some images, like the Deep Learning VM images, already have GPU drivers pre-installed, and they include packages like TensorFlow and PyTorch. Many public images, however, require you to install the GPU drivers yourself. Ultimately, you need to make sure that you identify which drivers your images need when leveraging GPUs.
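As a sketch of what that looks like, the command below creates an instance from a Deep Learning VM image, which ships with NVIDIA drivers and frameworks like TensorFlow pre-installed. The instance name, zone, machine type, accelerator type, and image family here are illustrative placeholders; check `gcloud compute images list --project deeplearning-platform-release` for the families currently available.

```shell
# Create a GPU instance from a Deep Learning VM image.
# The install-nvidia-driver metadata key tells the image to
# install the NVIDIA driver automatically on first boot.
gcloud compute instances create dl-instance-1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE \
    --boot-disk-size=100GB \
    --image-family=tf-latest-gpu \
    --image-project=deeplearning-platform-release \
    --metadata="install-nvidia-driver=True"
```

With a standard public image instead, you would install the driver and CUDA toolkit yourself after the instance boots, which is covered in the next lesson.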
Before you attempt to attach a GPU to an instance, you should check your quotas page to ensure there are enough GPUs available within your project. You can, of course, always request a quota increase if there are not enough available. Whenever you provision an instance that includes a GPU, you need to configure the instance to terminate on host maintenance. This is because an instance with a GPU attached cannot live migrate. It cannot live migrate because the instance, under the covers, is assigned to specific hardware devices. So keep that in mind when working with GPUs. In the next lesson, I'm going to show you how to attach a GPU to an existing compute instance. I'll also show you how to install the necessary CUDA libraries to support the GPU.
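The quota check and the host-maintenance setting can both be done from the command line. This is a minimal sketch; the region, zone, instance name, and accelerator type are assumptions, and the exact GPU quota metric names (such as NVIDIA_T4_GPUS) vary by GPU model.

```shell
# List the GPU-related quotas for a region (metric names containing
# "GPUS" indicate per-model GPU quotas).
gcloud compute regions describe us-central1 | grep GPUS

# Create an instance with one GPU attached.
# --maintenance-policy=TERMINATE is required because GPU instances
# cannot live migrate during host maintenance.
gcloud compute instances create gpu-instance-1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=debian-11 \
    --image-project=debian-cloud
```

If the quota output shows a limit of 0 for the GPU model you need, request an increase from the Quotas page in the console before creating the instance.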
About the Author
Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skill set that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.
In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.
In his spare time, Tom enjoys camping, fishing, and playing poker.