Mapping Needs to GCP Services
Google Cloud Platform (GCP) lets organizations take advantage of the powerful network and technologies that Google uses to deliver its own products. Global companies like Coca-Cola and cutting-edge technology stars like Spotify are already running sophisticated applications on GCP. This course will help you design an enterprise-class Google Cloud infrastructure for your own organization.
When you architect an infrastructure for mission-critical applications, not only do you need to choose the appropriate compute, storage, and networking components, but you also need to design for security, high availability, regulatory compliance, and disaster recovery. This course uses a case study to demonstrate how to apply these design principles to meet real-world requirements.
- Map compute, storage, and networking requirements to Google Cloud Platform services
- Create designs for high availability and disaster recovery
- Use appropriate authentication, roles, service accounts, and data protection
- Create a design to comply with regulatory requirements
This course is intended for anyone looking to design and build an enterprise-class Google Cloud Platform infrastructure for their own organization.
To get the most out of this course, you should have some basic knowledge of Google Cloud Platform.
Now that you have user authentication and permissions figured out, it’s time to plan how your applications will access the Cloud Platform services they need to use. To avoid embedding credentials in an application, you need to use service accounts. For example, if an application uses Cloud Datastore as a database, then it needs to have authorization to use the Datastore API.
You would accomplish this by enabling Datastore API access on any VM instances that will be involved in the part of the application that uses the database. By default, all VM instances run as the Compute Engine default service account. If you want something different, then you can create your own.
A service account has an email address and a public/private key pair that it uses to prove its identity. Your instances use that identity when communicating with other Cloud Platform services. However, by default, an instance running as the Compute Engine default service account has limited scope in how it can interact with other services. For example, by default an instance can only read from Cloud Storage and can’t write to it.
To give an instance more permissions, you need to set the scope when you’re creating the VM. So, in the case of interacting with Datastore, you have to enable access to the Datastore API. You also have to enable the Datastore API at the project level, but you only have to do that once.
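As a sketch of those two steps, the gcloud commands might look something like this (the project ID and instance name are placeholders, not from the course):

```shell
# Enable the Datastore API once at the project level
# (hypothetical project ID).
gcloud services enable datastore.googleapis.com --project=my-project

# Create a VM with the Datastore scope so code on the instance
# can call the Datastore API as the instance's service account.
gcloud compute instances create tomcat-vm \
    --scopes=https://www.googleapis.com/auth/datastore
```

Note that scopes are set when the instance is created; changing them later means stopping the instance first.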
Then your application code has to obtain credentials from the service account whenever it uses the Datastore API. Google Cloud Platform uses OAuth 2.0 for API authentication and authorization. There are two ways to do it: Application Default Credentials and access tokens.
The easiest way is to use Google Cloud Client Libraries. They use Application Default Credentials (or ADC) to authenticate with Google APIs and send requests to those APIs. One great feature of ADC is that you can test your application locally and then deploy it to Google Cloud without changing the application code.
Here’s how it works. To run your code outside Google Cloud Platform, such as in your on-premises data center or on another cloud platform, create a service account and download its credentials file to the servers where the code will be running. Then set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the credentials file.
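Those setup steps might look like this with gcloud (the service account name, project ID, and key path below are placeholders):

```shell
# Create a service account (hypothetical name and project).
gcloud iam service-accounts create my-app-sa --project=my-project

# Download a JSON key file for it.
gcloud iam service-accounts keys create key.json \
    --iam-account=my-app-sa@my-project.iam.gserviceaccount.com

# Point ADC at the key file on the server where the code runs.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
```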
So while you’re developing locally, the application can authenticate using the credentials file and when you run it on a production instance, it will authenticate using the instance’s service account. This works because ADC allows applications to get credentials from multiple sources.
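To make that multi-source lookup concrete, here is a rough sketch of the order ADC checks credential sources. This is not the real implementation (that lives in the Google auth libraries); it only illustrates the documented lookup order, and the function name is mine:

```python
import os

def find_adc_credentials():
    """Sketch of the order Application Default Credentials checks sources."""
    # 1. Explicit key file named by the environment variable.
    key_path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if key_path:
        return ("key-file", key_path)
    # 2. Credentials created locally by
    #    `gcloud auth application-default login`.
    well_known = os.path.expanduser(
        "~/.config/gcloud/application_default_credentials.json")
    if os.path.exists(well_known):
        return ("gcloud-default", well_known)
    # 3. On Compute Engine, fall back to the instance's service
    #    account via the metadata server (no file involved).
    return ("metadata-server", None)
```

This is why the same code works both places: on your laptop the environment variable wins, and on a production instance the metadata-server fallback supplies the instance’s service account.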
The second way is to use OAuth2 access tokens to directly connect to the API without going through a client library. One reason you’d have to use this method is if your application needs to request access to user data.
The way it works is the application requests an access token from the metadata server and then uses the token to make an API request. Tokens are short-lived, so your application needs to request new ones regularly.
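On a Compute Engine VM, that token request is just an HTTP call to the metadata server’s documented token endpoint with the required Metadata-Flavor header. The helper function below is a hedged sketch (its name is mine, not from any library):

```python
import urllib.request

METADATA_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

def build_token_request():
    """Build the metadata-server request for an OAuth2 access token.

    This only resolves on a Compute Engine instance; elsewhere the
    metadata hostname doesn't exist. The Metadata-Flavor header is
    required, or the server rejects the request.
    """
    return urllib.request.Request(
        METADATA_TOKEN_URL, headers={"Metadata-Flavor": "Google"})

# On a VM you would then fetch and use the token roughly like this:
#   with urllib.request.urlopen(build_token_request()) as resp:
#       token = json.load(resp)["access_token"]
# and send it on API calls as "Authorization: Bearer <token>".
```

Because the response includes an expiry, a real application should cache the token and re-request it shortly before it expires.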
If you need to write shell scripts that access other Cloud Platform services, then you can use gcloud and gsutil commands to make API calls. These two tools are included by default in most Compute Engine images and they automatically use the instance’s service account to authenticate with APIs.
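For example, a backup script running on an instance might look like this (the bucket name and log path are placeholders):

```shell
#!/bin/bash
# On a Compute Engine VM, these tools authenticate automatically
# using the instance's service account -- no key file needed.
gsutil cp /var/log/app.log gs://my-backup-bucket/
gcloud sql instances list
```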
So what service accounts would you need to create for GreatInside? The load balancer and the web instances communicate over HTTPS, so you don’t need a service account for that. Since the Tomcat instances communicate with the MySQL database in Cloud SQL, you would need a service account for that. Similarly, the IIS instances communicate with SQL Server in Cloud SQL, so you’d need a service account for that, too. There may be a need for other service accounts when we add more features to our architecture, such as disaster recovery, but we’ll cover that later.
And that’s it for service accounts.
Guy launched his first training website in 1995 and he's been helping people learn IT technologies ever since. He has been a sysadmin, instructor, sales engineer, IT manager, and entrepreneur. In his most recent venture, he founded and led a cloud-based training infrastructure company that provided virtual labs for some of the largest software vendors in the world. Guy’s passion is making complex technology easy to understand. His activities outside of work have included riding an elephant and skydiving (although not at the same time).