Mapping Needs to GCP Services
Google Cloud Platform (GCP) lets organizations take advantage of the powerful network and technologies that Google uses to deliver its own products. Global companies like Coca-Cola and cutting-edge technology stars like Spotify are already running sophisticated applications on GCP. This course will help you design an enterprise-class Google Cloud infrastructure for your own organization.
When you architect an infrastructure for mission-critical applications, not only do you need to choose the appropriate compute, storage, and networking components, but you also need to design for security, high availability, regulatory compliance, and disaster recovery. This course uses a case study to demonstrate how to apply these design principles to meet real-world requirements.
- Map compute, storage, and network needs to Google Cloud Platform services
- Create designs for high availability and disaster recovery
- Use appropriate authentication, roles, service accounts, and data protection
- Create a design to comply with regulatory requirements
Now that you have user authentication and permissions figured out, it's time to plan how your applications will access the cloud platform services they need to use. To avoid embedding credentials in an application, you need to use service accounts. For example, if an application uses Cloud Datastore as a database, then it needs to have authorization to use the Datastore API.
You'd accomplish this by enabling Datastore API access on any VM instances that will be involved in the part of the application that uses the database. By default, all VM instances run as the Compute Engine default service account. If you want different permissions, you can create your own service account.
A service account has an email address and a public/private key pair that it uses to prove its identity. Your instances use that identity when communicating with other cloud platform services. However, by default, an instance running as the Compute Engine default service account has limited scope in how it can interact with other services. For example, by default an instance can only read from Cloud Storage and can't write to it.
To give an instance more permissions, you need to set the scope when you're creating the VM. So in the case of interacting with Datastore, you have to enable access to the Datastore API. Scopes can't be enabled after a VM has been created, although Google is adding a feature to make that capability available in the future. You also have to enable the Datastore API at the project level, but you only have to do that once.
Then your application code has to obtain credentials from the service account whenever it uses the Datastore API. Google Cloud Platform uses OAuth 2.0 for API authentication and authorization. There are two ways to do it: application default credentials and access tokens.
The easiest way is to use the Google Cloud Client Libraries. These use application default credentials, or ADC, to authenticate with Google APIs and send requests to those APIs. One great feature of ADC is that you can test your application locally and then deploy it to Google Cloud without changing the application code.
Here's how it works. To run your code outside Google Cloud Platform, such as in your on-premises data center or on another cloud platform, create a service account and download its credentials file to the servers where the code will be running. Then set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the credentials file.
So while you're developing locally, the application can authenticate using the credentials file. And when you run it on a production instance, it will authenticate using the instance's service account. This works because ADC allows applications to get credentials from multiple sources.
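That lookup order can be sketched in code. This is a simplified illustration of how ADC picks a credential source, not Google's actual auth-library implementation; the find_adc_credentials helper and its return values are hypothetical.

```python
import json
import os

# Hypothetical sketch of how Application Default Credentials (ADC)
# chooses a credential source. The real logic lives in Google's auth
# libraries; this only mirrors the documented lookup order.
def find_adc_credentials(running_on_gce=False):
    # 1. An explicit service account key file named by the
    #    GOOGLE_APPLICATION_CREDENTIALS environment variable.
    key_path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if key_path and os.path.exists(key_path):
        with open(key_path) as f:
            key = json.load(f)
        return ("service_account_file", key.get("client_email"))
    # 2. On Compute Engine, fall back to the instance's service
    #    account, obtained from the metadata server.
    if running_on_gce:
        return ("metadata_server", "default")
    raise RuntimeError("No application default credentials found")
```

Because the environment variable is checked first, the same application code authenticates with a downloaded key file on your laptop and with the instance's service account in production.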
The second way is to use OAuth 2.0 access tokens to connect to an API directly, without going through a client library. One reason you'd have to use this method is if your application needs to request access to user data.
Here's how it works: the application requests an access token from the metadata server and then uses that token to make an API request. Tokens are short-lived, so your application needs to request new ones regularly.
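On a Compute Engine instance, the token request looks roughly like this. It's a sketch using only the Python standard library; the metadata endpoint and the required Metadata-Flavor header are the documented ones, but error handling and token caching are omitted.

```python
import json
import urllib.request

# Documented metadata-server endpoint for the instance's default
# service account token; it is only reachable from inside a GCE VM.
TOKEN_URL = ("http://metadata.google.internal/computeMetadata/v1/"
             "instance/service-accounts/default/token")
# The metadata server rejects requests that lack this header.
HEADERS = {"Metadata-Flavor": "Google"}

def fetch_access_token():
    req = urllib.request.Request(TOKEN_URL, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The response carries access_token, expires_in, and token_type;
    # callers should request a new token before expires_in lapses.
    return body["access_token"]
```

The returned token is then sent on each API call in an `Authorization: Bearer <token>` header.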
If you need to write shell scripts that access other cloud platform services, then you can use gcloud and gsutil commands to make API calls. These two tools are included by default in most Compute Engine images, and they automatically use the instance's service account to authenticate with APIs.
So what service accounts would you need to create for GreatInside? Amazingly, you don't have to create any. That's because none of the components call other Google Cloud services. The load balancer and the web instances communicate over HTTPS. The Tomcat instances communicate with the MySQL database, using JDBC. And the IIS instances communicate with SQL Server, using ODBC. There may be a need for service accounts when we add more features to our architecture, such as disaster recovery, but we'll cover that later.
And that's it for service accounts, at least for now.
Guy launched his first training website in 1995 and he's been helping people learn IT technologies ever since. He has been a sysadmin, instructor, sales engineer, IT manager, and entrepreneur. In his most recent venture, he founded and led a cloud-based training infrastructure company that provided virtual labs for some of the largest software vendors in the world. Guy’s passion is making complex technology easy to understand. His activities outside of work have included riding an elephant and skydiving (although not at the same time).