TerramEarth

Difficulty: Intermediate
Duration: 56m
Students: 396
Rating: 4.6/5
Description

This course will help you prepare for the Professional Cloud Architect Exam. We cover the 4 case studies presented in the exam guide, explaining what they are, why they are important, and how to use them in your preparation.

Learning Objectives

Examine the 4 case studies presented in the exam guide:

  • EHR Healthcare
  • Helicopter Racing League
  • Mountkirk Games
  • TerramEarth

Intended Audience

Anyone planning to take the Professional Cloud Architect Exam.

Prerequisites

Basic knowledge of GCP.

Transcript

In this lesson, I am going to walk you through the case study for a fictional company called “TerramEarth”. Let’s start with the company overview:

“TerramEarth manufactures heavy equipment for the mining and agricultural industries. They currently have over 500 dealers and service centers in 100 countries. Their mission is to build products that make their customers more productive.”

Ok, so nothing here is really jumping out at me.  This seems a little bit too vague to identify any keywords. Let’s continue on to the solution concept:

“There are 2 million TerramEarth vehicles in operation currently, and we see 20% yearly growth. Vehicles collect telemetry data from many sensors during operation. A small subset of critical data is transmitted from the vehicles in real-time to facilitate fleet management. The rest of the sensor data is collected, compressed, and uploaded daily when the vehicles return to home base. Each vehicle usually generates 200 to 500 megabytes of data per day.”

Ok, this section seems a lot more useful.  I see that they will be collecting telemetry data from the vehicles.  So this sounds like there could be some Internet of Things questions involved. It also looks like there could be some Big Data questions as well.  We see many different sensors per vehicle and there are 2 million vehicles.  So there could be a lot of data flowing in and being processed.  You need to start thinking about how you would collect all this data, how you would store it, and then how you would process and query it.  This case study sounds like it’s going to be very data-heavy.
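Since streaming ingestion is such a likely topic, here is a minimal sketch of how one vehicle's critical telemetry might be published to Pub/Sub. The project ID, topic name, and message fields are all hypothetical; the case study doesn't prescribe any of them.

    import json
    from google.cloud import pubsub_v1

    # Publish a single vehicle's critical reading to a Pub/Sub topic.
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("terramearth-demo", "vehicle-telemetry")

    reading = {"vehicle_id": "TE-000123", "engine_temp_c": 97.4, "oil_pressure_kpa": 310}

    # Messages are raw bytes; attributes let subscribers filter without parsing.
    future = publisher.publish(
        topic_path,
        data=json.dumps(reading).encode("utf-8"),
        vehicle_id=reading["vehicle_id"],
    )
    print("Published message", future.result())

Pub/Sub decouples the 2 million publishers from however many consumers you attach downstream, which is exactly the property a fleet like this needs.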

So it looks like some of the data is going to be uploaded in real-time and then the rest is going to be batch uploaded at the end of the day. So you need to consider how you would potentially handle both scenarios. Now, it says here that each vehicle generates 200-500 MB per day. So if you take 200 MB * 2 million vehicles, that's 400 million megabytes, which is about 400 terabytes a day, and a full petabyte a day at the high end. So I can foresee questions on storing and archiving huge amounts of data. You are probably going to be asked about creating and running data pipelines. You should be comfortable with cleaning up and transforming data. Things like that.
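A quick back-of-the-envelope check of that arithmetic, using decimal units and the figures straight from the case study:

    vehicles = 2_000_000
    low_mb, high_mb = 200, 500  # per vehicle, per day

    # 1 TB = 1,000,000 MB (decimal units)
    low_tb = vehicles * low_mb / 1_000_000
    high_tb = vehicles * high_mb / 1_000_000
    print(f"{low_tb:,.0f} TB/day to {high_tb:,.0f} TB/day")  # 400 TB/day to 1,000 TB/day

Let's move on to the technical environment: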

“TerramEarth’s vehicle data aggregation and analysis infrastructure resides in Google Cloud and serves clients from all around the world. A growing amount of sensor data is captured from their two main manufacturing plants and sent to private data centers that contain their legacy inventory and logistics management systems. The private data centers have multiple network interconnects configured to Google Cloud. The web frontend for dealers and customers is running in Google Cloud and allows access to stock management and analytics.”

Data aggregation and analysis is exactly what I would expect, based upon the previous sections. I see that clients are going to be from all over the world, so that means your services need to be running in multiple regions. This mention of “multiple network interconnects” is interesting. You are not going to be dealing with a single on-prem environment; there will be multiple. So start thinking about the implications of that. The stock management and analytics piece implies that we are going to be doing lots of things with the data we derive. It's not just going to be used for generating a few charts. This might end up getting fed into a Cloud SQL database or maybe a machine learning model. I would be expecting questions about any of the Google data tools.
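As one example of what “doing lots of things with the data” could look like, here is a hedged sketch of an analytics query with the BigQuery client library. The project, dataset, table, and column names are all made up for illustration.

    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT vehicle_id, AVG(engine_temp_c) AS avg_temp
        FROM `terramearth-demo.fleet_analytics.telemetry`
        GROUP BY vehicle_id
        ORDER BY avg_temp DESC
        LIMIT 10
    """
    # A QueryJob is iterable; iterating waits for and returns the result rows.
    for row in client.query(query):
        print(row.vehicle_id, row.avg_temp)

Now, the business requirements are: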

  • “Predict and detect vehicle malfunction and rapidly ship parts to dealerships for just-in-time repair where possible.
  • Decrease cloud operational costs and adapt to seasonality.
  • Increase speed and reliability of development workflow.
  • Allow remote developers to be productive without compromising code or data security.
  • Create a flexible and scalable platform for developers to create custom API services for dealers and partners”

“Predict and detect” suggests you need to be ready for AI & ML questions. “Rapidly ship parts for just-in-time repair” suggests the use of real-time streaming predictions. It looks like they want to focus on decreasing costs. So think about things like: How can I store a lot of data cheaply? How can I save money on all the data processing? How can I scale down usage when we are in the off-season? They also want more speed and reliability in the development workflow. That implies automating builds and deployments. Think about redundancy as well. Maybe even a CI/CD pipeline.
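On the storage-cost question, one common answer is Cloud Storage lifecycle rules that age data into cheaper storage classes automatically. A rough sketch, with a made-up bucket name and thresholds you would tune to your own access patterns:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("terramearth-raw-telemetry")

    # Age raw telemetry into progressively cheaper storage classes.
    bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
    bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
    bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=365)
    bucket.patch()  # persist the updated lifecycle configuration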

They have remote developers. So that means they are going to need to use either VPNs, or maybe they are going to use Zero Trust instead. So you should know the tradeoffs between a VPN and the Identity-Aware Proxy. Now, security is very important. So you also want to think about IAM. You want to think about keys. And you want to think about secrets. Again, they want flexibility and scalability. They need custom APIs, so know how to create those, and learn the best practices for setting up APIs as well.
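On the Zero Trust side, programmatic access to an app behind the Identity-Aware Proxy usually means fetching an OIDC ID token for the IAP OAuth client ID and presenting it as a bearer token. A rough sketch; the URL and client ID are placeholders for whatever your IAP configuration uses:

    import requests
    from google.auth.transport.requests import Request
    from google.oauth2 import id_token

    IAP_CLIENT_ID = "1234567890-example.apps.googleusercontent.com"  # hypothetical
    APP_URL = "https://fleet.internal.example.com/api/status"        # hypothetical

    # Fetch an OIDC ID token whose audience is the IAP OAuth client ID, then
    # present it as a bearer token; IAP verifies identity, not network location.
    token = id_token.fetch_id_token(Request(), IAP_CLIENT_ID)
    resp = requests.get(APP_URL, headers={"Authorization": f"Bearer {token}"})
    print(resp.status_code)

For the technical requirements, we have the following: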

  • “Create a new abstraction layer for HTTP API access to their legacy systems to enable a gradual move into the cloud without disrupting operations 
  • Modernize all CI/CD pipelines to allow developers to deploy container-based workloads in highly scalable environments
  • Allow developers to run experiments without compromising security and governance requirements
  • Create a self-service portal for internal and partner developers to create new projects, request resources for data analytics jobs, and centrally manage access to the API endpoints
  • Use cloud-native solutions for keys and secrets management and optimize for identity-based access
  • Improve and standardize tools necessary for application and network monitoring and troubleshooting”

Again, we see that you might get questions about setting up APIs. They say they want a gradual migration, so I’m thinking this might involve creating microservices. We get confirmation of CI/CD pipelines here, so make sure you are familiar with all the services involved with those. They mention they are going to be using containers, so you want to know which compute resources support those.  Containers usually imply Kubernetes, but not always. This is interesting. They want to be able to run experiments.  Now I know App Engine makes this pretty easy, but there are other ways of doing that as well.  You want to think about how you can roll out different versions at the same time. Maybe have half of your users on one, half on the other. And then you want to measure the results of that.
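To get a feel for how a 50/50 split can stay sticky per user, which is the same idea App Engine uses when it splits traffic by hashing a cookie or IP address, here is a toy illustration in plain Python. This shows only the concept, not a GCP API:

    import hashlib

    def assign_version(user_id: str, versions=("v1", "v2")) -> str:
        # Hash a stable identifier so each user always lands on the same version.
        bucket = hashlib.sha256(user_id.encode("utf-8")).digest()[0] / 255
        return versions[0] if bucket < 0.5 else versions[1]

    for uid in ["alice", "bob", "carol", "dave"]:
        print(uid, "->", assign_version(uid))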

Security and governance will be important. It looks like they will need employees to be able to create new projects, so you want to understand how to set up permissions correctly to allow that. And I see data analytics jobs and API endpoints mentioned once again. They specifically mention keys and secrets management. So that confirms what I thought before. Expect questions about generating and storing keys. Know how to use Google Secret Manager. And this mention of identity-based access confirms my suspicion that you might be asked a question or two about Zero Trust systems.
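As a concrete Secret Manager example, reading the latest version of a secret looks roughly like this. The project and secret IDs are hypothetical:

    from google.cloud import secretmanager

    client = secretmanager.SecretManagerServiceClient()
    name = "projects/terramearth-demo/secrets/dealer-api-key/versions/latest"

    # Access the latest version; grant this permission via IAM only, and
    # never bake the secret value into code or config files.
    response = client.access_secret_version(request={"name": name})
    api_key = response.payload.data.decode("utf-8")  # use, but never log, this value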

I see there is also the potential for questions on monitoring and troubleshooting. So, you want to know how to set up monitoring on GCP. You want to know how to enable logging, and know where and how to search through the logs.
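For the logging piece, the Cloud Logging client library lets you both write structured entries and filter back through them. A hedged sketch, with a made-up logger name and filter:

    from google.cloud import logging as cloud_logging

    client = cloud_logging.Client()
    logger = client.logger("telemetry-ingest")

    # Write a structured entry, then search recent entries from this logger.
    logger.log_struct({"vehicle_id": "TE-000123", "event": "upload_complete"},
                      severity="INFO")

    for entry in client.list_entries(
            filter_='logName:"telemetry-ingest" AND severity>=INFO',
            max_results=5):
        print(entry.timestamp, entry.payload)

Finally, let's read through the executive statement: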

“Our competitive advantage has always been our focus on the customer, with our ability to provide excellent customer service and minimize vehicle downtimes. After moving multiple systems into Google Cloud, we are seeking new ways to provide best-in-class online fleet management services to our customers and improve operations of our dealerships. Our 5-year strategic plan is to create a partner ecosystem of new products by enabling access to our data, increasing autonomous operation capabilities of our vehicles, and creating a path to move the remaining legacy systems to the cloud.”

Ok, so they say they want to provide best-in-class online fleet management and improve dealership operations. That sounds like more IoT stuff. As we saw before, they want to share their data. So think about the different ways to do that. Is it through exposing an API? Are you writing out files to a public bucket? Maybe you need to create some service accounts to access BigQuery tables. This part about increasing autonomous operations translates to AI & ML services, in my mind. And this last part means you might get some questions about migration.
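As one way of sharing that data, here is a sketch of granting a partner's service account read-only access to a BigQuery dataset. All of the project, dataset, and service-account names are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client()
    dataset = client.get_dataset("terramearth-demo.fleet_analytics")

    # Append a read-only access entry for the partner's service account.
    entries = list(dataset.access_entries)
    entries.append(bigquery.AccessEntry(
        role="READER",
        entity_type="userByEmail",  # service accounts also use userByEmail
        entity_id="partner-sa@terramearth-demo.iam.gserviceaccount.com",
    ))
    dataset.access_entries = entries
    client.update_dataset(dataset, ["access_entries"])

Alright, I think that covers just about everything in the TerramEarth case study.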

About the Author

Students: 31948
Courses: 36
Learning Paths: 14

Daniel began his career as a Software Engineer, focusing mostly on web and mobile development. After twenty years of dealing with insufficient training and fragmented documentation, he decided to use his extensive experience to help the next generation of engineers.

Daniel has spent his most recent years designing and running technical classes for both Amazon and Microsoft. Today at Cloud Academy, he is working on building out an extensive Google Cloud training library.

When he isn’t working or tinkering in his home lab, Daniel enjoys BBQing, target shooting, and watching classic movies.
