This course will help you prepare for the Professional Cloud Architect Exam. We cover the 4 case studies presented in the exam guide, explain what they are, why they are important, and how to use them to prepare for the exam.
Learning Objectives
Examine the 4 case studies presented in the exam guide:
- EHR Healthcare
- Helicopter Racing League
- Mountkirk Games
- TerramEarth
Intended Audience
Anyone planning to take the Professional Cloud Architect Exam.
Prerequisites
Basic knowledge of GCP.
In this lesson, I am going to walk you through the case study for a fictional company called “Helicopter Racing League”.
Let’s start with the company overview:
“Helicopter Racing League (HRL) is a global sports league for competitive helicopter racing. Each year HRL holds the world championship and several regional league competitions where teams compete to earn a spot in the world championship. HRL offers a paid service to stream the races all over the world with live telemetry and predictions throughout each race.”
So what are the key terms that I see here? Well first, I notice that this is a “global sports league”. Now that implies that our services need to support multiple regions. A global audience will require using global services. Things like CDNs and load balancers. Next I see that this will be a paid service. So there might be some questions about collecting money. Possibly storing credit card information. You should be familiar with storing and encrypting sensitive data. Also this means users are going to need to log in. So you could get some questions about user authentication and authorization as well.
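Just to make the sensitive-data point concrete, here is a minimal sketch of encrypting a payment token with Cloud KMS before you store it. The project, key ring, and key names here are hypothetical, and a real design would also lean on a payment processor so you never touch raw card numbers at all:

```python
# Minimal sketch: encrypting a sensitive payload with Cloud KMS before storage.
# The project, location, key ring, and key names below are hypothetical.
from google.cloud import kms

client = kms.KeyManagementServiceClient()

# Build the full resource name of the symmetric encryption key.
key_name = client.crypto_key_path(
    "hrl-billing-project", "global", "payments-keyring", "card-data-key"
)

plaintext = b"tok_4242424242424242"  # e.g. a payment token, never a raw card number

# Encrypt the payload; the response contains the ciphertext you would store.
response = client.encrypt(request={"name": key_name, "plaintext": plaintext})
ciphertext = response.ciphertext
```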
And again here, we see that we need to support customers all over the world. We also see that live telemetry will be captured, so you might get some questions about the Internet of Things. The case study doesn't say whether these are full-size helicopters or just drones, but either way, if you are putting sensors on the helicopters and capturing data from them in real time, that could definitely suggest an IoT question or two.
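To give you a feel for what telemetry ingestion might look like, here is a minimal sketch that publishes one reading to a Pub/Sub topic. The project ID, topic name, and message fields are all hypothetical; in a real pipeline you would typically process the stream downstream with something like Dataflow:

```python
# Minimal sketch: publishing one telemetry reading to a Pub/Sub topic.
# The project ID, topic name, and message fields are hypothetical.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("hrl-telemetry-project", "helicopter-telemetry")

reading = {"heli_id": "h-07", "speed_kph": 245.3, "altitude_m": 120.5}

# Pub/Sub messages are byte strings; attach the race ID as a message attribute.
future = publisher.publish(
    topic_path, json.dumps(reading).encode("utf-8"), race_id="world-champ-2024"
)
print(future.result())  # the message ID once the publish succeeds
```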
We also see here that we are going to use this live data to make predictions. So that sounds like you could get some AI and machine learning questions as well. This is starting to sound like it is going to require some very compute-intensive workloads. Next, let’s read the solution concept:
“HRL wants to migrate their existing service to a new platform to expand their use of managed AI and ML services to facilitate race predictions. Additionally, as new fans engage with the sport, particularly in emerging regions, they want to move the serving of their content, both real-time and recorded, closer to their users.”
Well, that confirms that AI and ML are potential topics for questions. You should make sure to be familiar with the main offerings available there. This also says they want to migrate their existing services, so make sure you know how to migrate existing ML workloads onto Google Cloud’s managed AI and ML services as well.
Here I see the phrase “emerging regions”. Now that is interesting. So not only will the customers be all over the world, but they are going to be outside of your typical areas. That sounds like your services will need to span many different regions, not just the US and Europe. This definitely sounds like you are going to have to make heavy use of CDNs, because your viewers might not have access to reliable, high-speed internet. So I could see getting some questions that ask you to design a streaming solution based around these constraints.
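One common pattern on the CDN side is to front a Cloud Storage bucket of video content with Cloud CDN through a backend bucket on a global external HTTP(S) load balancer. Here is a minimal sketch, assuming hypothetical project and bucket names:

```python
# Minimal sketch: enabling Cloud CDN on a backend bucket that serves video
# content from Cloud Storage. Project and bucket names are hypothetical.
from google.cloud import compute_v1

client = compute_v1.BackendBucketsClient()

backend_bucket = compute_v1.BackendBucket(
    name="hrl-race-content-backend",
    bucket_name="hrl-race-content",  # an existing Cloud Storage bucket
    enable_cdn=True,                 # cache content at Google's edge locations
)

# Creates the backend bucket; you would then attach it to a URL map behind
# a global external HTTP(S) load balancer.
operation = client.insert(
    project="hrl-streaming-project", backend_bucket_resource=backend_bucket
)
operation.result()  # wait for the operation to complete
```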
Here we can see that the content is going to be both real-time and recorded. So the recorded content can be cached, but the real-time streams cannot. So I would start thinking about how you can stream content to viewers in more remote areas. Let’s move on to the existing technical environment:
“HRL is a public cloud-first company; the core of their mission-critical applications runs on their current public cloud provider. Video recording and editing is performed at the race tracks, and the content is encoded and transcoded, where needed, in the cloud. Enterprise-grade connectivity and local compute is provided by truck-mounted mobile data centers. Their race prediction services are hosted exclusively on their existing public cloud provider. Their existing technical environment is as follows:
Existing content is stored in an object storage service on their existing public cloud provider. Video encoding and transcoding is performed on VMs created for each job. Race predictions are performed using TensorFlow running on VMs in the current public cloud provider.”
So it seems like they are doing a lot of video recording and editing. Now, it doesn’t sound like that is going to happen in the cloud. However, they probably want to store all this footage in the cloud. Video can take up a lot of space, so you might get questions about storing or uploading huge amounts of data. They also might ask you how to store all of this in a cost-effective way, and how to search and retrieve it when it’s needed. Those sorts of things.
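Cost-effective storage of old footage usually comes down to storage classes and lifecycle rules. As a rough sketch, assuming a hypothetical footage bucket, the rules might look like this:

```python
# Minimal sketch: lifecycle rules that push older race footage to colder,
# cheaper storage classes. The bucket name is hypothetical.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("hrl-race-footage")

# After 30 days, move objects to Nearline; after 365 days, move to Coldline.
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=365)

bucket.patch()  # apply the updated lifecycle configuration to the bucket
```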
We can see that we are potentially going to have to deal with both encoding and transcoding. Now encoding is very compute heavy, but it can be done offline, so this sounds like a good use case for Spot VMs. Spot VMs give you temporary compute capacity at a much lower price, with the trade-off that they can be preempted at any time. Transcoding for the live streams has to happen in real time, so Spot VMs would not work for that. Instead you might have to use standard on-demand VMs for that on race day, or you could look into using the Transcoder API. Also, both encoding and transcoding video can be accelerated by using GPUs, so you might get a question about provisioning a VM with a GPU. So I would say that you should start thinking about how to accomplish these two tasks on GCP, and understand the different requirements of each.
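If you want to see what that might look like in practice, here is a minimal sketch of creating a Spot VM with a GPU attached for batch encoding. The project, zone, machine type, image, and GPU choice are all hypothetical:

```python
# Minimal sketch: creating a Spot VM with a GPU attached for batch encoding.
# Project, zone, machine type, image, and accelerator choices are hypothetical.
from google.cloud import compute_v1

project, zone = "hrl-encoding-project", "us-central1-a"

instance = compute_v1.Instance(
    name="encode-worker-1",
    machine_type=f"zones/{zone}/machineTypes/n1-standard-8",
    # Spot provisioning: much cheaper, but the VM can be preempted at any time.
    scheduling=compute_v1.Scheduling(
        provisioning_model="SPOT",
        instance_termination_action="STOP",
        on_host_maintenance="TERMINATE",  # required when a GPU is attached
    ),
    guest_accelerators=[
        compute_v1.AcceleratorConfig(
            accelerator_type=f"projects/{project}/zones/{zone}/acceleratorTypes/nvidia-tesla-t4",
            accelerator_count=1,
        )
    ],
    disks=[
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-12"
            ),
        )
    ],
    network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
)

operation = compute_v1.InstancesClient().insert(
    project=project, zone=zone, instance_resource=instance
)
operation.result()  # wait for the VM to be created
```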
Now having “truck-mounted mobile data centers” is really quite interesting. I’m not sure exactly what the implications are. It’s possible that this might be an extra challenge to connect them to your GCP environment. You won’t be able to make the connection permanent, so I’m guessing you can’t use Interconnect. You might be stuck using a VPN connection. Also, if you were to set up some firewall rules, you might not be able to hard-code any IP addresses in your allowlists. This definitely has some potentially interesting implications. It’s probably worth sitting down and taking some time to think about how a mobile data center would be different from a standard on-prem environment.
Here we see they are currently using an object storage service, so you definitely should be familiar with Cloud Storage. Since we are talking about a lot of storage space being needed, I could also see you getting some questions on object storage classes or object lifecycle management. Again here we see that you could get asked about using VMs to do the encoding and transcoding. And here we see they are using TensorFlow for machine learning. You might be asked how they could best migrate this over to GCP. I would say there definitely is the possibility of getting some ML and AI questions. Ok, so let’s go through the business requirements:
“HRL’s owners want to expand their predictive capabilities and reduce latency for their viewers in emerging markets. Their requirements are:
- Support ability to expose the predictive models to partners
- Increase predictive capabilities during and before races:
- Race results
- Mechanical failures
- Crowd sentiment
- Increase telemetry and create additional insights
- Measure fan engagement with new predictions
- Enhance global availability and quality of the broadcasts
- Increase the number of concurrent viewers
- Minimize operational complexity
- Ensure compliance with regulations
- Create a merchandising revenue stream”
So, we already covered AI and ML. Latency could be an issue for some of the customers, but we already talked about that. This is interesting: not only will we be dealing with TensorFlow models, but they might also want to expose those models to partners, so you might want to brush up on how to do that. Now here we see that the predictions need to happen in real time. Predictions are made both before a race and during a race. So ask yourself: do you know how to do streaming predictions? How can you minimize latency to achieve real-time predictions?
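As a rough sketch of what an online, real-time prediction call could look like once a model is deployed to a Vertex AI endpoint, consider the following. The project, region, endpoint ID, and feature format are hypothetical:

```python
# Minimal sketch: requesting an online prediction from a deployed Vertex AI
# endpoint during a race. Project, region, endpoint ID, and the feature
# format are hypothetical.
from google.cloud import aiplatform

aiplatform.init(project="hrl-predictions-project", location="us-central1")

# The numeric ID of an endpoint where the TensorFlow model is already deployed.
endpoint = aiplatform.Endpoint("1234567890123456789")

# One instance of live telemetry features for the current race.
instances = [{"lap": 12, "avg_speed_kph": 238.4, "engine_temp_c": 96.0}]

prediction = endpoint.predict(instances=instances)
print(prediction.predictions)
```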
We already covered telemetry and insights. And we need to measure engagement and make predictions. We covered that. We mentioned global availability. Quality is going to be a challenge. So ask yourself how do you maintain high quality streaming to all of your customers? In addition to having a high quality stream you also have to worry about scaling those streams up. So we are talking about a lot of bandwidth here. You are going to need some mechanism to autoscale up and down as required. You don’t want to bump up against some upper limit by having too many viewers at once.
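To make the autoscaling point concrete, here is a minimal sketch of attaching an autoscaler to a managed instance group of streaming servers, scaling on CPU utilization. The project, zone, group name, and limits are hypothetical:

```python
# Minimal sketch: an autoscaler that scales a managed instance group of
# streaming servers on CPU utilization. Names, zone, and limits are hypothetical.
from google.cloud import compute_v1

project, zone = "hrl-streaming-project", "us-central1-a"

autoscaler = compute_v1.Autoscaler(
    name="stream-servers-autoscaler",
    # The managed instance group that serves the video streams.
    target=f"zones/{zone}/instanceGroupManagers/stream-servers-mig",
    autoscaling_policy=compute_v1.AutoscalingPolicy(
        min_num_replicas=3,
        max_num_replicas=100,
        cpu_utilization=compute_v1.AutoscalingPolicyCpuUtilization(
            utilization_target=0.6  # add capacity above 60% average CPU
        ),
    ),
)

operation = compute_v1.AutoscalersClient().insert(
    project=project, zone=zone, autoscaler_resource=autoscaler
)
operation.result()  # wait for the autoscaler to be created
```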
Here is something new. We need to minimize operational complexity. So you want to avoid custom solutions; try to stick to using GCP services. Also, things should automatically scale up and down as needed. You are not going to want a lot of manual intervention for dealing with problems. Managed services will be key. Since this is a paid service and you are dealing with a global audience, compliance could be an issue. You are going to have to deal with a lot of different local laws, and you will also be handling money, so there are regulations about that as well. Basically, you need to be able to run audits and verify that the company is in compliance.
And finally, there is going to be some sort of merchandising revenue stream. Now, I’m not sure if this means they will be selling t-shirts or if they are just going to be showing ads, or maybe both. But you should take some time to think about the impact that these things could have on the system architecture. Ok, let’s go through the technical requirements:
- “Maintain or increase prediction throughput and accuracy
- Reduce viewer latency
- Increase transcoding performance
- Create real-time analytics of viewer consumption patterns and engagement
- Create a data mart to enable processing of large volumes of race data”
We already talked about AI and ML. I already mentioned latency. We covered transcoding and real-time analytics. Oh, here is something new: you should know what a data mart is and how to set one up. There is going to be a large amount of data, both video footage and telemetry data, so that sounds like there could be some Big Data questions in there as well.
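As a rough idea of what querying a data mart might look like once race telemetry lands in BigQuery, here is a minimal sketch. The project, dataset, table, and column names are all hypothetical:

```python
# Minimal sketch: querying a BigQuery data mart of race telemetry.
# The project, dataset, table, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="hrl-analytics-project")

query = """
    SELECT heli_id,
           AVG(speed_kph) AS avg_speed,
           MAX(engine_temp_c) AS peak_engine_temp
    FROM `hrl-analytics-project.race_mart.telemetry`
    WHERE race_date BETWEEN '2024-01-01' AND '2024-12-31'
    GROUP BY heli_id
    ORDER BY avg_speed DESC
"""

# Run the query and print one summary row per helicopter for the season.
for row in client.query(query).result():
    print(row.heli_id, row.avg_speed, row.peak_engine_temp)
```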
Finally, let’s go through the executive statement:
“Our CEO, S. Hawke, wants to bring high-adrenaline racing to fans all around the world. We listen to our fans, and they want enhanced video streams that include predictions of events within the race (e.g., overtaking). Our current platform allows us to predict race outcomes but lacks the facility to support real-time predictions during races and the capacity to process season-long results.”
So yes, we talked about high quality video streaming. And that we need to do predictions. And of course these predictions need to be in real-time. Ok, this is new. Some of our prediction models need to be able to process data from all the races over the season. So not every prediction model is just going to work with current race data. Make sure you understand how to handle that. And that’s it. I think we have covered the Helicopter Racing League case study pretty thoroughly.
Daniel began his career as a Software Engineer, focusing mostly on web and mobile development. After twenty years of dealing with insufficient training and fragmented documentation, he decided to use his extensive experience to help the next generation of engineers.
Daniel has spent his most recent years designing and running technical classes for both Amazon and Microsoft. Today at Cloud Academy, he is working on building out an extensive Google Cloud training library.
When he isn’t working or tinkering in his home lab, Daniel enjoys BBQing, target shooting, and watching classic movies.