Developing Solutions for Google Cloud Platform with App Engine

Performance

The course is part of this learning path

Google Cloud Platform for Developers

Contents

Introduction
App Engine
  4. Services (15m 53s)
  7. Storage (11m 19s)
  8. Datastore (9m 28s)
Wrap Up
  11. Summary (8m 35s)
Overview

Difficulty: Intermediate
Duration: 2h 1m
Students: 776

Description

Developing Solutions for Google Cloud Platform with App Engine

As developers, the learning never ends. Just when we get used to a certain technology, it goes and changes. We’re always needing to learn new languages, frameworks, APIs, and platforms.

And if you’re also in charge of deploying your code, then you need to understand how to set up and configure web servers, deal with scaling issues, and manage databases. Frankly, it can be exhausting!

So, why invest the time in learning yet another set of technologies? That’s the question I find myself asking whenever some new trend comes along. I want to know how it’ll make my job, and the jobs of my peers, better or easier.

Throughout my career I’ve been responsible for deploying my own code. And I think that’s why the Google Cloud Platform resonates with me so well. It’s a platform that understands developers.

So, why take the time to learn something like App Engine? I think the answer is simple. Because you can take all your development experience and apply it to a platform as a service that removes most of the obstacles to getting code running in production.

The value of having Google ensure your app is highly available is worth a lot to me. We get a rich set of tools for developing locally, and simple deployments, with application versioning. All while using the same programming languages and frameworks that we’re used to.

If you’re looking for a native cloud platform for building out highly scalable applications, or mobile back-ends, then you’ve come to the right place. App Engine provides all that and more.

This course focuses on teaching you about the tools App Engine provides to build out highly scalable systems.

We’ll be building out Python web applications using Flask, and using Cloud Datastore as our database. There is a bit of a learning curve to getting started. And that’s what this course is for, to minimize the amount of time spent learning the platform, so you can get back to writing code.

The source code lives on GitHub, so feel free to download it and follow along.

We’ll cover a lot of material in this 2-hour course. And by the end of it, you should feel comfortable getting started building out App Engine applications of your own.

And if you’re looking to get your Qualified Solutions Developer certification, this is going to help you with that too.

So, if this sounds useful to you, let’s get started!

Course Objectives

In this course

  • We’ll create an App Engine application and deploy it.
  • We’ll develop a REST API with Cloud Endpoints.
  • We’ll learn about the different authentication and authorization methods available on App Engine.
  • We’ll learn about the monitoring and management tools available.
  • We’ll cover the different storage options available.
  • We’ll review Cloud Datastore more in depth.
  • We’ll look into ways to improve application performance.
  • And we’ll learn about Task Queues.

Intended Audience

This is an intermediate-level course because it assumes:

  • You have development experience

What You'll Learn

Lecture                 What you'll learn
Intro                   What will be covered in this course
Getting Started         Creating our first App Engine application
Cloud Endpoints         How to create RESTful APIs with Endpoints
Services                How to break our applications down into separate services
Authentication          How can we authenticate users?
Managing Applications   How do we manage App Engine apps?
Storage                 How do we use the different storage options?
Datastore               A more in-depth look at Datastore
App Performance         How can we make our apps more responsive?
Task Queues             How can we run tasks outside of a user request?
What's next?            Where do we go from here?


Transcript

Welcome back to Developing Solutions for Google Cloud Platform. I'm Ben Lambert, and I'll be your instructor for this lesson. In this lesson, we're gonna talk about App Engine performance and optimization.

Now, there are a lot of factors that we need to consider when we're talking about the performance of an application. Two of them are data caching and scaling. We're gonna dive into both of these, and we're gonna start with caching. App Engine gives us access to Memcache to handle caching, which is a distributed in-memory data store. Memcache is fast. It's somewhere around 10 times faster than Datastore. And that shouldn't be surprising, because it is an in-memory cache, so access is very quick. Memcache is gonna be useful in our apps if we need to do things like caching popular web pages, caching frequently changed data like page counters, or caching frequently fetched entities.

We can interact with Memcache through the Memcache API directly, which would look something like this. We import the library, and then we set a key to a value. The key has a size limit. It has to be under 250 bytes, and the value has a maximum of 1 megabyte. A batch fetch can return multiple results at once, but the total fetched size has a limit of 32 megs. So, just keep those limitations in mind. Using something like the increment method will make it easy to increment things like page counters, and display the cached data back to the users. We can also use Memcache indirectly through the NDB or Objectify Datastore libraries. This is gonna allow us to easily cache entities. And this is really cool because we get the added performance of using cached data and it's baked right into the library.
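A direct Memcache call might look something like the sketch below. Since the App Engine SDK (`from google.appengine.api import memcache`) isn't available outside the platform, a tiny in-memory stand-in mimics the client calls here; the method names (`set`, `get`, `incr`) and the 250-byte key limit match the real API, while the stand-in itself is purely illustrative.

```python
class FakeMemcache:
    """Minimal stand-in for google.appengine.api.memcache (illustrative only)."""

    def __init__(self):
        self._store = {}

    def set(self, key, value, time=0):
        # The real client rejects keys over 250 bytes and values over 1 MB.
        if len(key.encode('utf-8')) > 250:
            raise ValueError('key exceeds 250 bytes')
        self._store[key] = value
        return True

    def get(self, key):
        # Returns None on a cache miss, just like the real API.
        return self._store.get(key)

    def incr(self, key, delta=1):
        # The real incr is atomic; it returns None if the key is absent.
        if key not in self._store:
            return None
        self._store[key] += delta
        return self._store[key]


memcache = FakeMemcache()
memcache.set('page_count', 0)      # set a key to a value
memcache.incr('page_count')        # increment a cached counter
print(memcache.get('page_count'))  # → 1
print(memcache.get('missing'))     # → None (cache miss)
```

On App Engine itself, you'd drop the stub and import `memcache` from the SDK; the call sites stay the same.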

Like everything else, we have some best practices. First, we'll want to handle failures gracefully. Memcache should not be used as a persistent storage option. So if we try and access our data and it isn't there, we should be ready to deal with that. It's worth mentioning again, there are events that can cause data to be dropped from Memcache before its expiration. This is usually caused by memory pressure, causing old data to be dropped so that new records can be inserted. Second, whenever we can, we should be using batch operations. These are methods such as getAll, putAll, and deleteAll in the Java API, or get_multi, set_multi, and delete_multi in Python. These will be more efficient than doing things one record at a time. And finally, we'll get more performance if we distribute the load across a keyspace by sharding it into smaller pieces. And then we can use batch methods to fetch that data. So, Memcache will help us gain performance improvement by caching data for us.
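Sharding plus batch fetching can be sketched as follows. This is a hedged illustration, not code from the course repo: a dict-backed stub stands in for the memcache client, the shard count and key naming are arbitrary, and `get_multi` collapses the read into a single batch round trip on the real service.

```python
import random


class FakeMemcache:
    """Dict-backed stand-in for the memcache client (illustrative only)."""

    def __init__(self):
        self._store = {}

    def incr(self, key, delta=1, initial_value=None):
        # Mirrors the real API: initial_value seeds absent keys.
        current = self._store.get(key, initial_value)
        if current is None:
            return None
        self._store[key] = current + delta
        return self._store[key]

    def get_multi(self, keys):
        # One batch round trip instead of N single gets on real memcache.
        return {k: self._store[k] for k in keys if k in self._store}


NUM_SHARDS = 20  # arbitrary; more shards spread write load further
cache = FakeMemcache()


def increment_counter(name):
    # Write to a random shard so no single key becomes a hot spot.
    shard_key = '%s-%d' % (name, random.randint(0, NUM_SHARDS - 1))
    cache.incr(shard_key, initial_value=0)


def get_counter(name):
    # Read every shard in one batch and sum the partial counts.
    keys = ['%s-%d' % (name, i) for i in range(NUM_SHARDS)]
    return sum(cache.get_multi(keys).values())


for _ in range(100):
    increment_counter('page_views')
print(get_counter('page_views'))  # → 100
```

The same pattern is commonly used for Datastore sharded counters, for the same reason: spreading writes across keys avoids contention on any single record.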

Let's see how we're using Memcache in our demo. If we look at the image model and we scroll down, we can see that we have a class method called update image count cache, and it will update the cache to reflect the current number of photos based on the Datastore query, and then it returns that number. And the image count method checks to see if Memcache has a value set. And if not, we run the method to update the cache, and we get the value, and then we return that. And then, Datastore entity hooks will allow us to increment or decrement the number, depending on whether we delete or save an image. Because the only time that we save a record in this service is on create, this update hook should work out for us and not run multiple times. And now we can see, in our main.py, our context sets the image count to the results of our method. The API for Memcache makes it easy to get and set data by key, and this will really improve the performance of our apps if we use it wisely. Alright, that about covers caching.
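The caching pattern described here can be sketched like this. The function names `update_image_count_cache` and `image_count` follow the transcript; everything else is a hypothetical reconstruction: a dict stands in for Memcache, a plain integer stands in for the Datastore count query, and the hook functions model the entity save/delete hooks.

```python
_cache = {}                    # stand-in for Memcache
_datastore_image_total = 42    # stand-in for a Datastore count() query

IMAGE_COUNT_KEY = 'image_count'


def update_image_count_cache():
    # Recompute the count from the source of truth (Datastore) and cache it.
    count = _datastore_image_total
    _cache[IMAGE_COUNT_KEY] = count
    return count


def image_count():
    # Serve from cache when possible; repopulate on a cache miss.
    cached = _cache.get(IMAGE_COUNT_KEY)
    if cached is None:
        cached = update_image_count_cache()
    return cached


def on_image_saved():
    # Entity post-save hook: bump the cached count instead of re-querying.
    if IMAGE_COUNT_KEY in _cache:
        _cache[IMAGE_COUNT_KEY] += 1


def on_image_deleted():
    # Entity post-delete hook: decrement the cached count.
    if IMAGE_COUNT_KEY in _cache:
        _cache[IMAGE_COUNT_KEY] -= 1


print(image_count())  # → 42 (cache miss, then populated from "Datastore")
on_image_saved()
print(image_count())  # → 43 (served straight from the cache)
```

The key design point is that the expensive query runs only on a miss; the hooks keep the cached value consistent with writes, so most reads never touch Datastore.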

Our next topic is going to be scaling. With App Engine, we have three types of scaling. We can select which one we want to use on a per-service basis. We have manual, basic, and automatic. With manual, we can specify the number of instances that we want, and App Engine will start them up. This allows us to trust the state of the app a bit more, because those instances will remain running. Basic allows us to specify an idle timeout period and a maximum number of instances. This is really useful for things like intermittent work because instances are started up when they're needed, and then after the idle timeout period, they're turned down. So keep in mind, it could happen that all of the instances are turned down and a request will cause one to be started up, meaning the first request may take a little while. And then we have automatic. And automatic has a lot of options that we can configure, such as the minimum and maximum number of idle instances. And automatic uses metrics to determine if we need to start up new instances or not, and latency seems to be the most important metric in that.

Regardless of which scaling method we use, the first request to a new instance takes longer because the libraries and resources have to be loaded. We've talked about this previously when we migrated between versions. We can enable the warmup inbound service in our app.yaml. And we can create a warmup route that will be called when the warmup request is sent. Now this is going to allow us to run any code that we might want to run at startup, such as populating a cache.
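Enabling warmup requests might look something like this app.yaml fragment (a sketch for the first-generation Python runtime; the `/_ah/warmup` path is fixed by App Engine, the rest is illustrative):

```yaml
# app.yaml fragment (illustrative): opt in to warmup requests
inbound_services:
- warmup
```

With this in place, App Engine sends a GET request to /_ah/warmup whenever it spins up a new instance, so a handler registered at that route can prime caches or open connections before live traffic arrives.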

Let's check out how to use scaling in our demo app. If we look at the task service in its tasks.yaml, we can see that it's using the basic scaling setting, specifying the maximum number of instances and an idle timeout. And that's it. If we deploy this, it'll have the instances started up when a request comes in, and it's gonna shut them down after the 10 minute idle period. And if we look at our app.yaml for our default service, we don't have any scaling options set, so it's using automatic by default. So this gives us the flexibility to adjust things only if we need to.
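A basic-scaling configuration like the one described might look something like this (values are illustrative, not the exact demo file):

```yaml
# tasks.yaml (illustrative): basic scaling for the task service
service: tasks
runtime: python27
basic_scaling:
  max_instances: 5      # upper bound on concurrent instances
  idle_timeout: 10m     # shut instances down after 10 idle minutes
```

Omitting the scaling section entirely, as the default service's app.yaml does, leaves the service on automatic scaling.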

Okay, let's summarize what we've learned. We can use Memcache to handle our data caching needs, though it does have some limitations, as we've mentioned. But if we can work with these limitations, we'll have a very fast in-memory data storage mechanism. And we can scale our applications with either manual, basic, or automatic scaling. Okay, so far, we've been doing all of our work from inside of the requests. But what if we want to handle some tasks outside of the user request? That's gonna be the topic of our next lesson when we cover task queues. So if you're ready to keep going, let's get started.

About the Author

Students: 32,047
Courses: 29
Learning paths: 16

Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.

When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.