This course touches on the similarities and differences between AWS, GCP, and Azure serverless functions. The idea is to give you insight into what using the different offerings looks like. The primary learning objective is to help you make an informed decision about which provider to use and whether it fits your use case. The 30-minute course covers the three different providers (content largely taken from the comparing cloud platforms course, cut down to fit), deploying a sample application on each, and the ecosystem around deploying serverless applications. Each provider lesson uses the same application and demonstrates deployment and other operational characteristics of that provider.
Intended Audience
- IT Pros
- Developers
Prerequisites
If you are not already familiar with serverless, start with "Getting Started with Serverless Computing" before continuing with this course. You'll also need some experience with cloud providers. Nothing specific is required, though you should have a general idea of how cloud computing works.
Learning Objectives
- Understand the state of serverless in 2017
- List trade-offs between the AWS, Google, and Microsoft platforms
- Learn to build, deploy, and run applications on the three platforms
- Learn open source tools for building serverless applications
This Course Includes
30 minutes of high-definition video.
What You'll Learn
Course Intro: What to expect from this course.
Serverless Overview: In this lesson, we’ll introduce you to the key players and prepare you for the hands-on demos in the following lessons.
AWS Lambda: In this lesson, you’ll learn about serverless on Lambda and deploy a Lambda function.
Azure Functions: In this lesson, we’ll discuss serverless Azure functions and learn how to deploy an HTTP function.
Google Cloud Functions: In this lesson, you’ll learn about serverless on Google Cloud and deploy an HTTP trigger function.
Ecosystems and Tools: In this lesson, you’ll learn about the Serverless Framework and Apex.
Conclusion: A summary and review of what you have learned.
Hello, and welcome back to the Serverless Survey course. I'm Adam Hawkins, and I'm your instructor for this lesson. This lesson covers Google Cloud Functions. Our objectives are to deploy an HTTP trigger function, integrate with Stackdriver, and test out the function emulator. Keep in mind that Google Cloud Functions is in beta at the time of this recording, so things may change when it goes to general availability. With that disclaimer out of the way, let's get into it. Here we are in the Google Cloud console.
We need to take care of a few things before we can start. I'll create a new project for this demo. It's not strictly required, but I prefer it because it keeps everything organized. Now enter the project you just created and then create a new Cloud Storage bucket to store the source code. Any name will do, just make sure it validates. The last step is to enable the beta Cloud Functions API. After that, we're ready to rock. Google Cloud Functions is dead simple to start with. You only need a few clicks to create your first function.
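If you'd rather do this setup from the CLI, a rough equivalent looks like the sketch below. The project ID and bucket name are placeholders, and the exact API enable command may have differed during the beta.

```sh
# Placeholder project and bucket names; substitute your own.
gcloud projects create serverless-survey-demo
gcloud config set project serverless-survey-demo

# Cloud Storage bucket that will hold the function source.
gsutil mb gs://serverless-survey-functions

# Enable the Cloud Functions API (beta at the time of recording).
gcloud services enable cloudfunctions.googleapis.com
```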
Now create the function and select HTTP, like we've done before. GCP filled in a valid function for us. Experienced Node.js programmers will find this familiar. The function is a standard Express.js route handler, so you can work with the request and response objects just as you would in a traditional Express application. Finally, set the storage bucket to the one we previously created. Hit Create, and off it goes. The initial deploy may take a few minutes, so I'll fast forward through the loading spinners. Now we have the function dashboard in front of us. We see yet-to-be-populated metrics and errors, source editing, and a test panel. Pull up the Trigger tab to get the HTTP URL for this function.
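For reference, the generated function is an ordinary Express-style handler; it looked roughly like this at the time (the exact boilerplate GCP fills in may differ):

```js
// Roughly the console-generated boilerplate: echo the message parameter back.
exports.helloWorld = function helloWorld(req, res) {
  if (req.body.message === undefined) {
    // Nothing to echo back.
    res.status(400).send('No message defined!');
  } else {
    console.log(req.body.message);
    res.status(200).send(req.body.message);
  }
};
```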
This is all we need to curl it. Open up a Cloud Shell terminal and use curl with the message parameter, and the function will echo it back. That is literally all there is to it. We have an HTTP API in just a few minutes. Not bad, right? So let's turn our attention to some other common use cases, starting with logging. I've started a curl loop in my terminal to generate constant requests, so we should be able to navigate over to the logs and stream them in real time. Each log entry includes the function name and execution ID. This is sort of like a trace ID, which you may use for filtering. You can also expand each log entry for detailed meta information.
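Both the one-off test and the request loop are plain curl. A sketch, assuming the echo handler above and a placeholder URL copied from the Trigger tab:

```sh
URL="https://us-central1-serverless-survey-demo.cloudfunctions.net/helloWorld"

# One-off test: the function echoes the message back.
curl -X POST "$URL" -H "Content-Type: application/json" -d '{"message":"hello"}'

# Simple loop to generate a steady stream of requests for log watching.
while true; do
  curl -s -X POST "$URL" -H "Content-Type: application/json" -d '{"message":"hello"}'
  sleep 1
done
```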
So now that we can tail the log stream, let's see what we can do with error handling. I'll update the source code to throw an exception based on the message parameter. We can do this entirely through the online code editor. Save the function and head back to the log stream. Now I'll fire off a curl with the failure message from the Cloud Shell. Notice how the errors appear in the log stream? You can also filter on log levels or search for text. All in all, you get what you need for the majority of use cases. We can also report specific errors to Stackdriver.
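The change itself is tiny. A sketch of the failure branch, keyed off an arbitrary message value:

```js
exports.helloWorld = function helloWorld(req, res) {
  // Hypothetical failure case: blow up when the caller asks for it.
  if (req.body.message === 'fail') {
    throw new Error('Intentional failure for log testing');
  }
  res.status(200).send(req.body.message);
};
```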
This is remote error collection, so you have a central error log, and you may also configure notifications or pages for specific errors. We'll need to write a little bit more code for this, so the console and Vim will make life easier. We'll also deploy via the CLI from now on. I've split my screen with the console on the left and the terminal on the right; we'll need both for this demo. First, I'll create a new directory for the source code in the terminal. Then I'll copy the function's index.js and package.json from the editor into that directory.
Now I'll take a snippet from the Google Cloud Functions docs. This function generates a specifically formatted log entry that Stackdriver can pick up. It's tagged with the function name and the version from package.json. Next, I'll update the special failure case code to use this function to trigger a Stackdriver error report. We'll also need to add the Google Cloud logging module to package.json. Finally, we need to use the gcloud CLI to deploy our function. Most of the parameters map to the ones we've seen in the UI, and it takes the code from the current directory. I'll set the entry point to the exported function, the HTTP trigger, the stage bucket from earlier, and the project ID.
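The docs snippet in question writes a log entry shaped so that Stackdriver Error Reporting picks it up. Here is a condensed sketch of the idea, assuming the @google-cloud/logging client that the beta-era docs used; the field names follow that snippet from memory and may not match current APIs:

```js
// Requires "@google-cloud/logging" in package.json.
const logging = require('@google-cloud/logging')();

// Write an entry formatted so Stackdriver Error Reporting recognizes it.
function reportError(err, context = {}) {
  const log = logging.log('errors');
  const metadata = {
    resource: {
      type: 'cloud_function',
      labels: { function_name: process.env.FUNCTION_NAME },
    },
    severity: 'ERROR',
  };
  const errorEvent = {
    message: err.stack,
    serviceContext: {
      // Tagged with the function name and the version from package.json.
      service: `cloud_functions/${process.env.FUNCTION_NAME}`,
      version: require('./package.json').version || 'unknown',
    },
    context,
  };
  return log.write(log.entry(metadata, errorEvent));
}

exports.helloWorld = function helloWorld(req, res) {
  if (req.body.message === 'fail') {
    // Report the failure case to Stackdriver instead of just throwing.
    return reportError(new Error('Intentional failure'))
      .then(() => res.status(500).send('Error reported'));
  }
  res.status(200).send(req.body.message);
};
```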
Let's trigger the new behavior with the CLI. The gcloud call command takes the same data you'd specify in the HTTP request or in the test console, so I'll set the message to trigger the error logging case. That will create a Stackdriver error, which we'll see in a second. But for now, we can check the logs from the CLI as well. Here we see the recent 500 response entry. Now let's pop back over to the Stackdriver UI. Stackdriver provides what you'd expect from remote error tracking: a timeline, samples, and other meta information. This is an integrated solution for tracking exceptions in your application. We have one last thing to cover in this lesson.
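Put together, the deploy and call steps looked roughly like this during the beta; the function name, bucket, and project are placeholders, and flag names may have changed since:

```sh
# Deploy the code in the current directory as an HTTP-triggered function.
gcloud beta functions deploy helloWorld \
  --entry-point helloWorld \
  --trigger-http \
  --stage-bucket serverless-survey-functions \
  --project serverless-survey-demo

# Invoke it with the same payload the test console uses, hitting the failure case.
gcloud beta functions call helloWorld --data '{"message":"fail"}'

# Check recent log entries, including the 500 response.
gcloud beta functions logs read helloWorld
```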
There is also an alpha function emulator that can deploy, run, and debug your functions on your local machine before you deploy them to production. It's an installable npm module. Let's work through some examples: we'll deploy a function to the local emulator, call it, and get the logs. Here you can see a sample function that echoes the provided data for testing. The functions command is the emulator, so I'll start it up. Now you can use the functions command much like the gcloud CLI. functions deploy deploys the function in the current directory to the emulator. Now we can trigger it just like any other function, and we can also retrieve the logs.
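For the curious, the emulator workflow looked roughly like this; the emulator was alpha at the time, so the exact command names may have changed:

```sh
# Install and start the local emulator.
npm install -g @google-cloud/functions-emulator
functions start

# Deploy the function in the current directory to the emulator.
functions deploy helloWorld --trigger-http

# Call it and read its logs, just like the hosted service.
functions call helloWorld --data '{"message":"local test"}'
functions logs read
```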
The emulator also provides debugging and mocking of function dependencies. I suggest you check out the official documentation for more information about how you can use the emulator in your workflows. Time to wrap up the lesson. The Google Cloud Functions beta only supports the Node.js runtime with three trigger types (HTTP, Cloud Pub/Sub, and Cloud Storage). That is a strong primitive, but it does require tooling for more complex applications.
Note that Google Cloud Functions just gives you functions, so managing things like environments or runtime configuration is completely up to you. Hopefully, more features make it into the official launch. Regardless, this is a fantastic segue into the next lesson on tooling and the ecosystem around serverless computing. Catch you in the next one.
Adam is a backend/service engineer turned deployment and infrastructure engineer. His passion is building rock-solid services and equally powerful deployment pipelines. He has been working with Docker for years and leads the SRE team at Saltside. Outside of work, he's a traveller, beach bum, and trance addict.