Introduction to Azure Functions: In Context

The course is part of these learning paths:

AZ-203 Exam Preparation: Developing Solutions for Microsoft Azure
Developing, Implementing and Managing Azure Infrastructure

Overview

Difficulty: Intermediate
Duration: 1h 34m
Students: 908
Rating: 4.8/5

Description

An Introduction to Azure Functions

Serverless computing has emerged as a capable and low-friction means to execute custom logic in the public cloud. Whether you're using AWS Lambda, Google Cloud Functions, or Azure Functions, you have a wide variety of target languages, ecosystem integrations, and deployment mechanisms to choose from, all while leaving the heavy lifting of server provisioning and maintenance to the experts, which frees you to focus on your differentiated application functionality.

In this "Introduction to Azure Functions" course, you’ll learn how to build Azure Function applications in the cloud. You'll discover the core feature set of Functions and see how to integrate with a variety of sibling Azure services. You'll explore Function topics like security, monitoring, deployment, and testing best practices. You'll also learn about ideal Functions use cases and the pricing model. Finally, you'll learn about how we've arrived at the serverless computing model, and where serverless is likely to go in the future. By the end of this course, you’ll have a solid foundation to continue exploring Functions on your own, and incorporating Azure Functions capability into your work.

An Introduction to Azure Functions: What You'll Learn

Lecture: What you'll learn

Intro: What to expect from this course
Serverless Computing in Context: Understanding what serverless computing is, and how we got here
Core Features: A high-level overview of what Azure Functions is, and its basic capabilities
Creating Your First Function: A demo of creating your first function in the Azure portal
Security: A review of security features in Azure Functions
Using API Key Management: A demo of configuring an Azure Function to require API key use
HTTP Proxies: A discussion of lightweight HTTP proxy support
Proxying Azure Blob Storage: A demo of using Functions' HTTP proxy support to front Azure Blob Storage
Triggers and Bindings: Event-based triggering of functions and declarative binding of inputs and outputs
Triggering on Queues and Binding to DocumentDB: A demo of triggering with Azure Queues and binding function output to DocumentDB
Testing and Debugging: Tools and techniques for working with Functions during the development cycle
Deployment: Options for deploying Azure Function apps into production
Deploying From a Local Git Repo: A demo of deploying a complete Azure Function app to the cloud from a local Git repository
Monitoring: Tools for monitoring Azure Functions during dev, test, and release
Use Cases: A discussion of ideal use cases for serverless compute and Azure Functions
Pricing: A review of how Functions are priced, and a demo of determining price using the Azure Pricing Calculator
Serverless in the Future: A short discussion on the future of serverless in the cloud
Summary: Course wrap-up

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Let's begin with a discussion of serverless computing in the cloud: what it is, how we got here, and how it compares with some other options at your disposal.

Let's start with a definition.

Serverless is a cloud computing code execution model in which the cloud provider fully manages the infrastructure needed to serve requests, and requests are billed by an abstract measure of the CPU, memory, and I/O required to satisfy them, rather than on a per-virtual-machine, per-hour basis.

Regardless of technology stack, serverless computing platforms have several common features.

The most important is that you focus on creating and managing applications and executable code rather than on managing servers and other infrastructure needed to run your apps. The servers are still there, so serverless is not quite an accurate term, but the responsibility for starting them, ensuring they remain healthy, patching the underlying operating system, scaling out to more instances as needed, and so on, is handled by the cloud provider and not by you.

Instead, your job is to write code to execute within the restricted sandbox of the target serverless platform. Typical restrictions include the choice of languages in which you author code as well as the function signatures and input or output arguments you can define. Serverless platforms also typically restrict the amount of RAM a given execution can consume, as well as impose a maximum invocation time limit.
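To make those restrictions concrete, here's a minimal sketch of an HTTP-triggered function using the Azure Functions Python programming model (shown here for illustration; the course demos use the Azure portal). The overall signature shape, a single entry point taking a typed request and returning a typed response, is dictated by the platform's sandbox; the function and parameter names are illustrative.

```python
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # The platform invokes this entry point; the signature must conform
    # to the binding declared in the function's configuration.
    name = req.params.get('name', 'world')
    logging.info('Handling HTTP request for %s', name)
    return func.HttpResponse(f'Hello, {name}!', status_code=200)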

Another common feature of serverless technologies is tight integration with sibling services of their cloud platform. These can be networking or messaging technologies, relational or NoSQL databases, object or blob storage, or even monitoring and telemetry capabilities. In the modern cloud, no technology is an island, and tight integration between services is often a significant platform differentiator.

Because the atomic unit of serverless computing is a single function, pricing is more narrowly scoped as well. With serverless, you typically pay only for the compute resources used during execution of your code. Contrast this with VM-centric compute technologies, where you pay by the hour for use of the VM regardless of whether your code is actually running for that entire hour. Code that runs intermittently or in short bursts can achieve significant cost savings in a serverless deployment model versus a VM-centric one.
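As a rough back-of-the-envelope illustration of that pricing model, consumption-based platforms typically bill on a measure like GB-seconds (memory allocated multiplied by execution time) plus a small per-execution charge. The rates and workload figures below are assumptions for the sake of the arithmetic, not quoted Azure prices; the Pricing lecture later in the course walks through the real Azure Pricing Calculator.

```python
# Hypothetical consumption-plan rates (assumptions, not quoted prices).
RATE_PER_GB_SECOND = 0.000016       # $ per GB-second of execution
RATE_PER_MILLION_EXECUTIONS = 0.20  # $ per million invocations

# Hypothetical workload: 3M short, small-footprint executions per month.
executions = 3_000_000
avg_duration_s = 0.5
memory_gb = 0.125  # 128 MB

gb_seconds = executions * avg_duration_s * memory_gb  # 187,500 GB-s
compute_cost = gb_seconds * RATE_PER_GB_SECOND        # $3.00
invocation_cost = executions / 1_000_000 * RATE_PER_MILLION_EXECUTIONS  # $0.60

print(f'Estimated monthly cost: ${compute_cost + invocation_cost:.2f}')  # ~$3.60
```

Real consumption plans often also include a monthly free grant, which this sketch ignores; the point is simply that idle time costs you nothing.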

And finally, another common feature of serverless compute platforms is trigger-based invocation of serverless code. These triggers range from incoming HTTP requests, which allow the serverless platform to serve up REST APIs, to more exotic options like webhooks, platform-specific events such as the creation, modification, or removal of data in cloud storage, or the arrival of a message on a cloud-hosted queue.
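For instance, here's a sketch of a queue-triggered function, again in the Azure Functions Python model with illustrative names. The platform invokes it whenever a message arrives on the bound queue; the queue name and storage connection are declared in the function's binding configuration rather than in code.

```python
import logging

import azure.functions as func


def main(msg: func.QueueMessage) -> None:
    # Invoked once per message arriving on the queue bound to this function.
    body = msg.get_body().decode('utf-8')
    logging.info('Processing queue message: %s', body)
```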

Another common option is the ability to invoke serverless logic on a timer. This can be useful for writing serverless functions to perform devops-type work, like moving data from one repository to another, cleaning up unused virtual machines, and so on.
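A timer-triggered function follows the same pattern: the schedule (a CRON-style expression) lives in the binding configuration, and the code body performs the recurring chore. Again, a sketch in the Python model with illustrative names:

```python
import logging

import azure.functions as func


def main(mytimer: func.TimerRequest) -> None:
    # Runs on the CRON-style schedule declared in the binding
    # configuration, e.g. nightly cleanup of unused resources.
    if mytimer.past_due:
        logging.warning('Timer invocation is running late')
    logging.info('Running scheduled maintenance task')
```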

In order to understand why serverless computing is important and how we got here, it's useful to consider the options that came before it. While it's impossible to do full justice to the prior 50-plus years of commercial computing in a single slide, we can paint in broad strokes and consider the following spectrum of choices.

Note that, relatively speaking, choices toward the left side tend to support a wider range of hosted technologies but are more difficult to provision and deploy, due to high capital costs and long lead times. Choices toward the right tend to narrow your technology options in favor of a restricted execution sandbox, but compensate with often much faster deployment times and a more granular cost model, typically incurred as operational expense.

For many years, infrastructure was organized around a classic data center model, where servers were relatively expensive and limited commodities. To requisition new hardware, lead times of weeks or even months were not uncommon, and as a consequence, applications were often deployed wherever they would fit instead of on hardware optimized for the nature of the work being done. This was a very infrastructure-centric model and was not optimized to rapidly deliver business value. Some organizations still utilize this model today.

Over time, virtualization became a popular way to improve deployment density of applications on data center infrastructure. This allowed more effective use of still-precious and limited resources within the data center environment. Since more applications could now be deployed on a given set of hardware, virtualization provided incremental improvements to developer agility and software time to market. However, this was still largely an infrastructure-first model that focused far too much attention on non-differentiating aspects of building and running software.

As cloud computing first rose to prominence, the majority of early adopters started with the familiar paradigm of virtual machines in the cloud, or infrastructure as a service. This combined the advantage of conceptual familiarity with the ability to leave server provisioning and infrastructure maintenance to teams of dedicated experts at Amazon, Microsoft, and other providers. It also saw the advent of technology infrastructure as an ongoing, consumption-based operational expense instead of a large, one-time capital outlay. Still, this first generation of cloud services was not significantly different from a developer-experience standpoint. Infrastructure provisioning was certainly more dynamic, but you still deployed software to a VM running an operating system, which had to be secured, patched, and monitored on a regular basis.

Platform as a service started to change this by offering an abstraction above the VM in which applications are deployed and executed. By defining a restricted sandbox in which conforming applications run, cloud providers can offer developers a streamlined experience for deploying, versioning, securing, running, debugging, and monitoring their code. These sandboxes evolved to accommodate many popular development languages and platforms, and today, they work well for many common application scenarios but not necessarily all.

Functions as a service, or serverless computing, evolved from a desire to more narrowly define the scope of deployed, executable code. Instead of an entire application, why not atomically deploy and execute a single function? This is useful for many scenarios where full-blown applications might be overkill: small devops tasks, recurring maintenance or data import jobs, small API service layers, and so on.

Two points here.

First, understand that functions and serverless computing didn't just appear out of thin air. They evolved over time in response to limitations in what came before them, which is another way of saying, ultimately, something will come after functions, too, to address whatever shortcomings we collectively find in them.

Second point. There's no need to give up on older approaches just because they're not the latest fad. Serverless compute is very useful in certain circumstances. I'll discuss some of those later in the course, but IaaS and PaaS are still quite useful in the cloud, too. Don't forget about them.

Serverless computing is also a natural consequence of evolving industry practice away from servers and infrastructure as fragile, coddled resources that live forever, and toward the concept of throwaway or immutable infrastructure, which can and often is replaced regularly during testing and deployment cycles. The metaphor pets versus cattle is often used to describe this dichotomy.

This description is instructive because it captures the mindset shift needed to fully benefit from cloud development best practices such as serverless. Pet servers tend to inspire emotional attachment in their handlers and require lots of effort to maintain. It can also be challenging to deploy new applications, or new versions of existing applications, onto them. Maintenance windows, unscheduled downtime, and upgrade headaches are all symptoms of pet-server environments.

In my personal experience, such situations are still quite prevalent in many organizations. Instead, if we think of maintaining infrastructure like a herd of cattle, we free ourselves from attachment to any specific server or server configuration. Instead, we manage our infrastructure as a group. If one member has problems, we can replace it easily without compromising the health or capabilities of the overall herd. If we need to augment the herd with additional members or even replace members outright, we can do so easily.

This preference for infrastructure as faceless resources managed in aggregate allows us to focus our attention on the differentiators for our business, namely the applications and services we provide and the code used to realize those. This dovetails very nicely with the core value proposition of serverless, so we can think of this concept of immutable, throwaway infrastructure as a precursor and enabler of serverless computing.

About the Author

Josh Lane is a Microsoft Azure MVP and Azure Trainer and Researcher at Cloud Academy. He's spent almost twenty years architecting and building enterprise software for companies around the world, in industries as diverse as financial services, insurance, energy, education, and telecom. He loves the challenges that come with designing, building, and running software at scale. Away from the keyboard, you'll find him crashing his mountain bike, drumming quasi-rhythmically, spending time outdoors with his wife and daughters, or drinking good beer with good friends.