Running Apache Spark on Azure Databricks

In this article, we’ll cover how to set up an Azure Databricks cluster and how to run queries in an interactive notebook. However, this article only scratches the surface of what you can do with Azure Databricks. If you would like to learn more, including how to create graphs, run scheduled jobs, and train a machine learning model, then check out my complete, video-based Running Spark on Azure Databricks course on Cloud Academy.

Watch this short video taken from the course to get an idea of what you’ll learn:

Apache Spark and Azure Databricks

Apache Spark is an open-source framework for big data processing. It was developed as a replacement for Apache Hadoop’s MapReduce framework. Both Spark and MapReduce process data on compute clusters, but one of Spark’s big advantages is that it does in-memory processing, which can be orders of magnitude faster than the disk-based processing that MapReduce uses. There are plenty of other differences between the two systems, as well, but we don’t need to go into the details here.

Not only does Apache Spark handle data analytics tasks, but it also handles machine learning. It has a library called MLlib that includes a variety of pre-built algorithms, such as logistic regression, naive Bayes, and random forest. At the moment, it doesn’t include neural networks. However, you can still create neural networks on Spark using other machine learning frameworks, such as TensorFlow.

In 2013, the creators of Spark started a company called Databricks. The name of their product is also Databricks. It’s basically a managed implementation of Apache Spark in the cloud, so you don’t have to worry about building clusters yourself. It also has a user-friendly interface for running code on clusters interactively.

Microsoft has partnered with Databricks to bring their product to the Azure platform. The result is a service called Azure Databricks. One of the biggest advantages of using the Azure version of Databricks is that it’s integrated with other Azure services. For example, you can train a machine learning model on a Databricks cluster and then deploy it using Azure Machine Learning Services.

Setup

Now let’s see how to set up an Azure Databricks environment. You need to perform two tasks:

  1. Create a Databricks workspace
  2. Spin up a compute cluster

In the Azure portal, search for databricks. When it comes up, click on it.

Search for Azure Databricks

Then click Add.

Create Azure Databricks Service for Apache Spark

The Workspace name can be anything. It doesn’t have to be globally unique. Let’s call it course. Then, either create a new resource group to put it in or use an existing one. For the pricing tier, choose either Trial or Standard. The Trial tier is free for 14 days.

When Azure is finished creating the workspace, click on it. Then, when you click the Launch Workspace button, it will take you to the Databricks portal, which is separate from the Azure portal. Alright, now we can create a cluster. Click Create Cluster. Then you’ll see this screen.
Create Apache Spark Cluster
You can give the cluster any name you want. Let’s call it spark. The Cluster Mode can be either Standard or High Concurrency. We’re only going to run one job at a time, so leave it on Standard.

For the Databricks Runtime Version, you can leave it at the default (the exact default version may be different for you). You can also leave the Python version at the default.

Make sure the Terminate after __ minutes of inactivity box is checked. It can be expensive to run a cluster, so you’ll want it to shut down automatically if it’s been inactive for a while. The default is 120 minutes, but you can change it to something lower, like 60, so the cluster shuts down after an hour of inactivity instead of two.

Under Worker Type, you can see that there are lots of options for what kind of virtual machines to put in the cluster. Leave it on the default type. You’ll notice that the cluster will always have a minimum of two workers and can autoscale up to a maximum of eight workers.

OK, now click Create Cluster. It will take a little while to finish.

Running queries

Once your cluster is ready, you can execute code on it. You can do that by using a notebook. If you’ve ever used a Jupyter notebook before, then a Databricks notebook will look very familiar.

Let’s create one so you can see what I mean. The notebook will reside in a workspace, so click Workspace, open the dropdown menu, go into the Create menu, and select Notebook.

Create Azure Databricks Notebook

Let’s call it test. For the language, you can choose Python, Scala, SQL, or R. We’re going to run some simple queries, so select SQL.

Create Apache Spark Notebook

A notebook is a document where you can enter some code, run it, and the results will be shown in the notebook. It’s perfect for data exploration and experimentation because you can go back and see all of the things you tried and what the results were in each case. It’s essentially an interactive document that contains live code. You can even run some of the code again if you want.

Alright, let’s run a query. Since we haven’t uploaded any data, you might be wondering what we’re going to run a query on. Well, there’s actually lots of data we can query even without uploading any of it. Azure Databricks is integrated with many other Azure services, including SQL Database, Data Lake Storage, Blob Storage, Cosmos DB, Event Hubs, and SQL Data Warehouse, so you can access data in any of those using the appropriate connector. However, we don’t even need to do that because Databricks also includes some sample datasets.

To see which datasets are available, you can run a command in the command box. There’s one catch, though. When we created this notebook, we selected SQL as the language, so whatever we type in this command box will be interpreted as SQL. The exception is if you start the command with a percent sign and the name of another language. For example, if you wanted to run some Python code in a SQL notebook, you would start it with %python and it would be interpreted properly.
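For example, a cell like the one below would run as Python even though the notebook’s default language is SQL. This is just a minimal sketch; it relies on the spark session object that Databricks notebooks provide automatically.

%python
print(spark.version)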

Similarly, if you want to run a filesystem command, then you just need to start it with %fs. To see what’s in the filesystem for this workspace, type:

%fs ls

The ls stands for list and will be familiar if you’ve used Linux or Unix.

To execute the command, you can either hit Shift+Enter, or you can select Run cell from the cell’s menu. I recommend using Shift+Enter because not only is that faster than going to the menu, but it also automatically brings up another cell for you so you can type another command.

Cloud Academy Running Spark on Azure Databricks

You’ll notice that all of the paths start with dbfs, which stands for Databricks File System, a distributed filesystem installed on the cluster. You don’t have to worry about losing data when you shut down the cluster, though, because DBFS is persisted in Blob Storage.

The sample datasets are in the databricks-datasets folder. To list them, type:

%fs ls databricks-datasets

The one we’re going to use shows what the prices were for various personal computers in the mid-1990s. Use this command to see what’s in it: 

%fs head --maxBytes=1000 dbfs:/databricks-datasets/Rdatasets/data-001/csv/Ecdat/Computers.csv
SQL Query Code

The head command shows the first lines of a file, up to the maxBytes you specify, which is 1,000 bytes in this case. If you don’t specify maxBytes, then it will default to about 65,000 bytes.
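For instance, if you wanted to see more of the file, you could leave the option off and rely on that default:

%fs head dbfs:/databricks-datasets/Rdatasets/data-001/csv/Ecdat/Computers.csv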

The first line contains the header, which shows what’s in each column, such as the price of the computer, its processor speed, and the size of its hard drive, RAM, and screen.

To run a query on this data, we need to load it into a table. A Databricks table is just an Apache Spark DataFrame, if you’re familiar with Spark. You can also think of it as being like a table in a relational database.
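For example, once the table is created in the next step, you could also read it from a Python cell as a DataFrame. Here’s a quick sketch (it assumes the table name computers that we’re about to use):

%python
df = spark.table("computers")
df.printSchema()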

To load the CSV file into a table, run these commands:

DROP TABLE IF EXISTS computers;
CREATE TABLE computers
USING csv
OPTIONS (path "/databricks-datasets/Rdatasets/data-001/csv/Ecdat/Computers.csv", header "true", inferSchema "true")

The first command checks to see if a table named computers already exists, and if it does, then it drops (or deletes) it. You don’t have to do this, of course, because you haven’t created any tables yet, but it’s a good idea to do it. Why? Because if you wanted to run the code in this cell again, then the table would already exist, so you’d get an error if you didn’t drop the table first.

The second command creates the table. Note that it says there’s a header in the file. By setting header to true, Spark will use the first line to name the columns for us, so we won’t have to do that ourselves. The inferSchema option is even more useful. It figures out the data type of each column, so we don’t have to specify that ourselves either.
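If you’re curious about which data types inferSchema actually picked, one easy optional check is to describe the table in another cell:

describe computers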

To see what’s in the table, run a SQL query. The simplest command is:

select * from computers

If this were a really big table, then you might not want to run a select * on it since that reads in the entire table, but it’s okay in this case.

SQL Query Results

This is the same data we saw when we ran the head command, but now it’s in a nicely formatted table.
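If you do end up querying a much larger table, a more targeted statement is usually a better idea. As a rough sketch, assuming the price, speed, and ram column names from this dataset, you could select just a few columns and cap the number of rows returned:

select price, speed, ram
from computers
where ram >= 16
order by price
limit 10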

Learn more

Check out the full video-based Running Spark on Azure Databricks course on Cloud Academy.

Apache Spark Course


Written by

Guy Hummel

Guy is a certified cloud architect on all three of the major public cloud platforms: AWS, Azure, and Google Cloud Platform. He launched his first training website in 1995 and he's been helping people learn IT technologies ever since. Guy’s passion is making complex technology easy to understand.

