Running Apache Spark on Azure Databricks

In this article, we’ll cover how to set up an Azure Databricks cluster and how to run queries in an interactive notebook. However, this article only scratches the surface of what you can do with Azure Databricks. If you would like to learn more, including how to create graphs, run scheduled jobs, and train a machine learning model, then check out my complete, video-based Running Spark on Azure Databricks course on Cloud Academy.

Watch this short video taken from the course to get an idea of what you’ll learn:

Apache Spark and Azure Databricks

Apache Spark is an open-source framework for big data processing. It was developed as a replacement for Apache Hadoop’s MapReduce framework. Both Spark and MapReduce process data on compute clusters, but one of Spark’s big advantages is that it does in-memory processing, which can be orders of magnitude faster than the disk-based processing that MapReduce uses. There are plenty of other differences between the two systems as well, but we don’t need to go into the details here.

Not only does Apache Spark handle data analytics tasks, but it also handles machine learning. It has a library called MLlib that includes a variety of pre-built algorithms, such as logistic regression, naive Bayes, and random forest. At the moment, it doesn’t include neural networks. However, you can still create neural networks on Spark using other machine learning frameworks, such as TensorFlow.

In 2013, the creators of Spark started a company called Databricks. The name of their product is also Databricks. It’s basically a managed implementation of Apache Spark in the cloud, so you don’t have to worry about building clusters yourself. It also has a user-friendly interface for running code on clusters interactively.

Microsoft has partnered with Databricks to bring their product to the Azure platform. The result is a service called Azure Databricks. One of the biggest advantages of using the Azure version of Databricks is that it’s integrated with other Azure services. For example, you can train a machine learning model on a Databricks cluster and then deploy it using Azure Machine Learning Services.

Setup

Now let’s see how to set up an Azure Databricks environment. You need to perform two tasks:

  1. Create a Databricks workspace
  2. Spin up a compute cluster

In the Azure portal, search for databricks. When it comes up, click on it.

Search for Azure Databricks

Then click Add.

Create Azure Databricks Service for Apache Spark

The Workspace name can be anything; it doesn’t have to be globally unique. Let’s call it course. Then, either create a new resource group to put it in or use an existing one. For the pricing tier, choose either Trial or Standard. The Trial tier is free for 14 days.

When Azure is finished creating the workspace, click on it. Then, when you click the Launch Workspace button, it will take you to the Databricks portal, which is separate from the Azure portal. Alright, now we can create a cluster. Click Create Cluster. Then you’ll see this screen.

Create Apache Spark Cluster

You can give the cluster any name you want. Let’s call it spark. The Cluster Mode can be either Standard or High Concurrency. We’re only going to run one job at a time, so leave it on Standard.

For the Databricks Runtime Version, you can leave the default (the exact version you see may be different from the one shown here). You can also leave the Python version at the default.

Make sure the Terminate after __ minutes of inactivity box is checked. It can be expensive to run a cluster, so you’ll want it to shut down automatically after it’s been idle for a while. The default is 120 minutes, but you can lower it to something like 60 so the cluster shuts down after an hour of inactivity instead of two.

Under Worker Type, you can see that there are lots of options for what kind of virtual machines to put in the cluster. Leave it on the default type. By default, the cluster will have a minimum of two workers and can autoscale up to a maximum of eight.

OK, now click Create Cluster. It will take a little while to finish.

Running queries

Once your cluster is ready, you can execute code on it. You can do that by using a notebook. If you’ve ever used a Jupyter notebook before, then a Databricks notebook will look very familiar.

Let’s create one so you can see what I mean. The notebook will reside in a workspace, so click Workspace, open the dropdown menu, go into the Create menu, and select Notebook.

Create Azure Databricks Notebook

Let’s call it test. For the language, you can choose Python, Scala, SQL, or R. We’re going to run some simple queries, so select SQL.

Create Apache Spark Notebook

A notebook is a document where you can enter some code, run it, and the results will be shown in the notebook. It’s perfect for data exploration and experimentation because you can go back and see all of the things you tried and what the results were in each case. It’s essentially an interactive document that contains live code. You can even run some of the code again if you want.

Alright, let’s run a query. Since we haven’t uploaded any data, you might be wondering what we’re going to run a query on. Well, there’s actually lots of data we can query even without uploading any of it. Azure Databricks is integrated with many other Azure services, including SQL Database, Data Lake Storage, Blob Storage, Cosmos DB, Event Hubs, and SQL Data Warehouse, so you can access data in any of those using the appropriate connector. However, we don’t even need to do that because Databricks also includes some sample datasets.

To see which datasets are available, you can run a command in the command box. There’s one catch, though. When we created this notebook, we selected SQL as the language, so whatever we type in this command box will be interpreted as SQL. The exception is if you start the command with a percent sign and the name of another language. For example, if you wanted to run some Python code in a SQL notebook, you would start it with %python and it would be interpreted properly.
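For instance, a cell like the following would run as Python even though this notebook’s default language is SQL (the print statement is just a stand-in to show the idea):

%python
print("This cell runs as Python, not SQL")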

Similarly, if you want to run a filesystem command, you just need to start it with %fs. To see what’s in the filesystem for this workspace, type:

%fs ls

The ls stands for list and will be familiar if you’ve used Linux or Unix.

To execute the command, you can either press Shift+Enter, or you can select Run cell from the cell’s menu. I recommend Shift+Enter because not only is it faster than going to the menu, but it also automatically brings up another cell for you so you can type the next command.

Cloud Academy Running Spark on Azure Databricks

You’ll notice that all of the paths start with dbfs:/ because they’re in the Databricks File System (DBFS), a distributed filesystem installed on the cluster. You don’t have to worry about losing data when you shut down the cluster, though, because DBFS is persisted in Blob Storage.

The sample datasets are in the databricks-datasets folder. To list them, type:

%fs ls databricks-datasets

The one we’re going to use shows what the prices were for various personal computers in the mid-1990s. Use this command to see what’s in it: 

%fs head --maxBytes=1000 dbfs:/databricks-datasets/Rdatasets/data-001/csv/Ecdat/Computers.csv

SQL Query Code

The head command shows the first part of a file, up to the maxBytes you specify, which is 1,000 bytes in this case. If you don’t specify maxBytes, it defaults to 65,536 bytes.
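For instance, if you ran the same command without the flag, you’d get up to the default 65,536 bytes of the file:

%fs head dbfs:/databricks-datasets/Rdatasets/data-001/csv/Ecdat/Computers.csv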

The first line contains the header, which shows what’s in each column, such as the price of the computer, its processor speed, and the size of its hard drive, RAM, and screen.

To run a query on this data, we need to load it into a table. If you’re familiar with Spark, a Databricks table is essentially a persisted Apache Spark DataFrame; you can also think of it as being like a table in a relational database.

To load the CSV file into a table, run these commands:

DROP TABLE IF EXISTS computers;
CREATE TABLE computers
USING csv
OPTIONS (path "/databricks-datasets/Rdatasets/data-001/csv/Ecdat/Computers.csv", header "true", inferSchema "true")

The first command checks whether a table named computers already exists, and if it does, it drops (that is, deletes) it. Strictly speaking, you don’t need this yet because you haven’t created any tables, but it’s a good habit. If you ran the code in this cell again without it, the table would already exist and the CREATE TABLE command would fail.

The second command creates the table. Note that it says there’s a header in the file. By setting header to true, we let Spark take the column names from the first row, so we don’t have to specify them ourselves. The inferSchema option is even more useful: it figures out the data type of each column, so we don’t have to specify those either.
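If you’re curious about which types inferSchema chose, one quick way to check is to describe the table in another cell:

DESCRIBE TABLE computers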

To see what’s in the table, run a SQL query. The simplest command is:

select * from computers

If this were a really big table, then you might not want to run a select * on it since that reads in the entire table, but it’s okay in this case.
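On a larger table, you’d typically add a LIMIT clause so only a handful of rows come back. For example:

select * from computers limit 10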

SQL Query Results

This is the same data we saw when we ran the head command, but now it’s in a nicely formatted table.
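From here, you can query the table like any other SQL table. For example, assuming the column names from the header (price, speed, ram, and so on), a query like this would show the ten most expensive machines:

select price, speed, ram from computers order by price desc limit 10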

Learn more

Check out the full video-based Running Spark on Azure Databricks course on Cloud Academy.

Apache Spark Course

Written by Guy Hummel

Guy is a certified cloud architect on all three of the major public cloud platforms: AWS, Azure, and Google Cloud Platform. He launched his first training website in 1995 and he's been helping people learn IT technologies ever since. Guy’s passion is making complex technology easy to understand.

