Apache Spark is an open-source framework for big data processing. It was developed as a replacement for Apache Hadoop’s MapReduce framework. Both Spark and MapReduce process data on compute clusters, but one of Spark’s big advantages is its in-memory processing, which can be orders of magnitude faster than MapReduce’s disk-based processing.
In 2013, the creators of Spark founded a company called Databricks. Their product, also named Databricks, is a cloud-based implementation of Spark with a user-friendly interface for running code interactively on clusters.
Microsoft has partnered with Databricks to bring its product to the Azure platform. The result is a service called Azure Databricks. One of the biggest advantages of using the Azure version of Databricks is that it’s integrated with other Azure services. For example, you can train a machine learning model on a Databricks cluster and then deploy it using Azure Machine Learning Services.
In this course, we will start by showing you how to set up a Databricks workspace and a cluster. Next, we’ll go through the basics of how to use a notebook to run interactive queries on a dataset. Then you’ll see how to run a Spark job on a schedule.
- Create a Databricks workspace, cluster, and notebook
- Run code in a Databricks notebook either interactively or as a job
- People who want to use Azure Databricks to run Apache Spark for analytics
- Prior experience with Azure and at least one programming language
The GitHub repository for this course is at https://github.com/cloudacademy/azure-databricks.
I hope you enjoyed learning about Azure Databricks. Let’s do a quick review of what you learned.
Apache Spark is an open-source framework for doing big data processing. Azure Databricks is a managed implementation of Spark in the cloud.
A Databricks workspace is where you store your notebooks and other related items. Although each notebook has a default programming language, you can run code in other languages by starting a cell with a percent sign followed by the name of the language, such as %sql or %scala.
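For instance, in a notebook whose default language is Python, a cell can be switched to another language by putting the magic command on its first line. This is a sketch rather than runnable standalone code, since it only works inside a Databricks notebook, and the table name `trips` is a hypothetical example:

```
# A regular cell runs in the notebook's default language (Python here)
df = spark.sql("SELECT * FROM trips")

%sql
-- A cell that starts with %sql runs as SQL, even in a Python notebook
SELECT COUNT(*) FROM trips

%scala
// A cell that starts with %scala runs as Scala
val rowCount = spark.table("trips").count()
```

Each snippet above represents a separate notebook cell; the magic command must be the very first line of its cell.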
DBFS (the “Databricks File System”) is a distributed filesystem that’s available on every Databricks cluster and backed by Azure Storage.
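From a notebook, you can browse DBFS with the `dbutils` utility or the `%fs` magic command. A minimal sketch, assuming the notebook is attached to a running cluster (`/FileStore` is a standard DBFS directory):

```
# List the contents of a DBFS directory from a Python cell
display(dbutils.fs.ls("/FileStore"))

%fs ls /FileStore
```

The `%fs` cell is shorthand for the same `dbutils.fs` call.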
A job is a way of running an entire notebook at scheduled times. It also keeps a record of previous runs. In most cases, it’s less expensive to run a job on a new cluster than on an existing cluster because you get charged the automated workload price, which is less than the interactive price. It’s also usually a good idea to select the autoscaling option so you don’t have to guess how many nodes the cluster should have.
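As a sketch, a scheduled job defined through the Databricks Jobs API might look like the following JSON; the job name, notebook path, cron expression, and the bracketed placeholders are all hypothetical:

```json
{
  "name": "nightly-etl",
  "schedule": {
    "quartz_cron_expression": "0 0 2 * * ?",
    "timezone_id": "UTC"
  },
  "tasks": [
    {
      "task_key": "run-notebook",
      "notebook_task": { "notebook_path": "/Users/someone@example.com/etl-notebook" },
      "new_cluster": {
        "spark_version": "<runtime-version>",
        "node_type_id": "<vm-size>",
        "autoscale": { "min_workers": 2, "max_workers": 8 }
      }
    }
  ]
}
```

Because this job runs on a new cluster rather than an existing interactive one, it’s billed at the automated workload rate, and the autoscale block lets Databricks choose the number of workers between the two bounds.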
To learn more about Azure Databricks, you can read Microsoft’s documentation. Please give this course a rating, and if you have any questions or comments, please let us know. Thanks and have fun with Azure Databricks!
Guy launched his first training website in 1995 and he's been helping people learn IT technologies ever since. He has been a sysadmin, instructor, sales engineer, IT manager, and entrepreneur. In his most recent venture, he founded and led a cloud-based training infrastructure company that provided virtual labs for some of the largest software vendors in the world. Guy’s passion is making complex technology easy to understand. His activities outside of work have included riding an elephant and skydiving (although not at the same time).