How can Azure HDInsight solve your big data challenges?
Big data refers to large volumes of fast-moving data, in any format, that your traditional data processing systems can't handle. In other words, it refers to data characterized by Volume, Variety, and Velocity (commonly called the "3 Vs" in big data circles). Data can come from just about anywhere: application logs, sensor readings, archived images and videos, streaming sources like Twitter trends, weather forecast data, astronomical observations, biological genomes, and almost anything else generated by humans or machines. Handling data on this scale is a relatively new problem. Azure's HDInsight is, appropriately, a new tool that aims to address this problem.
New Challenges and New Solutions: Coping with Variety and Velocity
Whether human-readable or not, managing fast-moving data generated at massive scale, while maintaining data integrity, requires a different kind of processing mechanism than we would have used for a traditional database or data-mining workload. Older solutions handled Volume well enough, but Variety and Velocity are relatively new problems. Since not everyone can afford supercomputers, brute-force approaches aren't practical. This challenge inspired the development of Hadoop, which made it possible to process big data on industry-standard servers, providing reliable and scalable parallel and distributed computing on smaller budgets.
Without diving too deeply into the technologies of big data management, it's worth at least mentioning the biggest and most important players:
- Hadoop (batch processing),
- NoSQL (HBase, MongoDB, and Cassandra for distributed non-ACID databases),
- Storm and Kafka (real time streaming data),
- Spark (in-memory distributed processing), and
- Pig scripts and Hive queries.
Besides the standard Apache Hadoop framework, many vendors provide customized Hadoop distributions, like Cloudera, Hortonworks, MapR, and Greenplum. Many established cloud vendors provide some variety of Hadoop Platform as a Service (PaaS), like Amazon Elastic MapReduce (EMR) on AWS and Google BigQuery.
The purpose of this post is to introduce you to Azure HDInsight, which is based on the Hortonworks Data Platform (HDP). The flexibility of the Azure cloud combined with the innovation of Hortonworks makes Azure HDInsight an interesting and productive platform for processing your big data.
Azure’s Big Data Solutions
Azure provides various big data processing services. The most popular of them is HDInsight, which is an on-demand Hadoop platform powered by Hortonworks Data Platform (HDP). Besides HDInsight (on which we’re going to focus our attention in this post) Azure also offers:
- Data Lake Analytics
- Data Factory
- SQL Data Warehouse
- Data Catalog
Azure HDInsight: A Comprehensive Managed Apache Hadoop, Spark, R, HBase, and Storm Cloud Service
As we mentioned, Azure provides a Hortonworks distribution of Hadoop in the cloud. "Hadoop distribution" is a broad term for solutions that include a MapReduce and HDFS platform along with a fuller stack featuring Spark, NoSQL stores, Pig, Sqoop, Ambari, and Zookeeper. Azure HDInsight provides all of those technologies as part of its big data service, and also integrates with business intelligence (BI) tools like Excel, SQL Server Analysis Services, and SQL Server Reporting Services.
As a distribution, HDInsight comes in two flavors: Ubuntu and Windows Server 2012. Users can manage Linux-based HDInsight clusters using Apache Ambari and, for Windows users, Azure provides a cluster dashboard.
Cluster configuration for HDInsight is categorized into four different offerings:
1. Hadoop Cluster for query and batch processing with MapReduce and HDFS.
Hadoop clusters are made up of two types of nodes: head nodes, which run the Name Node and Job Tracker for a cluster (two nodes per cluster minimum); and worker nodes, also called data nodes, which are a Hadoop cluster's true workhorses. The minimum number of worker nodes is one, and you can scale worker nodes up or down according to need.
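The programming model those worker nodes execute can be sketched in a few lines. The following is a toy, single-process illustration in plain Python (not Hadoop's actual API): map emits (key, value) pairs, a shuffle groups them by key, and reduce aggregates each group. Real Hadoop runs these phases in parallel across the worker nodes, reading from and writing to HDFS.

```python
from collections import defaultdict

# Toy word count in the MapReduce style: map, shuffle, reduce.

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    """Shuffle: group all values by key, as Hadoop does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values; here, sum the counts."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data on azure", "big data with hadoop"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])    # 2
print(counts["azure"])  # 1
```

Because each map call and each reduce group is independent, the framework can split the work across as many worker nodes as you provision, which is exactly why scaling worker nodes up or down changes throughput.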
2. HBase for NoSQL-based workloads. NoSQL describes a class of data stores and processing engines that relax the traditional ACID guarantees, making trade-offs instead along the lines described by the CAP theorem.
3. Storm for streaming data processing (remember the Twitter or sensor data use cases).
4. HDInsight Spark for in-memory parallel processing for big data analytics. Use cases for HDInsight Spark include interactive data analysis and BI, iterative machine learning, and streaming and real-time data analysis. In recent years, Apache Spark has been overtaking Hadoop MapReduce for data analytics because of its ability to handle complex algorithms, its faster in-memory processing, and its support for graph computation.
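Spark's edge on iterative workloads comes from keeping the working set in memory between passes, where a MapReduce job would re-read HDFS on every iteration. As a toy, Spark-free sketch of that pattern (plain Python with an illustrative gradient-descent loop, not the actual pyspark API): the dataset is loaded once and then scanned repeatedly, which is precisely the access pattern Spark distributes and caches across executor memory.

```python
def fit_slope(points, lr=0.1, rounds=100):
    """Fit y ~ w * x by gradient descent on mean squared error.
    Each round scans the whole (cached, in-memory) dataset once;
    in Spark this scan would be distributed across executors with
    `points` held in memory between rounds instead of re-read from disk."""
    w = 0.0
    for _ in range(rounds):
        grad = sum(2 * x * (w * x - y) for x, y in points) / len(points)
        w -= lr * grad
    return w

data = [(1, 2), (2, 4), (3, 6)]   # imagine this cached across the cluster
print(round(fit_slope(data), 3))  # converges to 2.0, since y = 2x
```

Iterative machine-learning algorithms like this, which revisit the same data dozens or hundreds of times, are where in-memory processing pays off most.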
You can also customize a cluster with scripts that install additional components, such as Spark, Solr, R, Giraph, and Hue. HDInsight clusters also include the following Apache components:
- Ambari for cluster provisioning, management, and monitoring.
- Hive – an SQL-like query language that runs against data on HDFS; queries are converted into MapReduce jobs for processing.
- Pig for data processing using user-created scripts.
- Zookeeper for cluster co-ordination and management in a distributed environment.
- Oozie for workflow management.
- Mahout for machine learning.
- Sqoop (SQL-to-Hadoop) for importing and exporting data between Hadoop and SQL storage.
- Tez – a successor to MapReduce that runs on YARN (Yet Another Resource Negotiator) and executes complex directed acyclic graphs (DAGs) of processing tasks.
- Phoenix – a SQL layer over HBase for querying and analyzing data stored in HBase. Unlike Hive, Phoenix compiles queries into native NoSQL (HBase) API calls rather than MapReduce jobs.
- HCatalog, which provides a relational view of data in HDFS. It is often used with Hive.
- Avro – a data serialization system (with .NET support via the Microsoft Avro Library).
At the time of writing, the current version of HDInsight is 3.4; the versions of the individual components in the stack are listed in the official Azure HDInsight documentation.
Azure HDInsight offers the big data management features the enterprise cloud needs, and has become one of the most talked-about Hadoop distributions in use. Users can quickly scale clusters up or down according to their needs, pay only for the resources they actually use, and avoid both the capital costs of provisioning complex hardware and the professionals needed to maintain it. HDInsight lets you crunch all kinds of data, structured or not, without the burden of configuring a Hadoop cluster yourself.
If you’d like to learn more about Big Data and Hadoop, why not take our Analytics Fundamentals for AWS course? We’re currently building some great Azure content, but in the meantime this course provides a solid foundation in Big Data analytics for AWS. Check it out today!