Introduction to Google Cloud Dataflow

Conclusion

Contents

  • Introduction
      1. Introduction (3m 34s, preview)
  • More Complex Pipelines
  • Dealing with Time
      6. Windowing (6m 13s)
      7. Streaming (12m 45s)
  • Conclusion

This course is part of these learning paths:

  • Google Professional Cloud Developer Exam Preparation
  • Google Data Engineer Exam – Professional Certification Preparation
Overview

Difficulty: Intermediate
Duration: 1h 9m
Students: 1698
Rating: 4.8/5

Description

Most organizations are already gathering and analyzing big data, or plan to do so in the near future. One common way to process huge datasets is to use Apache Hadoop or Spark. Google even has a managed service for hosting Hadoop and Spark, called Cloud Dataproc. So why do they also offer a competing service called Cloud Dataflow? Well, Google probably has more experience processing big data than any other organization on the planet, and now they’re making their data processing software available to their customers. Not only that, but they’ve also open-sourced the software as Apache Beam.

Cloud Dataflow is a serverless data processing service that runs jobs written using the Apache Beam libraries. When you run a job on Cloud Dataflow, it spins up a cluster of virtual machines, distributes the tasks in your job to the VMs, and dynamically scales the cluster based on how the job is performing. It may even change the order of operations in your processing pipeline to optimize your job.

In this course, you will learn how to write data processing programs using Apache Beam and then run them using Cloud Dataflow. You will also learn how to run both batch and streaming jobs.
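To give a feel for the map-and-aggregate style of programming the course teaches, here is a minimal word-count sketch in plain Python. This is not the Beam API itself (Beam expresses these steps as transforms such as FlatMap and Count applied to PCollections); the function name and data are illustrative only.

```python
# Plain-Python sketch of the "map, then aggregate" pattern that a Beam
# pipeline expresses with transforms. NOT the Beam API -- just the same
# dataflow in miniature.
from collections import Counter

def run_word_count(lines):
    # "Map" step: split each input line into lowercase words.
    words = [w.lower() for line in lines for w in line.split()]
    # "Aggregate" step: count occurrences of each word.
    return dict(Counter(words))

counts = run_word_count(["the cat sat", "the dog sat"])
# counts == {"the": 2, "cat": 1, "sat": 2, "dog": 1}
```

In a real Beam pipeline, each of these steps would be a transform that Cloud Dataflow can distribute across a cluster of workers.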

This is a hands-on course where you can follow along with the demos using your own Google Cloud account or a trial account.

Learning Objectives

  • Write a data processing program in Java using Apache Beam
  • Use different Beam transforms to map and aggregate data
  • Use windows, timestamps, and triggers to process streaming data
  • Deploy a Beam pipeline both locally and on Cloud Dataflow
  • Output data from Cloud Dataflow to Google BigQuery
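One of the objectives above is using windows and timestamps to process streaming data. As a conceptual preview, here is a plain-Python sketch of fixed (tumbling) windows: each timestamped element is assigned to the window containing its timestamp, then elements are grouped per window. This is only an illustration of the idea, not Beam's actual FixedWindows implementation; the function name and sample events are made up.

```python
# Conceptual sketch of fixed (tumbling) windowing: an element with
# timestamp t goes into the window starting at t - (t % window_size).
# Illustrative only -- not the Beam API.
from collections import defaultdict

def fixed_windows(events, window_size):
    """events: iterable of (timestamp_seconds, value) pairs."""
    windows = defaultdict(list)
    for ts, value in events:
        window_start = ts - (ts % window_size)  # assign to a window
        windows[window_start].append(value)
    return dict(windows)

# Three events with 60-second windows: two land in [0, 60), one in [60, 120).
result = fixed_windows([(5, "a"), (42, "b"), (70, "c")], 60)
# result == {0: ["a", "b"], 60: ["c"]}
```

Streaming adds the wrinkle that events can arrive late or out of order, which is where the triggers and watermarks covered in the Streaming lesson come in.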

The GitHub repository is at https://github.com/cloudacademy/beam.

 

Transcript

I hope you enjoyed learning how to use Cloud Dataflow. Now you know how to use PCollections, pre-written, custom, and composite transforms, runners, windowing with timestamps and filters, streaming with triggers and watermarks, and BigQuery I/O.

To learn more about Cloud Dataflow, you can read Google’s documentation as well as the documentation at beam.apache.org. Also, watch for new big data courses on Cloud Academy, because we’re always publishing new courses.

If you have any questions or comments, please let me know in the Comments tab below this video. Thanks and keep on learning!

About the Author
Students: 55415
Courses: 61
Learning paths: 63

Guy launched his first training website in 1995 and he's been helping people learn IT technologies ever since. He has been a sysadmin, instructor, sales engineer, IT manager, and entrepreneur. In his most recent venture, he founded and led a cloud-based training infrastructure company that provided virtual labs for some of the largest software vendors in the world. Guy’s passion is making complex technology easy to understand. His activities outside of work have included riding an elephant and skydiving (although not at the same time).