Designing Data Flows in Azure
Data Flow Basics
Designing a Data Flow Solution
The course is part of this learning path
This Designing Data Flows in Azure course will enable you to implement best practices for data flows in your own team. Starting from the basics, you will learn how data flows work from beginning to end. Though we do recommend some prior familiarity with data flows and how they are used, this course contains demonstration lectures to make sure you have fully come to grips with the concepts. By better understanding the key components available in Azure for designing and deploying efficient data flows, you will enable your organization to reap the benefits.
This course is made up of 19 comprehensive lectures including an overview, demonstrations, and a conclusion.
Learning Objectives
- Review the features, concepts, and requirements that are necessary for designing data flows
- Learn the basic principles of data flows and common data flow scenarios
- Understand how to implement data flows within Microsoft Azure
Intended Audience
- IT professionals who are interested in obtaining an Azure certification
- Those looking to implement data flows within their organizations
Prerequisites
- A basic understanding of data flows and their uses
Related Training Content
For more training content related to this course, visit our dedicated MS Azure Content Training Library.
In this next lesson, we're going to touch on the data lifecycle. It's important to understand the data lifecycle because the different stages of the lifecycle will affect data flow. Overall, there are really five key stages in the data lifecycle. You have the initial collection of data, the preparation of that collected data, the ingestion of the data into storage, processing or transformation of the data into usable form, and then you have analysis of the transformed data. During collection, data is acquired from other processes or maybe even user input. Such data might be in varied formats, or it may be unstructured. Preparation of the collected data may or may not happen next, depending on the process. In cases where ETL is in play, and data needs to be transformed before it is ingested or loaded, there is certainly a preparation step that occurs. Once data has been collected and prepared, it needs to be ingested into storage.
In the context of this discussion, the data would typically be ingested into cloud storage. Once the data has been ingested into storage, it needs to be processed or, if ELT is being used, transformed into a usable format. Finally, once the data has progressed through all previous stages of the lifecycle, it can be analyzed and interpreted. With the typical data lifecycle in mind, you then need to think through a few questions as you design a data flow that encompasses this data. You need to think about where the data is coming from, and in what format it's arriving. You need to determine whether it needs to be transformed and, if so, how that transformation should be done. You need to think about the ultimate destination of this data and its analysis. Where is the data going, and what questions does it need to answer? Only after considering all these concepts can you begin to formulate a data flow plan and come up with data flow requirements.
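The five lifecycle stages described above can be sketched as a simple pipeline. This is a minimal illustration only, not Azure SDK code: every function name and data value below is a hypothetical stand-in (an in-memory list plays the role of cloud storage), chosen just to show how collection, preparation, ingestion, processing, and analysis hand data from one stage to the next.

```python
# Hypothetical sketch of the five data lifecycle stages:
# collect -> prepare -> ingest -> process/transform -> analyze.

def collect():
    # Collection: data arrives from other processes or user input,
    # often in varied or unstructured formats (here, raw CSV-like strings).
    return ["2024-01-01,  42 ", "2024-01-02,17", "bad-row", "2024-01-03, 9"]

def prepare(raw_rows):
    # Preparation (the transform step of ETL, before loading):
    # drop malformed rows and normalize whitespace and types.
    cleaned = []
    for row in raw_rows:
        parts = [p.strip() for p in row.split(",")]
        if len(parts) == 2 and parts[1].isdigit():
            cleaned.append((parts[0], int(parts[1])))
    return cleaned

def ingest(rows, store):
    # Ingestion: load the prepared records into storage.
    # A plain list stands in for cloud storage in this sketch.
    store.extend(rows)
    return store

def process(store):
    # Processing/transformation: shape stored data into a usable form,
    # here a date -> value mapping.
    return {date: value for date, value in store}

def analyze(table):
    # Analysis: answer a question the data flow was designed for,
    # e.g. "what is the total across all days?"
    return sum(table.values())

store = []
table = process(ingest(prepare(collect()), store))
print(analyze(table))  # 42 + 17 + 9 = 68
```

In an ELT variant, `prepare` would shrink or disappear and the heavy transformation would instead happen inside `process`, after the raw data is already in storage.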
About the Author
Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skillset that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.
In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.
In his spare time, Tom enjoys camping, fishing, and playing poker.