Big Data Storage
The course is part of this learning path
Course two of the Big Data Specialty learning path focuses on storage. In this course, we outline the key storage options for big data solutions. We examine data access and retrieval patterns, and the use cases that suit particular data patterns, such as evaluating mechanisms for the capture, update, and retrieval of catalog entries. We learn how to determine appropriate data structures and storage formats, and how to determine and optimize the operational characteristics of a Big Data storage solution.
Amazon Aurora is now MySQL and PostgreSQL-compatible.
- Recognize and explain big data access and retrieval patterns.
- Recognize and explain appropriate data structure and storage formats.
- Recognize and explain the operational characteristics of a Big Data storage solution.
This course is intended for students looking to increase their knowledge of the AWS storage options available for Big Data solutions.
While there are no formal prerequisites for this course, students will benefit from having a basic understanding of cloud storage solutions. Our courses on AWS storage fundamentals and AWS database fundamentals will give you a solid foundation for this course.
This Course Includes
- Over 90 minutes of high-definition video.
- Real-life scenarios using AWS reference architectures.
What You'll Learn
- Course Intro: What to expect from this course.
- Amazon DynamoDB: How you can use Amazon DynamoDB in Big Data scenarios.
- Amazon DynamoDB Reference Architecture: A real-life model using DynamoDB.
- Amazon Relational Database Service: A look at how Amazon RDS works and how you can use it in Big Data scenarios.
- Amazon Relational Database Service Reference Architecture: A real-life model using RDS.
- Amazon Redshift: An overview of how Amazon Redshift works and how you can use it in Big Data scenarios.
- Amazon Redshift Reference Architecture: A real-life model using Redshift.
Hello, and welcome to another Big Data on AWS course from Cloud Academy. In this course, we focus on Amazon Big Data services, which are designed to store data. This course is part of a larger learning path that covers the broad range of big data services available from AWS.
This course assumes you have a good understanding of cloud computing in AWS and that you are proficient with provisioning and using services within AWS. Ideally, you will also have some background and understanding of big data. There are a large number of AWS big data services available, and this course is designed to provide the initial core concepts required for each of these services and to assist people in passing the AWS Big Data Specialty Exam.
A little bit about me. My name is Shane Gibson, and I've worked in the area of data and business intelligence for over 20 years, and for the last three years, I've been focusing on how we can use Agile processes and cloud computing technologies to accelerate the delivery of data and content to our users. I was born and still live in New Zealand. I love craft beer and good coffee, and you can learn more about me by following either my Twitter or my LinkedIn.
At the end of this course, you'll be able to describe in detail how Amazon Big Data services can be used to store data within a Big Data solution. In this Big Data on AWS learning path, we cover the many AWS big data services that can be used to collect, store, process, analyze, visualize, and secure big data. In this course, we provide three modules covering the Amazon big data storage services: Amazon DynamoDB, Amazon RDS, and Amazon Redshift.
Each of these three big data storage services can be used on its own or in combination with the others to provide storage capabilities for your big data solution. Each of these storage services has specific strengths that make it more suitable for storing different types and volumes of data, and we discuss these as we progress through the course. In each of the modules, we cover which processing and storage patterns the service fits within, the architecture of the service, as well as the core concepts that will help you understand that service in detail.
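As a rough illustration of these relative strengths, the sketch below maps a coarse workload description to one of the three services. The workload categories and the decision rules are illustrative assumptions for this course, not AWS guidance; real architecture decisions weigh many more factors (cost, latency, scale, schema flexibility).

```python
def suggest_storage_service(workload: str, needs_sql: bool) -> str:
    """Illustrative heuristic only: pick one of the three storage
    services covered in this course based on a coarse workload label."""
    if workload == "key-value" and not needs_sql:
        # DynamoDB: managed NoSQL with fast key-value access at scale
        return "Amazon DynamoDB"
    if workload == "analytics" and needs_sql:
        # Redshift: columnar data warehouse for large analytic queries
        return "Amazon Redshift"
    # RDS: managed relational engines for transactional (OLTP) workloads
    return "Amazon RDS"

print(suggest_storage_service("key-value", needs_sql=False))      # Amazon DynamoDB
print(suggest_storage_service("analytics", needs_sql=True))       # Amazon Redshift
print(suggest_storage_service("transactional", needs_sql=True))   # Amazon RDS
```

In practice these services are often combined in a single solution, for example DynamoDB for hot operational data, RDS for transactional records, and Redshift for historical analytics, which is the kind of pattern the reference architectures in this course explore.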
We also cover the service limits for each service where applicable. At the end of the three modules, we will have a wrap-up with a quick overview of a reference architecture which uses these three services. So let's begin and find out how we can store big data using the Amazon Web Services capabilities.
Shane has been immersed in the world of data, analytics, and business intelligence for over 20 years, and for the last few years he has been focusing on how Agile processes and cloud computing technologies can be used to accelerate the delivery of data and content to users.
He is an avid user of the AWS cloud platform to help deliver this capability with increased speed and decreased costs. In fact, it's often hard to shut him up when he is talking about the innovative solutions that AWS can help you to create, or how cool the latest AWS feature is.
Shane hails from the far end of the earth, Wellington, New Zealand, a place famous for Hobbits and kiwifruit. However, you're more likely to see him partake of a good long black or an even better craft beer.