
DynamoDB Reference Architecture

This course is part of these learning paths:

  • Solutions Architect – Professional Certification Preparation for AWS
  • AWS Big Data – Specialty Certification Preparation for AWS
Overview

Difficulty: Intermediate
Duration: 1h 47m
Students: 929
Rating: 4/5

Description

Course two of the Big Data Specialty learning path focuses on storage. In this course we outline the key storage options for big data solutions, examine data access and retrieval patterns, and look at some of the use cases that suit particular data patterns, such as evaluating mechanisms for the capture, update, and retrieval of catalog entries. We learn how to determine appropriate data structures and storage formats, and how to determine and optimize the operational characteristics of a Big Data storage solution.

Intended audience:

This course is intended for students looking to increase their knowledge of the AWS storage options available for Big Data solutions.

Prerequisites: While there are no formal prerequisites, students will benefit from having a basic understanding of cloud storage solutions.

Recommended courses: Storage Fundamentals, Database Fundamentals

Learning objectives:

• Recognize and explain big data access and retrieval patterns.

• Recognize and explain appropriate data structure and storage formats.

• Recognize and explain the operational characteristics of a Big Data storage solution.

This Course Includes:

Over 90 minutes of high-definition video.

Real-life scenarios using AWS reference architectures.

What You'll Learn:

  • Course Intro: What to expect from this course.
  • Amazon DynamoDB: How you can use Amazon DynamoDB in Big Data scenarios.
  • Amazon DynamoDB Reference Architecture: A real-life model using DynamoDB.
  • Amazon Relational Database Service: A look at how Amazon RDS works and how you can use it in Big Data scenarios.
  • Amazon Relational Database Service Reference Architecture: A real-life model using RDS.
  • Amazon Redshift: An overview of how Amazon Redshift works and how you can use it in Big Data scenarios.
  • Amazon Redshift Reference Architecture: A real-life model using Redshift.

Transcript

And so, just before we close out this module on Amazon DynamoDB, let's have a quick look at an example architecture from AWS. In this scenario, we're seeing a model where we can use DynamoDB as part of the Big Data services to process time series data. What we're looking at is data being streamed from sensors, such as power meters or industrial meters, or even satellites. The data is streamed in using Amazon Simple Queue Service (SQS), one of the many Amazon services, and effectively lands in the DynamoDB database.
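The SQS-to-DynamoDB ingest path just described can be sketched with boto3. This is a hedged illustration, not the course's code: the queue URL, the table name, and the message fields (sensor_id, timestamp, value) are all illustrative assumptions, as is the SensorId/Timestamp key schema.

```python
# Hypothetical sketch of the SQS -> DynamoDB ingest path described above.
# Queue URL, table name, and message fields are illustrative assumptions.
import json


def reading_to_item(body: str) -> dict:
    """Convert a JSON sensor message into a DynamoDB item.

    Partition key = sensor id, sort key = reading timestamp, so each
    sensor's time series is stored together and ordered by time.
    """
    reading = json.loads(body)
    return {
        "SensorId": {"S": reading["sensor_id"]},
        "Timestamp": {"N": str(reading["timestamp"])},
        "Value": {"N": str(reading["value"])},
    }


def drain_queue(queue_url: str, table_name: str) -> None:
    """Long-poll the queue and write each reading into DynamoDB."""
    import boto3  # imported here so the pure helper above needs no AWS SDK

    sqs = boto3.client("sqs")
    ddb = boto3.client("dynamodb")
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            ddb.put_item(TableName=table_name, Item=reading_to_item(msg["Body"]))
            sqs.delete_message(
                QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
            )
```

The long-polling `WaitTimeSeconds=20` avoids hammering the queue when sensor traffic is bursty; deleting each message only after the `put_item` succeeds gives at-least-once delivery into the table.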

And at the same time, we could be loading data in from applications such as SCADA systems, perhaps a flow of samples to be joined and processed. From there, we can move the data through into other services such as Amazon Elastic MapReduce (EMR) and on into Amazon Redshift. So you can see here how DynamoDB acts as a midpoint, right in the middle, receiving that data before streaming it on to other services that can use it.
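The DynamoDB-to-Redshift hop mentioned above has a direct mechanism: Redshift's COPY command can read straight from a DynamoDB table. A minimal sketch, assuming hypothetical table names and a hypothetical IAM role ARN; the READRATIO clause caps how much of the DynamoDB table's provisioned read throughput the load may consume.

```python
# Hedged sketch: build the Redshift COPY statement that loads a DynamoDB
# table directly into Redshift. The table names and IAM role ARN are
# illustrative assumptions; the resulting string would be executed
# through any Redshift/PostgreSQL client.
def build_copy_statement(redshift_table: str, dynamo_table: str,
                         iam_role_arn: str, read_ratio: int = 50) -> str:
    return (
        f"COPY {redshift_table} "
        f"FROM 'dynamodb://{dynamo_table}' "
        f"IAM_ROLE '{iam_role_arn}' "
        f"READRATIO {read_ratio};"
    )
```

A READRATIO of 50 leaves half the table's read capacity for the live ingest path, which matters when the copy runs while sensors are still streaming in.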

As we've seen, Amazon DynamoDB is a NoSQL database in the cloud, and it's suitable for anybody who needs a reliable, fully managed NoSQL solution. The DynamoDB service is designed to provide automated storage scaling and low latency, and is particularly useful when your application must read and store massive amounts of data and you need the speed and reliability behind that. So that's the end of the DynamoDB module. I look forward to talking to you soon.
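To make the low-latency read point concrete, here is a hedged sketch of the read side under the same assumed SensorId/Timestamp key schema: a single DynamoDB Query retrieves one sensor's readings for a time window, returned in sort-key (timestamp) order. Table and attribute names are illustrative assumptions.

```python
# Hedged sketch: query one sensor's readings for a time window from a
# DynamoDB table keyed on SensorId (partition) and Timestamp (sort).
# Table and attribute names are illustrative assumptions.
def build_window_query(table_name: str, sensor_id: str,
                       start_ts: int, end_ts: int) -> dict:
    """Build kwargs for a DynamoDB Query over one sensor's time window."""
    return {
        "TableName": table_name,
        # TIMESTAMP is a DynamoDB reserved word, hence the #ts placeholder.
        "KeyConditionExpression":
            "SensorId = :sid AND #ts BETWEEN :start AND :end",
        "ExpressionAttributeNames": {"#ts": "Timestamp"},
        "ExpressionAttributeValues": {
            ":sid": {"S": sensor_id},
            ":start": {"N": str(start_ts)},
            ":end": {"N": str(end_ts)},
        },
    }


def fetch_window(table_name: str, sensor_id: str,
                 start_ts: int, end_ts: int) -> list:
    import boto3  # imported here so the builder above needs no AWS SDK

    ddb = boto3.client("dynamodb")
    resp = ddb.query(
        **build_window_query(table_name, sensor_id, start_ts, end_ts)
    )
    return resp["Items"]
```

Because the query touches only one partition key and a contiguous range of the sort key, it reads exactly the items it returns, which is what keeps latency low at scale.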

About the Author

Students: 1,380
Courses: 4

Shane has been immersed in the world of data, analytics, and business intelligence for over 20 years, and for the last few years he has been focusing on how Agile processes and cloud computing technologies can be used to accelerate the delivery of data and content to users.

He is an avid user of the AWS cloud platform to help deliver this capability with increased speed and decreased costs. In fact, it's often hard to shut him up when he is talking about the innovative solutions that AWS can help you create, or how cool the latest AWS feature is.

Shane hails from the far end of the earth, Wellington, New Zealand, a place famous for Hobbits and kiwifruit. However, you're more likely to see him partake of a good long black or an even better craft beer.