Cloud Data Warehouse with AWS Redshift

Amazon Redshift is a fully managed, petabyte-scale cloud data warehouse service offered by Amazon Web Services. It removes the months of effort otherwise required to set up a data warehouse and to manage the associated hardware and software.
In this series of posts, we will set up a Redshift cluster, ingest some data, and play around with it. We will also take a look at some of the advanced options available, such as reading query plans to improve performance, workload management, cluster resizing, and integration with other AWS services.
Figure: Cloud Data Warehouse with Redshift (image courtesy: Amazon Web Services)

Redshift-based Cloud Data Warehouse Architecture

Let’s begin with a brief introduction to the Redshift architecture.

  • Leader Node – the leader node parses the query, develops the query execution plan, and distributes it to the compute nodes. The leader node is provisioned automatically by the service and is not billed.
  • Compute Node – this is the node that stores data and executes the query. Each compute node has its own compute, memory, and storage.
  • Client Applications – client applications can be the standard ETL, BI, and analytics tools.
  • Internal Networking – all the nodes are internally connected through a 10 Gigabit Ethernet network, enabling fast data transfer between them. The compute nodes are not exposed to client applications; client applications always talk to the leader node (see the connection sketch after this list).
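
Since every client talks to the leader node, connecting to a cluster is simply a matter of pointing a standard PostgreSQL-compatible driver at the cluster endpoint. Here is a minimal sketch using the psycopg2 driver; the endpoint, database name, and credentials are placeholders rather than values from this post.

```python
# Minimal sketch: connect to the Redshift leader node with a standard
# PostgreSQL driver. Endpoint, database, and credentials are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com",  # cluster endpoint (leader node)
    port=5439,  # Redshift's default port
    dbname="dev",
    user="awsuser",
    password="my-secret-password",
)

with conn.cursor() as cur:
    cur.execute("SELECT current_database(), version();")
    print(cur.fetchone())
```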

Here are some key features of Amazon Redshift:

Columnar Storage

In row-wise database storage (typically used in OLTP databases), data blocks store values sequentially for the consecutive columns that make up a single row. This works well for OLTP applications, where most transactions read or write most of the columns in a row. Amazon Redshift employs columnar storage, where data blocks store the values of a single column across multiple rows. This means that reading the same number of column field values for the same number of records requires fewer I/O operations than row-wise storage would. The result is increased I/O performance and savings in storage space.
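
To make the difference concrete, here is a small, self-contained illustration in plain Python (not Redshift itself) of the same three records laid out row-wise versus column-wise; a query that only needs the price column touches one contiguous block in the columnar layout instead of skipping through every record.

```python
# Illustration only: the same three records laid out row-wise vs. column-wise.
records = [
    {"order_id": 1, "customer": "alice", "price": 10.0},
    {"order_id": 2, "customer": "bob",   "price": 25.5},
    {"order_id": 3, "customer": "carol", "price":  7.2},
]

# Row-wise: the values of every column are interleaved, record by record.
row_wise = [value for rec in records for value in rec.values()]

# Column-wise: all values of a single column are stored contiguously.
column_wise = {
    col: [rec[col] for rec in records] for col in ("order_id", "customer", "price")
}

# Summing prices reads one contiguous list in the columnar layout.
print(sum(column_wise["price"]))
```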

MPP Architecture

Redshift employs a Massively Parallel Processing (MPP) architecture that distributes SQL operations across all available resources (nodes), resulting in very high query performance. A Redshift cluster comprises a leader node, provisioned automatically whenever there is more than one compute node. The leader node parses queries and develops execution plans to carry out database operations, in particular the series of steps necessary to obtain results for complex queries. It compiles code for individual elements of the execution plan and assigns that code to individual compute nodes. The compute nodes execute the compiled code and send intermediate results back to the leader node for final aggregation.
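
How evenly that work spreads across the compute nodes depends largely on how table data is distributed. As an illustrative sketch (the table and column names are made up for this example, and `conn` is the connection opened in the earlier sketch), the DDL below distributes a sales table by customer_id and keeps it sorted by sale date, so each node can work on its own slice of the data in parallel.

```python
# Sketch: distribute a table's rows across compute nodes by customer_id and
# keep them sorted by sale_date. Table and column names are illustrative;
# `conn` is the open connection from the earlier sketch.
create_sales = """
CREATE TABLE sales (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (sale_date);
"""

with conn.cursor() as cur:
    cur.execute(create_sales)
conn.commit()
```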

Scalable

The number of nodes in a Redshift cluster can be changed dynamically through the AWS Management Console or the API. We can add more nodes to the cluster for increased performance or if we need more storage, starting with a single 160 GB DW2.Large node and scaling all the way up to a petabyte. During the resize, the cluster is placed in read-only mode and all the data is copied to a new cluster. Once the new cluster is fully operational, the old cluster is terminated; the process is entirely transparent to clients, although query performance can be slower while it runs.
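
As a sketch of the API route (the cluster identifier, region, and target node count below are placeholders), a resize can be triggered with boto3 and tracked by polling the cluster status.

```python
# Sketch: resize an existing cluster through the Redshift API with boto3.
# Cluster identifier, region, and node count are placeholders.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

redshift.modify_cluster(
    ClusterIdentifier="my-redshift-cluster",
    NumberOfNodes=4,  # target node count for the resized cluster
)

# The cluster stays readable while data is copied to the new cluster;
# poll the status to see when the resize has finished.
response = redshift.describe_clusters(ClusterIdentifier="my-redshift-cluster")
print(response["Clusters"][0]["ClusterStatus"])
```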

Compression

Data stored in Redshift is compressed automatically by default. Compression reduces disk usage, and data is uncompressed after being loaded into memory during query execution. Since Redshift employs columnar storage, it can apply compression encodings that are suited to each column's data type.
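
For illustration (reusing the hypothetical sales table and open connection from the earlier sketches), Redshift can recommend an encoding per column with ANALYZE COMPRESSION, and encodings can also be declared explicitly in the table definition.

```python
# Sketch: ask Redshift for recommended compression encodings, then declare
# explicit per-column encodings. The `sales` table and `conn` connection are
# the illustrative ones from the earlier sketches.
conn.autocommit = True  # ANALYZE COMPRESSION cannot run inside a transaction block

with conn.cursor() as cur:
    # Sample the table and print the suggested encoding for each column.
    cur.execute("ANALYZE COMPRESSION sales;")
    for table, column, encoding, est_reduction in cur.fetchall():
        print(f"{column}: {encoding} (~{est_reduction}% smaller)")

    # Encodings can also be pinned column by column at table creation time.
    cur.execute("""
        CREATE TABLE sales_compressed (
            sale_id     BIGINT         ENCODE delta,
            customer_id BIGINT         ENCODE lzo,
            sale_date   DATE           ENCODE runlength,
            amount      DECIMAL(12, 2) ENCODE lzo
        );
    """)
```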

Security

Redshift comes with a number of security features, including:

  • Virtual Private Cloud: You can launch Redshift within a VPC and control access to the cluster through the virtual networking environment
  • Encryption: Data stored in Redshift can be encrypted at rest. This is configured when launching the cluster (see the sketch after this list)
  • SSL: Connections between clients and Redshift can be encrypted using SSL
  • Data in transit encryption: Redshift uses hardware-accelerated SSL when communicating with Amazon S3 or DynamoDB (during import, export, and backup)
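
As a sketch of how a few of these options fit together (all identifiers, credentials, and VPC resources below are placeholders), a cluster can be launched inside a VPC with encryption at rest enabled, and clients can then require SSL on the connection.

```python
# Sketch: launch an encrypted cluster inside a VPC, then connect over SSL.
# All identifiers, credentials, and VPC resources are placeholders.
import boto3
import psycopg2

redshift = boto3.client("redshift", region_name="us-east-1")

redshift.create_cluster(
    ClusterIdentifier="my-secure-cluster",
    NodeType="dc2.large",  # placeholder node type
    NumberOfNodes=2,
    MasterUsername="awsuser",
    MasterUserPassword="my-secret-password",
    Encrypted=True,                                     # encryption at rest
    ClusterSubnetGroupName="my-redshift-subnet-group",  # places the cluster in a VPC
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    PubliclyAccessible=False,
)

# Once the cluster is available, require SSL on the client connection.
conn = psycopg2.connect(
    host="my-secure-cluster.abc123xyz.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="my-secret-password",
    sslmode="require",
)
```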

Fully Managed

From backups to monitoring to applying patches and upgrades, Redshift is fully managed by AWS. Data stored in Redshift is replicated across the nodes in the cluster and automatically backed up as snapshots to S3, where they are retained for a user-defined period. Redshift continuously monitors the health of the cluster, automatically re-replicates data from failed drives, and replaces nodes as necessary.
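
For example (reusing the boto3 client from the earlier sketches, with placeholder identifiers), the automated snapshot retention period can be adjusted and a manual snapshot taken through the API.

```python
# Sketch: set how long automated snapshots are retained and take a manual
# snapshot. Identifiers and the retention period are placeholders; `redshift`
# is the boto3 client from the earlier sketches.
redshift.modify_cluster(
    ClusterIdentifier="my-redshift-cluster",
    AutomatedSnapshotRetentionPeriod=7,  # keep automated snapshots for 7 days
)

redshift.create_cluster_snapshot(
    SnapshotIdentifier="my-redshift-cluster-before-reload",
    ClusterIdentifier="my-redshift-cluster",
)
```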

Written by

47Line builds solutions that solve critical business problems using the “cloud as the backbone”. The team has been working in the cloud computing domain for the last six years and has proven thought leadership in cloud and big data technologies.
