Amazon Redshift Concurrency Scaling
This course covers Amazon Redshift's Concurrency Scaling feature, which adds query processing power to a cluster for specific users or queues as needed. You'll get introduced to concurrency scaling, what it does, its basic purpose, and how to activate it. We'll round things off with a demo from the AWS platform which will guide you through the process of setting up concurrency scaling in your environment.
- Understand the fundamentals of Amazon Redshift concurrency scaling
- Learn how to set up and activate concurrency scaling
This course is intended for anyone who wants to enhance their knowledge of Amazon Redshift, specifically how to implement concurrency scaling.
To get the most out of this course, you should have some basic experience with Amazon Redshift.
Hello, I'm Stephen Cole, a trainer here at Cloud Academy and I'm here, today, to teach you about Amazon Redshift's Concurrency Scaling feature. Amazon Redshift is a cloud-based data warehouse from AWS that is used to run mission-critical business intelligence dashboards, analyze real-time streaming data, and do predictive analytics.
When using Redshift, one of the challenges people face is managing performance during peak usage times. Thankfully, Redshift will automatically queue queries until sufficient resources are available. However, this means that, during these peak usage times, Redshift's performance is impacted.
There are a few options to deal with this. One of them is to leave the cluster as is and foster the expectation that, during peak times, some reports will take more time than usual to complete. While there is nothing wrong with this, it can delay important business decisions.
I suppose that you could over-provision the Redshift cluster. In the cloud, over-provisioning is a waste of resources and, in turn, a waste of money. So, probably not the best solution.
Scaling the Redshift Cluster using either the Classic or Elastic Resize operation is a possibility. The challenge here is that, depending on the amount of data in the cluster, it can take a fair amount of time and, while the resize operation is in process, the cluster is in read-only mode. This is fine for reporting but, while resizing, data cannot be added to the cluster.
In 2019, AWS added another way to provide capacity to Redshift clusters. It's called Concurrency Scaling, and it adds query processing power to a cluster for specific users or queues as needed. It happens within seconds, is transparent, and provides users with fast, consistent performance even when queries number in the hundreds. Charges are based on usage and, for every 24 hours a cluster is running, it accumulates one hour's worth of Concurrency Scaling credits. When the workload demand subsides, Amazon Redshift automatically shuts down Concurrency Scaling resources to save cost.
Concurrency Scaling is a feature that can be turned on inside the Redshift cluster to elastically add compute power for queries based on demand. By elastic, I mean you can configure Redshift to automatically add capacity as needed and, as demand decreases, automatically remove that capacity. While this is happening, write operations continue normally. Users will see the most current data whether their queries are running on the main cluster or on a Concurrency Scaling cluster.
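Under the hood, this per-queue behavior is controlled by a workload management (WLM) JSON document attached to the cluster's parameter group. Here's a minimal sketch of what such a configuration might look like, with Concurrency Scaling enabled on one user-defined queue. The key names shown are illustrative from memory, so verify them against the Redshift WLM documentation before relying on them:

```python
import json

# Sketch of a wlm_json_configuration value: one user-defined queue with
# Concurrency Scaling set to "auto" ("off" disables it for that queue),
# plus the Short Query Acceleration setting, which runs alongside scaling.
wlm_config = [
    {
        "query_concurrency": 5,          # query slots in this queue
        "concurrency_scaling": "auto",   # enable Concurrency Scaling here
        "memory_percent_to_use": 100,
    },
    {
        "short_query_queue": True,       # keep SQA enabled
    },
]

print(json.dumps(wlm_config))
```

In the console demo that follows, the same setting is made by flipping the Concurrency scaling mode switch on a queue rather than editing JSON directly.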
Though Redshift adds and removes Concurrency Scaling clusters based on demand, it's possible to configure which queries are sent to a Concurrency Scaling cluster and which ones have to wait their turn. The maximum number of additional concurrency scaling clusters ranges from 1 to 10. Though, this is a soft limit. If you need more than 10, reach out to AWS and make a request.
When needed, Redshift will turn on enough clusters to handle the required capacity. When submitting a query, Redshift creates a query plan and this plan is used to determine approximately how much capacity is required. The cost of Concurrency Scaling is granular. That is, charges accumulate on a per-second basis.
While there is no free tier, AWS lets you accumulate a full hour of credit for every 24 hours the cluster is running. When Concurrency Scaling happens, the charges are applied to the free credit balance first. When these credits are exhausted, you will be billed. The free credit is divided equally between the Concurrency Scaling cluster nodes in use.
For example, if two Concurrency Scaling clusters are active, one hour of credit provides 30 minutes of free usage. If four Concurrency Scaling clusters are active, one hour of credit results in 15 minutes of free usage.
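The credit arithmetic above is simple enough to sketch directly. Assuming one hour (60 minutes) of accumulated credit divided evenly across the active Concurrency Scaling clusters:

```python
def free_minutes_per_cluster(active_clusters: int, credit_minutes: float = 60) -> float:
    """Split the free Concurrency Scaling credit evenly across active clusters."""
    if active_clusters < 1:
        raise ValueError("need at least one active Concurrency Scaling cluster")
    return credit_minutes / active_clusters

# The examples from the text:
print(free_minutes_per_cluster(2))  # 30.0 -- two clusters share the hour
print(free_minutes_per_cluster(4))  # 15.0 -- four clusters share the hour
```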
If you've worked with Amazon Redshift, you might have heard about SQA, Short Query Acceleration. SQA is different from Concurrency Scaling. With SQA enabled, Amazon Redshift uses machine learning to predict the execution times of queries in advance. Redshift can then automatically route short queries to a dedicated queue so that they're not starved behind long-running queries.
Concurrency Scaling is similar in that it uses machine learning to predict when queuing might start to happen. It then deploys additional resources when this queuing is detected or predicted. Even when Concurrency Scaling clusters are added, Redshift continues to perform SQA. The result is that queries avoid queuing starvation and see consistently fast performance.
To set up Concurrency Scaling using the AWS Console, use the search function to go to the Amazon Redshift dashboard. Inside the dashboard, Concurrency Scaling is configured using a Parameter Group. Parameter Groups are configured inside Workload Management. Workload Management is under the CONFIG menu. The default Parameter Group cannot be edited. You'll have to create a new one. Click on the Create button. This will open a new window called Create parameter group.
Give the Parameter Group a name and a description. I've named mine redshift-pg and given it a short description. Click on Create. In the Workload management window, I've selected my new parameter group. The Workload management tab is selected by default; I can tell because it is currently orange. If the Parameters tab is active, click on the Workload management tab to change it. To enable Concurrency Scaling, click on the Edit workload queues button.
In this window, I can see that the Concurrency scaling mode is off. I should mention that I only have one queue in this cluster. If I had multiple queues, I could turn on Concurrency Scaling for ones that need extra processing power. Also, the options for Concurrency Scaling are off and auto. I have no idea why auto is used instead of on. I'll change the setting to auto and click on Save.
Once saved, I'm returned to the Workload management window. However, I'm not quite finished yet. Here, I need to click on the Parameters tab because it needs to be edited. I can see that the value for max_concurrency_scaling_clusters is currently 1. That won't help at all. To edit the parameters, there's a button in the top right corner of this window. Click it.
In this window, I can see that I can change the value of max_concurrency_scaling_clusters. The minimum value is 0. This would, in effect, turn off Concurrency Scaling. The maximum value is 10. Though, if you need more, reach out to AWS to have the limit raised. I'll change mine to 5. Remember, Redshift will activate the number needed. So, think of this number as a type of guardrail. It will not automatically spin up 5 clusters. It will only turn on as many as needed UP TO a total of 5.
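The same steps can also be scripted against the Redshift API. As a minimal sketch, here are the request payloads you would pass to boto3's Redshift client calls `create_cluster_parameter_group` and `modify_cluster_parameter_group`. The group name and the value of 5 match the demo; the payloads are only printed here, so the sketch runs without AWS credentials:

```python
# Payloads for a boto3 Redshift client:
#   client.create_cluster_parameter_group(**create_group_request)
#   client.modify_cluster_parameter_group(**modify_group_request)
create_group_request = {
    "ParameterGroupName": "redshift-pg",
    "ParameterGroupFamily": "redshift-1.0",
    "Description": "Parameter group with Concurrency Scaling enabled",
}

modify_group_request = {
    "ParameterGroupName": "redshift-pg",
    "Parameters": [
        {
            "ParameterName": "max_concurrency_scaling_clusters",
            "ParameterValue": "5",  # 0 disables scaling; 10 is the soft limit
        }
    ],
}

print(create_group_request["ParameterGroupName"])
print(modify_group_request["Parameters"][0]["ParameterValue"])
```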
Then, I'll click on Save and this will take me back to the Workload management window. The last step is to apply the updated Parameter Group to my Redshift cluster. To do this, open the CLUSTERS window. In this window, towards the bottom, I'll click on the name of my cluster to bring up the details window.
In the details window, Cluster performance is highlighted automatically. To edit the cluster and update the parameter group, click on the Properties tab. Towards the middle of this window, under the Database configurations heading, there's a section called Parameter group. Currently, it is set to the default. On the top right corner of this section, click on Edit. The Edit menu has three options; the first one is Edit parameter group. Click it. It opens a new window with a dropdown list with two options.
The first one is the default parameter group. The second one is the one I just created. I'll select the new one and then click Save changes. Once saved, there is a message on the console saying the cluster is being modified. Once the modification is complete, a new message will appear saying that the cluster has to be rebooted. Since this is a test cluster, I'll go ahead and restart now.
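For completeness, these last two console steps (attaching the parameter group and rebooting) map to two more Redshift API calls, `modify_cluster` and `reboot_cluster` on boto3's Redshift client. Again, a sketch that only builds and prints the payloads; "my-redshift-cluster" is a placeholder identifier:

```python
# Payloads for client.modify_cluster(**...) and client.reboot_cluster(**...).
# "my-redshift-cluster" is a placeholder -- substitute your own identifier.
modify_cluster_request = {
    "ClusterIdentifier": "my-redshift-cluster",
    "ClusterParameterGroupName": "redshift-pg",
}

# The new parameter group takes effect only after a reboot; in production,
# schedule the reboot for a maintenance window.
reboot_cluster_request = {"ClusterIdentifier": "my-redshift-cluster"}

print(modify_cluster_request["ClusterParameterGroupName"])
```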
In my experience, good judgment comes from bad judgment. I don't think I need to tell you that restarting a Redshift cluster in a production environment is a bad idea. I hope that this is common sense. Please, let it be common sense.
Okay, just in case… DO NOT RANDOMLY RESTART A REDSHIFT CLUSTER IN A PRODUCTION ENVIRONMENT. SUCH THINGS MUST BE PLANNED. Doing so could result in this being your last day of work. You've been warned. Even AWS wants to warn you with a confirmation prompt. I'm sure, so I'll reboot.
The cluster goes into a Modifying state. Then, it goes into an Unavailable state. Personally, it's at this point that, even in a test cluster, I get a feeling of dread and uncertainty. I know that I've done everything correctly. But, what if I missed a step? The fear, really, comes from years of working in on-premises data centers.
Recovering from failure there sometimes means going back to bare metal. There are times I have nightmares about such things. However, part of the beauty of the cloud is that you're only as vulnerable as your last snapshot or backup. Okay, that's probably overly simplistic, but disaster recovery is much more streamlined in the cloud.
Finally, my Redshift cluster becomes available. My prayers have been answered! After all of this effort, I have a cluster that can intelligently respond to spikes in concurrent demand.
Before I close, I want to address one final thing: Concurrency Scaling will give your queries extra processing power. If you have poorly designed queries, it means that your bad queries run faster and cost more. That is probably not the desired outcome. My point is that Concurrency Scaling can give a Redshift cluster a boost, but it is not a substitute for good table design and solid query creation.
Concurrency Scaling is also best for giving high-priority queries the equivalent of an express lane on a highway. Not every query needs access to a Concurrency Scaling cluster.
That covers Concurrency Scaling, what it does, its basic purpose, and how to activate it. For Cloud Academy, I'm Stephen Cole. Enjoy your cloud journey. I'm looking forward to seeing how you change the world.
Stephen is the AWS Certification Specialist at Cloud Academy. His content focuses heavily on topics related to certification on Amazon Web Services technologies. He loves teaching and believes that there are no shortcuts to certification but it is possible to find the right path and course of study.
Stephen has worked in IT for over 25 years in roles ranging from tech support to systems engineering. At one point, he taught computer network technology at a community college in Washington state.
Before coming to Cloud Academy, Stephen worked as a trainer and curriculum developer at AWS and brings a wealth of knowledge and experience in cloud technologies.
In his spare time, Stephen enjoys reading, sudoku, gaming, and modern square dancing.