Design and Implement a Storage Strategy for Azure - 70-532 Certification

Tables

Contents

Introduction
  • Overview (1m 36s)
  • What Are Blobs? (5m 17s)
Implement Azure Storage Blobs and Azure Files
Implement Storage Tables
  • Tables (8m 50s)
Implement Azure Storage Queues
Manage Access
Monitor Storage
Implement SQL Databases
Conclusion
Overview

Difficulty: Intermediate
Duration: 1h 18m
Students: 145

Description

Course Description

This course teaches you how to work with Azure Storage and its associated services.

Course Objectives

By the end of this course, you'll have gained a firm understanding of the key components that comprise the Azure Storage platform, and you will achieve the following learning objectives:

  • Understand the various components of Azure storage services.
  • Implement and configure Azure storage services.
  • Manage access and monitor your implementation.

Intended Audience

This course is intended for individuals who wish to pursue the Azure 70-532 certification.

Prerequisites

You should have work experience with Azure and general cloud computing knowledge.

This Course Includes

  • 1 hour and 17 minutes of high-definition video.
  • Expert-led instruction and exploration of important concepts surrounding Azure storage services.

What You Will Learn

  • An introduction to Azure storage services.
  • How to implement Azure storage blobs and Azure files.
  • How to implement storage tables.
  • How to implement storage queues.
  • How to manage access and monitor storage.
  • How to implement SQL databases.  

Transcript

Hello, and welcome back. We'll now cover developing with tables in Azure. In this section, we'll cover key objectives related to Azure tables.

We'll first cover how SDKs for tables support create, read, update, and delete, or CRUD, functionality. We'll then look at how multiple related updates to a table are supported using transactions. After that, we'll describe how to access all records in a partition, and how to query table data using OData. Finally, we'll look at the topic of scaling Azure tables.

In .NET, working with tables is very similar to working with blobs. You need to supply a connection string for the storage account in a config file and reference the same packages. For table-specific functionality, you need to reference the Microsoft.WindowsAzure.Storage.Table namespace. You also need to create a CloudTableClient, which gives access to all the functionality required to work with tables and the entities within tables.
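
As a minimal sketch of that setup, assuming the connection string is stored under an illustrative StorageConnectionString key in App.config:

```csharp
using System.Configuration;                  // ConfigurationManager
using Microsoft.WindowsAzure.Storage;        // CloudStorageAccount
using Microsoft.WindowsAzure.Storage.Table;  // CloudTableClient and table types

class Program
{
    static void Main()
    {
        // Read the storage account connection string from App.config.
        string connectionString =
            ConfigurationManager.AppSettings["StorageConnectionString"];

        // Create the client that provides access to all table functionality.
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudTableClient tableClient = account.CreateCloudTableClient();
    }
}
```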

You can use transactions to ensure a set of operations is grouped together such that either all of them are applied or none are, perhaps due to an internal error halfway through the operation. This allows your application to ensure that the data remains internally consistent. Transactions are supported for a set of insert, update, and delete operations on entities, but only where these entities have the same partition key within the table. To implement this, you need to create a TableBatchOperation, add the required operations, and invoke the ExecuteBatch method, as sketched below.
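
In outline, the pattern has this shape; entityOne and entityTwo are assumed to be TableEntity instances sharing one partition key, and a fuller demo appears later in this section:

```csharp
// Group several operations into one atomic batch; all entities in
// the batch must share the same partition key.
TableBatchOperation batch = new TableBatchOperation();
batch.Insert(entityOne);
batch.Insert(entityTwo);
table.ExecuteBatch(batch); // applied as a single transaction
```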

Tables should be designed carefully with regard to what you define as the partition key. As we have seen, partition keys drive what can be done with transactions. Partition keys also determine how the data is stored within Azure, with all data with a specific partition key stored together. This implies that to optimize for performance, you should always query using the partition key; querying on another property would cause a full table scan. Working with partition keys enables some useful operations, such as loading all entities with the same partition key.

The Open Data Protocol, known as OData, is a standard that defines a set of best practices for building and consuming RESTful APIs. Azure tables support OData to deliver a standardized, simple query interface using REST. Direct access is not possible, as access will be denied. To use this option, you need to provide credentials, typically the storage account key.

A simple example is getting a list of tables. We show what you need to provide to get the tables in the movies storage account, which returns a list of tables in JSON format. The partition key is the principal element used to group data. Tables are spread across multiple servers, but entities with the same partition key are co-located.
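
As an illustration, a request along these lines lists the tables in the movies account; the signature is elided here, as it would be computed from the storage account key:

```
GET https://movies.table.core.windows.net/Tables HTTP/1.1
Accept: application/json;odata=nometadata
x-ms-date: <UTC timestamp>
x-ms-version: 2015-12-11
Authorization: SharedKeyLite movies:<signature>
```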

There are three broad choices when setting up partition keys, which affect scaling. The first of these is a single value. When using a single value, there will be just one partition for the entire table. This is ideal for small tables and helps batch operations, which only operate at the partition level. This approach is not suitable for large tables, as the table cannot be spread over multiple servers. If you use multiple values, then the table can be spread over multiple servers. Each collection of rows with a unique partition key could be on a different server. This helps with load balancing, as table operations can be spread out over multiple servers. If you use a unique value for each entity, this will result in very small partitions. This is highly scalable and can readily use many servers, but it does not allow you to use batch operations, and queries across the table may be slower, as you may need to get data from multiple servers.

In this demo, we'll cover working with tables, including creating tables and inserting, updating, and deleting data. We'll also cover batch operations, which are executed as a transaction, and querying data using partitions. In this case, we have created a console application called CreateTable. Similarly to our blob application, we need to set up the config file and add the required packages. We have a slightly different set of using statements, such as the Microsoft.WindowsAzure.Storage.Table namespace, to access the table storage types. We then reference the same storage account we used for our blob examples, but instead of a blob client, we create a tableClient. We then add a reference to a table and use the CreateIfNotExists method to create the table called directors. And that's it. We've got a table reference we can work against. If we run the code, the directors table is then present and viewable inside Cloud Explorer, as you can see on this little screenshot.
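
Continuing from the client setup sketched earlier, the core of CreateTable comes down to two calls:

```csharp
// Get a reference to the directors table and create it in the
// storage account if it doesn't already exist.
CloudTable table = tableClient.GetTableReference("directors");
table.CreateIfNotExists();
```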

We now move on to inserting data into the table. Here, we have another program called AddEntity. This is very similar to CreateTable, but, in this case, we'll be creating and adding an entity into the directors table. We start by defining a DirectorEntity that inherits from TableEntity, which is a key base class when working with tables. We define a constructor that sets up genre as the partition key and uses the fullName string as the row key. We also need to include a parameterless constructor.
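
A sketch of what that entity class might look like; the Movies property is described in the next paragraph:

```csharp
public class DirectorEntity : TableEntity
{
    // Genre becomes the partition key; the director's full name
    // becomes the row key.
    public DirectorEntity(string genre, string fullName)
    {
        PartitionKey = genre;
        RowKey = fullName;
    }

    // Parameterless constructor required by the table storage SDK
    // so it can rehydrate entities when reading them back.
    public DirectorEntity() { }

    // A string holding a list of movies.
    public string Movies { get; set; }
}
```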

In this simple example, we have one further property, Movies, which is a string holding a list of movies. The Main method is very similar to CreateTable, but after creating the table reference, we create a new director entity, dir1, and populate it with suitable data. We then create an insertOperation, and, finally, we execute it. When we run this code, the entity is added and we're able to see it in Visual Studio.
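
Put together, the insert step might look like this; the sample values are illustrative:

```csharp
// Create a new director entity and populate it with sample data.
DirectorEntity dir1 = new DirectorEntity("scifi", "RidleyScott")
{
    Movies = "Alien, Blade Runner, The Martian"
};

// Create the insert operation and execute it against the table.
TableOperation insertOperation = TableOperation.Insert(dir1);
table.Execute(insertOperation);
```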

We can then move on to inserting multiple items into a table. This screenshot shows how transactions and batch operations are used. We create another console application called AddEntities. This is very similar to AddEntity, but, in this case, we need to create multiple entities and then add them in a TableBatchOperation, which is another key class in the Azure tables domain. Such an operation groups the changes into a single transaction. We use DirectorEntity as before; the changes to the Main method are to create two directors, then create a TableBatchOperation, call the Insert method on both, and, finally, call ExecuteBatch to apply the inserts as one single operation. When we run the code, you can see from the screenshot that the entities are added.
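
The changed portion of Main might look like this, assuming both directors share the scifi partition key so they can go into one batch:

```csharp
// Create two directors in the same partition.
DirectorEntity dir1 = new DirectorEntity("scifi", "RidleyScott")
{
    Movies = "Alien, Blade Runner"
};
DirectorEntity dir2 = new DirectorEntity("scifi", "JamesCameron")
{
    Movies = "Aliens, Avatar"
};

// Group both inserts into a single batch and apply them atomically.
TableBatchOperation batchOperation = new TableBatchOperation();
batchOperation.Insert(dir1);
batchOperation.Insert(dir2);
table.ExecuteBatch(batchOperation);
```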

Let's have a look at how we can retrieve multiple entities from a table. We'll demonstrate how to retrieve all entities that have the same partition key. Here, we have another console application called GetEntities. This is very similar to AddEntities, but, in this case, we've altered the Main method so that we first create a table query which filters results to those where the partition key is scifi. We can use more complex expressions to restrict the results further. We then use a for-loop to print out three properties for each entity with formatting to align them into columns.
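
A sketch of that query, using the filter helpers from the table SDK and continuing from the setup above:

```csharp
// Build a query that returns every entity in the scifi partition.
TableQuery<DirectorEntity> query = new TableQuery<DirectorEntity>().Where(
    TableQuery.GenerateFilterCondition(
        "PartitionKey", QueryComparisons.Equal, "scifi"));

// Print three properties for each entity, aligned into columns.
foreach (DirectorEntity director in table.ExecuteQuery(query))
{
    Console.WriteLine("{0,-10} {1,-20} {2}",
        director.PartitionKey, director.RowKey, director.Movies);
}
```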

We now move on to the update part of CRUD. We will demonstrate how to update an individual entity. Here, we have another console application called UpdateEntity. This is very similar to AddEntity, but, in this case, we need to change the Main method so that we first retrieve an entity from the directors table. In this case, we retrieve the MelBrooks entity. We then change the Movies property and create an insert or replace TableOperation against this entity which we then execute against the table. If we then view the table, we'll find that the Movies field is updated. The timestamp property will also show the time this was changed.
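
That flow might look like the following; the comedy partition key for the MelBrooks entity is an assumption for illustration:

```csharp
// Retrieve the MelBrooks entity from the directors table.
TableOperation retrieveOperation =
    TableOperation.Retrieve<DirectorEntity>("comedy", "MelBrooks");
TableResult result = table.Execute(retrieveOperation);
DirectorEntity director = (DirectorEntity)result.Result;

// Change the Movies property and write the entity back.
director.Movies = "Blazing Saddles, Young Frankenstein, Spaceballs";
TableOperation updateOperation = TableOperation.InsertOrReplace(director);
table.Execute(updateOperation);
```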

Finally, let's have a look at the delete part of CRUD. Here, we show how to delete an individual entity. We've created another console application called DeleteEntity, which is very similar to the previous UpdateEntity application. Firstly, we retrieve the MelBrooks entity. We then create a delete TableOperation, referencing the MelBrooks entity we first retrieved, and then execute it against the table. If we now inspect the table, we find that the MelBrooks entity has been deleted.
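
The delete step, continuing from the retrieval shown above:

```csharp
// Delete the entity we just retrieved; the retrieved entity carries
// an ETag, so the delete fails if the entity changed in the meantime.
TableOperation deleteOperation = TableOperation.Delete(director);
table.Execute(deleteOperation);
```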

That concludes the demonstration for this section, but, if you stay tuned, the next section coming up is about queues.

About the Author

Isaac has been using Microsoft Azure for several years now, working across the various aspects of the service for a variety of customers and systems. He’s a Microsoft MVP and a Microsoft Azure Insider, as well as a proponent of functional programming, in particular F#. As a software developer by trade, he’s a big fan of platform services that allow developers to focus on delivering business value.