Implement Azure Storage Blobs and Azure Files
Implement Storage Tables
Implement Azure Storage Queues
Implement SQL Databases
This course teaches you how to work with Azure Storage and its associated services.
By the end of this course, you'll have gained a firm understanding of the key components that comprise the Azure Storage platform and will have achieved the following learning objectives:
- Understand the various components of Azure storage services.
- Implement and configure Azure storage services.
- Manage access to, and monitor, your implementation.
This course is intended for individuals who wish to pursue the Azure 70-532 certification.
You should have work experience with Azure and general cloud computing knowledge.
This Course Includes
- 1 hour and 17 minutes of high-definition video.
- Expert-led instruction and exploration of important concepts surrounding Azure storage services.
What You Will Learn
- An introduction to Azure storage services.
- How to implement Azure storage blobs and Azure files.
- How to implement storage tables.
- How to implement storage queues.
- How to manage access and monitor storage.
- How to implement SQL databases.
Hello and welcome back. We'll now look at some more advanced topics for working with blobs. In this section, we'll first cover the functionality available for working with metadata attached to containers and blobs. We'll then cover when to use page blobs and how they differ from block blobs. Next, we'll describe the streaming features available when developing with blobs and using asynchronous methods. Lastly, we'll discuss the security options available.
Metadata is available for containers and blobs. It is a set of key-value pairs. Some of this metadata is available as soon as the blob or container is created; it represents the important system properties of these objects. You also have the ability to add user-defined metadata. This can be used where your application design requires extra data to be available describing the content of your data.
The SDK provides full facilities to read the metadata and add user-defined metadata as required. Note that there are some limits: for example, the total size of the user-defined metadata on a blob is limited to just 8 KB.
Page blobs differ from block blobs in that they are designed to be used like a disk rather than as individual files. They are optimized for frequent random access for both reads and writes, as you might expect from something that behaves like a disk. The page blob format is specifically designed to hold VHDs and is in fact used by Azure VMs for disk storage. A page blob is subdivided into pages of 512 bytes, which can be accessed independently.
The overall maximum size for a page blob is 1 TB. Blobs can be downloaded using the DownloadToStream method. This allows you to partially load a blob into memory and begin processing it, rather than needing to load the entire blob. The coding options also include an asynchronous approach: in particular, you can copy blobs and list blobs using asynchronous methods such as ListBlobsSegmentedAsync. A secure system needs not only authorization and authentication controls such as access keys, but also protection of the data passed across a public network such as the internet. To address this requirement, we can use HTTPS.
So far we have seen the use of shared access keys which, once obtained, give you full access to the data. An alternative is to use a shared access signature (SAS) to grant users specific permissions for a limited time period. This topic will be covered in detail later in the course.
Let's have a look at some of the points we've just discussed. Here is a simple application in which we'll access and enumerate container properties and metadata. After using the standard code to reference the storage account and the Sci-Fi container, we'll first look at how to obtain container properties. We first have to call FetchAttributes, which populates the container's properties and metadata so we can read them.
With the attributes loaded, we can now access all the properties, such as the last modified date and the ETag, from the Properties object. We can then set user-defined metadata by adding entries to the container's Metadata dictionary and persist the changes by calling the SetMetadata method. We can enumerate all of the metadata items simply by looping over the Metadata dictionary. Lastly, we can remove items from the metadata collection by removing them from the dictionary and again calling SetMetadata.
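The steps just described can be sketched roughly as follows, using the classic WindowsAzure.Storage SDK. The connection string is a placeholder, and the container name "scifi" and metadata key "genre" are illustrative assumptions, not the course's exact values; running this requires a real storage account.

```csharp
// Sketch only: assumes the classic WindowsAzure.Storage NuGet package
// and a valid connection string. "scifi" and "genre" are example names.
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class ContainerMetadataDemo
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<your-connection-string>");
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("scifi");

        // Populate system properties and metadata from the service.
        container.FetchAttributes();
        Console.WriteLine(container.Properties.LastModified);
        Console.WriteLine(container.Properties.ETag);

        // Add user-defined metadata and persist the change.
        container.Metadata["genre"] = "science-fiction";
        container.SetMetadata();

        // Enumerate all metadata entries.
        foreach (var pair in container.Metadata)
            Console.WriteLine($"{pair.Key} = {pair.Value}");

        // Remove an entry and persist again.
        container.Metadata.Remove("genre");
        container.SetMetadata();
    }
}
```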
If you run this application, you'll get the output that you can see here on the screen. Next, we have a console application called Upload Page Blob. We're going to demonstrate how to create a page blob and upload an image file in 512-byte chunks. The sample illustrates the level of control you have over this process, which opens up additional possibilities, such as starting to read the file before it has been fully uploaded.
After setting up the usual variables, we first load the image file into a memory stream. We then create a page blob just bigger than the image file. Note that we do need to make sure that the size specified, in this case 327,680 bytes, is a multiple of 512. We then use a while loop to read and upload the file in 512-byte chunks.
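A minimal sketch of that loop, again assuming the classic WindowsAzure.Storage SDK, might look like this. The file name "image.png" and the setup lines are illustrative; note that WritePages requires each write to be a multiple of 512 bytes, so a short final read is zero-padded.

```csharp
// Sketch: upload a file to a page blob in 512-byte pages.
// Connection string, container, and file names are example values.
using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class UploadPageBlobDemo
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<your-connection-string>");
        var container = account.CreateCloudBlobClient()
                               .GetContainerReference("scifi");
        CloudPageBlob pageBlob = container.GetPageBlobReference("image.png");

        // The page blob's size must be a multiple of 512 bytes.
        pageBlob.Create(327680);

        byte[] page = new byte[512];
        long offset = 0;
        using (var file = File.OpenRead("image.png"))
        {
            int read;
            while ((read = file.Read(page, 0, 512)) > 0)
            {
                // Zero-pad a short final read so the page is exactly 512 bytes.
                if (read < 512) Array.Clear(page, read, 512 - read);
                using (var ms = new MemoryStream(page))
                    pageBlob.WritePages(ms, offset);
                offset += 512;
            }
        }
    }
}
```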
Here is a console application called Download Blob Stream. This application will stream the contents of the blob as it is being downloaded. After setting up the usual variables, we first download the blob into a memory stream, set the stream position back to the start, and create a StreamReader to read the stream. We then use a while loop to read the lines and print them out.
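That flow can be sketched as below; the blob name "lines.txt" and the account setup are example assumptions, and as before this needs a live storage account to run.

```csharp
// Sketch: download a blob into a MemoryStream and read it line by line.
using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class DownloadBlobStreamDemo
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<your-connection-string>");
        var container = account.CreateCloudBlobClient()
                               .GetContainerReference("scifi");
        CloudBlockBlob blob = container.GetBlockBlobReference("lines.txt");

        using (var ms = new MemoryStream())
        {
            blob.DownloadToStream(ms);
            ms.Position = 0; // rewind before reading
            using (var reader = new StreamReader(ms))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    Console.WriteLine(line);
            }
        }
    }
}
```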
If you run the Download Blob Stream application and break it in a debugger after reading two lines, this is the output you should see. Here is the last sample of code. We've created a console application called Copy Blob Async, and this will demonstrate how to launch a blob copy operation. A blob copy operation is asynchronous, so we can't simply wait for the StartCopy method to return after invoking it. One option is to check the status periodically until it indicates that the process has completed. After setting up the usual variables, we first create references to the source and target blobs. We then start the copy and wait for the copy state to change from pending. This code has been simplified and doesn't include any error checking and the like, which we would normally include in a real-world scenario, but it does demonstrate the basics of a blob copy operation.
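The polling approach just described might be sketched like this; the blob names are illustrative, and the 500 ms polling interval is an arbitrary choice for the sketch.

```csharp
// Sketch: start a server-side blob copy and poll until it leaves Pending.
using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class CopyBlobAsyncDemo
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<your-connection-string>");
        var container = account.CreateCloudBlobClient()
                               .GetContainerReference("scifi");
        CloudBlockBlob source = container.GetBlockBlobReference("source.txt");
        CloudBlockBlob target = container.GetBlockBlobReference("target.txt");

        // StartCopy returns immediately; the copy runs on the service side.
        target.StartCopy(source);

        // Poll the copy state until it is no longer Pending.
        target.FetchAttributes();
        while (target.CopyState.Status == CopyStatus.Pending)
        {
            Thread.Sleep(500);
            target.FetchAttributes();
        }
        Console.WriteLine($"Copy finished with status: {target.CopyState.Status}");
    }
}
```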
Now that you've seen these code samples, you should be able to go ahead and start practicing with them. For now, though, hold on for the next video, where we'll start talking about blob content delivery networks, hierarchies, and scaling.
Isaac has been using Microsoft Azure for several years now, working across the various aspects of the service for a variety of customers and systems. He’s a Microsoft MVP and a Microsoft Azure Insider, as well as a proponent of functional programming, in particular F#. As a software developer by trade, he’s a big fan of platform services that allow developers to focus on delivering business value.