
Blob storage demo


Course Intro
Storage overview
Blob storage
Table storage
Queue storage
File storage
Disk storage
Getting the Most From Azure Storage

The course is part of these learning paths

AZ-303 Exam Preparation: Technologies for Microsoft Azure Architects
AZ-104 Exam Preparation: Microsoft Azure Administrator
DP-201 Exam Preparation: Designing an Azure Data Solution
AZ-103 Exam Preparation: Microsoft Azure Administrator
3 Pillars of the Azure Cloud
Duration: 1h 47m


The Azure Storage suite of services forms the core foundation of much of the rest of the Azure services ecosystem. Blobs are low-level data primitives that can store any data type and size. Tables provide inexpensive, scalable NoSQL storage of key/value data pairs. Azure queues provide a messaging substrate to asynchronously and reliably connect distinct elements of a distributed system. Azure files provide an SMB-compatible file system for enabling lift-and-shift scenarios of legacy applications that use file shares. Azure disks provide consistent, high-performance storage for virtual machines running in the cloud.

In this Introduction to Azure Storage course you'll learn about the features of these core services, and see demonstrations of their use. Specifically, you will:

  • Define the major components of Azure Storage
  • Understand the different types of blobs and their intended use
  • Learn basic programming APIs for table storage
  • Discover how queues are used to pipeline cloud compute nodes together
  • Learn to integrate Azure files with multiple applications
  • Understand the tradeoffs between standard/premium storage and unmanaged/managed disks



Alright, so let's take a little bit of a closer look at the Azure Blob service, and some of the features that it has. The first one that I want to look at is something called SAS tokens, or shared access signature tokens. The idea behind SAS tokens is that this gives us a secure URI or token that we can hand out to applications or users, granting specific, narrow, and generally time-bound access to specific resources within our storage account.
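To make the "secure URI" idea concrete, a SAS URI is just the resource URL plus a query string carrying the grant. As a sketch (the account name, dates, and signature below are made-up placeholders, not a real token), the pieces can be pulled apart with the standard library:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical SAS URI; the account name and signature are placeholders.
sas_uri = (
    "https://mystorageacct.blob.core.windows.net/"
    "?sv=2021-08-06&ss=b&srt=sco&sp=rl"
    "&st=2023-01-01T09:00:00Z&se=2023-01-01T17:00:00Z"
    "&sig=FAKESIGNATUREVALUE"
)

query = parse_qs(urlparse(sas_uri).query)

# sp = permissions (r = read, l = list), st/se = start/expiry window,
# ss = services (b = blob), sig = the signature computed from an account key.
permissions = query["sp"][0]
expiry = query["se"][0]
```

Anyone holding this URI gets exactly what the query string grants (here, read and list on the blob service until the expiry time) and nothing more.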

In a prior demo, we saw an example of granting access to our storage account to a fully authenticated user. In the case of SAS tokens, we can merely hand out a URI, and any application or user that has access to that URI can then use it to connect to the storage service, perform any operations that are enabled by that URI, and then kind of go on their way.

So, that's another common pattern for accessing storage artifacts, and again, it's more of a general-purpose pattern than handing out the account keys, which are highly privileged keys that let you do anything across an entire storage account. So, let's go ahead and create a SAS token and then I'll show you an example of how to use it.

So, I'll drill into my storage account here. You can see on the left-hand side, we have this tab for shared access signature, so I'll click on that. Now, the SAS token that we're creating here is what's called an account-level token. You can also create specific resource-level tokens. I'm just going to create one that kind of grants privileges across the entire account, but just know that there are a few different types.

So, the one that I want, of course, is specific to Blob storage, so I'm going to disable some of the other options. I can also specify the granularity, whether it's service, container, or individual object. I'll leave all three, because essentially what I want to do is create a token that allows kind of browse access to the Blob information in my account, but no write or update capability.

So I'm going to disable writes, deletes, adds, and creates. So, I'm going to leave read and list as the operations that can be performed by anybody who has access to this token. I'll leave the starting and ending points as-is. By default, the start is exactly when I open this window, and the end will be eight hours later.
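The default window the portal proposes (start now, expire eight hours later) is easy to reproduce if you ever generate tokens in code. A minimal sketch using only the standard library, assuming you want the same eight-hour window:

```python
from datetime import datetime, timedelta, timezone

# Start now, expire eight hours later, mirroring the portal's default.
start = datetime.now(timezone.utc)
expiry = start + timedelta(hours=8)

# SAS timestamps use ISO 8601 UTC, e.g. 2023-01-01T09:00:00Z.
st = start.strftime("%Y-%m-%dT%H:%M:%SZ")
se = expiry.strftime("%Y-%m-%dT%H:%M:%SZ")
```

The `st` and `se` strings are what end up as the start and expiry fields in the token's query string.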

I could, of course, more narrowly constrain it if I needed to. I could also specify the set of IP addresses which are actually allowed to use this token. In this case, I'm just going to leave it open, just for demo purposes, but just know that you can restrict by IP address as well. And then, ultimately, I just choose which account key I want to use to sign this token. These are those master keys that exist at the account level, and we have two of them.

I just pick the one that I want to use to actually sign this token, and I'll just pick the first one as the default. Then I generate the token. So, I have a URI down here, and if I zoom in a little bit, you can see it a little bit more clearly. This is the URI that I'll hand out to any client application that I want to have this particular access to my storage account.
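Under the hood, "signing the token with a key" means computing an HMAC-SHA256 over a canonical string-to-sign using the chosen account key, then base64-encoding the result. The sketch below is conceptual only: the key is a placeholder, and the string-to-sign is simplified (Azure's real canonical format concatenates more newline-separated fields than shown here).

```python
import base64
import hashlib
import hmac

# Placeholder account key (real keys are base64 strings from the portal).
account_key = base64.b64encode(b"not-a-real-key").decode()

# Simplified string-to-sign; the real Azure format includes additional
# fields (account name, protocol, API version, and so on).
string_to_sign = "\n".join(
    ["rl", "b", "2023-01-01T09:00:00Z", "2023-01-01T17:00:00Z"]
)

# HMAC-SHA256 with the decoded account key, base64-encoded into `sig`.
digest = hmac.new(
    base64.b64decode(account_key),
    string_to_sign.encode("utf-8"),
    hashlib.sha256,
).digest()
sig = base64.b64encode(digest).decode()
```

Because the signature depends on the key, the service can later verify the token without storing it anywhere, and regenerating the account key invalidates every token signed with it.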

So I'm going to copy it, and then I'm going to switch to an application called Storage Explorer. Now, Storage Explorer is just a desktop application that you can use to manage and manipulate the artifacts in any number of storage accounts. You can connect to multiple accounts at the same time and manage them from a desktop interface instead of in the portal.

It's sometimes just handy for imports and for doing some additional work that you can't necessarily do easily inside the portal. You can download this tool, I have a link to it later in the course, but you can download it at storageexplorer.com if you're interested. Okay, so there are a number of ways you can connect to a storage account from inside Storage Explorer, but I'm going to use the SAS token that I just created.

So, I'll click on the left-hand side, say connect to Azure Storage, and I'm going to choose the option to connect via shared access signature. Click next, and I'll say that I want to use a URI, and I paste that in, and then say next, and it gives a little bit of a summary, and then I want to say connect.

And so, if you look over here on the left-hand side, we've connected, and you can see that here's the name of my account, joshintrostorage, it designates that it was connected via SAS, and so you can see, now, recall that, as we discussed, storage accounts allow you to create artifacts like blobs, tables, queues, files.

Ordinarily, if I connected with a fully privileged account or fully privileged access, I would see all of those things on the left-hand side here, in this tree. But you can see that all I see in here are blob containers and blobs. The reason why is precisely that, if we go back to my token, the allowed services that I designated included only the blob service, so I can't see any of the rest. Just know that that's what's happening there.

So, if I click on the images container, I still have a single image. I'm using the same container that I used in the prior demo, and there's still a single image in here that I can open up if I want to. And, sure enough, you can still see Junie, standing by the stream, so we know that all is well, and I have access to the blob information in this account.

So, I can do other things in here as well. For example, I can examine metadata: if I right-click on the blob itself and select properties, then I can see some of the properties of the blob, like its e-tag, which is for concurrency control (we'll talk about that a little bit later in the course). I can see how big it is, the type of blob it is, and some additional information as well.

You'll also see things like content type. Of course, the blob is just binary data, but if I assign a content type to it, then my operating system or my browser, whatever it is I'm using to open that blob, knows how to deal with it: it knows that it's a JPEG, knows that it's an image, and knows to open it in an image-viewing application.
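The content type is the same MIME-type mechanism used across the web. Most upload tools infer it from the file extension before stamping it on the blob, much as Python's standard library does (the file name here is a stand-in for the demo image):

```python
import mimetypes

# Infer a MIME type from the file name, the same way most upload
# tools do before setting a blob's content-type property.
content_type, _ = mimetypes.guess_type("junie.jpg")
```

With `image/jpeg` set, any HTTP client that fetches the blob receives that value in the response's Content-Type header and knows to render it as an image.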

You can also add your own custom metadata. None of this functionality is specific to the Storage Explorer tool; this is all base functionality that exists within the blob service itself. You can certainly use the blob APIs to add the metadata yourself. In fact, that's all Storage Explorer is doing: it's essentially just giving you a user interface on top of those APIs.
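At the REST level, custom blob metadata travels as `x-ms-meta-*` request headers. A sketch of how a client might translate a metadata dict into those headers (the function name and metadata values here are illustrative, not part of any SDK):

```python
def metadata_to_headers(metadata: dict) -> dict:
    # Blob metadata is carried as x-ms-meta-<name> headers on the
    # Put Blob / Set Blob Metadata REST operations.
    return {f"x-ms-meta-{name}": value for name, value in metadata.items()}

headers = metadata_to_headers({"photographer": "josh", "subject": "junie"})
```

Reading metadata back is the reverse: the service returns the same headers on a properties request, and the client strips the prefix.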

But certainly, you can write your own code to add metadata or interrogate existing metadata as needed. Okay, so that's shared access tokens, and kind of lightweight navigation and manipulation of blobs. The other major feature that I wanted to show and talk about is the ability to expose blobs using HTTP endpoints.

So far, in a couple of demos, we've demonstrated providing fully authenticated users with access to a storage account and blobs, and now we've demonstrated how to use SAS tokens to provide kind of discrete, narrow access to, in this case, blobs. SAS tokens, for what it's worth, also work for files, queues, and tables, but we've demonstrated how you can use them for blobs.

What we'd really like to do is show one other possibility for accessing blobs in particular, and that's the ability to expose them via HTTP endpoints. By default, any blob that you add to blob storage is not publicly accessible; you have to kind of opt in to that behavior. But you can turn it on, and that basically allows me, for example, to get a URL for this image of my dog, go to any browser or any tool that's capable of making HTTP calls, and access that blob directly, without any extra tokens or any other authentication. Once I opt in, anybody who has the HTTP endpoint can access it.
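That public endpoint follows a predictable pattern: account, container, then blob name. A sketch that builds it (the account and container names match the demo; the blob file name is an assumption, since the transcript never states it):

```python
def blob_url(account: str, container: str, blob: str) -> str:
    # Standard public-endpoint pattern for Azure blobs.
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

# "junie.jpg" is a hypothetical file name for the demo image.
url = blob_url("joshintrostorage", "images", "junie.jpg")
```

Whether a plain GET of that URL returns the blob or a 404-style error depends entirely on the container's access policy, which is what the rest of this demo configures.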

So, to get that, again, here in the Storage Explorer tool, you can access that information very easily. If I go to properties, you can see in here there's a URI, which I'll copy, and then cancel. I'll go back to my browser and try to open that up. Now, of course, what I get here is a 404, an error message, basically, from Azure.

But it's essentially a 404 saying hey, sorry, I don't know what this thing is, I can't find it, the resource doesn't exist. Of course, the reason why is because, again, that HTTP endpoint behavior is an opt-in behavior; you're not going to get it by default. It's just a way to make it a bit more secure. So, let's go ahead and enable that, and I'll show you how that works. It's very easy.

Let me browse back to my storage account and navigate to my blob. And here we are. And, oops, I didn't want to drill all the way in here. What I wanted to do is click on access policy. You set this at the container level, not at the individual blob level; you set it at the blob's parent container level.

By default, the access type is private, which means exactly what it sounds like: the only person who can access that blob from an HTTP endpoint standpoint would be an Azure administrator who's fully authenticated. And, of course, as we showed a moment ago, when you just open up a browser window and navigate to the URL, you're not authenticated, so we don't see anything.

So, the other options we have: we can designate the blob access type, which basically means that anybody can read an individual blob. As long as they have the URL, they can see it. The other option is the container level, which says that anybody can not only view all of the blobs individually in the container, but can also navigate the container itself and actually get a listing of all of the blobs that exist for this particular container.

So that gives you a little bit of a means to provide some sort of pseudo file system kind of behavior if you wanted to, via HTTP. I'll just pick the blob access type here and click save. Okay, so now, if we go back to that browser window, we're at our same URI here, and I'll click refresh, and in this case, now we can see Junie by the stream again, because now we've enabled that HTTP access.
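The three access levels can be summarized as a small decision table. The sketch below is a toy model of that table, not the service's actual logic:

```python
def anonymous_allowed(access_level: str, operation: str) -> bool:
    # Toy model of container public-access levels:
    #   private   -> no anonymous access at all
    #   blob      -> anonymous read of individual blobs only
    #   container -> anonymous blob reads AND container listing
    if access_level == "private":
        return False
    if access_level == "blob":
        return operation == "read_blob"
    if access_level == "container":
        return operation in ("read_blob", "list_container")
    raise ValueError(f"unknown access level: {access_level}")
```

Note that "blob" access still requires the caller to know the exact URL, since listing is disallowed; "container" access is what enables the pseudo-file-system browsing described above.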


About the Author

Josh Lane is a Microsoft Azure MVP and Azure Trainer and Researcher at Cloud Academy. He’s spent almost twenty years architecting and building enterprise software for companies around the world, in industries as diverse as financial services, insurance, energy, education, and telecom. He loves the challenges that come with designing, building, and running software at scale. Away from the keyboard you'll find him crashing his mountain bike, drumming quasi-rhythmically, spending time outdoors with his wife and daughters, or drinking good beer with good friends.