Shared Access Signatures Demo

Difficulty: Intermediate
Duration: 30m
Description

Securing Azure Storage starts with an overview outlining the various authentication and authorization methods available to access Azure storage resources. We then look at each method, examining its benefits and disadvantages. Setting up access to storage resources using account keys and the various shared access signature variants demonstrates the practical implications and use cases of each access method. The course ends with a look at implementing Azure files as a mapped drive within an Azure virtual machine.

Learning Objectives

  • Get an overview of Azure storage account authentication and access
  • Create an account key rotation policy
  • See how to integrate storage account keys with Azure Key Vault
  • Implement Shared Access Signatures
  • Map a virtual machine drive to an Azure file share

Intended Audience

  • Students working towards the AZ-500: Microsoft Azure Security Technologies exam
  • Those wanting to learn how to secure an Azure storage account

Prerequisites

  • Be familiar with Active Directory concepts such as managed identities and role-based access control, Azure Key Vault, and the basics of Azure storage resources
Transcript

Shared access signatures (SASs) refine the account key concept by enabling you to target specific resource types and services. Let's start by looking at an account-level SAS, which gets its name from, well, being generated for use with all resources across the account. You can generate a SAS that targets an allowed service. These services map onto data storage types: blob to containers, file to file shares, and so on. Allowed resource types apply to the selected service but have slightly different meanings depending on the service. Whichever service you select, you'll need to check the service resource type to interact with the top level of the allowed service. 

For the blob service, the service resource type lets you list containers; for queues, list queues; for tables, get table stats; and for the file service, list shares. The container resource type applies to blob containers, individual queues, tables, and file shares. The object resource type applies to individual blobs, queue messages, table entities, and individual files. Allowed permissions apply to the selected services and resource types, with the available permissions changing dynamically based on the selection. Extra blob permissions allow the deletion of versions, along with reading, writing, and filtering blob index tags. A SAS can be made active for a restricted duration using the start and expiry date/time fields, and it can be authorized only when used from a specified IP address range. The default protocol is HTTPS, but you can allow HTTP if required, although this isn't recommended. 
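Under the hood, the portal builds these selections into a query string and signs it with your account key. The sketch below is a simplified illustration of that idea, not Azure's exact string-to-sign (newer API versions include additional fields); the account name and key are placeholders.

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def make_account_sas(account_name, account_key_b64, services, resource_types,
                     permissions, start, expiry, version="2021-08-06",
                     ip="", protocol="https"):
    """Simplified sketch: HMAC-SHA256 over a newline-joined string-to-sign,
    keyed with the base64-decoded account key."""
    string_to_sign = "\n".join([
        account_name, permissions, services, resource_types,
        start, expiry, ip, protocol, version, "",  # trailing newline
    ])
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode("utf-8")
    # The same selections appear as query parameters alongside the signature.
    return urlencode({
        "sv": version, "ss": services, "srt": resource_types,
        "sp": permissions, "st": start, "se": expiry,
        "spr": protocol, "sig": sig,
    })

# Hypothetical account name and key, for illustration only.
qs = make_account_sas(
    account_name="demostorage",
    account_key_b64=base64.b64encode(b"0" * 32).decode(),
    services="b",          # blob service only
    resource_types="sco",  # service, container, and object levels
    permissions="rl",      # read + list
    start="2023-01-01T00:00:00Z",
    expiry="2023-01-02T00:00:00Z",
)
print(qs)
```

Because every selection is baked into the signed string, changing any parameter after the fact invalidates the signature.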

I will give full service and container permissions on the blob service and generate a SAS using key1. Now I'll copy the SAS URL and head over to Azure Storage Explorer. If you don't have Azure Storage Explorer, there's a download link on the overview page under the Open in Explorer button. In Storage Explorer, I'll connect using the storage account or service option with a shared access signature URL. After pasting in the service URL, the display name is automatically picked up. Click next, and we're presented with a summary of the service, resources, and permissions gleaned from the SAS. Like an account key, a SAS is fully self-contained and will be valid for any user who has it. Let's connect. We're in, and I have no highly classified data, just a test container with a couple of picture files. But if I did have crucial missing footage from the JFK assassination film and the SAS fell into the wrong hands, what are my options apart from moving the files? I can invalidate the SAS by regenerating the key it was generated with, key1. However, this will invalidate all other shared access signatures generated with key1, meaning you'll have to regenerate all the legitimate SASs used to access your storage. Once key1 has been regenerated, the account-level SAS is no longer valid.
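The reason key rotation revokes every SAS signed with that key follows directly from the signing scheme: verification recomputes the HMAC with the current key, so a signature produced under the old key no longer matches. A minimal stdlib sketch of that mechanic (the strings and key are illustrative, not real Azure values):

```python
import hashlib
import hmac
import secrets

def sign(key: bytes, string_to_sign: str) -> str:
    """Produce a hex HMAC-SHA256 signature, as a stand-in for a SAS token."""
    return hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()

def verify(key: bytes, string_to_sign: str, sig: str) -> bool:
    """The service recomputes the signature with its *current* key."""
    return hmac.compare_digest(sign(key, string_to_sign), sig)

string_to_sign = "sp=rl&se=2023-01-02T00:00:00Z"  # illustrative SAS parameters
key1 = secrets.token_bytes(32)
sas_sig = sign(key1, string_to_sign)              # SAS issued under key1

assert verify(key1, string_to_sign, sas_sig)      # valid while key1 is live

key1 = secrets.token_bytes(32)                    # regenerate key1
assert not verify(key1, string_to_sign, sas_sig)  # every key1 SAS now fails
```

This is why rotation is a blunt instrument: it can't distinguish a leaked SAS from a legitimate one signed with the same key.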

A service SAS is generated in the context of a storage service and can be scoped at the container or individual item level. Remember that "container" has different meanings depending on the service context; in this case, the blob context, a container is a blob container. The ability to precisely target resources is a substantial improvement on account keys and account-level shared access signatures. 

Unfortunately, we still have the issue of revoking access if necessary. I'll first generate a service SAS on a single file. Select generate SAS from the resource's context menu. The signing method is account key; we'll use key1 again and only need read permission. Next, I'll generate the SAS and copy the blob SAS URL. I can paste the URL, with all its authentication and permission information, into a browser window to access the resource. The URL query parameters contain the permissions: sp=r for read, st for the start time and se for the expiry time in universal time, spr for the protocol, in this case HTTPS, sv for the REST API version, and sr for the resource type the SAS is for, in this case a blob. Finally, we have sig for the signature. 
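Because a SAS URL is just a resource URL plus query parameters, you can pull those parameters apart with the standard library. The URL below is a hypothetical example in the shape the portal generates; the sig value is a placeholder.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical blob SAS URL; the sig value is a placeholder, not a real token.
sas_url = (
    "https://demostorage.blob.core.windows.net/test/picture1.jpg"
    "?sp=r&st=2023-01-01T09:00:00Z&se=2023-01-01T17:00:00Z"
    "&spr=https&sv=2021-08-06&sr=b&sig=abc123"
)

params = parse_qs(urlparse(sas_url).query)
print(params["sp"])  # read-only permission
print(params["sr"])  # 'b': the SAS targets a single blob
print(params["se"])  # expiry time in UTC
```

Inspecting a SAS this way is a quick sanity check that it grants no more than you intended before you hand it out.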

One way to create a revokable SAS is to base it on an access policy. At the container level, I'll create an access policy. The add policy button enables you to create multiple policies with different attributes per container, which you can use to create multiple SASs. Give the policy a name and select the policy permissions. I'm not explicitly setting the start and end times, but we'll see it defaults to an eight-hour window. Click ok, and don't forget to save. Now, I'll create another SAS on the same blob, but this time, I'll select the stored access policy I just created. The permissions drop-down becomes disabled and is overridden by the access policy. I'll generate the SAS and paste the SAS URL into another browser tab to pull up the same image blob. We see the permissions parameter has been replaced with the si parameter, whose value is the access policy's name. I'll return to the Azure portal, delete the access policy, and click save. When I refresh the browser tab, we correctly get an authentication failure message. Instead of deleting the policy, I could have expired it. You can also add permissions to a policy to extend a SAS's access. However, the earlier service SAS, which was not created with a stored access policy, still works.
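The revocation behavior falls out of where the permissions live. A policy-based SAS carries only the policy id (si), so the service resolves permissions server-side on every request; an ad-hoc SAS has its permissions baked into the signed token. A toy model, not Azure's implementation, makes the difference concrete:

```python
# Toy model of SAS authorization, not Azure's implementation.
# Container-level stored access policies, keyed by policy name.
policies = {"read-policy": {"permissions": "r"}}

def authorize(sas_params: dict, requested: str) -> bool:
    if "si" in sas_params:
        # Policy-based SAS: permissions are looked up server-side per request.
        policy = policies.get(sas_params["si"])
        if policy is None:
            return False  # policy deleted -> every SAS referencing it is revoked
        return requested in policy["permissions"]
    # Ad-hoc SAS: permissions are baked into the token itself.
    return requested in sas_params.get("sp", "")

policy_sas = {"si": "read-policy"}
adhoc_sas = {"sp": "r"}

assert authorize(policy_sas, "r")      # works while the policy exists
del policies["read-policy"]            # delete the policy in the portal
assert not authorize(policy_sas, "r")  # policy-based SAS now fails
assert authorize(adhoc_sas, "r")       # the ad-hoc SAS is unaffected
```

The last line mirrors what we saw in the demo: deleting the policy revokes only the SASs that reference it, leaving ad-hoc SASs untouched.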

Role-based access control is Azure's preferred authorization method, and user delegation enables you to use it in conjunction with a shared access signature. I'll create another SAS on a different blob, this time selecting user delegation key as the signing method. Everything else I'll leave as default, including just read permission. I'll generate the SAS and paste it into the browser. The error here is a permission mismatch, which is a clue to user delegation's nature. 

You are delegating the shared access signature's permissions against the roles of the user that created the SAS. In other words, the SAS's permissions need to be backed up by the RBAC authorization of the SAS's creator. To view an object, the SAS will need read permission, and the creating user's RBAC will need a storage blob data reader role. To modify a resource, the SAS will need write permission, and the creating user's RBAC will need a storage blob data writer role.
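In effect, a request succeeds only where the SAS's permissions and the creator's RBAC roles overlap. A toy check of that intersection logic (the role-to-operation mapping below is a simplified illustration, not the full Azure role definitions):

```python
# Toy model: a user-delegation SAS request succeeds only when the SAS
# grants the operation AND the creator's RBAC roles back it.
# Simplified role-to-operation mapping for illustration only.
ROLE_OPS = {
    "Storage Blob Data Reader": {"read"},
    "Storage Blob Data Contributor": {"read", "write", "delete"},
}

def can_access(sas_permissions: set, creator_roles: list, operation: str) -> bool:
    rbac_ops = set().union(*(ROLE_OPS.get(r, set()) for r in creator_roles))
    return operation in sas_permissions and operation in rbac_ops

# SAS grants read, but the creator holds no data-plane role -> denied,
# which matches the permission-mismatch error we just saw.
assert not can_access({"read"}, [], "read")
# Assign a reader role to the creator -> the same SAS now works.
assert can_access({"read"}, ["Storage Blob Data Reader"], "read")
# Writing needs both write in the SAS and a writer role behind it.
assert not can_access({"read", "write"}, ["Storage Blob Data Reader"], "write")
```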

Returning to the portal and changing the authentication method from access key to Azure AD user account tells us we don't have the correct roles assigned. I'll go into access control for the container and assign an appropriate role. Typing "storage" in the search field displays the available built-in roles. I just want to see the blob, so I'll assign the Storage Blob Data Reader role to myself. Remember, this role applies to the user creating the SAS, not the user accessing the resource, as anyone possessing the SAS can use it. I assigned this role at the container level, but you can assign roles at the storage account level.

With the Storage Blob Data Reader role set up, we can now successfully change the authentication method to Azure AD user account. In the other browser, we can see the image after hitting refresh. As you'd expect, removing the reader role from the user who created the SAS will remove the signature's access to the resource. Going back into the test container's access control and removing the role assignment has the desired effect. Be patient, as it takes a couple of minutes or so for role assignments to take effect.

About the Author

Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.