Implement Azure Storage Blobs and Azure Files
Implement Storage Tables
Implement Azure Storage Queues
Implement SQL Databases
This course teaches you how to work with Azure Storage and its associated services.
By the end of this course, you'll have gained a firm understanding of the key components that comprise the Azure Storage platform, achieving the following learning objectives:
- How to understand the various components of Azure storage services.
- How to implement and configure Azure storage services.
- How to manage access and monitor your implementation.
This course is intended for individuals who wish to pursue the Azure 70-532 certification.
You should have work experience with Azure and general cloud computing knowledge.
This Course Includes
- 1 hour and 17 minutes of high-definition video.
- Expert-led instruction and exploration of important concepts surrounding Azure storage services.
What You Will Learn
- An introduction to Azure storage services.
- How to implement Azure storage blobs and Azure files.
- How to implement storage tables.
- How to implement storage queues.
- How to manage access and monitor storage.
- How to implement SQL databases.
Hello and welcome back. We'll now cover security aspects of developing with Azure, exploring how we can manage and restrict access to data held in Azure. We'll first cover how to create Shared Access Signatures, including handling renewals of SAS tokens and using validation to ensure data integrity. We'll then describe how to create Stored Access Policies and how to regenerate Storage Account Keys. Finally, we'll describe how you can use Cross-Origin Resource Sharing, known as CORS.
Azure offers comprehensive security features to allow you to protect your data. You have overall access control through a Storage Account Key. More granular control of access can be set up using a Shared Access Signature, also known as a SAS, to allow access to specific elements for a defined period. A service-level SAS allows access to specific resources in the storage account. An account-level SAS provides further rights beyond a service-level SAS, for example creation rights, and can give access to multiple service types such as Blobs and Queues.
You can use Stored Access Policies to help manage the creation of Shared Access Signatures. These can be used to group Shared Access Signatures and to provide additional restrictions for signatures that are bound by the policy. Standard browser security rules block a web application from accessing data in a domain other than its own; to allow such access to data stored in Azure, Azure provides support for Cross-Origin Resource Sharing, known as CORS.
All storage is protected by use of a Storage Account Key, but once you have this key you have unlimited access to all storage items, indefinitely. Shared Access Signatures apply tighter control by restricting access to particular items and removing access after a specific time period. A SAS token can also specify which operations you're allowed to perform; for example, you may be given just list and read access, but not access to write to the storage. The SAS token is a URI that defines, through its query parameters, all of the information necessary for authenticated access to Azure storage. To use it, you simply provide the SAS token to the constructor of the client class you are using.
This example provides access to a specific Blob, blob.txt. The sv parameter defines the service version. The sr parameter defines the type of storage object, in this case a Blob, with sr=b. The sig parameter is the signature that authorizes access. The st and se parameters define the period of access. The sp parameter defines the type of access allowed, with, in this case, read and write granted (rw). You could also add an sip parameter to restrict access to a range of IP addresses, and an spr parameter to define the protocol allowed, for example limited to HTTPS-only access.
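As a quick illustration of how these query parameters hang together, the sketch below parses a SAS-style URI with Python's standard library. The URI, container, and signature value here are hypothetical placeholders, not a real token.

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical SAS URI modelled on the example above; the sig value
# is a placeholder, not a real signature.
sas_uri = (
    "https://myaccount.blob.core.windows.net/mycontainer/blob.txt"
    "?sv=2020-08-04&sr=b&sig=PLACEHOLDER"
    "&st=2023-01-01T00%3A00%3A00Z&se=2023-01-02T00%3A00%3A00Z&sp=rw"
)

parsed = urlparse(sas_uri)
# parse_qs returns lists of values; each SAS parameter appears once.
params = {k: v[0] for k, v in parse_qs(parsed.query).items()}

print(params["sr"])  # "b": the token targets a single Blob
print(params["sp"])  # "rw": read and write permissions
```

Inspecting tokens this way can be handy when debugging why a request was rejected, for example spotting an expired se value.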
SAS tokens have an expiry date, so we'll need to consider a token renewal approach. Clients should renew ahead of the expiry time to allow for any failed attempts to be retried. In designing a renewal approach, you need to balance the increased security achieved by only allowing short-lived SAS tokens against the increased inconvenience and effort of having to regularly renew them. In designing the access policy and associated code, you need to be particularly careful when you grant write access in a SAS.
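The "renew ahead of expiry" idea can be sketched as a simple check. The ten-minute buffer here is an arbitrary illustrative choice, not a recommendation from Azure.

```python
from datetime import datetime, timedelta, timezone

def should_renew(expiry: datetime, buffer: timedelta = timedelta(minutes=10)) -> bool:
    """Return True when the token is within `buffer` of expiring,
    leaving headroom for failed renewal attempts to be retried."""
    return datetime.now(timezone.utc) >= expiry - buffer

# A token with an hour of validity left does not yet need renewal.
expiry = datetime.now(timezone.utc) + timedelta(hours=1)
print(should_renew(expiry))  # False
```

A client would run this check before each use of the token (or on a timer) and request a fresh SAS when it returns True.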
You need to avoid data corruption, whether accidental or deliberate, so you should include suitable validation code to be run before data modified by a third party is used by your application. As mentioned, Stored Access Policies help manage the creation of Shared Access Signatures: they can group Shared Access Signatures and provide additional restrictions for signatures bound by the policy. Stored Access Policies are defined on a resource container: a Blob container, Table, Queue, or File Share. A SAS can be associated with a Stored Access Policy and then inherits the restrictions in that policy. Therefore, one way to quickly terminate a number of SAS tokens is to delete the policy or set the policy to expire immediately. It is typical practice to have a policy with a very distant expiry time, or even no expiry, and use the SAS dates for normal control over expiration; the policy expiry date would only be used when you wanted to immediately terminate all related SAS tokens.
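The revocation relationship can be modelled in a few lines: a SAS bound to a Stored Access Policy inherits the policy's constraints, so deleting the policy immediately invalidates every SAS that references it. The policy and token names below are hypothetical.

```python
# Toy model: a policy collection on a container, and SAS tokens that
# reference a policy by name rather than carrying their own constraints.
policies = {"scifi-read": {"permissions": "rl", "expiry": "2030-01-01"}}
sas_tokens = [{"policy": "scifi-read"}, {"policy": "scifi-read"}]

def is_valid(token: dict) -> bool:
    # A SAS bound to a policy is only valid while that policy exists.
    return token["policy"] in policies

print(all(is_valid(t) for t in sas_tokens))  # True: policy in place
del policies["scifi-read"]                   # revoke the policy...
print(any(is_valid(t) for t in sas_tokens))  # False: all bound tokens die
```

This is why binding tokens to a policy is attractive: one server-side change terminates any number of outstanding tokens at once.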
For security purposes you may want to periodically change the storage account keys, or, if you believe the keys have been compromised, you may wish to regenerate them, invalidating the old keys. You get two keys that give you access to the storage account, which allows you to maintain continuous operation of your applications during the regeneration process. To regenerate keys without causing a service outage, you can switch all access to use one of the keys, then regenerate the other key; once finished, you switch to using this new key and then regenerate the remaining old key.
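The zero-downtime rotation sequence can be sketched as follows. `app_config` and `regenerate` are hypothetical stand-ins for your own configuration store and a call to the Azure management API; the random value is just a placeholder for the key the service would hand back.

```python
import secrets

def regenerate(keys: dict, which: str) -> None:
    # Stand-in for the management API call that issues a fresh key.
    keys[which] = secrets.token_urlsafe(32)

keys = {"key1": "OLD-PRIMARY", "key2": "OLD-SECONDARY"}
app_config = {"active_key": keys["key1"]}

# 1. With all applications using key1, regenerate the unused key2.
regenerate(keys, "key2")
# 2. Switch applications over to the fresh key2...
app_config["active_key"] = keys["key2"]
# 3. ...then regenerate the now-unused key1.
regenerate(keys, "key1")

print(app_config["active_key"] == keys["key2"])  # True: no outage window
```

At every step, the key the applications are actively using is never the one being regenerated, which is what keeps the service available throughout.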
Cross-Origin Resource Sharing, or CORS, allows applications in one domain to access objects in a different domain. For security reasons, cross domain operations like this are blocked by browsers. You can use CORS on all storage items, be they Blobs, Files, Tables, or Queues to enable cross domain resource access in the browser. You do however need to enable this feature as it is disabled by default.
Let's now have a look at access control configuration, including creating and using SAS tokens, creating a Stored Access Policy, and enabling CORS. We'll also demonstrate how to regenerate Storage Account Keys in the Azure portal.
Let's go ahead and create a SAS token for our Blobs. The code required to do this for other types of storage will be very similar. As with all the previous examples, we'll do our standard set-up and get a reference to the Blobs we're interested in. We can now create the SAS constraints as a SharedAccessBlobPolicy. We provide start and expiry dates and then permissions: read and write. Calling GetSharedAccessSignature with the constraints we set up gives us a token string. In our demo we combine the Blob's URI with the SAS parameters and write this to the console.
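Under the hood, the sig value of a SAS is an HMAC-SHA256 signature computed with the account key over a canonical string-to-sign. The sketch below shows only that signing mechanism; the string-to-sign here is a simplified illustration, not Azure's exact canonical format (which has a precise field order covering permissions, dates, resource, identifier, IP range, protocol, and version), and the key is made up.

```python
import base64
import hashlib
import hmac

def sign_sas(account_key_b64: str, string_to_sign: str) -> str:
    """HMAC-SHA256 over a string-to-sign, base64-encoded, as used for
    the sig query parameter. Simplified sketch, not the exact Azure
    canonicalization."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Hypothetical account key and a simplified, illustrative string-to-sign.
fake_key = base64.b64encode(b"not-a-real-account-key").decode("utf-8")
token_sig = sign_sas(
    fake_key, "rw\n2023-01-01T00:00Z\n2023-01-02T00:00Z\n/movies8/blob.txt"
)
print(token_sig)  # deterministic for a given key and string-to-sign
```

Because the signature is deterministic, the service can recompute it from the other query parameters and reject any token whose parameters have been tampered with.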
What if we tried to use the SAS token we just created to perform a deletion, an action that is not granted by this token? Let's go ahead and define the Blob by providing the SAS token to the CloudBlockBlob constructor. We can now download the file, which works fine, as read was one of the permissions granted by the token. However, if we try to delete this Blob by calling Delete on it, we'll get an exception, which we trap in this example.
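The behaviour in this demo can be reduced to a toy permission check: operations covered by the token's permission string succeed, anything else raises an error that the caller traps. The function and permission letters below are illustrative, loosely mirroring the SAS sp parameter.

```python
def perform(operation: str, granted: str) -> bytes:
    """Simulate a storage call: succeed only if the single-letter
    operation code appears in the granted permission string."""
    if operation not in granted:
        raise PermissionError(f"operation '{operation}' not permitted by this SAS")
    return b"blob contents"

granted = "rw"                 # read and write, but not delete ("d")
print(perform("r", granted))   # download succeeds
try:
    perform("d", granted)      # delete is not granted...
except PermissionError as e:
    print("trapped:", e)       # ...so we trap the resulting error
```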
Let's now have a look at creating a Stored Access Policy on the scifi container. After our standard set-up, and once we have a reference to the scifi container, we can go ahead and define the policy. We start by defining a name for the Stored Access Policy. We then create a new set of policy parameters, which include an expiry date and Blob access rights. We then get the existing permissions on the container and add our new policy to the existing collection of Shared Access Policies, after which we can call SetPermissions to update the container's permission set. The next line of code fetches the permissions on the container again, and we retrieve our new policy, which we expect to be in the container's collection of Shared Access Policies, by name. We'll then print that policy to the console.
Now let's have a quick look at how we can regenerate access keys in the Azure portal. Regenerating the keys is as simple as going to the settings for the storage account and selecting the section named Access keys. So let's select our Movies8 storage account. Once the settings are displayed, we can click on Access keys and then click the button at the top that says Regenerate secondary. We click on that, select Yes, and that's it: the key is regenerated. Remember that this action invalidates the previous key, so ensure that you're not using it anywhere, to avoid a service disruption.
Finally, let's have a look at how we can define a Cross-Origin Resource Sharing, or CORS, policy. Once we have a reference to our Blob Client, we need to call the GetServiceProperties method. The ServiceProperties object returned by this call has a Cors property, which defines the Cross-Origin Resource Sharing configuration for our Blobs. We can create a CORS policy by creating a new instance of the CorsProperties class and assigning it to the Cors property. The CorsProperties class has a CorsRules collection to which we can add our new rule. Our new rule requires us to define various properties, which we won't talk about in depth here.
This example creates a very permissive configuration, allowing Cross-Origin Resource requests from any domain and allowing resources to be fetched by pretty much any HTTP method, such as GET, POST, HEAD, and PUT. The specific details will depend on your scenario. Moving on, having configured our CORS rule, we can add it to the CorsRules collection and then call SetServiceProperties to apply the changes. In our example application we then re-fetch the service properties to show that the new rule has been applied, by printing it to the console.
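To make the shape of such a rule concrete, here is the permissive configuration expressed as plain data, together with the origin-and-method check the browser's preflight logic effectively performs. Field names loosely mirror the storage service's CORS settings; the exact property names in your SDK may differ.

```python
# Hypothetical, very permissive CORS rule like the one described above.
cors_rule = {
    "allowed_origins": ["*"],                       # any domain
    "allowed_methods": ["GET", "POST", "HEAD", "PUT"],
    "allowed_headers": ["*"],
    "exposed_headers": ["*"],
    "max_age_in_seconds": 3600,                     # preflight cache lifetime
}

def allows(rule: dict, origin: str, method: str) -> bool:
    """Roughly what a preflight check decides: is this origin and
    HTTP method permitted by the rule?"""
    origin_ok = "*" in rule["allowed_origins"] or origin in rule["allowed_origins"]
    return origin_ok and method in rule["allowed_methods"]

print(allows(cors_rule, "https://example.com", "GET"))     # True
print(allows(cors_rule, "https://example.com", "DELETE"))  # False: not listed
```

In production you would normally narrow allowed_origins to the specific domains hosting your web application rather than using a wildcard.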
Stay tuned for the next section, where we'll see how we can monitor Azure storage using metrics and logging.
Isaac has been using Microsoft Azure for several years now, working across the various aspects of the service for a variety of customers and systems. He’s a Microsoft MVP and a Microsoft Azure Insider, as well as a proponent of functional programming, in particular F#. As a software developer by trade, he’s a big fan of platform services that allow developers to focus on delivering business value.