Implement Azure Storage Blobs and Azure Files
Implement Storage Tables
Implement Azure Storage Queues
Implement SQL Databases
This course teaches you how to work with Azure Storage and its associated services.
By the end of this course, you'll have gained a firm understanding of the key components that make up the Azure Storage platform. You will achieve the following learning objectives:
- Understand the various components of Azure storage services.
- Implement and configure Azure storage services.
- Manage access and monitor your implementation.
This course is intended for individuals who wish to pursue the Azure 70-532 certification.
You should have work experience with Azure and general cloud computing knowledge.
This Course Includes
- 1 hour and 17 minutes of high-definition video.
- Expert-led instruction and exploration of important concepts surrounding Azure storage services.
What You Will Learn
- An introduction to Azure storage services.
- How to implement Azure storage blobs and Azure files.
- How to implement storage tables.
- How to implement storage queues.
- How to manage access and monitor storage.
- How to implement SQL databases.
Hello, and welcome back. We're now on the last section of this course, the topic of Azure SQL Database. In this section, we'll cover the key objectives associated with SQL Database. We'll start with what to consider when deciding which SQL Database tier to select. We'll then move on to how you can set up recovery options for your SQL database, and we'll describe the geo-replication features. Finally, we'll cover importing and exporting data from your SQL database, and the topic of scaling and elastic pools.
Choosing a tier starts with choosing the broad service level. You might choose Basic for light workloads such as new projects, testing, development, or even learning; Standard for most online transaction processing (OLTP) databases; or Premium for high scale, where performance is a key factor. The Basic tier has only a single low-volume, low-performance option. The other two tiers provide options on size and performance in terms of Database Transaction Units, known as DTUs. DTUs define the available resources in terms of CPU, memory, and disk I/O. If your application grows and you need more capacity and/or better performance, you can easily change the tier using the Azure portal.
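As a rough sketch of what a tier change looks like outside the portal, the PowerShell below moves a database to the Standard tier at the S2 performance level. The resource group, server, and database names are placeholders, and it assumes the Az PowerShell module is installed and you are signed in to your subscription:

```powershell
# Sketch only: scale the 'movies' database to Standard S2.
# "my-rg" and "my-sqlserver" are placeholder names.
Set-AzSqlDatabase -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -DatabaseName "movies" `
    -Edition "Standard" `
    -RequestedServiceObjectiveName "S2"
```

The same change can be made from the database itself with Transact-SQL (`ALTER DATABASE ... MODIFY (SERVICE_OBJECTIVE = 'S2')`), but the portal and PowerShell routes are the ones covered in this course.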
Azure SQL Database provides automated backups that are retained for seven days on the Basic tier, or 35 days on the Standard and Premium tiers. You can therefore do point-in-time recoveries using these backups. You can use up to 200% of your maximum database storage as backup storage at no additional cost. Full backups are taken weekly, with differential backups daily. The database logs are backed up every five minutes.
Point-in-time restore allows you to restore an existing database as a new database to an earlier point in time on the same logical server. You cannot overwrite the existing database. You can restore to an earlier point in time using the Azure Portal, PowerShell, or the REST API.
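As an illustration of the PowerShell route, the sketch below restores a database to a new copy as it existed an hour ago. All resource names are placeholders, and it assumes the Az module and an authenticated session:

```powershell
# Sketch only: point-in-time restore of 'movies' to a new database
# named 'movies_restored'. "my-rg" and "my-sqlserver" are placeholders.
$db = Get-AzSqlDatabase -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" -DatabaseName "movies"

Restore-AzSqlDatabase -FromPointInTimeBackup `
    -PointInTime (Get-Date).AddHours(-1) `
    -ResourceGroupName $db.ResourceGroupName `
    -ServerName $db.ServerName `
    -TargetDatabaseName "movies_restored" `
    -ResourceId $db.ResourceId
```

Note that the target must be a new database name on the same logical server; the restore never overwrites the source database.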
Azure SQL Database provides further data protection by automatically replicating the backups to different regions, which is known as geo-replication. This gives protection from failures affecting a specific data center, or multiple data centers in a geographical area, as you can use geo-restore to restore your database to a data center that is operational. You can optionally purchase Active Geo-Replication, which gives a lower recovery point objective (RPO), meaning less recent data lost due to a failure, and a shorter estimated recovery time (ERT). This is achieved by configuring up to four secondary databases in the same or different data centers, which are available for querying and for failover in the case of a data center outage or the inability to connect to the primary database.
You can enable and configure Active Geo-Replication using the Azure Portal, PowerShell, Transact-SQL, or even the REST API.

SQL Database also provides options to import and export data, supported through BACPAC files. A BACPAC file holds all the schema details, but when exporting, you can choose to exclude the data from some or all of the tables. You can carry out an import or export using the Azure Portal or PowerShell.

Scaling is supported using elastic pools. With an elastic pool, multiple databases share a single pool of resources, measured in elastic DTUs (eDTUs), rather than each database having its own fixed allocation. This allows you to design an application that is cost-effective and performs well when database load is highly variable: you pay for the pool's resources, and individual databases draw more or less from the pool as their load changes. It's worth noting that there is also a preview offering called the Elastic Database tools, which provides more sophisticated scale-out features built on top of sharded databases.
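To make the elastic pool idea concrete, here is a hedged PowerShell sketch that creates a Standard-tier pool and moves an existing database into it. The names and eDTU figures are placeholders, and it assumes the Az module and a signed-in session:

```powershell
# Sketch only: create a Standard pool with 100 eDTUs, where each database
# can use between 0 and 50 eDTUs, then move 'movies' into the pool.
New-AzSqlElasticPool -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -ElasticPoolName "movies-pool" `
    -Edition "Standard" -Dtu 100 `
    -DatabaseDtuMin 0 -DatabaseDtuMax 50

# Moving a database into a pool is just a database update.
Set-AzSqlDatabase -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -DatabaseName "movies" `
    -ElasticPoolName "movies-pool"
```

The per-database minimum and maximum settings are what let a busy database borrow capacity while idle databases cost the pool nothing extra.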
This brings us to the final demo of this course, and in this demo, we'll cover recovering a SQL database, setting up Active Geo-Replication, and exporting data using the Azure Portal. I'm here in Visual Studio, and I've loaded up a movies8 database that I have in Azure, and I've opened a table I created with some sample data in it, including a row that says New Changes. If we go to the Azure Portal and restore to an earlier point in time, we'll see that this table no longer exists in the restored copy. Let's go to the Azure Portal now.
I'm here in the Azure Portal, and I've got our database loaded up. I've selected Restore, the option at the top, and now I'm going to restore to the oldest available point in time, which is 11:21. Once we've done that, we can give our new database a distinct name, so we'll call this one movies_restored. We'll accept the defaults and just click OK. That begins the process of restoring the database. When that's done, we can go back to Visual Studio, take a look at the new database created from that restore point, and see that the table no longer exists. Now if we look in Visual Studio, I've already loaded up the new database, movies_restored, and we can see that there is no table under here called dbo.Table.
Now let's have a look at configuring geo-replication for an existing database. After opening the Portal and selecting the database, in our case the movies database, we can configure Active Geo-Replication by selecting the Geo-Replication option in the High Availability section under Settings. Click on there, and then we'll choose a target region; in our case, we'll use West US. You can now define whether the secondary database is going to be readable or not. This is the key option that determines whether the secondary database is up and running for queries at all times, or whether it stays replicated but inaccessible while the primary is up, only being brought online when the primary goes down.
For this example, let's just create a readable secondary, so we'll leave that setting as it is. Then we'll click Configure Target Server. We now need to define the target server that will host the secondary database. We can optionally add the database to an elastic pool as well, but we won't be doing that in this section. Let's just give our database a name. After the secondary database is created and seeded, data begins replicating from the primary database to the new secondary, and geo-replication has been configured.
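The same configuration can be scripted instead of clicked through. The sketch below, with placeholder resource and server names and assuming the Az module, creates a readable secondary on a partner server in another region:

```powershell
# Sketch only: create a readable geo-replicated secondary of 'movies'
# on a server in West US. All names are placeholders; the partner
# server must already exist.
$db = Get-AzSqlDatabase -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" -DatabaseName "movies"

$db | New-AzSqlDatabaseSecondary `
    -PartnerResourceGroupName "my-rg-westus" `
    -PartnerServerName "my-sqlserver-westus" `
    -AllowConnections "All"
```

Setting `-AllowConnections "All"` is what makes the secondary readable; a later failover is performed against the secondary with `Set-AzSqlDatabaseSecondary -Failover`.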
Finally, let's look at doing a data export. Back in the Portal, with the database selected, we can export it to a BACPAC file. We click the Export option to start this process. We can provide a relevant name for this one; let's just leave it as the default, which includes today's date and time. We then configure it to be saved in a storage account. Let's select the movies8 storage account and our backups container, click Select, and then provide the password to complete this process. Click OK, and we kick off the export process. When that completes, we'll have a BACPAC file in the backups container of our Blob storage.
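For completeness, the export can also be scripted. The following PowerShell sketch, with placeholder storage account, server, and credential names, and assuming the Az module, exports the database to a BACPAC file in a blob container:

```powershell
# Sketch only: export 'movies' to a BACPAC in the 'backups' container.
# Account, server, and login names are placeholders.
$storageKey = (Get-AzStorageAccountKey -ResourceGroupName "my-rg" `
    -StorageAccountName "movies8storage")[0].Value

New-AzSqlDatabaseExport -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -DatabaseName "movies" `
    -StorageKeyType "StorageAccessKey" `
    -StorageKey $storageKey `
    -StorageUri "https://movies8storage.blob.core.windows.net/backups/movies.bacpac" `
    -AdministratorLogin "sqladmin" `
    -AdministratorLoginPassword (Read-Host -AsSecureString "Password")
```

The export runs asynchronously; the cmdlet returns a status object you can poll until the BACPAC appears in the container.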
This concludes the final demonstration of this course, but stay tuned, and I'll give you a quick recap and some pointers to where you can learn some more information.
About the Author
Isaac has been using Microsoft Azure for several years now, working across the various aspects of the service for a variety of customers and systems. He’s a Microsoft MVP and a Microsoft Azure Insider, as well as a proponent of functional programming, in particular F#. As a software developer by trade, he’s a big fan of platform services that allow developers to focus on delivering business value.