Database Backups


HADR Introduction
Course Introduction
Always On Availability Groups
Database Backups
HADR Summary
Duration: 1h 6m

High availability disaster recovery (HADR) is an integral part of any organization’s data strategy. It is also an essential string to every DBA's bow. In an online world that operates 24 hours a day, going offline or losing customers' data cannot be tolerated. This course examines the features that Azure provides to help you make sure your SQL databases, whether they are managed in the cloud or on-premises, are not the point of failure in your systems.

High availability disaster recovery encompasses two fundamental concepts. Firstly, how to minimize the amount of time your databases will be offline in the event of unforeseen events like hardware failures, power outages, or any number of natural disasters. Secondly, how to minimize data loss when any of these events occur. It is one thing to be offline, but it is another thing to lose data that is already in your custody.

In this course, we will cover the full range of Azure options from turnkey to custom and do-it-yourself solutions. If you have any feedback relating to this course, feel free to reach out to us.

Learning Objectives

  • Understand the concepts and elements of high availability and disaster recovery
  • Learn about hyperscale databases and how they are used
  • Learn how to combine on-premises infrastructure with the cloud to replicate and sync your data
  • Understand what Always On availability groups are and how they operate
  • Implement geo-replication and failover groups
  • Learn about the wide range of options you have for backing up your databases with Azure

Intended Audience

This course is intended for anyone who wants to implement high availability and disaster recovery procedures for their Azure SQL databases.


This is an intermediate to advanced course, so to get the most out of it, you should be familiar with SQL Server Management Studio, database operations like backup and restore, and T-SQL, and also have some familiarity with basic Active Directory concepts like users and permissions.


As we've seen, outside of the business-critical tier and costly high availability disaster recovery solutions, database backups play an essential role. While backups can be used for migrating data to another region or environment, like restoring production data to test, in a failure situation we are interested in point-in-time recovery. That is, restoring the database to the state it was in when the failure happened, or at least as near to that as possible.

In the cloud environment, whether that is Azure SQL databases or SQL Managed Instances, full database backups are taken every week and differential backups every 12 to 24 hours. On top of that, transaction logs are backed up every five to 10 minutes, with the frequency dependent on the service level tier and the amount of database activity.

The database backups are then copied to Blob storage in a different Azure region so that if there is a calamitous region-wide outage, your data is still accessible to be restored. If you're running SQL Server on-premises or in a VM, then you'll have to perform the backup procedure yourself. You have a couple of options here. One is to use Management Studio, which involves going to the database you want to back up, right-clicking, and selecting Tasks and then Back Up.

You have several options here. You can perform a full backup, a differential backup, or a transaction log backup. I'm going to perform a full backup and overwrite the existing backup sets, and that is going to a destination in the default SQL Server backup folder. I'm just going to overwrite the backup file that is already there. It's exactly the same procedure for doing a differential backup as it is for a full backup. I'll just remove that file.

Obviously you will want to choose differential from the backup type. I'm going to overwrite an existing backup file, and I'll go into Media Options and also click overwrite all existing backup sets. Clicking on Script will allow you to script the backup for future use, and I'll click OK to kick off the backup.

Finally, we can do a transaction log backup. Traditionally, these have the .trn file extension. Obviously, this is not practical if you want to automate the backup process, so what you can do is use SQL commands to perform your various backups. The first backup here will be a full backup, and because I'm going to schedule the full backup to run once a week, I'll set the expiration date to seven days from now.

The expiration date says when the backup set will expire and can be overwritten. I should really make the full backup name dynamic as well but this really just underlines the point that it will be overwritten every week.
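A sketch of what that weekly full backup might look like in T-SQL. The database name ATM_AKC comes from the demo; the file path and backup set name are assumptions for illustration:

```sql
-- Weekly full backup; EXPIREDATE marks when the backup set
-- may be overwritten (seven days from now, per the weekly schedule).
DECLARE @expiry DATETIME = DATEADD(DAY, 7, GETDATE());

BACKUP DATABASE [ATM_AKC]
TO DISK = N'C:\Backups\ATM_AKC_Full.bak'    -- illustrative path
WITH INIT,                 -- overwrite the existing backup set
     EXPIREDATE = @expiry,
     NAME = N'ATM_AKC Weekly Full';
```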

Next, we will want to do a differential backup, and I'll definitely need to make the file name dynamic using the current day, hours, and minutes. Finally, I want to do a transaction log backup. Once again, the name will be dynamic, and the only things that change are that the extension becomes .trn and the BACKUP DATABASE statement changes to BACKUP LOG.
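The differential and log backups with dynamic, timestamped file names might be sketched like this (paths are illustrative assumptions):

```sql
-- Build a timestamped file name (day, hour, minute) so each
-- differential/log backup gets its own file.
DECLARE @stamp NVARCHAR(20) = FORMAT(GETDATE(), 'yyyyMMdd_HHmm');
DECLARE @diffFile NVARCHAR(260) =
    N'C:\Backups\ATM_AKC_Diff_' + @stamp + N'.bak';
DECLARE @logFile NVARCHAR(260) =
    N'C:\Backups\ATM_AKC_Log_' + @stamp + N'.trn';

-- Differential: everything changed since the last full backup.
BACKUP DATABASE [ATM_AKC]
TO DISK = @diffFile
WITH DIFFERENTIAL;

-- Transaction log: BACKUP LOG instead of BACKUP DATABASE,
-- and the conventional .trn extension.
BACKUP LOG [ATM_AKC]
TO DISK = @logFile;
```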

Obviously, it's completely impractical to go into Management Studio every time we want to do a differential or log backup, so we need a way to automate this process, and we can do that by creating a scheduled job. That's very simple: we can create three jobs, one for the full backup, one for the differential, and one for the logs. We don't necessarily have to create three jobs; we could create one job with three steps and three schedules and have some logic in there, but for this simple case, I will create a full backup job and give it a schedule of once a week at midnight on Sunday.

The differential backup is the same process, create a job and for the step, I'll just copy and paste that SQL into the job step and change the database to ATM_AKC. Change the file extension from trn to bak and backup log to backup database. That's all good and I'm going to create a schedule for this job that will occur every six hours daily. And the same process can be replicated for the transaction logs.
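For reference, the weekly full backup job described above could also be created entirely in T-SQL with the SQL Server Agent stored procedures. This is a hypothetical sketch; the job, step, and schedule names are illustrative:

```sql
-- Create an Agent job that runs the full backup
-- every Sunday at midnight.
USE msdb;

EXEC dbo.sp_add_job
    @job_name = N'ATM_AKC Weekly Full Backup';

EXEC dbo.sp_add_jobstep
    @job_name  = N'ATM_AKC Weekly Full Backup',
    @step_name = N'Full backup',
    @subsystem = N'TSQL',
    @command   = N'BACKUP DATABASE [ATM_AKC]
                   TO DISK = N''C:\Backups\ATM_AKC_Full.bak''
                   WITH INIT;';

EXEC dbo.sp_add_schedule
    @schedule_name = N'Sunday midnight',
    @freq_type = 8,               -- weekly
    @freq_interval = 1,           -- on Sunday
    @freq_recurrence_factor = 1,  -- every week
    @active_start_time = 000000;  -- midnight

EXEC dbo.sp_attach_schedule
    @job_name = N'ATM_AKC Weekly Full Backup',
    @schedule_name = N'Sunday midnight';

-- Register the job with the local server so the Agent runs it.
EXEC dbo.sp_add_jobserver
    @job_name = N'ATM_AKC Weekly Full Backup';
```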

Before we move on from backing up in SQL Server Management Studio, I want to show you how to backup to an Azure Blob storage account. We need to authenticate with our storage account so first off we need to get our shared access signature token which you can generate from within the Azure portal.

Make sure you check the allowed resource types, you might think checking container by itself would be sufficient but it isn't. We need to create a credential that will allow the SQL Server instance to authenticate with the Blob storage account. This can be done with a script as seen here, where the shared access signature is your secret. You must remove the question mark at the beginning of the secret. After running the script, we can go down into the security node and we will see our new credential.
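The credential script would look something like this. The storage account and container names are placeholders, and the SECRET is a truncated example SAS token; note that its leading question mark has been removed:

```sql
-- The credential name must match the container URL exactly,
-- as SQL Server looks it up by URL when backing up to Blob storage.
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     -- SAS token with the leading '?' stripped (placeholder values)
     SECRET = 'sv=2021-06-08&ss=b&srt=sco&sp=rwdlac&se=...&sig=...';
```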

With the credential set up, let's back up the database. This is the same dialog as we've seen before, but this time we want to select URL as our backup destination. When we click the Add button, we enter the storage container URL as the destination.
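Once the credential exists, the equivalent T-SQL is a BACKUP ... TO URL statement (account and container names are the same placeholders as above):

```sql
-- Back up straight to Blob storage; the URL must begin with
-- the container the credential was created for.
BACKUP DATABASE [ATM_AKC]
TO URL = N'https://mystorageacct.blob.core.windows.net/sqlbackups/ATM_AKC_Full.bak';
```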

When backing up to a URL, the overwrite media option is no longer available. Now I'll just click OK to perform the backup. Going back to the portal and looking at my container, I can see the backup file sitting there. If you are running SQL Server on an Azure virtual machine, one option to look at is the Azure Site Recovery service. With this service, you can back up and replicate the whole virtual machine, or you can replicate on-premises physical and virtual machines to Azure.

Recovery services vaults give you similar functionality to Management Studio in terms of scheduling backup jobs for SQL Server running on an Azure virtual machine. Creating a recovery vault and connecting to your virtual machine will allow you to discover the databases running on the instance. You can then use a job scheduling interface similar to that in Management Studio to specify which databases, which type of backups, and how frequently you would like those backups to occur.

In terms of point-in-time recovery, it takes a combination of full, differential, and transaction log backup restores to get you back to where you were when a SQL Server database went offline. In the case of a database failure at 10:15 on Tuesday morning, to do a point-in-time restore we would first restore Sunday night's full backup, followed by the differential that was taken at midnight on Monday, and then all the transaction log backups taken since the last differential backup, restored in the order they were created.

With the full and differential backups, you are getting the database back to the state it was in first thing on Tuesday morning, and then the log backups apply all the actions, that is, the inserts, updates, and deletes that were performed against the database up until the time of the last transaction log backup. The SQL statements for performing such an operation would look like these restore statements here: the first one is the full restore, where you are replacing the existing database, the next RESTORE DATABASE statement is the differential backup, and then you would restore all the transaction logs.

In a case like this, you would probably want a script that generates your log restore statements. In terms of the restore statement options, WITH NORECOVERY means that the database is left in a recovering state, which means it is not yet usable. This is because we are doing incremental restores on top of the first full backup, which has the REPLACE option since we are replacing the database. Only the very last RESTORE LOG statement omits the NORECOVERY keyword, because after that last log restore we want to put the database back into an operational state.
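The restore sequence just described might be sketched like this (file names are illustrative):

```sql
-- 1. Full backup first: replace the existing database and
--    leave it in the restoring state for further restores.
RESTORE DATABASE [ATM_AKC]
FROM DISK = N'C:\Backups\ATM_AKC_Full.bak'
WITH REPLACE, NORECOVERY;

-- 2. The most recent differential on top of the full backup.
RESTORE DATABASE [ATM_AKC]
FROM DISK = N'C:\Backups\ATM_AKC_Diff.bak'
WITH NORECOVERY;

-- 3. Each log backup since the differential, in creation order.
RESTORE LOG [ATM_AKC]
FROM DISK = N'C:\Backups\ATM_AKC_Log_0600.trn'
WITH NORECOVERY;

-- 4. The last log restore runs WITH RECOVERY
--    to bring the database back online.
RESTORE LOG [ATM_AKC]
FROM DISK = N'C:\Backups\ATM_AKC_Log_1000.trn'
WITH RECOVERY;
```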

You can use a transaction log backup that goes past the point in time you want to restore to and do a partial restore of that log using the STOPAT option, specifying the point in time to recover to. We can also do a point-in-time restore from Management Studio by selecting Restore from the Tasks context menu. You can see here that the database backups I've done already are appearing in the backup sets to restore.
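The STOPAT option mentioned above could look like this, using the 10:15 Tuesday failure from the earlier example (the file name and date are illustrative):

```sql
-- Partial restore of a log that spans the failure time:
-- stop applying transactions at 10:15 and bring the DB online.
RESTORE LOG [ATM_AKC]
FROM DISK = N'C:\Backups\ATM_AKC_Log_1030.trn'
WITH STOPAT = N'2023-10-17T10:15:00',  -- illustrative timestamp
     RECOVERY;
```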

Over in the far-right column, we've got the expiration date that I set on the first full backup and we have a couple of transaction logs as well as the differential. You can select which files that you wish to restore with the checkboxes in the Restore column.

So if we go into Timeline and I select Specific Date and Time, we can move our point in time restore with the slider and as we move it back in time, we can see that the transaction log file was taken away from our backup set and as we move forward, it is added back in.

On the Options page, we can select whether or not to do a replace which is what I will do. We can also specify to keep replication settings or restore with Restricted User which means only the DB creator, the DB owner or the system admin are able to gain access to the database.

In the recovery state drop-down, the default is restore with recovery because you will most likely want to use the database after a restore but you can restore with no recovery or restore with standby which puts the database into a read-only state.

On the Files page, you can choose to restore to different data files or a different location. In the Azure portal, all of this complexity is hidden from view, and when you go to restore a database, you just select the restore point. Within Azure, however, you are unable to restore directly over the existing database, so you must restore to a new database, delete the old one, and then rename the newly restored one.

Within the SQL Server blade on Azure portal under Manage Backups, you can configure how long you want to keep your backups with regard to point in time restoration. If you are backing up once a week, then 14 days or longer is not applicable for a disaster recovery scenario, but keeping long-term backups would be useful if you want to keep your databases small but still have the ability to go back and revisit historical data. 

When a backup is older than the configured retention period, it will be automatically deleted. In the case of a full backup, when it is deleted, all the associated differential and transaction log backups also get deleted because, as we saw earlier, without the initial full backup those supplementary backups are no longer of any use.



About the Author

Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.