Data Backup Using AWS S3cmd: A Simple and Effective Solution

Worried about the fact that your data backup plan is a little bit anemic?

AWS S3cmd can help.

Even if you’ve already got some kind of data backup system protecting your personal and work files from loss or attack, it’s probably not enough. Your data should ideally be securely and reliably stored far from your home or office, and preferably in more than one location. So even if you’ve already got a USB drive in a drawer somewhere holding an (outdated) archive, and a few more files parked on Google Drive, it can’t hurt to add another layer.

There are, however, things that get in the way. “It’s too complicated to set up” some will complain. Or “I’ve tried before, but I can never remember to actually update it regularly.”

Do you have an Amazon AWS account? Are you reasonably comfortable with your operating system’s command line? Then I’ve got a dead simple, dirt cheap, bullet-proof DIY data backup plan that you can set up just once and then completely forget about. (Although you should devote a few moments to check on it every now and then.) And it’s cheap. $0.03 per GB per month cheap.

Download and install S3cmd

If you haven’t already, you’ll need to get S3cmd working on your system. This process was thoroughly tested on Linux (without causing harm to any animals), and things should work pretty much the same on Mac. I believe that the AWS CLI for Windows has similar functionality.

First of all, if you haven’t already, install Python and wget:

sudo apt-get install python python-setuptools wget

Then, using wget, download the S3cmd package (1.5.2 is currently the latest version):

wget http://sourceforge.net/projects/s3tools/files/s3cmd/1.5.2/s3cmd-1.5.2.tar.gz

Run tar to unpack the archive:

tar xzvf s3cmd-1.5.2.tar.gz

Move into the newly created S3cmd directory:

cd s3cmd-1.5.2

…and run the install program:

sudo python setup.py install
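
To confirm that the installation worked, you can ask the tool to report its version:

s3cmd --version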

You are now ready to configure S3cmd:

s3cmd --configure

You’ll be asked to provide the Access Key ID and Secret Access Key of the AWS user account through which you’re planning to access S3, along with other authentication, encryption, and account details. The configure program will then offer to test your connectivity to S3, after which it will save your settings and you should be all ready to go.
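
Under the hood, the configure step saves these settings to a file (by default ~/.s3cfg in your home directory). Just as a rough sketch, using AWS’s well-known example credentials as placeholders rather than real keys, the important entries look something like this:

[default]
access_key = AKIAIOSFODNN7EXAMPLE
secret_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
use_https = True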

Create your data backup job

Since a data backup without data doesn’t make a lot of sense, you’ll want to identify exactly which folders and files need backing up. You’ll also want to create a new bucket from the AWS console:
[Screenshot: creating a bucket in the AWS console and selecting a bucket name and region]
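If you’d rather stay in the terminal, S3cmd can also create the bucket for you. For example, for a bucket named mybackupbucket8387:

s3cmd mb s3://mybackupbucket8387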
So let’s suppose that you keep all your important data underneath a directory called workfiles, and you’ve named your bucket mybackupbucket8387. Here’s what your backup command will look like:

s3cmd sync /home/yourname/workfiles/ s3://mybackupbucket8387/ --delete-removed

The trailing slashes on both the source and target addresses are important, by the way.
Let’s examine this command:

sync tells the tool that you want to keep the files in the source and target locations synchronized. That means each run will compare the contents of the two locations and upload any files that are new or have changed locally. The two addresses simply define the source and the target of the sync, and --delete-removed tells the tool to remove any files that exist in the S3 bucket but are no longer present locally.

Depending on how big your data backup will be, the first time you run this might take some time.
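
Since sync only copies in one direction, from the first address to the second, the same command with the addresses swapped can later serve as your restore procedure (sticking with the example paths used above):

s3cmd sync s3://mybackupbucket8387/ /home/yourname/workfiles/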

Update: to make sure you don’t accidentally remove the wrong files, it’s always a good idea to run a sync command with the dry-run argument before executing it for real:

--dry-run
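
Combined with the backup command from above, the rehearsal would look like this; S3cmd will report what it would upload or delete without actually touching anything:

s3cmd sync /home/yourname/workfiles/ s3://mybackupbucket8387/ --delete-removed --dry-run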

There are cases when you might not want to use --delete-removed. Perhaps you would prefer to keep older versions of overwritten files archived and available. To do that, simply remove the --delete-removed argument from your command line and enable Versioning on your S3 bucket.
[Screenshot: enabling Versioning on the S3 bucket]
If you’d like to reduce your costs even further and also build in an automatic delete for overwritten files that have been sitting around long enough, you could use the AWS console to create a Lifecycle rule for your bucket that transfers previous versions of files older than, say, thirty days to Glacier, whose storage costs are only $0.01 per GB per month.
[Screenshot: Lifecycle rule settings, including early deletion of Glacier objects]
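For the curious, the rule you build in the console corresponds to an S3 lifecycle configuration that looks roughly like this sketch (the rule ID and the thirty-day window are just placeholder values; the console generates this for you, so there’s no need to write it by hand):

<LifecycleConfiguration>
  <Rule>
    <ID>archive-old-versions</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <NoncurrentVersionTransition>
      <NoncurrentDays>30</NoncurrentDays>
      <StorageClass>GLACIER</StorageClass>
    </NoncurrentVersionTransition>
  </Rule>
</LifecycleConfiguration>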
So that’s a data backup that’s simple and cheap. But it’s not yet at the “set it up and forget about it” stage. There’s still one more really simple step (using my Ubuntu system, at least): create a cron job.

If you’d like to sync your files every hour, you can create a text file containing only these two lines:

#!/bin/bash
s3cmd sync /home/yourname/workfiles/ s3://mybackupbucket8387/

…and, using sudo, save the file to the directory /etc/cron.hourly/

Assuming that you named your file “mybackup”, all that’s left is to make your file executable using:

sudo chmod +x /etc/cron.hourly/mybackup
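
One caveat: scripts in /etc/cron.hourly are run as root, so S3cmd won’t necessarily find the configuration file sitting in your own home directory. If the job doesn’t seem to run, a simple fix (assuming your config lives at /home/yourname/.s3cfg) is to point to it explicitly inside the script:

s3cmd -c /home/yourname/.s3cfg sync /home/yourname/workfiles/ s3://mybackupbucket8387/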

You should test it out over a couple of hours to make sure that the backups are actually happening, but that should be the last time you’ll ever have to think about this data backup archive – at least until your PC crashes.
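
An easy way to run that test is to list the bucket’s contents after an hour or two and confirm that your latest files made it up:

s3cmd ls s3://mybackupbucket8387/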
