Worried about the fact that your data backup plan is a little bit anemic?
AWS S3cmd can help.
Whether or not you’ve already got some kind of data backup system protecting your personal and work files from loss or attack, it’s probably not enough. Ideally, your data should be securely and reliably stored far from your home or office, and preferably in more than one location. So even if you’ve already got a USB drive in a drawer somewhere holding an (outdated) archive, and another copy on Google Drive, it can’t hurt to add one more layer.
There are, however, things that get in the way. “It’s too complicated to set up” some will complain. Or “I’ve tried before, but I can never remember to actually update it regularly.”
Do you have an Amazon AWS account? Are you reasonably comfortable with your operating system’s command line? Then I’ve got a dead simple, dirt cheap, bullet-proof DIY data backup plan that you can set up just once and then completely forget about (although you should devote a few moments to checking on it every now and then). And it’s cheap: $0.03 per GB per month cheap.
Download and install S3cmd
If you haven’t already, you’ll need to get S3cmd working on your system. This process was thoroughly tested on Linux (without causing harm to any animals), and things should work pretty much the same on Mac. I believe that the AWS CLI for Windows has similar functionality.
First of all, if you haven’t already, install Python and wget:
sudo apt-get install python python-setuptools wget
Then, using wget, download the S3cmd package (1.5.2 is currently the latest version):
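The download command itself is missing above; assuming the package is still hosted on the project’s SourceForge archive (the exact URL below is an assumption — verify it on the s3tools project page before running), it would look something like this:

```shell
# Download the s3cmd 1.5.2 source tarball.
# NOTE: this SourceForge URL is an assumption -- check the s3tools
# project page for the current download location.
wget https://sourceforge.net/projects/s3tools/files/s3cmd/1.5.2/s3cmd-1.5.2.tar.gz
```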
Run tar to unpack the archive:
tar xzvf s3cmd-1.5.2.tar.gz
Move into the newly created S3cmd directory:
cd s3cmd-1.5.2
…and run the install program:
sudo python setup.py install
You are now ready to configure S3cmd:
s3cmd --configure
You’ll be asked to provide the Access Key ID and Secret Access Key of the AWS user account through which you’re planning to access S3, along with other authentication, encryption, and account details. The configure program will then offer to test your connectivity to S3, after which it will save your settings and you should be all ready to go.
Create your data backup job
Since a data backup without data doesn’t make a lot of sense, you’ll want to identify exactly which folders and files need backing up. You’ll also want to create a new bucket from the AWS console:
So let’s suppose that you keep all your important data underneath a directory called workfiles, and you’ve named your bucket mybackupbucket8387. Here’s what your backup command will look like:
s3cmd sync /home/yourname/workfiles/ s3://mybackupbucket8387/ --delete-removed
The trailing slashes on both the source and target addresses are important, by the way.
Let’s examine this command:
sync tells the tool that you want to keep the files in the source and target locations synchronized. That means an update will first compare the contents of both locations, then upload any files that are new or have changed locally. The two addresses simply define which two data locations are to be synced, and --delete-removed tells the tool to remove any files that exist in the S3 bucket but are no longer present locally.
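To make the sync-with-deletion behavior concrete, here is a toy illustration using two local directories — src standing in for your local workfiles, dst for the S3 bucket. (This is just a sketch of the semantics, not how s3cmd is implemented.)

```shell
# Toy illustration of sync + --delete-removed semantics with local dirs.
mkdir -p src dst
echo "fresh" > src/new.txt   # present locally, missing remotely -> copied
echo "stale" > dst/old.txt   # present remotely, deleted locally -> removed

cp src/* dst/                # "upload" anything new or changed
for f in dst/*; do
  # mirror local deletions: anything in dst with no src counterpart goes
  [ -e "src/$(basename "$f")" ] || rm "$f"
done

ls dst                       # dst now matches src exactly
```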
Depending on how big your data backup will be, the first time you run this might take some time.
Update: to make sure you don’t accidentally remove the wrong files, it’s always a good idea to run a sync command with the --dry-run argument before executing it for real:
s3cmd sync --dry-run --delete-removed /home/yourname/workfiles/ s3://mybackupbucket8387/
There are cases when you might not want to use --delete-removed. Perhaps you would prefer to keep older versions of overwritten files archived and available. To do that, simply remove the --delete-removed argument from your command line and enable Versioning on your S3 bucket.
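You can enable Versioning from the AWS console, but if you happen to have the AWS CLI installed and configured, one way to do it from the command line is:

```shell
# Turn on versioning for the example bucket from above.
# Assumes the AWS CLI is installed and has credentials configured.
aws s3api put-bucket-versioning \
  --bucket mybackupbucket8387 \
  --versioning-configuration Status=Enabled
```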
If you’d like to reduce your costs even further and also build in an automatic delete for overwritten files that have been sitting long enough, you could use the AWS console to create a Lifecycle rule for your bucket that will transfer previous versions of files older than, say, thirty days to Glacier – whose storage costs are only $0.01 per GB per month.
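As a sketch of what such a rule looks like outside the console — assuming, again, a configured AWS CLI — a Lifecycle rule that moves noncurrent (i.e., overwritten) versions to Glacier after thirty days could be applied like this:

```shell
# Hypothetical lifecycle rule: move noncurrent versions older than
# 30 days to Glacier. Bucket name is the running example from above.
aws s3api put-bucket-lifecycle-configuration \
  --bucket mybackupbucket8387 \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-old-versions",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionTransitions": [
        {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
      ]
    }]
  }'
```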
So that’s a data backup that’s simple and cheap. But it’s not yet at the “set it up and forget about it” stage. There’s still one more really simple step (using my Ubuntu system, at least): create a cron job.
If you’d like to sync your files every hour, you can create a text file containing only these two lines:
#!/bin/bash
s3cmd sync /home/yourname/workfiles/ s3://mybackupbucket8387/
…and, using sudo, save the file to the directory /etc/cron.hourly/
Assuming that you named your file “mybackup”, all that’s left is to make your file executable using:
sudo chmod +x mybackup
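The steps above can be combined into one sequence — creating the script in your current directory first, with the final move into /etc/cron.hourly/ left commented out so you can inspect the file before installing it:

```shell
# Assemble the two-line backup script and mark it executable.
cat > mybackup <<'EOF'
#!/bin/bash
s3cmd sync /home/yourname/workfiles/ s3://mybackupbucket8387/
EOF
chmod +x mybackup

# When you're happy with it, install it into cron's hourly directory:
# sudo mv mybackup /etc/cron.hourly/
```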
You should test it out over a couple of hours to make sure that the backups are actually happening, but that should be the last time you’ll ever have to think about this data backup archive – at least until your PC crashes.