Tagging Along in AWS: EC2 Tags in Action

(Update) We’ve recently published a new post on AWS Tagging Best Practices, where we review what to keep in mind when tagging your resources and why tags are important for your business.


One of the most under-appreciated features of EC2 I’ve found is tagging. Almost any resource in EC2 can be tagged via a key-value pair, including volumes, instances, and snapshots. There are a few restrictions, such as limits on the length of tag keys and values, and certain AWS-reserved prefixes you cannot use. But, as usual, the best place to confirm the details is the official AWS documentation.

In this post, I’d like to demonstrate a potential use case for tagging, along with a practical example of how to access these tags programmatically via the AWS SDK for Java.

EC2 Tagging

In practice, the first tag you’re likely to encounter when starting out with EC2 is the (optional) Name tag presented by the launch wizard:
Tagging step in EC2 instance creation
An important concept to keep in mind is that tags do not carry semantic meaning, which gives you free rein to interpret the values as you deem fit.
So let’s get our hands dirty with an example…

EC2 tags use case and a potential solution

Let’s say you’re running two EC2 on-demand instances on your AWS account. One of them hosts your live server, and the other runs your staging environment for testing. On the AWS Management Console, it might look something like this:
The two instances as they appear in the EC2 Management Console
Like the good, responsible engineer that you are, you’d soon realize the need to periodically back up the block devices (storage) on your live environment in case it gets corrupted, or an inexperienced engineer on your team accidentally messes up the configuration beyond repair. You could, of course, do this manually from the Management Console. However, as your number of instances starts growing, a much better solution is to do it programmatically and execute that code as often as you feel necessary via a cron job or a scheduled task.

Via tags, you could approach it this way:

  1. Create a tag with the same key (for example ‘Stack’) on both instances. On your live instance(s), give it the value ‘Production’; on the staging ones, give it the value ‘Staging’.
  2. Via the AWS SDK, get a list of the instances on your account, and if the tag with the key ‘Stack’ has the value ‘Production’, you know you can take a snapshot of that instance’s storage volume(s).

Creating Tags

With this strategy in mind, let’s put it into action.

On the management console, select your live instance, select the ‘Tags’ tab and click on ‘Add/Edit Tags’.
In the Key field, type ‘Stack’ and in the Value type ‘Production’. You should end up with something that looks like this:
The ‘Stack’ tag with the value ‘Production’ on the live instance
For your staging instance, create another tag with the same Key value, but in the Value field, type ‘Staging’ instead.
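You don’t have to use the console for this step, by the way. If you prefer to create the tags from code, a minimal sketch using the same SDK could look like this (the EC2 client setup is covered in the next section, and the instance ID below is just a hypothetical placeholder):

        //Tags an instance with Stack=Production (the instance ID is a placeholder)
        CreateTagsRequest vTagRequest = new CreateTagsRequest()
                .withResources("i-0123456789abcdef0")
                .withTags(new Tag("Stack", "Production"));
        ec2.createTags(vTagRequest);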

That’s it. You’re all set for the programming part. We’re using Java for this example, for which Amazon generously provides an SDK to interact with their services.

AWS Java SDK tag iteration

Provided that your access keys have been configured correctly, you should be able to authenticate your SDK session and gain access to your resources. Remember to set the region as well:

    private static final String STACK_TAG_KEYNAME = "Stack";
    private static final SimpleDateFormat sdf = //To date your backups
            new SimpleDateFormat("yyyy-MM-dd_HH_mm_ss");
    private static Log log;
    private static AmazonEC2Client ec2;

    public static void main(String[] args) {
        AWSCredentials credentials = null;
        log = LogFactory.getLog(SimpleLog.class);
        log.info("Starting EC2 live backups");
        try {
            credentials = new ProfileCredentialsProvider()
                    .getCredentials();
        } catch (Exception e) {
            throw new AmazonClientException(
                    "Cannot load the credentials from the "
                    + "credential profiles file. "
                    + "Please make sure that your "
                    + "credentials file is at the correct "
                    + "location (~/.aws/credentials), and is in valid format.",
                    e);
        }
        ec2 = new AmazonEC2Client(credentials);
        //eu-west-1 is the EU (Ireland) region
        ec2.setRegion(Region.getRegion(Regions.EU_WEST_1));
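For reference, the snippets in this post assume roughly the following imports from the AWS SDK for Java (v1) and Apache Commons Logging; adjust to your own project layout:

    import java.text.SimpleDateFormat;
    import java.util.ArrayList;
    import java.util.Date;
    import java.util.Iterator;
    import java.util.List;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.commons.logging.impl.SimpleLog;
    import com.amazonaws.AmazonClientException;
    import com.amazonaws.auth.AWSCredentials;
    import com.amazonaws.auth.profile.ProfileCredentialsProvider;
    import com.amazonaws.regions.Region;
    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.ec2.AmazonEC2Client;
    import com.amazonaws.services.ec2.model.*;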

Getting access to all the instances is pretty straightforward: describeInstances() returns every instance on the account, grouped into reservations. We then check each instance for a tag key and value that match our criteria to identify the ones we’re interested in backing up:

        //Gets all reservations (groups of instances) for your region
        Iterator<Reservation> vReservations = ec2.describeInstances()
                .getReservations().iterator();
        List<Instance> vInstancesToBackUp = new ArrayList<Instance>();
        //Step through all the reservations...
        Reservation vResItem = null;
        while (vReservations.hasNext()) {
            //For each reservation, get the instances
            vResItem = vReservations.next();
            Iterator<Instance> vInstances = vResItem.getInstances().iterator();
            //For each instance, get the tags associated with it.
            while (vInstances.hasNext()) {
                Instance vInstanceItem = vInstances.next();
                List<Tag> pTags = vInstanceItem.getTags();
                Iterator<Tag> vIt = pTags.iterator();
                while (vIt.hasNext()) {
                    Tag item = vIt.next();
                    //If the tag key and value match what we're looking for,
                    //mark this instance for backup
                    if (item.getKey().equals(STACK_TAG_KEYNAME)
                     && item.getValue().equals("Production")) {
                        vInstancesToBackUp.add(vInstanceItem);
                    }
                }
            }
        }
        log.info("Number of instances to back up:" + vInstancesToBackUp.size());

Once we’ve got the list of instances we’re interested in, we create a snapshot of the block devices on each of them. We’ll use the instance name in the description of the snapshot to confirm that we’ve backed up the correct instance (a sketch of the getInstanceName helper used here follows the listing):

        for (Instance item : vInstancesToBackUp) {
            List<InstanceBlockDeviceMapping> devices = item.getBlockDeviceMappings();
            //For each block device, take a snapshot of its EBS volume
            for (InstanceBlockDeviceMapping blockMapping : devices) {
                log.info("Creating snapshot for device " + blockMapping.getDeviceName());
                CreateSnapshotRequest csr = new CreateSnapshotRequest(
                        blockMapping.getEbs().getVolumeId(),
                        "SnapshotOf_" + getInstanceName(item) + " on " + sdf.format(new Date()));
                CreateSnapshotResult result = ec2.createSnapshot(csr);
                log.info("Snapshot ID created=" + result.getSnapshot().getSnapshotId());
            }
        }
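The getInstanceName helper used in the snapshot description above isn’t part of the original listing; a minimal sketch that simply reads the instance’s ‘Name’ tag (falling back to the instance ID when no Name tag is set) could look like this:

    //Returns the value of the 'Name' tag, or the instance ID if no Name tag is set
    private static String getInstanceName(Instance pInstance) {
        for (Tag tag : pInstance.getTags()) {
            if ("Name".equals(tag.getKey())) {
                return tag.getValue();
            }
        }
        return pInstance.getInstanceId();
    }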

If everything goes well, the code should produce output like the following when executed:

run:
Oct 13, 2014 9:52:51 PM org.apache.commons.logging.impl.SimpleLog main
INFO: Starting EC2 live backups
Oct 13, 2014 9:52:54 PM org.apache.commons.logging.impl.SimpleLog main
INFO: Number of instances to back up:1
Oct 13, 2014 9:52:54 PM org.apache.commons.logging.impl.SimpleLog main
INFO: Creating snapshot for device /dev/sda1
Oct 13, 2014 9:52:54 PM org.apache.commons.logging.impl.SimpleLog main
INFO: Snapshot ID created=snap-3506c2c8

Note that the snapshot ID returned is snap-3506c2c8. If you go back to the Management Console, expand the Elastic Block Store node, and click on Snapshots, you should see your newly created snapshot there. Note the live instance’s name in the description field; that is proof that we snapped the correct instance:

The new snapshot in the Snapshots view of the Management Console
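If you’d rather confirm this from code than from the console, a minimal sketch using the snapshot ID returned above:

        //Look up the snapshot we just created and log its state
        DescribeSnapshotsRequest vSnapRequest = new DescribeSnapshotsRequest()
                .withSnapshotIds("snap-3506c2c8");
        Snapshot vSnapshot = ec2.describeSnapshots(vSnapRequest).getSnapshots().get(0);
        log.info("Snapshot " + vSnapshot.getSnapshotId() + " is " + vSnapshot.getState()
                + " (" + vSnapshot.getDescription() + ")");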

You can use this snapshot to restore your storage volume to a previous version.
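Restoring is a topic of its own, but the gist is to create a new volume from the snapshot and attach it to your instance. A minimal sketch of that first step (the Availability Zone below is a hypothetical placeholder; it must match the instance you plan to attach the volume to):

        //Create a fresh EBS volume from the snapshot (the AZ is a placeholder)
        CreateVolumeRequest vRestoreRequest = new CreateVolumeRequest()
                .withSnapshotId("snap-3506c2c8")
                .withAvailabilityZone("eu-west-1a");
        CreateVolumeResult vRestored = ec2.createVolume(vRestoreRequest);
        log.info("Restored volume ID=" + vRestored.getVolume().getVolumeId());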

Conclusion

We’ve seen but one of the potential applications of tags. Because tags carry no semantic meaning of their own, their use is of course not limited to creating volume snapshots: you can also use them to group billing cost centers more effectively, or for whatever other organizational purpose you can think of. Happy tagging!


Written by

Charles van der Wath

