The Image Builder in General
Difficulty
Beginner
Duration
7m
Students
209
Ratings
5/5
Description

This course explores EC2 Image Builder, a service designed to help automate the creation and updating of images for your EC2 instances. So if you want to understand how to build and maintain secure EC2 images, you're in the right place!

Learning Objectives

  • Build and define a pipeline for creating your EC2 images
  • Validate and test your pipeline

Intended Audience

  • DevOps engineers or solutions architects who deal with fleet management or any type of image lifecycle work

Prerequisites

To get the most out of this course, you should already have experience using Amazon EC2.

Transcript

EC2 Image Builder helps you automate the creation of gold-standard VM images for your EC2 instances and fleets, as well as for containerized Docker workloads. It does this through a robust creation pipeline that helps with many important facets of image building. For example:

The pipeline can help with customizing the software installed on your images. This might include dealing with OS updates, preinstalling business productivity tools, and security patching.

It also deals with checking the security of your images. This could include things like enforcing strong passwords, turning on disk encryption, and closing all non-essential ports.

You can then have it automatically test your images: checking that they actually boot, testing that sample applications can run, or performing security policy checks.

Finally, EC2 Image Builder can push out and distribute your images to selected AWS Regions, where they can be rotated in for your EC2 fleets.

In the past, you would have had to use a third-party tool like Packer to orchestrate this type of system, write your own custom automation scripts, or even do everything by hand.

When working with EC2 Image Builder, you set up and define a pipeline that your images work through. This pipeline is an automated configuration for building your Amazon Machine Images (AMIs).

Getting started with creating your pipeline is fairly easy. You will need to define some simple information like a name, a description, some tags, and a general schedule on which you would like the pipeline to run. This schedule is for automatic builds, where the system creates new versions of your images on a recurring basis. You also have the option of ignoring that feature and building your images manually if you wish.
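
As a concrete illustration, here is a minimal sketch of what that pipeline definition could look like expressed as code in CloudFormation. The transcript walks through the console instead, so the resource, the name, the ARNs, and the schedule below are all placeholder assumptions:

Resources:
  GoldStandardPipeline:
    Type: AWS::ImageBuilder::ImagePipeline
    Properties:
      Name: gold-standard-pipeline               # placeholder name
      Description: Builds our baseline AMI on a recurring schedule
      ImageRecipeArn: !Ref BaseImageRecipe       # recipe resource, sketched below
      InfrastructureConfigurationArn: !Ref BuildInfraConfig  # assumed to exist elsewhere
      Schedule:
        ScheduleExpression: cron(0 0 * * ? *)    # automatic nightly builds
      Status: ENABLED
      Tags:
        Team: platform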

The next stage of defining your pipeline is to specify a recipe that you will be using to create your AMIs. A recipe is a predefined set of instructions that the pipeline will carry out when creating a new image. You can use prebuilt recipes, or create your own.
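
Continuing the same hypothetical CloudFormation sketch, a recipe pairs a parent image with an ordered list of components. The ARNs here are illustrative placeholders; the x.x.x suffix asks for the latest available version:

Resources:
  BaseImageRecipe:
    Type: AWS::ImageBuilder::ImageRecipe
    Properties:
      Name: base-linux-recipe
      Version: 1.0.0                             # recipes use semantic versioning
      ParentImage: !Sub arn:aws:imagebuilder:${AWS::Region}:aws:image/amazon-linux-2-x86/x.x.x
      Components:
        - ComponentArn: !Sub arn:aws:imagebuilder:${AWS::Region}:aws:component/aws-cli-version-2-linux/x.x.x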

When creating your own recipe, you will need to define a starting operating system that you plan to build for. Currently, there are six available OSs to target: Amazon Linux, Windows Server, Ubuntu, Red Hat Enterprise Linux, CentOS, and SUSE Linux Enterprise Server. Obviously, your choice here affects what workloads you can run.

Once you have determined what your recipe will be built on, you can define the components that the pipeline will add to your image. A component is used to install or add some feature or functionality to your image. Components help you define a sequence of steps that you would like to run on an instance prior to the creation of an AMI.

These components can either be ones you’ve created yourself or taken from the large selection of prebuilt ones created by Amazon.

Here are a few examples of Amazon-managed components:

  • Installs the latest version of Apache Tomcat
  • Installs the latest version of the AWS CLI
  • Installs the latest version of the AWS CodeDeploy agent

Components come in two types: the first is a Build component, and the second is a Test component.

A build component contains the software and settings that you want installed or applied to the image during the build process. A test component is a test run after the image has been created to help validate its functionality, security, and performance.

For both of these components, you will need to create a definition document that describes the actions that EC2 Image Builder should perform on that image. The definition document is a YAML-formatted script that contains your code, commands, and definitions.

Each document can contain phases, which are just logical groupings of steps. There can be many phases within a single definition document. Overall, there are three different phases that you can have in your document: the Build, Validate, and Test phases.

Here is a small example of what one of these documents looks like.

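The slide itself isn't reproduced in this transcript, but based on the narration that follows, the document would look roughly like this (the bucket, file, and destination names are placeholders):

name: SampleS3Download
description: Downloads a sample file from S3 during the build phase
schemaVersion: 1.0

phases:
  - name: build
    steps:
      - name: SampleS3Download
        action: S3Download
        maxAttempts: 3                       # retry the download up to three times
        inputs:
          - source: s3://sample-bucket/sample1.ps1
            destination: C:\sample1.ps1      # lands on the C: drive
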
In this sample, we can see that we are creating a component that operates during the build phase. 

Its name is SampleS3Download, its action is to attempt an S3 download, and it will try up to three times. The inputs for this action are a source and a destination: sample-bucket/sample1 will be downloaded to the C: drive. Overall, this isn't a very complex component.

The next phase after Build is Validation. After everything from the build phase has been run or applied to the instance, including any customizations, we can validate that they have been applied successfully. If we receive a successful response from the validation, the instance will be shut down, and a final image will be created and sent on to the last stage: Testing.

For testing, an instance is created using that image from the previous phases. The image builder can then run all of the test components at this time to check that the instance is operating as you expect.

The reason there is a validation phase and then a test phase is that validation makes sure all of the components you wanted installed are actually installed, while the test phase proves that the instance will be healthy and fully operational on a fresh boot.

There are four different types of actions that are supported within these documents during the various phases: general execution, file download and upload, file system operations, and system actions. A short combined example follows these lists.

The general execution action consists of:

  • Executing a bash command
  • Executing a binary
  • And executing a PowerShell command

The file download and upload action allows you to:

  • Download from S3
  • Upload to S3
  • And download from the web (over HTTP and HTTPS)

The file system operations include:

  • AppendFile
  • CopyFile
  • CopyFolder
  • CreateFile

And a whole lot more... 

And finally, we have some system actions such as:

  • Reboot
  • Set registry
  • Update OS
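
To tie these action types back to the document format, here is a hedged sketch combining a few of them: a general execution action and a system action in the build phase, plus a general execution check in the test phase. The component name and the package installed are arbitrary placeholders:

name: HardenAndVerify
schemaVersion: 1.0

phases:
  - name: build
    steps:
      - name: InstallTools
        action: ExecuteBash                  # general execution action
        inputs:
          commands:
            - yum install -y htop
      - name: PatchOS
        action: UpdateOS                     # system action; applies OS updates
  - name: test
    steps:
      - name: VerifyTools
        action: ExecuteBash                  # confirms the tool survived image creation
        inputs:
          commands:
            - htop --version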

With all of these actions, you can do some really in-depth modification of any basic AMI and start to turn it into the exact gold-standard image that you want. You even have the option to have the build scheduler re-run the pipeline whenever there are updates to any of the underlying dependencies (the components) of your AMI pipeline.

If you do decide to enable this, just remember that you must use semantic versioning (x.x.x) for your components and always use the latest version for the source image.
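
In the CloudFormation sketch from earlier, that option would show up as a start condition on the pipeline's schedule; the cron expression is a placeholder, and the x.x.x version suffixes in the recipe are what make the dependency tracking work:

Schedule:
  ScheduleExpression: cron(0 0 * * ? *)
  PipelineExecutionStartCondition: EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_ENABLED  # only rebuild when a dependency has a newer version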

You also have the power to share your final AMIs across AWS accounts. Image Builder integrates well with AWS Organizations to facilitate this feature. Additionally, EC2 Image Builder lets you use AMI launch permissions to control which AWS accounts are allowed to launch EC2 VMs from your AMIs.
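
Sketched in the same hypothetical CloudFormation style, a distribution configuration could copy the finished AMI to two Regions and grant one other account permission to launch it. The Regions and the account ID below are placeholders:

Resources:
  GoldStandardDistribution:
    Type: AWS::ImageBuilder::DistributionConfiguration
    Properties:
      Name: gold-standard-distribution
      Distributions:
        - Region: us-east-1
          AmiDistributionConfiguration:
            Name: 'gold-standard-{{ imagebuilder:buildDate }}'  # macro keeps AMI names unique
            LaunchPermissionConfiguration:
              UserIds:
                - '123456789012'             # the account allowed to launch this AMI
        - Region: eu-west-1
          AmiDistributionConfiguration:
            Name: 'gold-standard-{{ imagebuilder:buildDate }}'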

About the Author

William Meadows is a passionately curious human currently living in the Bay Area in California. His career has included working with lasers, teaching teenagers how to code, and creating classes about cloud technology that are taught all over the world. His dedication to completing goals and helping others is what brings meaning to his life. In his free time, he enjoys reading Reddit, playing video games, and writing books.