Containers, and in particular Docker-based ones, are the Big Thing everybody talks about and works with these days, and we frequently get questions about getting started with Docker. Many people and companies take it to the max and try to do everything in containers, including tasks that in the past you would have considered only for full VMs, such as hosting untrusted third-party tenants. Pretty much everyone now uses Docker or LXC to deploy workloads, run CI tests, or simply to have an isolated environment for developing applications with a specific set of libraries and a dedicated configuration. This is especially convenient when you want to share that environment across multiple computers or with your colleagues.
So, what is this Docker everybody is talking about and how can you take advantage of it?
What is Docker?
We have a great course about getting started with Docker, and its first lecture is devoted to exactly that question: “What is Docker?”. The course goes into deep detail about this open source software and the logic behind it. In short, Docker started out as a wrapper around LXC (Linux Containers), an older technology for creating containers on Linux. The latest versions of Docker use a brand new library, libcontainer, instead of LXC, but that is a change under the hood with no impact on high-level functionality.
Since it relies on LXC (or the newer libcontainer library), Docker builds containers on top of facilities provided by the Linux kernel, such as cgroups and namespaces. Containers are therefore not traditional virtual machines and do not require a separate operating system: they use those kernel features to isolate the application’s view of the operating system. Resources are confined, services are restricted, and multiple containers share the same kernel, yet each container can be constrained to use only a defined amount of resources such as CPU, memory, and I/O. Add the fact that a Docker container can start in a handful of milliseconds, and you can see how powerful the whole thing is.
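As a rough illustration of how those kernel-level limits surface in practice, here is a minimal sketch using the standard Docker CLI; the nginx image, the container name, and the limit values are arbitrary examples, not prescriptions:

```
# Start a container in the background, capped at 256 MB of RAM
# and with a reduced share of CPU time (cgroups enforce both limits).
docker run -d --name web -m 256m --cpu-shares=512 nginx

# The container shares the host kernel and starts almost instantly.
docker ps
```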
Using Docker to create and manage containers makes it easier to build highly distributed systems, by allowing multiple applications, worker tasks, and other processes to run autonomously on a single physical machine or across a spectrum of virtual machines. That is why Docker has found a natural home in the cloud, where many providers, and especially tier-1 players like Amazon, Google, and Microsoft, are devoting resources to adding Docker compatibility and support.
Getting started with Docker
Despite the huge complexity hidden under the hood, getting started with Docker is quite easy, thanks in part to its Git-style syntax. If you are familiar with that VCS, and you probably are if you are interested in Docker, you will notice many concepts shared between the two. The basic way to get a Docker container running is to start it from an image that contains all the files, settings, and everything else needed to run the container. I won’t spend more time on the details of Docker syntax and usage, as they are well explained both in the Getting Started with Docker course I already mentioned and in its follow-up, “Docker: advanced concepts“.
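To make that image-to-container flow concrete, here is a minimal, hedged example using the standard Docker CLI; the ubuntu image and the echo command are just placeholders:

```
# Pull an image: it bundles the files, settings, and everything
# else the container needs to run.
docker pull ubuntu

# Start a container from that image and run a command inside it.
docker run ubuntu echo "Hello from a container"

# Inspect local images and containers, much as you would inspect
# commits and branches in Git.
docker images
docker ps -a
```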
The key factor here is that the basic concepts of Docker are easy to grasp, and that has contributed a lot to its huge spread. As we have just seen, Docker did not reinvent the wheel; rather, it wrapped an existing, solid, well-proven technology and added a set of features that helped it make the big jump. Nowadays, Docker Inc. is a well-established startup that has raised more than 40 million dollars so far, and a larger and larger ecosystem is growing around it. In fact, many hosting services have been born to help developers deploy applications and Docker containers on the Internet.
Docker hosting services
Indeed, Docker hosting services were among the first companies born in this ecosystem. These are *aaS offerings that provide virtual machines to deploy Docker images and/or containers to, sometimes with a good degree of customization of the underlying infrastructure. Some of these services also add APIs or other software layers on top of the standard Docker features, making them full-fledged PaaS offerings.
We already published a comparison of the four most important Docker hosting services in another post, and I really recommend reading it if you want to learn more about them. Subtle variations among the available services can make a huge difference depending on your needs, so be sure to read that post thoroughly. Since we published it, however, the top players in the cloud world have made their moves to enhance Docker support on their respective platforms.
Docker support by Amazon, Google, and Microsoft
Amazon announced a brand new service built on top of EC2 at the latest re:Invent. Everyone was expecting a move from the giant of the cloud world, given that its closest competitors had already announced something similar. The new service is called ECS, short for EC2 Container Service, and is currently available as a preview. The initial focus of ECS is multi-container, multi-host clustering, which aligns with customer requirements for high performance and scale as they move their Dockerized distributed applications into production. Amazon already had some support for Docker in its Elastic Beanstalk PaaS service, but ECS sits much closer to the infrastructure level, and we expect it to give developers a great deal of flexibility. We are looking forward to running an extensive test of this service, and you can expect a deep review very soon.
Google, on the other hand, developed dedicated support for Docker long before AWS. There is a specific service to help you get started with Docker on GCP, named Google Container Engine. It lets you deploy and run Docker containers on GCP virtual machines, paying only for the Google Compute Engine instances you provision for your containers, with no extra costs. The interesting thing here is that at the core of Google Container Engine there is another open source project that Google is actively developing: Kubernetes. Kubernetes is a cluster manager for containers that can schedule replicas across a group of node instances. A master instance exposes the Kubernetes API, through which tasks are defined; Kubernetes then spawns containers on the nodes to handle those tasks, and the number and type of containers can be modified dynamically as needed. It’s a very advanced technology, and I’m looking forward to seeing Google Container Engine come out of alpha, since backward-incompatible changes may be introduced until the stable version is released. Google is devoting a lot of effort and resources to this project, and it is probably the most advanced platform with regard to Docker support.
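As a rough sketch of that workflow (not taken from Google’s documentation: the manifest name, image, and replica count are hypothetical, and the commands assume a cluster with the kubectl CLI already configured):

```
# Define a task by submitting a manifest to the Kubernetes API
# exposed by the master instance.
kubectl create -f nginx-rc.yaml

# Kubernetes spawns containers on the nodes to handle the task.
kubectl get pods

# Dynamically adjust the number of replicas according to need.
kubectl scale rc nginx --replicas=5
```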
Microsoft recently added support for Docker containers too. Initial support for Docker on Linux-based Azure VMs was added in June, but that was just the bare minimum needed to make Docker available to users. Last month, though, they got more serious and announced a stronger commitment in this field: the open sourcing of Docker Engine for Windows Server, support for the Docker Open Orchestration APIs, and the federation of Docker Hub images into the Azure Gallery. Although none of these seems as interesting as the dedicated services provided by Google or Amazon, the fact that even a company like Microsoft is showing such great interest in Docker confirms how important this software is and how crucial container technology has become.