Docker is a relatively new open platform for building, shipping, and running distributed applications. Initially it was used mainly to create development environments, allowing applications to be tested easily in controlled, reproducible conditions. More recently, as people have gotten a better feel for its capabilities, it has also been adopted for continuous integration, Platform as a Service (PaaS), and production deployments.
In this blog post I will discuss the ingredients needed for effective continuous integration and deployment using Jenkins and Docker. In a later post, we’ll talk about the process itself.
Continuous integration (CI) is an organizational practice that aims to improve software quality and development speed by running regular, automated unit tests against new code. Using a version control system, development teams regularly merge new code from a project's branches back into the main branch, allowing tested code to be quickly integrated into the project and verified as deployable. A popular unit testing framework for Java code is JUnit.
Continuous deployment is an automated process that ensures your application is always ready to deploy to production or development environments. By using both continuous integration and continuous deployment, development teams can always be ready to quickly deploy reliable builds and patches.
- Jenkins is an open source continuous integration server.
- Docker containers allow developers and system administrators to quickly and easily port applications with all of their dependencies and get them running across systems and machines.
- Dockerfiles are scripts of instructions that Docker runs to build and configure new container images.
- Amazon EC2 Instances are used to host multiple Docker containers.
- Amazon S3 buckets are used to store build artifacts.
- The Git Source Code Management plugin for Jenkins enables the use of Git as a build SCM tool.
- Post Steps send files or execute commands on a remote server over SSH.
- Publish Artifacts to S3 copies build artifacts to a specified S3 bucket.
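To make the Dockerfile ingredient above concrete, here is a minimal sketch of one. The base image, package, artifact path, and jar name are illustrative assumptions, not taken from a specific project:

```dockerfile
# Start from an official base image (illustrative choice)
FROM ubuntu:16.04

# Install the application's runtime dependency
RUN apt-get update && apt-get install -y openjdk-8-jre

# Copy the build artifact produced by the CI server into the image
COPY target/app.jar /opt/app/app.jar

# Command executed when a container is launched from this image
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Building an image from this file (`docker build -t myapp .`) and launching it (`docker run myapp`) is the kind of step a Jenkins job can automate after a successful build.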
How does Docker work?
- Docker was originally built on Linux containers (LXC). LXC is an operating-system-level virtualization technology for running multiple isolated server installs (containers) on a single control host. The main difference between KVM virtualization and Linux containers is that virtual machines require a separate kernel instance to run on, while containers share the host operating system's kernel. A container is similar to a chroot, but offers much more isolation. In practice, Docker behaves much like a virtual machine, wrapping everything (file system, process management, environment variables, etc.) into a container. Docker really does let you "build once, configure once, and run anywhere."
Docker Architecture Components
- File system: A container can only access its own sandboxed file system.
- User namespace: A container has its own user database, which means a container's root user is not the same as the host's root account.
- Process namespace: Processes within a container cannot see or access processes on the host machine or in other containers.
- Network namespace: A container gets its own virtual network device and IP address.
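A quick way to observe these namespaces in action (assuming Docker is installed and the `alpine` image can be pulled) is to run a few one-off containers and compare their view of the system with the host's:

```shell
# Process namespace: inside the container, only the container's own
# processes are visible; host processes do not appear
docker run --rm alpine ps aux

# Network namespace: the container has its own interfaces and IP
# address, separate from the host's
docker run --rm alpine ip addr

# User namespace: the container's root comes from the image's own
# /etc/passwd, not the host's account database
docker run --rm alpine id

# File system: the container sees only the image's file system
docker run --rm alpine ls /
```

Running the same commands (`ps aux`, `ip addr`, `id`, `ls /`) directly on the host shows a completely different picture, which is the isolation these namespaces provide.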
Common use cases of Docker
- Automating the packaging and deployment of applications.
- Creation of lightweight, private PaaS environments.
- Automated testing and continuous integration/deployment.
- Deploying and scaling web apps, databases and backend services.
- Sharing your containers through the Docker Index.
Continuous integration: conclusion
So far we have covered the terminology and the players involved in the process of continuous integration using Docker and Jenkins. In a future post, we will explain the practical workflow and processes and how to use them with Docker.