Over the next 3 weeks we will be discussing Docker and how it can be used to ship software. We will cover
- What is Docker
- Building a “simple” container
- Orchestrating complete container systems
How things are done now
The development and subsequent deployment of software is a problem that has yet to be solved. Typically a developer writes code in their own customised environment, it is then tested in another environment, and finally deployed into production. All the way through the pipeline from development to production it is possible (albeit undesirable) that these environments are completely different:
- Developer likes OS X and writes their Python code using the latest version on their MacBook
- QA tests using Ubuntu with the version of Python that comes with the OS, running on virtual machines
- The operations team deploys to a dedicated cluster of servers running Red Hat Enterprise Linux
Differences in the hardware, operating system or libraries can cause problems to appear in QA or production which did not happen in development.
Imagine the following production environment:
Here we have a single load balancer that connects to two application servers, which in turn use a shared cache server, database server and file store. Leaving aside the single points of failure in the lower tier, at the application layer we have the ability to scale up as demand requires.
What the above diagram does not show is that over time the application layer has changed from being a client-facing portal using web services written in Python to the same portal with new web services written in Node.js and Ruby.
In the development environment the above architecture is a lot simpler, as all the components run on the same machine. Whilst this reduces complexity for the developers, the process for deploying the system into test and later into production needs adapting.
As it stands, changes to the Ruby code have to go out with the Python changes.
The same applies to UI changes in the web portal. In this setup discrete component updates have not been implemented, so any minor change requires updating the complete application layer.
The problem we’re trying to solve
So ideally we’d like a method of replicating a complete and complex system on a variety of hardware platforms and environments, with the ability to update individual components rather than the whole system for any given change. At Docker they call this the matrix from hell:
We need a way to ship updates from developer systems into any number of mission critical environments.
An analogy to shipping or deploying software could be a goods provider shipping a physical product to a customer. Imagine you sell a physical good and you have customers on the same continent as well as overseas.
The requirements for shipping that product depend on its size, packaging and the transport systems involved. In some cases those requirements vary from country to country. So for transporting goods we have a similar matrix from hell to the one we see in our data centers:
You need to know how your product will be transported through each leg of the journey and the route it will take, making sure that the packaging is adequate for delivering your goods to their final destination.
This was the case until the shipping container was invented in the 1960s. Each container is designed as a standardized unit that allows goods to be loaded into a box which is sealed and remains sealed until it has been delivered.
In the meantime it can be loaded and unloaded, stacked, moved around efficiently over long distances and transferred from one mode of transportation to another.
How Docker helps us
Docker uses features of the Linux kernel to package software into containers that can be run together on the same machine, ensuring resource isolation and eliminating the need to start resource-heavy virtual machines. We can see the differences between the two configurations in the following diagram:
In a virtual machine environment you have individual machines, each running a complete OS with the libraries and software required to run the applications. With Docker, the hypervisor and guest OS layers are replaced by the Docker engine; each container shares the resources of the host operating system, in particular the Linux kernel. This last point is important: because only the kernel is shared, containers can run different Linux distributions, providing greater flexibility.
So we could have an Ubuntu host operating system with containers running CentOS, Debian or Red Hat Linux.
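As a quick illustration, on any Linux host with a Docker daemon running, the official Docker Hub base images let each container present a different distribution's userland while every one of them runs on the host's kernel:

```shell
# Each container below uses a different distribution's userland,
# yet all of them share the host's Linux kernel:
docker run --rm centos cat /etc/redhat-release
docker run --rm debian cat /etc/debian_version

# The kernel is the host's in every case:
docker run --rm debian uname -s   # prints "Linux"
```

These commands require a local Docker installation; the point is simply that no guest OS boots here, so each container starts in a fraction of the time a virtual machine would.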
Pushing code from development to production is done using containers that encapsulate complete pieces of functionality, without their contents requiring any changes at any intermediate stage.
Converting the matrix from hell into…
Thus ensuring that the code the developer wrote is exactly what ends up in production, having passed through QA validation.
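In practice this "same artifact everywhere" flow looks like building an image once and promoting that exact image through each environment. The image name and registry below are hypothetical placeholders, not from the example system:

```shell
# Developer builds and publishes the image once
# (registry.example.com/portal is an illustrative name):
docker build -t registry.example.com/portal:1.4.2 .
docker push registry.example.com/portal:1.4.2

# QA pulls and runs exactly the bytes the developer built:
docker pull registry.example.com/portal:1.4.2
docker run -d registry.example.com/portal:1.4.2

# Production later runs the same tag; nothing is rebuilt,
# so there is no chance of environment drift between stages.
```

The key design choice is that only immutable, tagged images move between stages; each environment differs only in configuration passed to the container, never in its contents.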
Docker at Solid Gear
At Solid Gear we use Docker in different ways:
- New application development
- On-demand test environments
- Converting existing applications
New application development
Using Docker from the start of a project is probably the easiest way to get going, as we are able to align the architecture with how Docker works. In one project a complete application is made up of four containers, with a fifth acting as the frontend (shared with another set of containers).
Changes to the application only require updating the application container, leaving the other three containers alone. We’ll cover how this is done another day.
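To give a feel for this kind of layout, here is a minimal sketch of a multi-container application using plain `docker run` commands. All container names and images are illustrative assumptions, not the actual project's setup:

```shell
# Supporting containers: database, cache and a (hypothetical) file store
docker run -d --name db    postgres
docker run -d --name cache redis
docker run -d --name files -v /srv/files:/data example/filestore

# The application container, wired to its dependencies,
# and a shared frontend exposing port 80:
docker run -d --name app --link db:db --link cache:cache example/app:2.1
docker run -d --name frontend --link app:app -p 80:80 nginx

# Shipping a new application version replaces only the app container;
# database, cache, file store and frontend stay untouched:
docker rm -f app
docker run -d --name app --link db:db --link cache:cache example/app:2.2
```

Because each component lives in its own container, a release becomes a swap of one container rather than a redeployment of the whole application layer.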
On-demand test environments
Docker containers are very lightweight and require next to no time to start up. With one project we have designed and implemented a complete environment that allows QA engineers to select from a menu:
- Web server software
- PHP version
- Database software
- Application software version
These menu options are then used to select a pre-built container that is started and running in seconds. Imagine the resources and time needed to replicate this using physical or virtual hardware. Once the QA engineer has finished with the environment, it can simply be deleted.
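One simple way to wire such a menu to pre-built containers is to encode the choices in the image tag. The naming scheme and registry below are hypothetical, purely to show the idea:

```shell
# Menu selections (web server, PHP version, database, app version)
WEB=apache PHP=5.6 DB=mysql APP=3.2.0

# Choices map onto a pre-built image tag; the environment is up in seconds
# (registry.example.com/qa is an illustrative registry path):
docker run -d --name qa-$USER \
    registry.example.com/qa/${WEB}-php${PHP}-${DB}:${APP}

# When the QA engineer is done, the environment is thrown away:
docker rm -f qa-$USER
```

Since every combination is a pre-built, immutable image, each engineer always starts from a clean, known state instead of a machine left over from the previous test run.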
Converting existing applications
Where Docker is not easy to implement is in converting an existing legacy application from the multi-server model outlined above into containers (legacy here meaning something that already exists and is working).
With one company we are investigating a possible implementation of Docker in a system whose application layer is made up of multiple components, each a candidate for conversion into an individual container.
Once we have converted the platform into containers we will work with both the developers and QA engineers to use Docker for development and testing. This setup will allow them to work with consistent, clean environments which can over time be deployed into production.