If you work in data center or cloud IT, you’ve heard a lot about containers in the past few years. If you work elsewhere in technology, you’ve probably started hearing the word bubble up in conversation quite a bit recently. Container platforms such as Docker have exploded in popularity, and their impact on computing, particularly the cost of computing, should not be underestimated.
What is Docker?
Imagine you own a small hotel. Your hotel has a total of three rooms. The remaining space in the hotel consists of bathrooms and a kitchen, and is shared between the guests in each of the three rooms. Although all guests have a dedicated room, they have to share bathrooms and a kitchen with the other guests from the other rooms. Not surprisingly, guests frequently complain about this arrangement.
If we map a computing example onto our very strange hotel situation, a physical computer is the hotel, each guest room represents an application, and our shared kitchen and bathroom setup represents the computer’s libraries and frameworks. Each of the three applications on this computer uses the Ruby on Rails framework. As long as they all need the same version, sharing it causes no problem. However, if application one needs Ruby on Rails version one, application two needs version two, application three needs version three, and the Ruby on Rails framework installed on the computer is version three, apps one and two won’t work.
This issue can be resolved quite easily. Traditionally, we would establish three different hotels, represented by physical or virtual machines, one for each guest room application, so that each application could be paired with the exact Ruby on Rails version it requires. This has the potential of being a very expensive solution, with lots of overhead introduced by replicating an entire machine per application.
Instead of the traditional approach, we could use containers. We would have one hotel, represented by a physical or virtual machine, and each room would have its own application, bathroom, and kitchen, containing the correct versions of supporting libraries and frameworks, completely independent of the other rooms. In addition, since each room has its own kitchen/bathroom combo, the original shared libraries and frameworks can be removed, making room for another application guest room. Further, since each guest room operates independently, that container can be copied very inexpensively into another hotel environment and will run in exactly the same way.
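To make the analogy concrete, here is a minimal, hypothetical Dockerfile for one of the guest rooms, the app that needs its own pinned Ruby on Rails setup. The Ruby version, file names, and start command are illustrative assumptions, not taken from a real project:

```dockerfile
# Hypothetical image for "application two", which needs its own Rails version.
# The base image pins the Ruby version this app expects -- its private kitchen.
FROM ruby:2.7

# Everything below exists only inside this container, so the other
# applications on the same host machine are completely unaffected.
WORKDIR /app

# Install the exact framework and gem versions this application requires,
# as locked in its own Gemfile.lock.
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy in the application code and declare how the container starts.
COPY . .
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```

Building this image (for example with `docker build -t app-two .`) bundles the application together with its exact dependencies, so the resulting container runs identically on a laptop, a test server, or a production host, which is precisely the copy-and-paste portability the hotel analogy describes.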
What’s The Big Deal?
Container platforms like Docker ensure that applications and the environments they depend on can be packaged together as modules and placed anywhere. This decreases the potential for failures when moving software from development to test, staging, and production. It also decreases the overhead required to run a multitude of environments on one machine, making virtualization easier and significantly less expensive than ever before.
Legacy systems and the infrastructure that houses them are costly. According to the US Department of Defense, in 2014, the average cost per hour to run the mainframe was sixteen times the per-hour cost to run a functionally equivalent Linux environment. As the face of technology continues to transform, services such as Docker will continue to drive down the cost and complexity of modern computing environments. In contrast, the cost and complexity of legacy computing environments are steadily on the rise. The diminishing pool of developers capable of maintaining them, and the continued challenges of integrating them with the rest of the business, will only steepen this rising cost trend.
When deciding whether to kick the legacy modernization can down the road, it is important to take into account the opportunities lost, such as the ease and low cost that containerization offers.
Special thanks to Victor Dozal for the hotel analogy.