I was quite late to the Docker party; I only started working with it about 18 months ago. At the time, I wasn’t impressed:
‘Really? Another Technology Fad I have to learn, only for it to be dropped next year?’
I can’t say I was overly enamoured at the prospect, and even after reading the initial ‘welcome sales pitch’ write-up I still wasn’t buying it:
‘It’s just a VM, only smaller. What’s the point?’
I hadn’t actually understood what it was about at all. I had a very cynical stance, in part due to the constant stream of shiny new technology fashions that always get promoted as the next big thing, and never are.
I summoned as much enthusiasm as I could muster and started figuring out how to make it work. The more I got into it, the more I realised this wasn’t just a passing fad.
I’ve been in IT quite a while now, long enough to remember when Virtual Machines started to become mainstream. I can remember at the time wondering what the point of a pretend computer was. Now they are everywhere, and the current global IT infrastructure wouldn’t work without them. I have three VMs on my MacBook, running on two different virtualisation platforms, and I can’t imagine trying to work without them.
I think my lightbulb moment was when I realised exactly what the Docker image was, in particular that it included all the dependencies my App needed, pre-installed and ready to go. It doesn’t sound that significant, but how often do servers have to be taken offline for patches or upgrades? How often do you deploy a new App, only to find you forgot to install something your App needs on the server you deployed to?
With Docker, you build an image for your App that contains not just your App, but all the runtime dependencies it needs and all the configuration settings to make it work. You run the image on your dev machine. If it doesn’t work because you forgot something, you throw the image away and build a new one, in the same way you would fix a bug and recompile. When it finally works, that image gets copied and run in your live environment, the same way your binary gets deployed now: the same image, with all the dependencies you installed. Nothing gets forgotten, because it is all burnt into the image. You can run as many copies as you like. When you change your App, just build a new image. Upgrade the runtime and it only gets applied to your new image; no more trying to synchronise deployments and updates.
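The ‘everything burnt into the image’ idea is easiest to see in a Dockerfile. Here is a minimal sketch for a hypothetical Python App; the base image, file names, and start command are all assumptions for illustration, not a prescription:

```dockerfile
# A minimal sketch, assuming a hypothetical Python App
# with its dependencies listed in requirements.txt.
FROM python:3.11-slim

WORKDIR /app

# Install the runtime dependencies into the image itself,
# so they can never be forgotten on the target server.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the App and its configuration into the image.
COPY . .

# The command the container runs when it starts.
CMD ["python", "app.py"]
```

Build it with `docker build -t myapp .` and run it with `docker run myapp`; if something is missing, you fix the Dockerfile and rebuild, just as you would fix a bug and recompile.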
There is a (good) rule in software engineering that once you compile a piece of code, the resultant binary is the one that gets tested and ultimately deployed; you never rebuild it. This ensures your tests are as valid as possible. Imagine extending this rule to cover not just your binary but the whole machine installation: the OS version and patches, the libraries, the runtime, the config files, everything. How confident would you be deploying that?
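Extending the build-once rule to the whole machine looks something like this in practice; the image and registry names here are hypothetical:

```shell
# Build the image exactly once.
docker build -t registry.example.com/myapp:1.0 .

# Test the exact artifact you will ship.
docker run registry.example.com/myapp:1.0

# Push it; staging and production pull this identical image,
# OS, libraries, runtime, config and all. Nothing is rebuilt.
docker push registry.example.com/myapp:1.0
```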
It is quite a bold statement, but I believe that containerisation is the next generation of VM: ten years from now, containers will be as commonplace as VMs are today. Whether you use Docker or CoreOS’s Rocket, containers are here to stay.