Docker has been around for a few years now, and I think the hype cycle has settled enough that we can talk about what’s actually happening here. Most organizations are still treating containers like a nicer way to deploy apps. That undersells it badly. What we’re looking at is a completely new unit of computation, and it’s going to change how we think about running software at a pretty fundamental level.
The “pets vs. cattle” metaphor for servers has been kicking around for a while, but honestly, before containers it was more aspiration than reality for most teams. You could invest in Chef or Puppet or Ansible to get close, but configuration drift was always lurking. There was always this gap between what you thought was running and what was actually running, and that gap caused incidents.
Containers close that gap in a way that nothing else has. The image is the artifact. You build it, you ship it, you run it. Same thing everywhere – your laptop, CI, staging, production. “Works on my machine” stops being a joke and starts being a guarantee. I know that sounds dramatic but I’ve been running containers in production for a few months now and the reduction in environment-related surprises has been significant.
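To make “the image is the artifact” concrete, here’s a minimal sketch of a Dockerfile for a hypothetical Python web service – the app name, port, and file layout are invented for illustration:

```dockerfile
# Hypothetical service; names and paths are for illustration only.
FROM python:3.5
WORKDIR /app
# Install dependencies in their own layer so it's cached between builds.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code itself.
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

`docker build -t myapp:1.0 .` produces the image, and that same tagged image is what CI tests, staging runs, and production runs – the whole point being that nothing gets rebuilt or reassembled between environments.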
There are a bunch of orchestration platforms fighting it out right now – Docker Swarm, Mesos, Nomad, and Kubernetes. I’ve kicked the tires on all of them. Kubernetes is going to win, even though it’s easily the most complex of the bunch.
Here’s why I think that. Swarm is simpler, sure, but it’s basically Docker’s own orchestration layer – beyond scheduling across a cluster of hosts, it doesn’t give you much that Docker Compose doesn’t already. Mesos is powerful but it’s designed for a different era and a different scale than most of us operate at. Kubernetes is doing something different. It’s not just running containers. It’s providing a declarative API for describing your infrastructure. Pods, Services, Deployments, ConfigMaps – this is a vocabulary for distributed systems that doesn’t care what cloud you’re on.
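As a sketch of that vocabulary, here’s roughly what a minimal Deployment plus Service might look like for a hypothetical web app – the names, labels, image tag, and ports are all illustrative:

```yaml
# Hypothetical app; names, labels, and ports are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3            # desired state: three copies, always
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp           # routes traffic to any pod with this label
  ports:
  - port: 80
    targetPort: 8000
```

Notice there’s nothing cloud-specific in there. The same manifest means the same thing on any conformant cluster.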
That declarative model is the thing. You describe what you want, Kubernetes figures out how to make reality match. It’s essentially a reconciliation loop – the same idea as a control loop in control theory – and it’s really elegant once you get past the learning curve. You end up with a version-controlled, reviewable specification of your entire system. That’s infrastructure as code taken to its logical endpoint, not just “we wrote some Ansible playbooks.”
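The reconciliation idea is simple enough to sketch in a few lines of Go. This is a toy model, not Kubernetes’ actual controller code – the `state` struct and the single-step logic are invented for illustration – but it captures the observe/diff/act shape of a controller:

```go
package main

import "fmt"

// state is a toy stand-in for a replica set: the spec says how many
// copies we want, and actual is how many are currently running.
type state struct {
	desired int
	actual  int
}

// reconcile takes one corrective step toward the desired state and
// reports whether reality already matches the spec. A real controller
// does the same thing continuously: observe, diff, act.
func reconcile(s *state) bool {
	switch {
	case s.actual < s.desired:
		s.actual++ // "start" one replica
		return false
	case s.actual > s.desired:
		s.actual-- // "stop" one replica
		return false
	}
	return true // converged: reality matches the spec
}

func main() {
	s := &state{desired: 3, actual: 0}
	for step := 1; !reconcile(s); step++ {
		fmt.Printf("step %d: %d/%d replicas\n", step, s.actual, s.desired)
	}
	fmt.Println("converged at", s.actual, "replicas")
}
```

The key property is that the loop doesn’t care *why* reality diverged – a crashed replica and a freshly edited spec are handled identically, by converging again.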
Yeah, it’s complex. There’s a lot of YAML. The networking model takes a minute to wrap your head around. But I think the ecosystem will catch up fast – the community around this thing is massive and moving quickly. Give it two or three years.
When developers are writing Dockerfiles and Kubernetes manifests, they’re doing ops work whether they realize it or not. When ops engineers are writing controllers and custom operators, they’re doing software engineering. The wall between the two disciplines is dissolving, and I think that’s mostly a good thing.
It doesn’t mean everyone has to be a full-stack generalist. But the walls have to be porous. Your backend engineer should understand how their service gets deployed. Your infra person should understand the application’s failure modes. If those people can’t have a conversation about the same system, you’re going to have a bad time.
Containers open up an interesting strategic question about cloud provider lock-in. If your app runs in containers orchestrated by Kubernetes, you’ve got theoretical portability across any cloud, or even back to on-prem. In practice it’s messier – your managed database, your IAM setup, your networking config all tie you to a provider. But the application layer becomes portable even if the data layer doesn’t, and that shift in the balance of power matters.
Organizations that containerize now are going to have options later. The ones that build everything directly on proprietary cloud services are going to find themselves stuck. I’ve seen this movie before with other platform lock-in situations, and it never ends well for the customer.
Within five years, containers and orchestration are going to be as fundamental to deploying software as git is to writing it. If your org hasn’t adopted this model by then, you’re going to be at a serious disadvantage on velocity, reliability, and cost. The tools will get better. The rough edges will smooth out. But the core idea – a standard, portable, declarative way to describe and run distributed systems – that’s already solid. This revolution is just getting started.