People old enough might recall when software was mainly distributed through physical means. The rise of high-speed internet and smartphones ushered in the age of web services—cloud-based software accessible through browsers and apps.
Web applications initially ran on physical servers in private data centers. To streamline management, these applications were often monolithic, with a single server hosting the entire back-end code and database. Cloud providers like Amazon and the emergence of hypervisor technology transformed this landscape. Thanks to Amazon Web Services (AWS) and tools such as VirtualBox, packaging an entire OS into a single file became effortless.
Services like EC2 simplified the process of bundling machine images and linking virtual servers. This led to the microservices paradigm—an architectural approach where large, monolithic applications are deconstructed into smaller, specialized services that excel in specific tasks. This approach generally simplifies scaling and feature development, as bottlenecks are easier to identify and system changes are simpler to isolate.
From Pets to Livestock
As this trend gained momentum, I started my journey as an infrastructure engineer. I vividly remember setting up my first production environment in Amazon using a series of bash scripts. I treated those servers like cherished pets, assigning each a whimsical name. I meticulously monitored them, promptly addressed alerts, and ensured their well-being. I showered those instances with care and attention, as replacing them was a painful endeavor—much like losing a beloved companion.
Then came Chef, a configuration management tool that brought immediate relief. Tools like Chef and Puppet alleviated most of the manual effort involved in managing a cloud system. Chef’s “environments” construct enabled the separation of development, staging, and production servers, while its “data bags” and “roles” simplified the definition of configuration parameters and the implementation of changes. Now, my “pet” servers had successfully completed obedience training.
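To give a flavor of what that looked like, here is a minimal sketch of a Chef role, one of the constructs mentioned above. The role name, recipes, and attribute values are all illustrative, not from any real system:

```json
{
  "name": "web",
  "description": "Front-end web server role (illustrative example)",
  "json_class": "Chef::Role",
  "chef_type": "role",
  "run_list": ["recipe[nginx]"],
  "env_run_lists": {
    "production": ["recipe[nginx]", "recipe[monitoring]"]
  },
  "default_attributes": {
    "nginx": { "worker_processes": 4 }
  }
}
```

A role like this lets every server tagged "web" converge to the same configuration, with the `env_run_lists` entry layering in production-only behavior on top of the shared baseline.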

In 2013, Docker marked the dawn of a new epoch: the era of software as livestock (my apologies to any vegans). The container paradigm emphasizes orchestration over configuration management. Tools such as Kubernetes, Docker Compose, and Marathon prioritize the movement of predefined images rather than tweaking configuration values on active instances. Infrastructure becomes immutable; when a container malfunctions, it’s terminated and replaced without hesitation. The well-being of the entire herd takes precedence over individual animals. Servers no longer receive endearing names.
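The shift from configuration management to orchestration can be sketched with a small Docker Compose file. The service names and image tags below are hypothetical; the point is that the desired state is a versioned, prebuilt image rather than a hand-tuned server:

```yaml
# Orchestration over configuration: declare which images should run,
# not which packages to tweak on a live machine.
services:
  web:
    image: example/web:1.4.2   # immutable, prebuilt image (illustrative tag)
    restart: always            # a failed container is replaced, not repaired
    ports:
      - "8080:8080"
  redis:
    image: redis:7
    restart: always
```

Rolling out a change means building `example/web:1.4.3`, updating the tag, and replacing the running containers wholesale; nobody logs in to nurse an individual instance back to health.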
The Rewards
Containers offer numerous advantages, allowing businesses to focus on their core competencies. Tech teams can shift their attention from infrastructure and configuration management to application code. Companies can further streamline their operations by utilizing managed services for data stores like MySQL, Cassandra, Kafka, or Redis, eliminating the need to manage the data layer entirely. Several startups even provide “plug-and-play” machine learning services, enabling sophisticated analytics without infrastructure concerns. These advancements culminate in the serverless model—an architectural approach that empowers teams to deploy software without managing a single VM or container. AWS services like S3, Lambda, Kinesis, and DynamoDB make this possible. In essence, we’ve progressed from pets to livestock to an on-demand animal service.
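In the serverless model, the unit of deployment shrinks to a single function. A minimal Lambda-style handler in Python might look like the sketch below; the `name` field in the event payload is a made-up parameter for illustration, since real event shapes depend on whichever AWS service triggers the function:

```python
import json


def handler(event, context):
    """Minimal Lambda-style handler: reads a field from the event
    and returns an HTTP-shaped response.

    The 'name' key is a hypothetical input; actual event payloads
    vary by trigger (API Gateway, S3, Kinesis, and so on).
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

There is no server, VM, or container for the team to manage here: the platform invokes the function on demand and scales it behind the scenes.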
It’s remarkable that in this day and age, even a twelve-year-old can effortlessly deploy a complex software system. This was unimaginable not long ago. Just a few presidents back, physical media reigned supreme, and software development and distribution were exclusive to large corporations. Bug fixes were a luxury. Today, that same twelve-year-old can create an AWS account and share their software with the world. Any bugs encountered can be reported via Slack, and a fix can be rolled out to all users within minutes.
The Risks
While undeniably impressive, this progress comes at a cost—reliance on cloud providers like Amazon translates into dependence on large corporations and proprietary technologies. If the warnings of Richard Stallman and Edward Snowden haven’t given you pause, the recent Facebook debacle should serve as a stark reminder.
Increased abstraction from hardware also introduces risks related to transparency and control. When a system running hundreds of containers encounters an issue, we can only hope that the failure is detectable. Identifying the root cause can be challenging if the problem lies within the host operating system or underlying hardware. An outage that could have been resolved in twenty minutes using VMs might take hours or even days to address in a containerized environment without proper instrumentation.
Beyond failures, security is another concern with technologies like Docker. We must place our trust in container platforms, assuming no backdoors or undisclosed vulnerabilities exist. Even open-source platforms are not immune to risks. Relying on third-party container images for system components can create vulnerabilities.
Wrap Up
The livestock paradigm offers several benefits but also presents drawbacks. Before hastily containerizing their entire stack, tech teams should carefully consider its suitability and ensure they can mitigate any potential downsides.
Personally, I enjoy working with containers and am eager to witness the evolution of platforms and paradigms in the next decade. However, as a former security consultant, I remain wary, knowing that everything comes with a price. As engineers, we must remain vigilant in safeguarding our autonomy as users and developers. Even the most streamlined CI/CD workflow is not worth sacrificing our independence.