I spent three years getting good at Kubernetes. CKA certified. Running production clusters on EKS and GKE. Building Helm charts, writing operators, debugging etcd. The whole thing.
Then I started building systems for actual businesses and stopped reaching for Kubernetes for most of them.
This is not a clickbait take. Kubernetes is not bad. It's the right tool for specific problems. The problem is that it's become the default assumption for any containerised workload, and the cost of that assumption—in operational complexity, cluster management overhead, and engineer time—is rarely calculated honestly.
What Kubernetes Actually Costs
Let me be specific about what "running Kubernetes in production" means.
An EKS cluster on AWS with three worker nodes (sufficient for a small-to-medium application) costs around £200-300/month for the control plane and nodes alone. Add the NAT gateway, load balancer, and storage, and a minimal production EKS setup for a single application lands at £400-600/month in practice, before it has served a single request.
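The arithmetic is worth doing explicitly. Here is a back-of-envelope sketch in Python; every figure is an illustrative GBP estimate, not a quoted AWS price, and your region and instance choices will shift them:

```python
# Back-of-envelope monthly baseline for a minimal single-application
# EKS setup. All figures are illustrative estimates, not list prices.
eks_monthly = {
    "control_plane": 60,   # EKS cluster fee (~$0.10/hour)
    "worker_nodes": 220,   # three small on-demand nodes
    "nat_gateway": 60,     # hourly charge plus data processing
    "load_balancer": 25,   # ALB hourly charge
    "ebs_storage": 20,     # persistent volumes
    "data_transfer": 40,   # cross-AZ traffic and egress
}

total = sum(eks_monthly.values())
print(f"Baseline: ~£{total}/month before any application traffic")
```

Note that the two largest lines after the nodes themselves are networking charges, which rarely appear in anyone's initial estimate.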
Then there's the operational overhead. You need engineers who understand Kubernetes—not deeply, but enough to debug why a pod is in CrashLoopBackOff, why an HPA isn't scaling, why a deployment rollout is stuck. You need to think about cluster upgrades (approximately every 6-12 months as EKS minor versions go end-of-life). You need to think about node group scaling, spot instance interruptions, pod disruption budgets.
None of this is impossible. But it's a set of problems you have to manage. For a single application run by a team of three engineers, it's a significant chunk of operational overhead applied to infrastructure that exists to run one thing.
What Most Applications Actually Need
Most applications need: run my container, give it some memory and CPU, scale up if traffic increases, don't fall over. The vast majority of SaaS products, internal tools, and client applications I've worked with have this requirement.
You don't need Kubernetes for this. You need a managed container runtime.
The Alternatives, Honestly
*Railway* is what I reach for first for greenfield projects under moderate load. Push code, it builds and deploys. Horizontal scaling is a configuration option. PostgreSQL, Redis, and other managed services are one-click additions. Environments (staging, production) are simple. Monthly cost for a small application: £20-80. There is no cluster to manage. There is no Kubernetes to know.
The trade-off: less control. You can't run a custom admission controller, you can't tune node affinity, you can't run custom operators. For most applications, you do not need these things.
*AWS App Runner* for teams already invested in the AWS ecosystem. Your container runs, it scales automatically, you pay per second of vCPU and memory. The networking model is simpler than ECS/Fargate. IAM integration works correctly. Cost for moderate load: £30-150/month. Still no cluster management.
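One caveat worth knowing: App Runner's `apprunner.yaml` config file applies to source-based deployments, where App Runner builds from your repository; prebuilt container images are configured at service creation instead. A minimal sketch for a source deployment, with the runtime, commands, and port as illustrative assumptions:

```yaml
# Hypothetical apprunner.yaml for a small Node.js service.
version: 1.0
runtime: nodejs16
build:
  commands:
    build:
      - npm ci
run:
  command: node server.js
  network:
    port: 3000
```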
*Fly.io* when you want geographic distribution and the low-latency edge placement that Kubernetes on a single region doesn't give you. Fly runs machines in 35+ regions. You can distribute your application globally with a configuration file and a CLI command. The cost model is consumption-based, which compares favourably against idle Kubernetes nodes.
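As a sketch of how little configuration that distribution takes: a `fly.toml` along these lines is most of the work (the app name and region are made up, and field names reflect the machines-era config format):

```toml
# Hypothetical fly.toml for a small globally distributed web app.
app = "my-app"            # illustrative app name
primary_region = "lhr"    # London

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true    # stop machines when idle
  auto_start_machines = true   # wake them on incoming requests
  min_machines_running = 1
```

Spreading machines across regions is then a CLI operation, something like `fly scale count 3 --region lhr,iad,syd`.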
*Serverless (Lambda, Cloud Functions, Vercel Functions)* for event-driven workloads, APIs with spiky traffic, and anything that benefits from per-invocation billing. Cold start latency has improved substantially for Node.js and Python functions. For an API that serves 10 requests per minute, serverless is dramatically cheaper than keeping a container running 24/7.
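The 10-requests-per-minute claim is easy to sanity-check. A rough comparison, with all prices illustrative rather than current AWS list prices:

```python
# Per-invocation billing vs. an always-on container, for an API
# serving ~10 requests/minute. Prices are illustrative, not quoted.
reqs_per_month = 10 * 60 * 24 * 30         # 432,000 invocations
duration_s = 0.2                           # 200ms average execution
memory_gb = 0.5                            # 512MB function

gb_seconds = reqs_per_month * duration_s * memory_gb
lambda_compute = gb_seconds * 0.0000167        # ~£ per GB-second
lambda_requests = reqs_per_month / 1e6 * 0.16  # ~£ per million requests
lambda_total = lambda_compute + lambda_requests

container_total = 30.0                     # small always-on container, £/month

print(f"Lambda:    ~£{lambda_total:.2f}/month")
print(f"Container: ~£{container_total:.2f}/month")
```

At this traffic level the function bill is well under £1/month; the always-on container costs the same whether anyone calls it or not.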
When Kubernetes Is Still Right
There are genuine use cases.
Multiple teams deploying multiple services to shared infrastructure. Kubernetes RBAC, namespace isolation, and resource quotas let you give each team a self-service deployment environment with guardrails. At 10+ services and 4+ teams, this overhead starts paying for itself.
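As a sketch of what those guardrails look like in practice: a namespace per team plus a hard resource quota. The team name and limit values here are invented for illustration:

```yaml
# Illustrative per-team guardrails: a dedicated namespace with a
# hard quota capping what the team's workloads can request.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-payments-quota
  namespace: team-payments
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "30"
```

Combined with an RBAC RoleBinding scoped to the namespace, each team can deploy self-service without being able to starve its neighbours.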
Specific workload requirements that managed platforms don't support. GPUs, custom hardware, specific Linux kernel requirements, or compliance requirements that mandate running on your own hardware.
Existing Kubernetes expertise on the team. If you have three engineers who've run Kubernetes for years and find it comfortable, the operational overhead calculation changes. You're paying the cost already in muscle memory.
Requirements for open-source, vendor-neutral infrastructure. Kubernetes is portable between cloud providers and on-premises. Fly.io, Railway, and App Runner are not.
The Calculation I Actually Run
Before choosing infrastructure for a project, I ask:
- How many distinct services am I running? Under 5: skip Kubernetes.
- Do any of these services have unusual resource requirements (GPUs, specific hardware)?
- What's the traffic pattern? Predictable and steady → managed containers. Extremely spiky → serverless.
- What's the team's current Kubernetes expertise? None → don't add it as a learning burden unless necessary.
- What are the hard compliance requirements?
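The checklist above can be sketched as a small decision function. The thresholds simply encode the rules of thumb in the text, and the function name and parameters are made up:

```python
# The infrastructure checklist as code. Thresholds mirror the
# article's rules of thumb; adjust them for your own context.
def choose_platform(services: int, teams: int, special_hardware: bool,
                    spiky_traffic: bool, must_self_host: bool) -> str:
    if must_self_host or special_hardware:
        return "kubernetes"        # managed platforms won't support it
    if services >= 10 and teams >= 4:
        return "kubernetes"        # overhead starts paying for itself
    if spiky_traffic:
        return "serverless"        # per-invocation billing wins
    return "managed containers"    # Railway / App Runner / Fly.io

# A three-person team with one steady-traffic service:
print(choose_platform(services=1, teams=1, special_hardware=False,
                      spiky_traffic=False, must_self_host=False))
# → managed containers
```

The point is not that a five-line function replaces judgement; it is that the default branch, the one most projects fall through to, is not Kubernetes.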
The honest answer, for most projects I see: Railway or App Runner, PostgreSQL on Neon, Redis on Upstash, occasional Lambda functions for scheduled jobs. This runs reliably, scales adequately for most SaaS businesses, and costs a fraction of a Kubernetes cluster.
I'm not anti-Kubernetes. I've run it in production and I know how to run it well. I'm anti-complexity-for-its-own-sake. The question is always whether the tool's benefits justify its costs for this specific problem.
For most problems, the honest answer is no.