@raesene those problems didn't go away when we invented containers though. plenty of stuff relies on libs which then desync their version requirements from each other and cause problems within the container. the solution to that is stuff like venv, and better cross-version interop in language tools, not "pack a whole virtual machine (or near as damnit) to go with it, with all the extra complexity that entails".
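(A minimal sketch of the venv-style isolation referred to above, assuming a Python project; the paths and the requirements.txt file are illustrative, not taken from the thread:)

```python
# Sketch: isolate one project's dependencies in a per-project venv
# rather than shipping a container. Paths/pins are illustrative only.
import subprocess
import venv
from pathlib import Path

env_dir = Path(".venv")

# Create an isolated environment with its own site-packages and pip.
venv.create(env_dir, with_pip=True)

# Install this project's pinned requirements into the venv only; another
# project on the same host can pin different versions without conflict.
pip = env_dir / "bin" / "pip"  # on Windows this would be Scripts\pip.exe
subprocess.run([str(pip), "install", "-r", "requirements.txt"], check=True)
```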
@gsuberland if a container image has almost a full VM worth of tools in it, it's generally not doing it right.
I've done "kitchen sink" style images in the past, but for specific use cases where it makes sense.
Over the last 5 years or so there's been quite a bit of work done on reducing the size of container images (e.g. Wolfi or DockerSlim).
As to desync, I don't think I've seen much of that; typically the advantage of containerization is that you can keep lib versions consistent within an image much more easily than on a long-lived VM...
Containers these days are part of a much wider ecosystem where using them as a unit of deployment makes sense, from serverless services like Lambda, which supports container images for a lot of use cases, through SaaS container services like Fargate, then on to orchestration services like Kubernetes.
@raesene the stuff you're describing fits squarely in the "unless you're a cloud vendor" hole, yes
@gsuberland There are a fair number of companies in the "not a cloud vendor" line of work who use containers because they fit the use cases they have.
The cloud vendors provide services to run containers because their customers want those services!
@raesene my argument is primarily against shipping containers as a default approach to setup. companies are free to set up their own tech stacks as they see fit, and will only incur my judgement if I have to test their crap.