super spicy take: shipping source, build systems, and containers (especially for ephemeral uses) instead of just shipping a compiled binary is a waste of electricity and arguments for the containerised delivery approach are not very compelling in practice unless you work for a cloud vendor.
@gsuberland what about interpreted languages that don't have an easy way to distribute compiled binaries (e.g. Python, Ruby, etc.)?
One of my favourite uses for containers is bundling up old versions of tools written in scripting environments (e.g. old pentest tools in perl) where you don't want to have that old version or its libs installed generally on a host.
@raesene ship the code as a tarball, extract it, job done. we did this for decades and it was fine.
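The tarball workflow described here can be sketched with nothing but the Python stdlib. This is a minimal illustration, not anyone's actual tooling; the tool name and file layout are made up.

```python
# Sketch of "ship the code as a tarball, extract it, job done".
# All names (mytool, run.py) are illustrative.
import pathlib, subprocess, sys, tarfile, tempfile

work = pathlib.Path(tempfile.mkdtemp())
src = work / "mytool"
src.mkdir()
(src / "run.py").write_text("print('hello from mytool')\n")

# "ship": pack the source tree into an archive
archive = work / "mytool.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add(src, arcname="mytool")

# "extract it, job done": unpack on the target host and run it
# with whatever interpreter the host already has
dest = pathlib.Path(tempfile.mkdtemp())
with tarfile.open(archive) as tar:
    tar.extractall(dest)
out = subprocess.run([sys.executable, str(dest / "mytool" / "run.py")],
                     capture_output=True, text=True)
print(out.stdout.strip())  # hello from mytool
```

Note there's no image build, registry, or runtime in the loop: the only moving parts are an archive format and the host's interpreter.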
@gsuberland yeah there were never problems with library version clashes in Python, perl, ruby .... 🙄 😛
containers also provide a decent level of isolation from the underlying host, and other processes running on the host, which can be very handy.
@raesene those problems didn't go away when we invented containers though. plenty of stuff relies on libs which then desync their version requirements from each other and cause problems within the container. the solution to that is stuff like venv, and better cross-version interop in language tools, not "pack a whole virtual machine (or near as damnit) to go with it, with all the extra complexity that entails".
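The venv alternative mentioned here can be shown with the stdlib alone: each tool gets its own isolated environment whose interpreter resolves libraries from its own prefix rather than the host's. A minimal sketch, assuming a stock CPython 3 install on a Unix-like host (the `bin/` layout differs on Windows):

```python
# Sketch of per-tool isolation via venv, the alternative to shipping a
# container just to pin library versions. Directory names are illustrative.
import os, subprocess, tempfile, venv

env_dir = tempfile.mkdtemp(prefix="toolenv-")
venv.create(env_dir, with_pip=False)  # each env gets its own site-packages

# the environment's own interpreter (Unix layout assumed)
env_python = os.path.join(env_dir, "bin", "python")

# imports inside the env resolve against the env's prefix, not the host's
out = subprocess.run([env_python, "-c", "import sys; print(sys.prefix)"],
                     capture_output=True, text=True)
print(os.path.realpath(out.stdout.strip()) == os.path.realpath(env_dir))
```

In practice you'd `venv.create(..., with_pip=True)` and `pip install` the tool's pinned requirements into the env, which keeps clashing version requirements confined per tool without a container runtime.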
@raesene isolation from the host is a reasonable reason to use a container, of course. but you don't need that for most things, hence the complaint.
@gsuberland if a container image has almost a full VM worth of tools in it, it's generally not doing it right.
I've done "kitchen sink" style images in the past, but for specific use cases where it makes sense.
Over the last 5 years or so there's been quite a bit of work done on reducing the size of container images (e.g. Wolfi or DockerSlim).
As to desync, I don't think I've seen much of that. Typically the advantage of containerization is that you can keep lib versions consistent within an image much more easily than on a long-lived VM...
Containers these days are part of a much wider ecosystem where using them as a unit of deployment makes sense: from serverless services like Lambda, which supports containers for a lot of use cases, through SaaS container services like Fargate, on to orchestration services like Kubernetes.
@gsuberland @raesene
Well, the tools I use require venvs, which end up on the same scale as the minimalistic Linux image their docker images are built on top of. And given that one package which has to be installed from git and was last updated in 2016 can cause library hell across the whole system, docker is the tool of choice for many things.
@raesene the stuff you're describing fits squarely in the "unless you're a cloud vendor" hole, yes
@gsuberland There's a fair number of companies in the "not a cloud vendor" line of work who use containers because they fit use cases they have.
The cloud vendors provide services to run containers because their customers want those services!
@raesene my argument is primarily against shipping containers as a default approach to setup. companies are free to set up their own tech stacks as they see fit, and will only incur my judgement if I have to test their crap.