docker is really the "solve a problem with regex, now you have two problems" of the modern age
hot take: regex is fine, actually, and most of the jokes about it being a problem are just Regex Strings Look Scary disguised as technical complaints.
@gsuberland
All of:
s/docker/conda/
s/docker/web frameworks/
s/docker/NNI/
work well too. Not a complete list by far.
@gsuberland
💯 and regexes never surreptitiously modified iptables rules!
@gsuberland strong agree. Most problems that can be solved with a regex technically could be solved in other ways, but would be much less clear in doing so.
@gsuberland
Unless you account for the different escaping rules, and for features that are available, or behave slightly differently, across implementations. Not even talking about performance analysis.
super spicy take: shipping source, build systems, and containers (especially for ephemeral uses) instead of just shipping a compiled binary is a waste of electricity, and the arguments for the containerised delivery approach are not very compelling in practice unless you work for a cloud vendor.
@gsuberland That and "people trying to jam regex into things that are much better suited for procedural string-manipulation" like the email regex (and even that monster isn't fully RFC-compliant...)
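A toy Python sketch of that point (nothing here is from the thread; the validator is deliberately naive and the function name is made up):

```python
# Toy example of the "procedural beats monster-regex" point:
# a deliberately naive email check. Not RFC 5322 compliant --
# but then neither is the famous regex, and this stays readable.
def looks_like_email(addr: str) -> bool:
    local, at, domain = addr.partition("@")
    if not (local and at and domain):
        return False
    if "@" in domain:          # exactly one @ allowed
        return False
    if domain.startswith(".") or domain.endswith("."):
        return False
    return "." in domain       # require some TLD-ish part

print(looks_like_email("alice@example.com"))  # True
print(looks_like_email("not-an-email"))       # False
```

Each rejection rule is a plain, named condition you can read, test, and extend, which is the whole argument for procedural string handling over one opaque pattern.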
I was the designated "regex wizard" on my previous team, even though the only things we used them for were pseudo-xpath stuff to grab the contents of a specific tag as a string (in addition to actual xpath stuff) and thus they were actually really simple, just long-winded due to multiple tag names.
@becomethewaifu for sure. I guess my associated spicy take is that learning a technology is just as much about discovering and respecting what it cannot do (or should not do) as it is learning what it can do.
@gsuberland Containers are an indication that we've failed miserably at the, yes, _genuinely difficult_ problem of code sharing/coexisting.
But honestly, pretending applications don't have to share is one reason virtual memory exists, and there are dark thoughts down that path of "don't lie to the program". So it's impossible to say if it's bad or not.
@gsuberland But also, my Linux distro is a Docker-generated OCI image so lol
@gsuberland what about interpreted languages that don't have an easy way to distribute compiled binaries (e.g. Python, Ruby, etc)...
One of my favourite uses for containers is bundling up old versions of tools written in scripting environments (e.g. old pentest tools in perl) where you don't want to have that old version or its libs installed generally on a host.
@gsuberland I guess, to me at least, it depends on what you make of it.
There are containers that lean into the possibilities (an executable along with only the absolutely required dependencies, maybe a dozen or two megabytes) and then there are "containers" that are just virtual machines in disguise (a full Ubuntu installation plus runtime environment, source code, and the aforementioned build system, so maybe a gigabyte).
I think the former are a decent way to package services (but not tools).
@gsuberland I feel that, frequently, containerisation is an excuse not to actually understand the stack you’re deploying. “Works on my machine” as a service.
@gsuberland shipping a script is not a substitute for documenting build and runtime dependencies, and providing installation documentation, and this is the hill I will die on
Also, NAT can die in a fire.
@gsuberland While this is true, it ignores the goal of most packaging. Security checks / quality control need the source too.
@jwarlander if you release your currently-Linux-only service as some code, a binary, and a text file describing install steps, I can probably port it to FreeBSD with no special knowledge.
if you release your service as a container I need to learn how your chosen container system works and reverse engineer the install requirements and steps from your compose file or whatever, and likely need to do extra work to get the build working in the first place.
@jwarlander my primary complaint here (which seems not to be what everyone is replying about, weirdly enough, funny how that happens) is that shipping a docker container full of commands to build the thing and then run it as a service makes bugger all sense over just building once for binary installs and offering a well-documented build and setup procedure for people who have other needs.
@gsuberland Oh! Yeah, definitely, containers aren't great as a general release method for software intended for others to use!
When replying above, I came from the context of my own situation - developing and deploying our own services that we run at work, in our cloud environment.
@raesene ship the code as a tarball, extract it, job done. we did this for decades and it was fine.
@gsuberland yeah there were never problems with library version clashes in Python, perl, ruby .... 🙄 😛
containers also provide a decent level of isolation from the underlying host, and other processes running on the host, which can be very handy.
@raesene those problems didn't go away when we invented containers though. plenty of stuff relies on libs which then desync their version requirements from each other and cause problems within the container. the solution to that is stuff like venv, and better cross-version interop in language tools, not "pack a whole virtual machine (or near as damnit) to go with it, with all the extra complexity that entails".
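A minimal sketch of the venv alternative mentioned above, using only the Python standard library (the paths and the commented-out pin are illustrative, not anything the thread actually ships):

```python
# Sketch: isolating a tool's dependencies with the stdlib venv
# module rather than packing a near-full VM alongside it.
# POSIX bin/ layout assumed; the env path is a throwaway tempdir.
import os
import tempfile
import venv

target = tempfile.mkdtemp(prefix="tool-env-")
venv.EnvBuilder(with_pip=False).create(target)  # isolated interpreter + site-packages
env_python = os.path.join(target, "bin", "python")

# With with_pip=True you could then pin the tool's libs into the env,
# e.g. (hypothetical package name):
# subprocess.run([env_python, "-m", "pip", "install", "somelib==1.2"], check=True)
print(env_python)
```

The environment's interpreter resolves its own site-packages, so two tools with clashing library pins can coexist on one host without a container runtime in between.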
@raesene isolation from the host is a reasonable reason to use a container, of course. but you don't need that for most things, hence the complaint.
@gsuberland if a container image has almost a full VM worth of tools in it, it's generally not doing it right.
I've done "kitchen sink" style images in the past, but for specific use cases where it makes sense.
Over the last 5 years or so there's been quite a bit of work done on reducing the size of container images (e.g. Wolfi or DockerSlim).
As to desync, I don't think I've seen much of that; typically the advantage of containerization is that you can keep lib versions consistent within an image much more easily than on a long-lived VM...
Containers these days are part of a much wider ecosystem where using them as a unit of deployment makes sense: from serverless services like Lambda, which supports containers for a lot of use cases, through SaaS container services like Fargate, then on to orchestration services like Kubernetes.
@gsuberland @raesene
Well, the tools I use require venvs, which are on the same scale as the minimalistic Linux image on top of which their docker images are built. And given that one package which has to be installed from git, and was last updated in 2016, can cause library hell across the whole system - docker is a tool of choice for many things.
@raesene the stuff you're describing fits squarely in the "unless you're a cloud vendor" hole, yes
@gsuberland There's a fair number of companies in the "not a cloud vendor" line of work who use Containers because they fit use cases they have.
The cloud vendors provide services to run containers because their customers want those services!
@raesene my argument is primarily against shipping containers as a default approach to setup. companies are free to set up their own tech stacks as they see fit, and will only incur my judgement if I have to test their crap.
@raesene although if they're shipping Kubernetes in my line of work they'll likely be getting the report equivalent of some facetime with sweet lady brick.
@gsuberland if anyone loves docker they haven’t worked with it long enough yet