>I’m gonna sound like an old person here. As much as these tools are gorgeous and ergonomic, remember that the others are standard, which means they’re available (almost) everywhere.
Well, I, for one, don't work "(almost) everywhere"; I work with specific servers. I'm not going to be handed a new server out of the blue. And if a team works with the same N servers, it can mandate that the tools be present on all of them.
Plus, even on some unknown system, one can quickly copy or download a set of static binaries to serve as a toolset (a rough sketch of that at the end of this comment).
So unless someone is a sysadmin for heterogeneous networks, gets called out to random clients to fix never-before-seen systems, or is under a corporate mandate that prevents installing their own tools, there's no reason not to expand beyond the standard POSIX userland.
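To make the "copy down a toolset" idea concrete, here is a minimal sketch in Python. The tarball URL, its layout, and the checksum are hypothetical placeholders, not a real distribution; the point is just that nothing gets installed system-wide, and the same pattern works from a USB stick or an internal mirror when the host has no outbound network:

```python
#!/usr/bin/env python3
"""Minimal sketch: pull a static-binary toolset onto an unfamiliar host.

The URL, archive layout, and checksum below are hypothetical placeholders.
"""
import hashlib
import os
import tarfile
import tempfile
import urllib.request

TOOLSET_URL = "https://example.com/static-tools-linux-x86_64.tar.gz"  # placeholder
EXPECTED_SHA256 = None  # pin the digest of a tarball you built and verified yourself

def fetch_toolset(url=TOOLSET_URL):
    """Download the archive, optionally verify it, unpack it, return the bin dir."""
    workdir = tempfile.mkdtemp(prefix="toolset-")
    archive = os.path.join(workdir, "tools.tar.gz")
    urllib.request.urlretrieve(url, archive)

    if EXPECTED_SHA256 is not None:
        with open(archive, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != EXPECTED_SHA256:
            raise RuntimeError("toolset checksum mismatch: " + digest)

    with tarfile.open(archive) as tar:
        tar.extractall(workdir)  # assumed layout: bin/ at the archive root
    return os.path.join(workdir, "bin")

if __name__ == "__main__":
    bindir = fetch_toolset()
    # Prepend for this process only; the host's packages are untouched.
    os.environ["PATH"] = bindir + os.pathsep + os.environ["PATH"]
    print("toolset available at", bindir)
```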
> Plus, even on some unknown system, one can quickly copy or download a set of static binaries to serve as a toolset.
Aha, even on a machine with networking problems, running a non-glibc libc? musl has trouble running glibc binaries out of the box (even statically linked ones, in some cases), and most Docker images use Alpine as a base, which means dealing with musl.
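For what it's worth, before trusting a bag of prebuilt binaries I'd at least sniff which libc the host runs. A rough sketch in Python; note the /lib/ld-musl-* path check is a common heuristic, not a guarantee:

```python
#!/usr/bin/env python3
"""Rough heuristic: guess the host's libc before shipping prebuilt binaries."""
import glob
import platform

def guess_libc():
    # musl installs its dynamic loader as /lib/ld-musl-<arch>.so.1 (heuristic).
    if glob.glob("/lib/ld-musl-*"):
        return "musl"
    # platform.libc_ver() reports glibc when it can detect it, else ('', '').
    name, version = platform.libc_ver()
    if name == "glibc":
        return "glibc " + version
    return "unknown"

if __name__ == "__main__":
    libc = guess_libc()
    print("detected libc:", libc)
    if not libc.startswith("glibc"):
        print("warning: glibc-linked binaries may not run here out of the box")
```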
> So unless someone is a sysadmin for heterogeneous networks, gets called out to random clients to fix never-before-seen systems, or is under a corporate mandate that prevents installing their own tools, there's no reason not to expand beyond the standard POSIX userland.
I don't believe the person in question was arguing against expanding beyond the POSIX userland; rather, they were arguing for maintaining familiarity with the POSIX tools in case you need to use them.
If you are in a small team of 5 or 6 people, with fewer than 100 servers to manage, sure, it's doable.
But if you are part of a very large team of 50 or more sysadmins, with a large infrastructure of thousands of nodes running various OSes and OS vintages (even if Unix only), things can get tricky quite quickly.
First, a lot of people will want their favorite tools installed, which can result in a huge mess of special toolboxes that are not consistently deployed (different path locations, tool sets varying from server to server).
Second, these tools, especially the shiny newer ones, need to be built, packaged, and maintained properly, which represents a significant workload, especially across several OSes and OS vintages.
Third, as a general rule, keeping the installed base as small as possible is a good thing: it reduces a server's attack surface, and it makes security audits easier, because "dead weight" dependencies will not trigger false positives for CVEs (e.g. libX11 pulled in by an editor that has both a terminal and a graphical frontend but is only used in its terminal form on a server).
You’re right, and I’m not at all against having custom tools in any controlled environment. In fact, I think enhancing a controlled environment with custom tools is extremely productive.
My point was more about education. I think becoming an expert in the standard tools should come before learning any custom ones, because it is a transferable skill.