If a candidate can be eliminated from consideration because an interviewer marked him/her as a "B", I don't see why that's any more strongly correlated with innovation than SAT scores and GPAs. It doesn't follow that the interviewer is a good judge of technical chops. And on top of that, the article doesn't say technical skill or ability correlates with innovation, but individual drive. So really, nothing adds up here.
I think the grandparent intended that the size measurement exclude statically linked libraries or assets, debug symbols, and compression technologies like UPX.
It can sometimes be beneficial from a distribution/deployment standpoint to have everything in one self-contained file. But you can't conclude much about the code quality of e.g. a computer game engine based on how many megabytes of graphics, music and sound effects a particular game based on that engine uses.
The rule is not meant to be universal, and I'm not saying other people should adopt it, but I think it's suitable for the work I'm doing right now.
Constraints like this can really shape a piece of software, for better or for worse. My inspiration is having worked with a really powerful firmware system that had a hardware constraint to fit on a 1MB flash chip, everything included, and was done so well that it looked easy. Give yourself unlimited space and it's much easier to end up with UEFI...
I suspect quite a few programs out there would have turned out better if their authors had picked a semi-arbitrary maximum value for lines of code / bytes of RAM / bytes of disk / etc.
The actual rule I'm using for now is:
- 1 second compile excluding dependencies.
- 1 minute compile including dependencies (excl C compiler).
- 1 MB executable including everything except libc and base OS.
It also opens you up to a rich set of well-known problems: sharing between processes and memory management, lock issues, system-call latencies, not to mention dealing with monolithic environments that are almost certainly changing between each instance of your program you want to run (this 'environment' includes the shell, userspace services, kernel version and features...). Decomposing a program into multiple programs isn't always a good idea; there is a very broad trade-off here that needs to be evaluated for the needs of every program.
And why would you say systemd is monolithic? Have you looked at its source code? Have you looked at how services are configured with it? I would argue it's far more modular than SysVinit.
What about a trade-off between compound-ness and duration? If you want a 10-minute link, you probably want something that's dead simple to say, like "zigg.be/fit". If it's a 30-day lease, you might be willing to go for "zigg.be/greencatjumps". This allows shorter URLs to stay in a faster reuse pool.
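A sketch of how that could work (Python; the tiers, lengths, and alphabet are made up for illustration):

```python
import secrets
import string

ALPHABET = string.ascii_lowercase

def slug_length_for_lease(lease_seconds):
    """Map lease duration to slug length: short-lived links get the
    scarce short slugs (which cycle back into the pool quickly),
    while month-long leases pay for longer ones."""
    if lease_seconds <= 600:        # up to 10 minutes
        return 3
    if lease_seconds <= 86_400:     # up to a day
        return 5
    return 8                        # anything longer

def new_slug(lease_seconds):
    """Generate a random slug sized for the requested lease."""
    n = slug_length_for_lease(lease_seconds)
    return "".join(secrets.choice(ALPHABET) for _ in range(n))
```

A real service would also need to track which slugs are currently leased and when each one expires, but the length tiering is the interesting part.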