
Thanks for writing this. I think many of your observations are correct (and I agree with them). I'm not as sold on your solutions, though. Here are some thoughts:

Abstractions are the right way to do things when we don't yet know what "correct" is, or when we need to deal with lots of unknowns in a general way. And when you need to aggregate lots of different abstractions together, it's often easier to sit yet another abstraction on top of them.

However, in many cases we have enough experience to know what we really need. There's no shame in capturing this knowledge and then "specializing" again to optimize things.

In the grand ol' days, this also meant that the hardware was a true partner of the software rather than an IP-restricted piece of black magic sealed behind 20 layers of firewalled software. (At first this wasn't entirely true; some vendors like Atari were almost allergic to letting developers know what was going on, but the trend reversed for a while.) Did you want to write a pixel to the screen? You just copied a few bytes to the RAM addresses that held the screen data, and on the next screen refresh it showed up.
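To make that concrete, here's a minimal sketch (my own illustration, not from the original comment) of what "just copy a few bytes" looked like on a DOS-era machine: VGA mode 13h maps the whole 320x200 framebuffer at segment 0xA000, so plotting a pixel is a single byte write. It assumes a real-mode DOS compiler such as Turbo C.

  /* Minimal sketch, assuming real-mode DOS (e.g. Turbo C) and VGA mode 13h
   * (320x200, 256 colors). Illustrative only. */
  #include <dos.h>

  #define SCREEN_W 320

  void set_mode_13h(void)
  {
      union REGS r;
      r.x.ax = 0x0013;              /* BIOS int 10h: set video mode 13h */
      int86(0x10, &r, &r);
  }

  void put_pixel(int x, int y, unsigned char color)
  {
      /* The framebuffer is memory-mapped at segment 0xA000. */
      unsigned char far *vram = (unsigned char far *)MK_FP(0xA000, 0);
      vram[y * SCREEN_W + x] = color;   /* one byte per pixel */
  }

No drivers, no compositor, no layers of middleware: one write, one visible pixel on the next refresh.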

Sometime in the late 90s the pendulum started to swing back, and now it feels like we're at nearly full tilt the wrong way again, despite all the demonstrations that it was the wrong way to do things. Paradoxically, this seemed to happen after the open source revolution transformed software.

In the meanwhile, layers and layers and layers of stuff got built, and now the bulk of the software that runs is some kind of weird middleware that nobody even remotely understands. We're sitting on towers and towers of this stuff.

Here's a demo of an entire GUI OS with a web browser that could fit on and boot off of a 1.4MB floppy disk and run on a 386 with 8MB of RAM: https://www.youtube.com/watch?v=K_VlI6IBEJ0

I would bet that most people using this site would be pretty happy today if something not much better than this were their work environment.

People are surprised when somebody demonstrates that halfway decent consumer hardware can outperform multi-node distributed compute clusters on many tasks, and that all it took was somebody bothering to write decent code for it. Hell, we already have the tools to do this well today (see the sketch after these links):

https://aadrake.com/command-line-tools-can-be-235x-faster-th...

http://www.frankmcsherry.org/graph/scalability/cost/2015/01/...
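As a concrete illustration, here's a minimal single-machine sketch in the spirit of the first link (counting chess game results from a stream of PGN lines). It's my own example, not code from either article, and the result tokens are assumptions about the input format.

  /* Minimal single-machine sketch: tally result tokens from lines on stdin,
   * in the spirit of the "command-line tools vs. Hadoop" article above.
   * The "1-0" / "0-1" / "1/2" tokens are assumed from the PGN format. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      char line[4096];
      long wins = 0, losses = 0, draws = 0;

      while (fgets(line, sizeof line, stdin)) {
          if (strstr(line, "1-0"))       wins++;
          else if (strstr(line, "0-1"))  losses++;
          else if (strstr(line, "1/2"))  draws++;
      }

      printf("wins=%ld losses=%ld draws=%ld\n", wins, losses, draws);
      return 0;
  }

Pipe a file through it and a loop like this runs at roughly disk bandwidth on one ordinary machine, which is the whole point both links are making.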

There's an argument that developer time is worth more than machine time, but what about user time? If I write something that's used by or impacts a million people, maybe spending an extra month writing some good low-level code is worth it.

Thankfully, and for whatever reasons, we're starting to hear some lone voices of sanity. We've largely stopped jamming up network pipes with overly verbose data interchange languages; the absurdity of text editors consuming multiple cores and gigabytes of memory, of machines capable of trillions of operations per second taking whole seconds to do simple tasks, and so on...it's all being noticed.

Here's an old post where I wrote some more on this; keep in mind I'm a lousy programmer with limited skills at code optimization, and the list of anecdotes at the end of that post has grown a bit since then.

https://news.ycombinator.com/item?id=8902739

and another discussion

https://news.ycombinator.com/item?id=9001618


