These are exactly the kinds of posts I love. It seems technical posts like this are appearing less and less on the internet. Is this a result of "vibe coding"? Do we not feel like writing up posts like this when a machine did the work? Maybe it's a result of fewer and fewer people blogging. Maybe I'm just old and yelling about things changing.
I love the Commodore 64. I still have a working "portable" C64 that I turn on from time to time and play around with.
So what’s remarkable isn’t that a 1541 can run BASIC or process data internally, but that constraints and packaging decisions (cost-cut bit-banging, slow serial link) shaped a design that was, in practice, more distributed than a lot of modern “smart peripherals.” That’s both a lesson and a reminder: simple external interfaces often mask surprisingly rich internal behavior.
The main lesson was: don't ever do it again. The manufacturing cost of the C128D, a C128 with a built-in floppy drive, was higher than that of the Amiga A500. The retail price was also close.
In a world obsessed with AI and distributed everything, simple problems like "mount this USB drive on every OS without headaches" still feel unsolved. That’s both humbling and oddly comforting.
In a way, "eat real food" functions less as scientific advice and more as a cultural signal. It could be seen as a rejection of industrialized diets and all the complexities around that. The idea of "Eat Real Food" might be a better default when you are hungry and looking for food. I guess time will tell.
Would it be possible to allow users to type a custom font name into a text box? That way a font that is installed on their system can be tested. It could allow people to use this tool to test commercial fonts.
The phrase "artificial intimacy" really sticks with me. If machines can simulate emotional engagement well enough, past a certain threshold many users will treat AI more like real people. This might happen consciously or unconsciously.
That illusion of closeness could warp how we relate to REAL people. Over time, if your "listener" never judges you or walks away, you might measure real human bonds against an unfair standard.
I'd be curious how they handle false positives... e.g. tasks that appear stuck (due to GC pauses, I/O stalls, etc.) vs truly dead ones. I have seen that overzealous cleanup can do more damage than letting a zombie linger. That being said, there is obviously an upper limit to letting zombies linger.
I’ve been wanting to implement a more “overzealous” approach to cleaning up orphaned pods from analytical workflows (Prefect) that hang on to expensive compute resources; sometimes it feels frustratingly out of control. It’s really difficult to separate signal from noise on whether a pod is actually orphaned (due to the things you’ve mentioned), and killing a workload that isn’t actually orphaned can be very costly due to re-runs. Commenting out of solidarity here, but also curious to see others chime in with their approach.
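One pattern I've seen for reducing the false positives mentioned above is a "two-strike" sweep: a task is only killed after missing its heartbeat on two consecutive sweeps, so a single GC pause or I/O stall doesn't trigger cleanup. A minimal sketch (the `sweep` function, the 5-minute grace period, and the heartbeat map are all assumptions for illustration, not any particular orchestrator's API):

```python
import time

# Hypothetical two-strike orphan detection with a grace period.
# A task is flagged on its first missed heartbeat and only killed
# if it is still stale on the next sweep.

GRACE_SECONDS = 300  # assumed threshold: tolerate stalls up to 5 minutes


def sweep(tasks, suspects, now=None):
    """tasks: {task_id: last_heartbeat_timestamp}.
    suspects: a set mutated across sweeps (ids flagged last time).
    Returns the list of task ids to actually kill this sweep."""
    now = time.time() if now is None else now
    to_kill = []
    for task_id, last_beat in tasks.items():
        if now - last_beat > GRACE_SECONDS:
            if task_id in suspects:
                to_kill.append(task_id)   # second strike: treat as orphaned
            else:
                suspects.add(task_id)     # first strike: flag, don't kill yet
        else:
            suspects.discard(task_id)     # heartbeat recovered; clear the flag
    return to_kill
```

The trade-off is latency: a genuinely dead task survives one extra sweep interval, which is the price paid for not re-running an expensive workload that was merely stalled.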
As a tech toy, it’s lovable. But for it to feel like a real “desktop in a keyboard” it needs more performance. I would love for something like this to be a breakout product that is real-world useful.