Hi HN! OP here. Thanks for reading and commenting (and thanks to @swah for posting!). It's unsettling to hit the HN front page, and even more so with an article I wrote hastily. I guess you never know what's going to hit a nerve.
Some context: I'm basically trying to make sense of the tidal wave that's engulfing software development. Over the last 2-3 weeks I've realized that LLMs will start writing most code very soon (I could be wrong, though!). This article is just me making sense of it, not trying to convince anybody of anything (except, perhaps, of giving the whole thing a think). Most of the "discarded" objections I presented in the list were things I espoused myself over the past year. I should have clarified that in the article.
I (barely) understand that LLMs are not a programming language. My point was that we could still think of them as a "higher-level programming language", despite them 1) not being programming languages; 2) being wildly nondeterministic; 3) also jumping levels, since they can help you direct them. Looking at the phenomenon of LLMs this way is an attempt to see whether previous shifts in programming can at least partially explain the dynamics we are seeing unfold so quickly (to find, in Ray Dalio's words, "another kind of those").
I am stepping into this world of LLM code generation with complicated feelings. I'm not an AI enthusiast, at least not yet. I love writing code by hand, and I am proud of my hand-written open source libraries. But I am also starting to experience the possibilities of working at a higher level of programming and being able to do much more, in both breadth and depth.
I fixed an important typo - here I meant: "Economically, only quality is undisputable as a goal".
Responding to a few interesting points:
@manuelabeledo: during 2025 I've been building a programming substrate called cell (think language + environment) that attempts to be both very compact and very expressive. Its goal is to massively reduce complexity and make general purpose code more understandable (I know this is laughably ambitious, and I'm desperately limited in my ability to pull off something like that). But because of the LLM tsunami, I'm reconsidering the role of cell (or any other successful substrate): even if we achieve the goal, how will it interact with a world where people mostly write and validate code through natural language prompts? I never meant to say that natural language would itself be this substrate, or that the combination of LLMs and natural language could do that: I still see that there will be a programming language behind all of this. Apologies for the confusion.
@heikkilevanto & @matheus-rr: Mario Zechner has a very interesting article where he deals with this problem (https://mariozechner.at/posts/2025-06-02-prompts-are-code/#t...). He's exploring how structured, sequential prompts can achieve repeatable results from LLMs, which you still have to verify. I'm experimenting with the same, though I'm just getting started. The idea I sense here is that perhaps a much tighter process of guiding the LLM, with current models, can get you repeatable and reliable results. I wonder if this is the way things are headed.
@woodenchair: I think we can already experience a revolution with LLMs that are not fully autonomous. The potential is that an engineering-like approach to a prompt flow can let you design and review (not write) a lot more code than before. Though you're 100% correct that the analogy doesn't strictly hold until we can stop looking at the code, the same way a JS dev doesn't look at what the interpreter emits.
@nly: great point. The thing is that most of the code we write is not elegant implementations of algorithms, but mostly glue or CRUD. So LLMs can still be broadly useful.
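To make that concrete, here is a very rough sketch of the kind of sequential flow I'm experimenting with. It's only an illustration under my own assumptions: call_llm and verify are hypothetical placeholders, not any real API, and the steps are just examples.

    # Sketch of a structured, sequential prompt flow (all names are placeholders).

    def call_llm(prompt: str) -> str:
        # Stand-in for whichever model API you actually use.
        raise NotImplementedError("plug in your model client here")

    def verify(step_name: str, output: str) -> bool:
        # Placeholder gate: swap in tests, linters, or human review.
        return bool(output.strip())

    # Fixed, ordered prompt templates; each step consumes the previous step's output.
    STEPS = [
        ("spec", "Write a precise spec for this feature:\n{input}"),
        ("plan", "Break this spec into small, ordered tasks:\n{input}"),
        ("code", "Implement these tasks as a single module:\n{input}"),
    ]

    def run_flow(initial_input: str) -> str:
        current = initial_input
        for name, template in STEPS:
            output = call_llm(template.format(input=current))
            if not verify(name, output):
                raise RuntimeError(f"step '{name}' failed verification; stop and review")
            current = output  # verified output feeds the next step
        return current

The point being that the flow itself is ordinary code: fixed templates, an explicit order, and a verification gate after every step, so the process stays repeatable even when any single model response isn't.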
I hope I didn't rage-bait anybody; if I did, it wasn't intentional. This was just me thinking out loud.
Hi HN! OP here. Thank you everyone for reading and commenting. Thanks to your feedback, I have made the following edits to the post:
- Added a comment on GLP-1 agonists. I wrote the article like it was 2023, not 2025. These drugs now exist, and their benefits massively outweigh their drawbacks, particularly for people who really need help. Anything that helps people out of the trap, particularly with this effectiveness, should be front and center. Thank you for pointing it out.
- Added a comment with my take on the usefulness of exercise in this process. I don't believe in exercise as a calorie burner, but as something you need in order to be strong, fit, and flexible, and to feel better mentally. It supports you in your journey. Exercising in order to burn calories to get lean is counterproductive; it is a thick wall of the mental fat trap.
- I realize that my struggles (and I don't say this lightly) have been a small fraction of what many of you had to go through, or are still going through. I now mention this in the article as well. For some, it can be ten, a hundred, a thousand times harder than for others to break free from being overweight and to regulate their food in a way that is mentally healthy.
- I also added this: "Incidentally, I don't think this is about willpower (this is another parallel with Carr's insight). The decision to change comes from a deeper source. When I was most obsessed about asserting willpower over my eating, I was having the worst time and making bad choices. Getting out involves awareness, work, and a willingness to fail and keep on trying. The authors above say it much better than I can."
I hope, again, that this was helpful for those with similar struggles.
You brought up smoking and were mere inches from the truth, but quickly ran away back into la-la land... Smoking used to be more prevalent than obesity. Did we bring it down with "smoking positivity", and did shaming and harassing smokers only bring harm? What do you think?
For what it's worth, I believe that shaming is generally not helpful. For people to step out of a difficult situation, they need to be empowered. They will probably find your help, or at least your sympathy, more useful than your shaming them. At least that's how I see it.
Good article; I can (unfortunately) relate.
Another aspect of the trap: when you have setbacks (stress, life events) or get tired (long days, less sleep, emotional events), the first recourse is typically to drop the hardest parts, i.e. physical fitness: you take the car instead of biking or walking, skip sports, drink alcohol instead of water.
It's sometimes a vicious circle: you're tired because you're overweight, so you eat more to get energy, which makes you more overweight.
Hi HN! OP here. I would never have expected to find interest here in ultra running. But of course there is -- thanks for the comments!
@n4r9: I avoid peanuts because I find them somewhat allergenic. I'll update the post.
@ekr____: regarding fat, I think it's a personal thing. I get really tired of getting my calories through carbs when ultra running. I find it quite easy to scarf down 2000-3000 kcal of nuts, and they seem to sit well. But then again, I'm a slow runner (even by ultra standards). Also, indeed, the 30k stops are for a context with no race organization (no drop bags), where there's always a supermarket nearby.
@swah: thanks for posting! Stopping every 30k sounds crazy because it is crazy. But crazy is what ultra is, after all. I'm currently trying to "get used" to running 100k. In that context, carrying supplies for 30-40k (4-5 hours of running/shuffling) is reasonable. Stopping too often gives me more opportunities to get distracted, burn time, and give up sooner than I otherwise would.
> If you were my boss I'd be putting on my parachute and heading for the airstairs and expecting to see a plume of smoke wherever you impact.
(Ouch!)
You raise an interesting point. Perhaps, without a large amount of "bin #2" work, it's impossible for "bin #1" work to really take off. I've experienced this firsthand, and I always end up doing a lot of "bin #2". But I wonder how much better it could be if I dared to focus even further. Or whether that's possible at all and I'm just taking on too much risk.
Do you have any examples of the 80/20 idea actually killing a startup? I'm genuinely interested.