Generated images moved through this cycle even faster than generated code, in a way, so maybe there is a lesson there we can observe, and one that you also touch upon.
Well, I would be very happy if I could write like that instinctively (at least the feel/flow makes it seem like that) in English, but I can't, not by a long stretch.
The article is me coping with my existential crisis, trying to explore and accept my fears by writing it. And by exploring ideas, I think I found some vision for my stance in all this, or hope if you will. I hope these feelings will prove real, and that I can write a positive blog post too, but I can't be certain at this point whether the feelings will survive scrutiny, or are just warm fuzzy delusions and the next level of cope (I've had these periods a few times in the last year).
I'm just trying to say that I am definitely not deliberately trying to spread FUD to hinder the open web, if that was your impression :P
Texts, like it or not, do not respect our intentions and live their own lives... The developments are indeed disturbing, and I also puzzle over the ways to navigate the mess they create.
The unfolding you foresee is definitely not to be dismissed: we've already seen quite a few people in this discussion sharing the same view, which I perceive as almost totally ungrounded, and ready to act accordingly. But I think we should "grow the box" for our own good, despite not getting proper credit from all those who benefit, because, while losing credit, our ideas gain impact, even the ones that would have had no credit anyway.
What makes me deeply concerned is the economic consequences. Although I believe that this time it's not different and eventually everything will play out to the greater public good, I also believe that this time it's not different and the transition is going to be very harsh. And I'm completely lost here.
Another thing is that I treasure my (very modest) role in public enlightenment. But advances in AI make me feel obsolete in this role, as I really cannot say more than a properly prompted LLM. The only enlightenment still in demand seems to be enlightenment in prompting. I wonder whether others have the same feeling, and what it might lead to.
(please, forgive me my midnight musings, I just had to say that at last)
And it's not just that the execution is faster now. The competition saw the "outer shell" of your idea. But LLM platforms (the forest) - they see the internals, if you used them to explore and develop it. They also see all similar ideas across the globe.
And they own, rather than rent, the compute and models, as you do from them. If we want to extend this, they could "pre-cog" your idea and build it even before you do.
I'm not talking about what is happening now, I'm just playing out the thought experiment.
the pre-cog angle is the scariest part. it's not even that they copy you after the fact: the prompt patterns across millions of users already signal where demand is clustering before any individual ships. the only real counter is speed and distribution: get to users before the signal becomes obvious enough to act on. which ironically means building in public is still the better strategy; hiding slows you down more than it protects you.
The dynamics of the space dark forest are different from those when we live on the same planet. It comes down to the lack of (and slowness of) communication over vast distances, communication you can't trust anyway.
It relied on two principles, "the chain of suspicion" and "technological explosion", which don't hold true if we are on the same planet. You can google it (or llm it) :)
When I read it for the second time, trying to understand it, I wondered: maybe an even better match for the low-orbit flying garbage would be "enshittification"? As time goes on, more and more garbage is produced, and we have no clear way, or specific motivated entity, to start removing it, so it just grows.
Enshittification specifically is when a product/service/platform gets worse from the user's perspective because the platform vendor can directly profit from user-hostile design; for example, Google intentionally serves up bad results on the first search results page so the user clicks through to the second page of results, generating more ad revenue for Google[1].
…whereas I feel what you’re describing is another Tragedy-of-the-Commons.
As the first line of the post says - it's a thought experiment, so comments like yours that open new options and ask new questions are the best outcome.
I have no comment other than: very interesting. I thought about how the overlying model will change for us, but hadn't considered that the underlying model (what you proposed) can change too... if that makes sense.
I seem to get into a sort of existential crisis every few months with the progress that LLMs are making. I probably fool myself for a while that "it's not real", then at some point I can't fool myself any more, then I accept it somewhat... then new progress happens and the cycle repeats.
But as it's written at the top, this was a thought experiment, not a prediction. And while I tried to put all the bad scenarios on the table (with the theme of the dark forest that is), I think I again found a sense of optimism, because I also think this thought experiment has flaws.
So I hope that after a while I will be able to write the contrary; I've already written down some points about it, and I already have a title. But we will see. I am more optimistic after writing this than before. :P
> This was the same before: if you had a novel idea and made a product out of it, others followed. Especially for LLMs, they are not (till now) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date may end up in training data but is not yet available to the model, so you only have to be fast enough. Especially LLMs/AI agents like Claude enable the speed you need for bringing out something new.
You have a point about the update intervals and the higher speed they provide to developers. But you are talking about now, and I was making a thought experiment about a potential future. LLMs are not learning on the fly, but I suspect they do log the conversations and their responses, and could also deduce from further interaction whether a particular response was satisfactory to the user. So in a world where available training data is drying up, nobody is throwing all this away. Gemini even has direct upvote/downvote on responses. Algorithms will probably improve, and the intervals will probably shorten.
Given the detailed information all this back-and-forth generates, I think it would not be hard to use similar technology to track underlying trends, collect the problems associated with them and the whole solution space being discussed, and generate the solution before even the people who thought of it release theirs. Theoretically :)
I think open development will become less open. I don't like it, but I think it's already happening. First, all the blogs and forums moved to specialized platforms (SO, Discords, ...), and now even some of those are d(r)ying. If people (in extreme cases) don't even read the code they produce, why would they read about the code, or discuss code that isn't even in their care? And that is without the theoretical fear of the global Borg slurping up everything they write.
> LLMs are not learning on the fly, but I suspect they do log the conversations and their responses, and could also deduce from further interaction whether a particular response was satisfactory to the user.
Seems like this is hard to do reliably across the board. Sometimes when I stop interacting, it's because it nailed the solution, and sometimes it's because it went so poorly that I opted to bin it and do it myself. Maybe all of the mid-conversation planning and feedback is enough, though.