It might simply be a matter of time. Claude Code recently transitioned to a native executable rather seamlessly, eschewing its npm and JavaScript dependencies. I imagine the Desktop may migrate similarly. One motivation might be somewhat more effective protection against code analysis -- though LLM control of tools like Ghidra might make that point somewhat moot as well.
My dad is 82, and like many in his generation, he suffers the curse of the industrious man -- he will continue to perform physical feats for which he assumes capability until he injures himself (and possibly others). Driving is just one of these things.
So far, he has had operations on his hips, hands, shoulders, and back after overexerting himself while gardening, moving furniture, and... walking. When I ask him to consider the risks of driving, he brushes me off like I'm being ridiculous. It infuriates me. There's no arguing with him. And I'm absolutely gonna get a phone call one day that I'm not gonna like.
Regardless, we're talking about a distribution of abilities, and the number of people who can't drive safely is going to increase dramatically in the near future. The point of this article wasn't to judge all boomers.
It really depends on the style of game. There are gradations here. I think many designers are stuck in a static, controlling posture. Minecraft is an excellent and viral example of an alternative.
I don't think Minecraft is a fractal-like experience. The world may be infinite, but it repeats, and each block that world is comprised of has a set of rules that don't change. The living entities all follow their own rules, etc.
The costs of interactive AI have interesting effects, as the author points out. Much like the lack of variety in music models, 3D asset generation via AI has a long way to go, particularly as studios have no incentive to share their data. But I think AI assistance could at least make some marginal improvements. Take a procedural game like No Man's Sky. There are billions of possible worlds in the game. But, presumably because of team size and/or agile philosophy, the developers drop these rigid, nearly identical items and buildings throughout those billions of planets. AI assistance (via Claude Code, etc.) could at least conveniently add some incremental complexity to their existing asset generation, lending more believability to these planets. If studios were willing to partner and share data, they could build incredible world-generation models, used initially as asset generators rather than real-time world renderers, that dramatically empower designers.
I appreciate the tone of this article. I am exhausted by the usual existential fear, but doomers are in good company -- during the development of the OG atom bomb, there was fear of the possibility that the chain reaction would not stop and all life as we know it would be destroyed upon detonation. I look at the idea of a rapid AI-induced material "robocalypse" (robot apocalypse) as a similar projected fear, and the result as similarly unlikely, particularly in the near term (coming decades). Even with AI access to sophisticated 3D fabrication facilities, severe supply chain constraints would impede an overwhelming spawn of robots. If, say, China or the United States already had a ubiquitous deployment of robots with manual and mobility capabilities roughly equivalent to humans, the concern would perhaps actually be warranted. We are far from that. Clownpocalypse fits the bill better. Much of the Clownpocalypse is in the ideas themselves, like Nick Bostrom's paperclip improbability.
The scenario was about the first fusion (hydrogen) bomb test causing a runaway "ignition" of the atmosphere. It was never considered likely, but they still did the math to make certain it couldn't happen.
The author, and even posters here in the comments, have neglected to speak of a related and highly successful technology -- search engines -- instead focusing on the broader internet as a technological development. Google's search engine product has earned billions. At a minimum, AI has already disrupted, transformed, and radically improved this ubiquitous technology. That alone, plus the many other transformative applications AI has already productized (Claude Code, anyone?), earns it a place well ahead of the hyped technologies the author has listed.
The author writes as if he didn't know 'aider' even existed. "Vibe coding skipped that phase entirely" is dead wrong. What may be different is that the cycle was incredibly short before Anthropic made it mainstream with Claude Code. Gemini CLI, definitely a Claude Code imitator, existed long before The New York Times knew what Claude Code was. Openclaw -- a decidedly different agentic AI application -- is part of another period where weirdos are playing with tools.
The juxtaposition of quotations at the head of this article will seem even sillier as AI progresses. The user-centric culture that Steve Jobs championed at Apple is quite orthogonal to the trajectory of artificial intelligence. AI has been under collective development for decades. Along this trajectory, ChatGPT was the discovery of a viable "product". Given OpenAI's documented history, ChatGPT was not the result of building a tool toward a specific user need. It is no accident that Apple does not know what to do with AI yet. I am hoping that they can learn from Anthropic's tool empowerment lead and from the possibilities of OpenClaw, and implement thoughtful AI integrations for their products. OpenAI can learn from them too, but they aren't in a particularly advantageous incumbent position like Apple and Google. But whatever Apple may do, it will only be a fraction of the AI story, regardless of its consumer success. Comparing the markets of OpenAI and Anthropic highlights this diversity.
Not sure how many developers are like me, but I am very open to Claude, very open to Gemini, open to open source models (including gpt-oss), but am very reluctant to use frontier OpenAI models. The Microsoft distrust runs extremely deep, the browser authentication dance demanded of users for ChatGPT was the most extreme of the major frontier models, and early OpenAI API service stability was absolutely terrible. Llama had my back back then.
This is in no way dismissing your concern, but I think this reinforces my point about branding. Whether or not Microsoft is handling AI in a responsible way, we don't trust them due to their poor practices on Windows.