Saying that code quality doesn't matter is simply false. If people enjoyed Claude Code so much, Anthropic wouldn't have to block direct API access for users.
Their "product-market fit" is the LLM itself, not the harness. The harness is completely replaceable in my opinion; Claude Code is just the cheapest way to access the models.
In my experience, it's definitely faster to do it manually if it's something you know well. What LLMs enable is skipping the research and learning by producing usable code immediately.
There is a long way between "usable code" and "the code I actually want", and each change I ask for piles on more slop. I don't get the slop when I spend the same amount of time writing it out myself.
Most of what I find AI useful for is analyzing and summarizing large volumes of data: looking through log files for a problem, or compiling reports from tons of JSON data. But even for those use cases, a simple Ctrl-F is often way, way faster.
That made me laugh a bit as well. I'd definitely want to see some rigorous testing on that; I'd expect that on longer calls the caller can make the AI say basically anything.
Not just when running out of context; it's always. Once it fixates on a goal, all hell breaks loose and there's nothing it won't sacrifice to get there. At least that's my experience with Claude Code; I'm pressing the figurative brakes all the time.
I think that generative AI has many more downsides than upsides. It is and will continue to be a net negative for society, unless we have the collective discipline to manage the dangers.
"You think" is cheap. Try doing it: rewrite an existing library in Rust and see how it goes. Doing a rough prototype is easy, but the real work starts after that.