I had some ideas for extending the lem editor (Emacs in Common Lisp) the other day, and I am barely literate in Lisp. So I had Claude Code do it.
Fully awesome. No problems. A few paren issues, but it didn't really seem to struggle. It produced working code, and it was also really good at analyzing the lem codebase.
Besides, one can easily code a skill+script for detecting the problem and suggesting fixes. In my anecdotal experience it cuts down the number of times dumber models walk in circles trying to balance parens.
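A minimal sketch of what such a script could look like, assuming plain delimiter counting (a real version would also need to skip Lisp strings, character literals, and comments):

```python
# balance.py - report the first unbalanced delimiter with its location,
# so the model gets a concrete position instead of rescanning the file.
# Sketch only: does not skip strings or comments.
import sys

OPENERS = "([{"
CLOSERS = {")": "(", "]": "[", "}": "{"}

def check_balance(text):
    stack = []  # (opener, line, col) for each unmatched opener
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in OPENERS:
                stack.append((ch, lineno, col))
            elif ch in CLOSERS:
                if not stack or stack[-1][0] != CLOSERS[ch]:
                    return f"unexpected '{ch}' at line {lineno}, col {col}"
                stack.pop()
    if stack:
        ch, lineno, col = stack[-1]
        return f"unclosed '{ch}' opened at line {lineno}, col {col}"
    return None

if __name__ == "__main__":
    error = check_balance(open(sys.argv[1]).read())
    print(error or "balanced")
```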
- Markdown files: Obsidian Sync merges the changes using Google's diff-match-patch algorithm.
- Other file types: For all other files, including canvases, Obsidian uses a "last modified wins" approach. The most recently modified version replaces earlier versions.
For conflicts in Obsidian settings, such as plugin settings, Obsidian Sync merges the JSON files. It applies keys from the local JSON on top of the remote JSON.
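Roughly, the two strategies look like this (an illustrative sketch using the open-source diff-match-patch Python library, not Obsidian's actual code; `base` is assumed to be the last synced version of the file):

```python
# pip install diff-match-patch
from diff_match_patch import diff_match_patch

def merge_markdown(base, local, remote):
    """Turn the local edits into patches against base, then replay
    them onto the remote text, keeping both sides' changes."""
    dmp = diff_match_patch()
    patches = dmp.patch_make(base, local)
    merged, _results = dmp.patch_apply(patches, remote)
    return merged

def merge_settings(remote, local):
    """Shallow JSON merge: local keys are applied on top of remote ones."""
    return {**remote, **local}
```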
Pi easily makes GPT-5.3-Codex perform about on par with Claude.
There's something in the default Codex harness that makes it fight with both arms tied behind its back; maybe the sandboxing is overly paranoid or something.
With Pi I can one-shot many features faster and more accurately than with Codex-cli.
Different sets of people, and different audiences. The CEO / corporate executive crowd loves AI. Why? Because they can use it to replace workers. The general public / ordinary employee crowd hates AI. Why? Because they are the ones being replaced.
The startups, founders, VCs, executives, employees, etc. crowing about how they love AI are pandering to the first group of people, because they are the ones who hold budgets that they can direct toward AI tools.
This is also why people might want to remain anonymous when doing an AI experiment. This lets them crow about it in private to an audience of founders, executives, VCs, etc. who might open their wallets, while protecting themselves from reputational damage amongst the general public.
I have been in dozens of meetings over the past year where directors have told me to use AI to enable us to fire 100% of our contract staff.
I have been in meetings where my director has said that AI will enable us to shrink the team by 50%.
Every single one of my friends who do knowledge work has been told that AI is likely to make their job obsolete in the next few years, often by their bosses.
You don't have to look past this very forum: most people here seem to be very positive about gen AI when it comes to software development specifically.
Lots of folk here will happily tell you about how LLMs made them 10x more productive, and then their custom agent orchestrator made them 20x more productive on top of that (stacking multiplicatively of course, for a total of 200x productivity gain).
I don't know what your bubble is, but I'm a regular programmer and I'm absolutely excited, even if a little uncomfortable. I know a lot of people who are the same.
I am using AI a lot to do tasks that just would not get done because they would take too long. Also, getting it to iterate on a React web application means I can think about what I want it to do rather than worry about all the typing I would have to do. It's especially powerful when moving things around: hand-written code has a "mental load" to move that telling an AI to do it does not.
Obviously not everything is 100% but this is the most productive I have felt for a very long time. And I've been in the game for 25 years.
Why do you need to move things around? And how is that difficult?
Surely you have an LSP in your editor and are able to use sed? I've never had moving files take more than fifteen minutes (for really big changes), and even then most of the time is spent thinking about where to move things.
LLMs have been reported to specifically make you "feel" productive without actually increasing your productivity.
I mean there are two different things. One is whether there are actual productivity boosts right now. And the second is the excitement about the technology.
I am definitely more productive. A lot of this productivity is wasted on stuff I probably shouldn't be writing anyway. But since I started using coding agents, I'm both more productive at my day job and building so many small hobby projects that I would have never found time for otherwise.
But the main topic of discussion in this thread is the excitement about the technology. And I have somewhat mixed feelings, because on the one hand I feel like a turkey being excited for Thanksgiving. On the other hand, I think the future of programming is bright: there will be so much more software built, and for a lot of it you will still need programmers.
My excitement comes from the fact that I can do so many more things that I wouldn't even have thought about being able to do a few months ago.
Just as an example, in the last month I have used agents to add features to the applications I use daily: text editor, podcast application, Android keyboard. The agents were able to fork, build, and implement a feature I asked for in a project where I had no idea about the technology. If I were hired to do those features, I would be happy to implement them after two weeks on the job. With an agent, I get tailor-made features in half a morning, spending less than ten minutes prompting.
I am building educational games for my kids.
They learn a new topic at school? Let me quickly vibe the game to make learning it fun. A project that wouldn't be worth my weekend, but is worth 15 minutes. https://kuboble.com/math/games/snake/index.html?mode=multipl...
So I'm excited because I think coding agents will be for coding what pencil and paper were for writing.
I don't understand the idea that you "could not think about implementing a feature".
I can think of roughly 0 features of run-of-the-mill software that would be impossible to implement for a semi-competent software developer. Especially for the kinds of applications you mention.
Also it sounds less like you're productive and more like the vibeslop projects are distracting you.
I produce more good (imo) production features despite being distracted.
The features I mention are things that I would be able to do, but only with a lot of learning and great effort - so in practical terms I would not.
It is probably a skill issue, but many times in the past I have downloaded an open source project and just couldn't build and run it: cryptic build errors, figuring out dependencies. And I see Claude gets the same errors, but it just knows how to work around them.
Setting up local development environment (db, dummy auth, dummy data) for a project outside of my competence area is already more work than I'm willing to do for a simple feature. Now it's free.
>I can think of roughly 0 features of run-of-the-mill software that would be impossible to implement for a semi-competent software developer.
Yes. In my area of competence it can do the coding tasks I know exactly how to do, just a bit faster. Right now, for those tasks, I'd say it can one-shot code that would take me a day.
But it enables me to do things in the area where I don't have expertise. And getting this expertise is very expensive.
I have a large C# application. In this application there is functionality that converts a group of settings into a tree model (a list of commands to generate this tree). There are a lot of weird settings and special cases.
I asked claude to extract this logic into a separate python module.
It successfully one-shotted that, and I would estimate it at two days' work for me (and I wrote the original C# code).
This is probably the best possible kind of task for coding agents, given that it's a very well-defined task with already existing test cases.
I feel like it depends on the platform and your location.
An anonymous platform like Reddit, and even HN to a certain extent, has issues with bad-faith commenters on both sides targeting someone they do not like. Furthermore, the MJ Rathburn fiasco itself highlights how easy it is to push divisive discourse at scale. The reality is trolls will troll for the sake of trolling.
Additionally, "AI" has become a political football now that the 2026 Primary season is kicking off, and given how competitive the 2026 election is expected to be and how political violence has become increasingly normalized in American discourse, it is easy for a nut to spiral.
I've seen fewer issues when these opinions are tied to one's real-world identity, because one has less incentive to be a dick due to social pressure.
That’s a big reason I am open about my identity, here (and elsewhere, but I’m really only active, hereabouts).
At one time, I was an actual troll. I said bad stuff, and my inner child was Bart Simpson. I feel as if I need to atone for that behavior.
I do believe that removing consequences, almost invariably brings out the worst in people. I will bet that people are frantically creating trollbots. Some, for political or combative purposes, but also, quite a few, for the lulz.
There is a massive difference between saying "I use AI" and what the author of this bot is doing. I personally talk very little about the topic because I have seen some pretty extreme responses.
Some people may want to publicly state "I use AI!" or whatever. It should be unsurprising that some people do not want to be open about it.
The more straightforward explanation for the OP's question is that they realized what they were doing was reckless and, given enough time, was likely to blow up in their face.
They didn't hide because of a vague fear of being associated with AI generally (of which there is currently no shortage online), but because of this specific, irresponsible manifestation of AI that they imposed on an unwilling audience as an experiment.
I personally know some of those people. They are basically being forced by their employers to post those things. Additionally, there is a ton of money promoting AI. However, in private those same people say that AI doesn't help them at all and in fact makes their work harder and slower.
You are assuming people are acting in good faith. This is a mistake in this era. Too many people took advantage of the good faith of others lately and that has produced a society with very little public trust left.
I mean, this is very obviously false. Literally everyone is not. Some people are, some people are absolutely condemning the use, some people use it just a bit, etc.
Anti-AI people are treated in a condescending way all the time. Then there is Suchir Balaji.
Since we are in a Matplotlib thread: People on the NumPy mailing list that are anti-AI are actively bullied and belittled while high ranking officials in the Python industrial complex are frolicking at AI conferences in India.
I'm playing around with it, and it's very cool! One issue is that fingerprint expansion doesn't always work, e.g. I have a memory "Going to Albania in January for a month-long stay in Tirana" and asking "Do I need a visa for my trip?" didn't turn up anything, using expansion "visa requirements trip destination travel documents..."
What would you think about adding another column that is used for matching that is a superset of the actual memory, basically reusing the fingerprint expansion prompt?
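Concretely, something like this (all names here are hypothetical, just to illustrate the suggested schema):

```python
# Sketch: store an LLM-expanded "superset" column next to each memory
# and match queries against both the original text and the expansion.
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    expansion: str  # generated once at write time by the expansion prompt

def matches(memory: Memory, query_terms: set) -> bool:
    haystack = f"{memory.text} {memory.expansion}".lower()
    return any(term in haystack for term in query_terms)

m = Memory(
    text="Going to Albania in January for a month-long stay in Tirana",
    expansion="travel trip visa requirements documents Albania Tirana "
              "January month-long stay",
)
print(matches(m, {"visa", "trip"}))  # True: the terms hit the expansion
```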
Depends on what your prose is for. If it's for documentation, then prose which matches the expected tone and form of other similar docs would be clichéd in this perspective. I think this is a really good use of LLMs - making docs consistent across a large library / codebase.
A problem I’ve found with LLMs for docs is that they are like ten times too wordy. They want to document every path and edge case rather than focusing on what really matters.
It can be addressed with prompting, but you have to fight this constantly.
> A problem I’ve found with LLMs for docs is that they are like ten times too wordy
This is one of the problems I feel with LLM-generated code as well. It's almost always between 5x and 20x (!) as long as it needs to be. Though in the case of code verbosity, it's usually not because of thoroughness so much as extremely bad style.
I have been testing agentic coding with Claude 4.5 Opus and the problem is that it's too good at documentation and test cases. It's thorough in a way that it goes out of scope, so I have to edit it down to increase the signal-to-noise.
The “change capture”/straitjacket-style tests LLMs like to output drive me nuts. But humans write those all the time too, so I shouldn't be that surprised!
1. Take every single function, even private ones.
2. Mock every argument and collaborator.
3. Call the function.
4. Assert the mocks were called in the expected way.
These tests help you find inadvertent changes, yes, but they also create constant noise about changes you intend.
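For anyone who hasn't run into it, a condensed example of the pattern (Python with unittest.mock; the names are made up):

```python
from unittest.mock import Mock

def sync_user(user_id, db, mailer):
    user = db.fetch(user_id)
    mailer.send(user.email, "synced")
    return user

def test_sync_user_mirrors_implementation():
    db, mailer = Mock(), Mock()
    user = db.fetch.return_value
    result = sync_user(42, db, mailer)
    # Asserts *how* the function works, not *what* it guarantees,
    # so any refactor breaks the test even when behaviour is unchanged.
    db.fetch.assert_called_once_with(42)
    mailer.send.assert_called_once_with(user.email, "synced")
    assert result is user
```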
Juniors on one of the teams I work with only write this kind of test. It’s tiring, and I have to tell them to test the behaviour, not the implementation. And yet every time they do the same thing. Or rather, their AI IDE spits these out.
If the goal is to document the code and it gets sidetracked and focuses on only certain parts, it has failed the test. It just further proves LLMs are incapable of grasping meaning and context.