
>Yegge is leaning into the true definition of vibecoding with this project: “It is 100% vibecoded. I’ve never seen the code, and I never care to.”

I don't get it. Even with a very good understanding of the type of work I'm doing, prior knowledge of the code, and a very well-specced problem, Claude Code etc. just plain fail or produce sloppy code. How do these industry figures claim they've seen no part of a 225K+ line codebase and promise that it works?

It feels like we're entering an era where oceans of code that nobody understands are going to be produced, which we hope AGI will swoop in and clean up?





This is also my experience. Everything I’ve ever tried to vibe code has ended up with off-by-one errors, logic errors, repeated instances of incorrect assumptions etc. Sometimes they appear to work at first, but, still, they have errors like this in them that are often immediately obvious on code review and would definitely show up in anything more than very light real world use.

They _can_ usually be manually tidied and fixed, with varying amounts of effort (small project = easy fixes, on a par with regular code review, large project = “this would’ve been easier to write myself...”)

I guess Gas Town’s multiple layers of supervisory entities are meant to replace this manual tidying and fixing, but, well, really?

I don’t understand how people are supposedly having so much success with things like this. Am I just holding it wrong?

If they are having real success, why are there no open source projects that are AI developed and maintained that are _not_ just systems for managing AI? (Or are there and I just haven’t seen them?...)


In my comment history can be found a comment much like yours.

Then Opus 4.5 was released. I already had my CC CLAUDE.md and Windsurf global rules + workspace rules set up. Also, my main money-making project is React/Vite/Refine.dev/antd/Supabase... known patterns.

My point is that given all that, I can now deploy amazing features that "just work," and have excellent ux in a single prompt. I still review all commits, but they are now 95% correct on front end, and ~75% correct on Postgres migrations.

Is it magic? Yes. What's worse is that I believe Dario. In a year or so, many people will just create their own Loom or Monday.com equivalent apps with a one page request. Will it be production ready? No. Will it have all the features that everyone wants? No. But it will do what they want, which is 5% of most SaaS feature sets. That will kill at least 10% of basic SaaS.

If Sonnet 3.5 (~Nov 2024) to Opus 4.5 (Nov 2025) progress is a thing, then we are slightly fucked.

"May you live in interesting times" - turns out to be a curse. I had no idea. I really thought it was a blessing all this time.


Yeah, it sounds like "you're holding it wrong"

Like, why are you manually tidying and fixing things? The first pass is never perfect. Maybe the functionality is there but the code is spaghetti or untestable. Have another agent review and feed that review back into the original agent that built out the code. Keep iterating like that.

My usual workflow:

Agent 1 - Build feature

Agent 2 - Review these parts of the code; see if you find any code smells, bad architecture, scalability problems that will pop up, untestable code, or anything else falling outside of modern coding best practices

Agent 1 - Here's the code review for your changes, please fix

Agent 2 - Do another review

Agent 1 - Here's the code review for your changes, please fix

Repeat until testable, maybe throw in a full codebase review instead of just the feature.

Agent 1 - Code looks good, start writing unit tests, go step by step, let's walk through everything, etc. etc. etc.

Then update your .md directive files to tell the agents how to test.

Voila, you have an llm agent loop that will write decent code and get features out the door.
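Roughly, that loop can even be scripted - here's a minimal sketch in Python, assuming a headless agent CLI (Claude Code's `claude -p` print mode here); SPEC.md, the prompts, and the three-pass cap are all placeholders, not a real setup:

    import subprocess

    def run_agent(prompt: str) -> str:
        # Headless call to an agent CLI; `claude -p` is Claude Code's
        # non-interactive mode, but swap in whatever tool you actually use.
        result = subprocess.run(["claude", "-p", prompt],
                                capture_output=True, text=True)
        return result.stdout

    # Agent 1 builds, Agent 2 reviews, findings go back to Agent 1.
    run_agent("Implement the feature described in SPEC.md")  # SPEC.md is hypothetical
    for _ in range(3):  # repeat until the review comes back clean
        review = run_agent("Review the latest changes for code smells, bad "
                           "architecture, scalability problems, and untestable "
                           "code. Reply NO ISSUES if everything looks fine.")
        if "NO ISSUES" in review:
            break
        run_agent("Here is a code review of your changes, please fix:\n" + review)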


I'm not trying to be rude here at all but are you manually verifying any of that? When I've had LLMs write unit tests they are quick to write pointless unit tests that seem impressive "2123/2123 tests passed!" but in reality it's testing mostly nothing of value. And that's when they aren't bypassing commit checks or just commenting out tests or saying "I fixed it all" while multiple tests are broken.

Maybe I need a stricter harness but I feel like I did try that and still didn't get good results.


I feel like it was doing what you're saying about 4-6 months ago. Especially the commenting out tests. Not always but I'd have to do more things step by step and keep the llm on track. Now though, the last 3-4 months, it's writing decent unit tests without much hand holding or refactors.

Hmm, my last experience was within the last 2 months but I'm trying not to write it off as "this sucked and will always suck", that's the #1 reason I keep testing and playing with these things, the capabilities are increasing quickly and what did/didn't work last week (especially "last model") might work this week.

I'll keep testing it but that just hasn't been my experience; I sincerely hope that changes, because an agent that runs unit tests [0] and can write them would be very powerful.

[0] This is a pain point for me. The number of times I've watched Claude run "git commit --no-verify"... I've told it in CLAUDE.md to never bypass commit checks, I've told it in the prompt, I've added it 10 more times in different places in CLAUDE.md, but still, the agent will always reach for that if it can't fix something in 1-3 iterations. And yes, I've told it "If you can't get the checks to pass then ask me before bypassing the checks".

It doesn't matter how many guardrails I put up and how good they are if the agent will lazily bypass them at the drop of a hat. I'm not sure how other people are dealing with this (maybe with agents managing agents and checking their work? A la Gas Town?).


I haven't seen your issue, but git is actually one of the things I don't have the llm do.

When I work on issues I create a new branch off of master, let the llm go to town on it, then I manually commit and push to remote for an MR/PR. If there are any errors on the commit hooks I just feed the errors back into the agent.


Interesting, ok, I might try that on my next attempt. I was trying to have it commit so that I could use pre-commit hooks to enforce things I want (test, lint, prettier, etc) but maybe instead I should handle that myself and make it more explicit in my prompts/CLAUDE.md to test/lint/etc. In reality I should just create a `/prep` command or similar that asks it to do all of that so that once it thinks it's done, I can quickly type that and have it get everything passing/fixed and then give a final report on what it did.

You’ll likely have the same issue relying on CLAUDE.md instructions to test/lint/etc; mine get ignored constantly, to the point of uselessness.

I’m trying to redesign my setup to use hooks now instead, because poor adherence to rules files across all the agentic CLIs is exhausting to work around.

(and no, Opus 4.5 didn’t magically solve this problem to preemptively respond to that reply)


What do your rules files look like?

I wonder if some people are putting too much into their markdown files about what NOT to do.

I hate people saying the llms are just better auto-correct, but in some ways they're right. I think putting in too much "don't do this" is leading the llm down the path to do "this" because you mentioned it at all. The LLM is probabilistically generating its response based on what you've said and what's in the markdown files; the fact that you put some of that stuff in there at all probably increases the probability those things will show up.


In my projects there's generally a "developer" way to do things and an "llm agent" way to do things.

For the llm a lot of linting and build/test tools go into simple scripts that the llm can run and get shorthand info out of. Some tools, if you have the llm run them, they're going to ingest a lot from the output (like a big stacktrace or something). I want to keep context clean so I have the llm create the tool to use for build/test/linting and I tell it to create it so the outputs will keep its context clean, then I have it document it in the .md file.
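For example, a minimal sketch of such a wrapper, assuming a Python project using pytest; the command and the 30-line cap are just placeholders:

    import subprocess, sys

    # Wrapper the agent runs instead of the raw test command, so a huge
    # failure log doesn't flood its context window.
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    if result.returncode == 0:
        print("ALL TESTS PASSED")
    else:
        lines = (result.stdout + result.stderr).splitlines()
        print("TESTS FAILED - last 30 lines of output:")
        print("\n".join(lines[-30:]))
    sys.exit(result.returncode)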

When working with the LLM I have to start out pretty explicit about using the tooling. As we work through things it will start to automatically run the tooling. Sometimes it will want to do something else, I just nudge it back to use the tooling (or I'll ask it why or if there are benefits to the other way and if there are we'll rebuild the tooling to use the other way).

Finally, if the LLM is really having trouble, I kill the session and start a new one. It used to feel bad to do that. I'd feel like I'm losing a lot of info that's in context. But now, I feel like it's not so bad... but I'm not sure if that's because the llms are better or if my workflow has adapted.

Now, let me backup a little bit. I mentioned that I don't have the llm use git. That's the control I maintain. And with that my workflow is: llm builds feature->llm runs linters/tests->I e2e test whatever I'm building by deploying to a dev/staging/local env->once verified I commit. Now I will continue that context window/session until I feel like the llm starts fucking up. Then I kill the session and start a new one. I rarely compact, but it does happen and I generally don't fret about it too much.

I try to keep my units of work small and I feel like it does the best when I do. But then I often find myself surprised at how much it can do from a single prompt, so idk. I do understand some of the skepticism because a lot of this stuff sounds "hand-wavy". I'm hoping we all start to hone in on some general more concrete patterns but with it being so non-deterministic I'm not sure if we will. It feels like everyone is using it differently and people are having successes and failures across different things. People where I work LOVE MCPs but I can't stand them. When I use them it always feels like I have to remind the llm that it has an MCP, then it feels like the MCP takes too much context window and sometimes the llm still trips over how to use it.


Ok, that's a good tip about separate tools/scripts for the LLM. I did something similar less than a year ago to keep lint/test output to a minimum, but it was still invoked via git hooks. I'll try again with scripts next time I'm doing this. My hope was to let the agent commit to a branch (with code that passed lint/test/prettier/etc), push it, and auto-deploy to preview branches; that's where I'd do my e2e/QA, and once I was happy I could merge it and have it deployed to the main site.

I discussed approaches in my earlier reply. But what you are saying now makes me think you are having problems with too much context. Pare down your CLAUDE.md massively and never let your context usage get over 60-65%. And tell Claude not to commit anything without explicit instructions from you (unless you are working in a branch/worktree and are willing to throw it all away).

put a `git` script in `PATH` that simply errors out i.e.:

    if "--no-verify" in sys.args:
        println("--no-verify is not allowed, file=sys.stderr)
        sys.exit(1)
and otherwise forwards to the underlying `git`

Literally yesterday I was using Claude for writing a SymPy symbolic verification of a mathematical assertion it was making with respect to some rigorous algebra/calculus I was having it do for me. This is the best possible hygiene I could adopt for checking its output, and it still failed to report on results correctly.

After manual line-by-line inspection and hand-tweaks, it still saved me time. But it's going to be a long, long time before I no longer manually tweak things or trust that there are no silent mistakes.


Those kinds of errors were super common 4-6 months ago, but LLM quality moves fast. Nowadays I don't see these very often at all. Two things that make a huge difference: work on writing a spec first. github.speckit, GSD, BMAD, whatever tool you like can help with this. Do several passes on the spec to refine it and focus on the key ideas.

Now that you have a spec, task it out, but tell the LLM to write the tests first (like Test-Driven Development, but without all the formalisms). This forces the LLM to focus on the desired behavior instead of the algorithms. Make sure the tests target real behavior: client APIs doing the right error handling when given bad input, handling tricky cases, etc. Tell the system not to write 'struct' tests - checking that getters/setters work isn't interesting or useful.
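To make the 'struct test' distinction concrete, something like this (hypothetical parse_order function, pytest style, purely illustrative):

    import pytest
    from orders import parse_order  # hypothetical module under test

    # Worth writing: pins down real behavior on bad input.
    def test_rejects_negative_quantity():
        with pytest.raises(ValueError):
            parse_order({"sku": "A-100", "quantity": -3})

    # Not worth writing: a 'struct' test that just restates the data shape.
    def test_quantity_roundtrips():
        assert parse_order({"sku": "A-100", "quantity": 3}).quantity == 3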

Then you implement 1-3 tasks at a time, getting the tests to pass. The rules prevent disabling tests, commenting out tests, and, most importantly, changing the behavior of the tests. Doesn't use a lot of context, little to no hallucinating, and easily measurable progress.


>> When I've had LLMs write unit tests they are quick to write pointless unit tests that seem impressive "2123/2123 tests passed!" but in reality it's testing mostly nothing of value.

This has not happened to me since Sonnet 4.5. Opus 4.5 is especially robust when it comes to writing tests. I use it daily in multiple projects and verify the test code.


I thought I did use Opus 4.5 when I tested this last time but I might have still been on the $20 plan and I cannot remember if you get any Opus 4.5 on that in Claude Code (I thought you did with really low limits?), so maybe I wasn't using Opus 4.5, I will need to try again.

I haven’t used multi-agent set up yet but it’s intriguing.

Are you using Claude Code? How do you run the agents and make them speak?


Let me clarify actually, I run separate terminals and the agents are separated. I think claude code cli is the best. But at home I pay per token. I have a google account and I pay for chatgpt. So I often use codex and gemini cli in tandem. I'll copy + paste stuff between them sometimes or I'll have one review the changes or just the code in general and then feed the other with the outputs. I'll break out claude code for specific tasks or when I feel like gemini/chatgpt aren't quite doing the job right (which has gotten rarer the past few months).

I messed around with separate "agents" in the same context window for a while. I even went as far as playing with strands agents. Having multiple agents was a crapshoot.

Sometimes they'd work great, but sometimes they'd start working on the same files at the same time, argue with each other, etc. The only way I could get multiple agents working, at least how I assumed they should work, was by telling the llm explicitly what agents to create and what work to pass off to which agents. And it did a pretty poor job of that. I tried having orchestration agents, but at a certain point the orchestration agent would just take over and do everything. So I'm not big on having multiple agents (in theory it sounds great, especially since they are supposed to each have their own context window). When I attempted doing this kind of stuff with strands agents it honestly felt like I was trying to recreate claude, so I just stick with plain cli llm tools for now.


I worry about people who use this approach where they never look at the code. Vibe-coding IS possible but you have to spend a lot of time in plan mode and be very clear about architecture and the abstractions you want it to use.

I've written two separate moderately-sized codebases using agentic techniques (oftentimes being very lazy and just blanket approving changes), and I don't encounter logic or off-by-one errors very often, if at all. It seems quite good at the basic task of writing working code, but it sucks at architecture and you need occasional code review rounds to keep the codebase tidy and readable. My code reviews with the AI are like 50% DRY and separating concerns.


In a recent Yegge interview, he mentions that he often throws away the entire codebase and starts from scratch rather than try to get LLMs to refactor their code for architecture.

This has been my best way to learn: put one agent on a big task, let it learn things about the problem and any gotchas, have it take notes, and do it again until I'm happy with the result. If in the middle I think there are two choices that have merit, I ask for a subagent to go explore that solution in another worktree and make all its own decisions, then I compare. I also personally learn a lot about the problem space during the process, so my prompts and choices on subsequent iterations use the right language.

Honestly, in my experience so far, if an LLM starts going down a bad path, it’s better just to roll back to a point where things were OK and throw away whatever it was doing, rather than trying to course correct.

I don't get you guys that are getting such bad results.

Are you guys just trying to one shot stuff? Are you not using agents to iterate on things? Are you not putting agents against each other (have one code, one critique/test the code, and put them in a loop)?

I still look at the code that's produced, I'm not THAT far down the "vibe coding" path that I'm trusting everything being produced, but I get phenomenal results and I don't actually write any code any more.

So like, yeah, first pass the llm will create my feature and there's definitely some poorly written code or duplicate code or other code smells, but then I tell another agent to review and find all these problems. Then that review gets fed back in to the agent that created the feature. Wham, bam, clean code.

I'm not using gastown or ralph wiggum ($$$) but reading the docs, looking over how things work, I can see how it all comes together and should work. They've been built out to automatically do the review + iteration loop that I do.


My feeling has been that 'serious' software engineers aren't particularly suited to use these tools. Most don't have an interest in managing people or are attracted to the deterministic nature of computing. There's a whole psychology you have to learn when managing people, and a lot of those skills transfer to wrangling AI agents from my experience.

You can't be too prescriptive or verbose when interacting with them, you have to interact with them a bit to start understanding how they think and go from there to determine what information or context to provide. Same for understanding their programming styles, they will typically do what they're told but sometimes they go on a tangent.

You need to know how to communicate your expectations. Especially around testing and interaction with existing systems, performance standards, technology, the list goes on.


All our best performing devs/engineers are using the tools the most.

I think this is something a lot of people are telling themselves though, sure.


Best performing by what metric? There aren't meaningful ways to measure engineer "performance" that make them comparable, as far as I know.

Your org doesn't track engineering impact?

What about git stats?

I can tell you the guys that are consistently pushing code AND having the biggest impact are using LLM tools.


Are we measuring productivity by lines of code again? This was treated as unserious for decades.

Why ignore where I mention engineering impact??? Come on, be real here

Probably because you mentioned "git stats".

What did you mean by that?


High number of days with commits, merging and shipping code consistently (some people/project will ship multiple times a day/week, some projects move a little slower).

That plus the completion of high impact projects makes good strong engineers.

Those are the people I see using LLMs


So quantity of code?

What git stats do you have that show “impact”?

The OP was right to assume it was lines of code. Another assumption could be number of commits, which also doesn’t measure impact.


Track engineering impact and git stats were two separate suggestions in that comment. Every org tracks impact through performance reviews.

It lets 0.05X developers be 0.2X developers and 1X developers be 0.9-1.1X developers.

The problem is some 0.05X developers thought they were 0.5X and now they think they're 2X.


Nah, our best devs/engineers use the tools the most.

In my real life experience it's been the middling devs that always talk about "ai slop" and how the tools can't do their jobs.


On our team there's a very clear distinction between three groups:

- those who have embraced AI and learned to use it well

- those who have embraced AI but treat it as a silver bullet

- those who reject AI

First group is by far the most productive and adds the most value to the team.


Yeah, it's similar where I'm at.

If anything the silver bullet people are mostly managers and C levels... some of which don't even use the tools themselves.

Of the devs that rejected it at first, the ones with the same sentiment I'm seeing online in threads like these, we forced one to give it a try. He now totters between using it well and treating it as a silver bullet. I still hear him incredulous about the things claude does at meetings: "I had to do <thing> and I thought I'd let claude get a crack at it... did it in one shot"


I mean, that fits with what I said.

I mean, not all workplaces hire the best.

I have some success but by the time I'm done I'm often not sure if I saved any time.

My (former) coworker who’s heavy into this stuff produced a lot of unmaintainable slop on his way out while singing agents' praises to higher-ups. He also felt he was getting a lot of value and had no issues.

Where is the "super upvote button" when you need it?

YES! I have been playing with vibe coding tools since they came out. "Playing" because only on rare occasions have I created something that is good enough to commit/keep/use. I keep playing with them because, well I have a subscription, but also so I don't fall into the fuddy-duddy camp of "all AI is bad" and can legitimately speak on the value, or lack thereof, of these tools.

Claude Code is super cool, no doubt, and with _highly targeted_ and _well planned_ tasks it can produce valuable output. Period. But every attempt at full-vibe-coding I've done has gotten hung up at some point, and I have to step in and manually fix things. My experience is often:

1. First Prompt: Oh wow, this is amazing, this is the future

2. Second Prompt: Ok, let me just add/tweak a few things

10. 10th prompt: Ugh, every time I fix one thing, something else breaks

I'm not sure at all what I'm doing "wrong". Flogging the agents along doesn't work well for me, or maybe I'm just having trouble letting go of control and I'm not flogging enough?

But the bottom line is I am generally shocked that something like Gas Town was able to be vibe-coded. Maybe it's a case of the LLM overstating what it's accomplished (typical) and if you look under the hood it's doing 1% of what it says it is but I really don't know. Clearly it's doing something, but then I sit over here trying to build a simple agent with some MCPs hooked up to it using a LLM agent framework and it's falling over after a few iterations.


So I’m probably in a similar spot - I mostly prompt-and-check, unless it’s a throwaway script or something, and even then I give it a quick glance.

One thing that stands out in your steps, and that I’ve noticed myself: yeah, by prompt 10, it starts to suck. If it ever hits “compaction” then that’s past the point of no return.

I still find myself slipping into this trap sometimes because I’m just in the flow of getting good results (until it nosedives), but the better strategy is to do a small unit of work per session. It keeps the context small and that keeps the model smarter.

“Ralph” is one way to do this. (decent intro here: https://www.aihero.dev/getting-started-with-ralph)

Another way is “Write out what we did to PROGRESS.md” - then start new session - then “Read @PROGRESS.md and do X”

Just playing around with ways to split up the work into smaller tasks basically, and crucially, not doing all of those small tasks in one long chat.
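A minimal sketch of that handoff, again assuming a headless agent CLI like `claude -p` (each call is its own fresh session) and an arbitrary file name:

    import subprocess

    # End of one session: have the agent persist its state to a file.
    subprocess.run(["claude", "-p",
                    "Write out what we did and what's left to PROGRESS.md"])

    # New session, small context: pick up from the file and continue.
    subprocess.run(["claude", "-p",
                    "Read PROGRESS.md and implement the next unfinished task"])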


I will check out Ralph (thank you for that link!).

> Another way is “Write out what we did to PROGRESS.md” - then start new session - then “Read @PROGRESS.md and do X”

I agree on small context and if I hit "compacting" I've normally gone too far. I'm a huge fan of `/clear`-ing regularly or `/compact <Here is what you should remember for the next task we will work on>` and I've also tried "TODO.md"-style tracking.

I'm conflicted on TODO.md-style tracking because in practice I've had an agent work through every item on the list, confidently telling me steps are done, only to find that's not the case when I check its work. Both a TODO.md that I created and one I had the agent create suffer from this. Also, getting it to update the TODO.md has been frustrating; even when I add "Make sure to mark tasks as complete in TODO.md as you finish them" to CLAUDE.md, or add the same message to the end of all my prompts, it won't always update it.

I've been interested in trying out beads to see if it works better than a markdown TODO file, but I haven't played with that yet.

But overall I agree with you, smaller chunks are key to success.


I hate TODO.mds too. If I ever have to use one, I'll keep track of it manually, and split the work myself into chunks of the size I believe CC/codex can handle. TODO.md is a recipe for failure because you'll quickly have more code than you can review and nothing to trust that it was executed well.

> 10. 10th prompt: Ugh, everytime I fix one thing, something else breaks

Maybe that is the time to start making changes by hand. I think this dream of humans never ever writing any more code might be a step too far, and unnecessary.


I’ve definitely hit that same pattern in the early iterations, but for me it hasn’t really been a blocker. I’ve found the iteration loop itself isn’t that bad as long as you treat it like normal software work. I still test, review, and check what it actually did each time, but that’s expected anyway. What’s surprised me is how quickly things can scale once the overall architecture is thought through. I’ve built out working pieces in a couple of weeks using Claude Code, and a lot of that time was just deciding on the architecture up front and then letting it help fill in the details. It’s not hands-off, but used deliberately, it’s been quite effective: https://robos.rnsu.net

I agree that it can be very useful when used like that, but I'm referring to fully vibe-coding, the "I've never looked at the code" people. CC is a great tool when you plan carefully, review its work, etc., but people are building things they say they've never read the code for, and that just hasn't been my experience; it always falls over on its own if I'm not in the code reviewing/tweaking.

> How do these industry figures claim they see no part of a 225K+ line of code and promise that it works?

The only promise is that you will get your face ripped off.

“WARNING DANGER CAUTION - GET THE F** OUT - YOU WILL DIE […] Gas Town is an industrialized coding factory manned by superintelligent robot chimps, and when they feel like it, they can wreck your shit in an instant. They will wreck the other chimps, the workstations, the customers. They’ll rip your face off if you aren’t already an experienced chimp-wrangler.”


Yeah, I'm at that stage 6 or 7. I'm using multiple agents across multiple terminal windows. I'm not even coding any more, literally I haven't written code in like 2-4 months now beyond changing a config value or something.

But I still haven't actually used Gastown. It looks cool. I think it probably works, at least somewhat. I get it. But it's just not what I need right now. It's bleeding edge and experimental.

The main thing holding me back from even tinkering with it is the cost. Otherwise I'd probably play with it a little, but it's not something I'd expect to use and ship production code right now. And I ship a ton of production code with claude.


There is an incentive for dishonesty about what AI can and cannot do.

People from OpenAI were saying that GPT-2 had achieved AGI. There is a very clear incentive for that statement to be made by people who are not using AI for anything productive.

Even as increasingly bombastic claims are made, it is obvious that the best AI cannot one-shot everything if you are an actual user. And the worst ones: was using Gemini yesterday and it wouldn't stop outputting emojis, was using Grok and it refused to give me a code snippet because it claimed its system prompt forbade this...what can you say?

I don't understand why anyone would want to work on a codebase they didn't understand either. What happens when something goes wrong?

Again though, there is massive financial incentive to make these claims, and some other people will fall along with that because it is good for their career, etc. I have seen this in my own company where senior people are shoehorning this stuff in that they clearly do not actually use or understand (to be clear, this is engineering not management...these are people who definitely should understand but do not).

Great tool, but the 100% vibecoding without looking at the code, for something that you are actually expecting others to use, is a bad idea. Feels more like performance art than actual work. I like jokes, I like coding, room for both but don't confuse the two.


> I don't understand why anyone would want to work on a codebase they didn't understand either. What happens when something goes wrong?

It's your coworker's problem. The one who actually understands the big picture and how the system fits into it. They'll deal with it.


No one is promising anything. It's just a giant experiment and the author explicitly tells you not to use it. I appreciate those that try new things, even if it's possibly akin to throwing s** at a wall and seeing what sticks.

Maybe it changes how we code or maybe it doesn't. Vibe coding has definitely helped me write throwaway tools that were useful.


After listening to Yegge's interview, I'm not sure this is accurate: https://www.youtube.com/watch?v=zuJyJP517Uw

For example, he makes a comment to the effect that anyone using an IDE to look at code in 2026 is a "bad engineer."


Hyperbole is very common.

Watch the video - he's very clear that he's not looking at code. I see no indication that he is being hyperbolic.

In the LLM field, things move so fast that distinguishing accurate statements, mistaken statements, jokes and lies is hard.

As a result, hyperbole is more annoying than usual.


> It's just a giant experiment and the author explicitly tells you not to use it.

No, he threw up a hyperbolic warning and then dove deep into how this is the future of all coding in the rest of his talks/writing.

It’s as good a warning as someone saying “I’m not {X} but {something blatantly showing I am X}”



Who's promising it works?

It's an experiment to discover what the limits are. Maybe the experiment fails because it's scoped beyond the limits of LLMs. Maybe we learn something by how far it gets exactly. Maybe it changes as LLMs get better, or maybe it's a flawed approach to pushing the limits of these.


I'm sympathetic to this view, but I also wonder if this is the same thing that assembly language programmers said about compilers. What do you mean that you never look at the machine code? What if the compiler does something inefficient?

Not even remotely close.

Compilers are deterministic. People who write them test that they will produce correct results. You can expect the same code to compile to the same assembly.

With LLMs two people giving the exact same prompts can get wildly different results. That is not a tool you can use to blindly ship production code. Imagine if your compiler randomly threw in a syscall to delete your hard drive, or decide to pass credentials in plain text. LLMs can and will do those things.


Even ignoring determinism, with traditional source code you have a durable, human-readable blueprint of what the software is meant to do that other humans can understand and tweak. There's no analogy in the case of "don't read the code" LLM usage. No artifacts exist that humans can read or verify to understand what the software is supposed to be doing.

yeah there is. it's called "documentation" and "requirements". And it's not like you can't go read the code if you want to understand how it works, it's just not necessary to do so while in the process of getting to working software. I truly do not understand why so many people are hung up on this "I need to understand every single line of code in my program" bs I keep reading here, do you also disassemble every library you use and understand it? no, you just use it because it's faster that way.

> do you also disassemble every library you use and understand it?

Sometimes.


> it's called "documentation" and "requirements"

What I mean is an artifact that is the starting point for generating the software. Compiled binaries can be completely thrown away whenever because you know you have a blueprint (the source code) that can reliably reproduce it.

Documentation & requirements _could_ work this way if they served as input to the LLMs that would then go and create the source code from scratch. I don't think many people are using LLMs this way, but I think this is an interesting idea. Maybe soon we'll have a new generation of "LLM-facing programming languages" that are even higher level software blueprints that will be fed to LLMs to generate code.

TDD is also a potential answer here? You can imagine a world where humans just write test suites and LLMs fill out the code to get it to pass. I'm curious if people are using LLMs this way, but from what I can tell a lot of people use them for writing their tests as well.

> And it's not like you can't go read the code if you want to understand how it works

In-theory sure, but this is true of assembly in-theory as well. But the assembly of most modern software is de-facto unreadable, and LLM-generated source code will start going that way too the more people become okay with not reading it. (But again, the difference is that we're not necessarily replacing it with some higher-level blueprint that humans manage, we're just relying on the LLMs to be able to manage it completely)

> I truly do not understand why so many people are hung up on this "I need to understand every single line of code in my program" bs I keep reading here, do you also disassemble every library you use and understand it? no, you just use it because it's faster that way.

I think at the end of the day this is just an empirical question: are LLMs good enough to manage complex software "on their own", without a human necessarily being able to inspect, validate, or help debug it? If the answer is yes, maybe this is fine, but based on my experiences with LLMs so far I am not convinced that this is going to be true any time soon.


Not only that, but compiler optimizations are generally based on rigorous mathematical proofs, so that even without testing them you can be pretty sure they will generate equivalent assembly. From the little I know of LLMs, I'm pretty sure no one has figured out what mathematical principles LLMs are generating code from, so you can't be sure it's going to be right aside from testing it.

I write JS, and I have never directly observed the IRs or assembly code that my code becomes. Yet I certainly assume that the compiler author has looked at the compiled output in the process of writing a compiler!

For me the difference is prognosis. Gas Town has no ratchet of quality: its fate was written on the wall since the day Steve decided he didn't want to know what the code says: it will grow to a moderate but unimpressive size before it collapses under its own weight. Even if someone tried to prop it up with stable infra, Steve would surely vibe the stable infra out of existence since he does not care about that


or he will find a way to get the AI to create harnesses so it becomes stable. The lack of imagination and willingness to experiment in the HN crowd is AMAZING me and worrying me at the same time. Never thought a group of engineers would be the most conservative and close minded people I could discuss with.

It's a paradox, huh. If the AI harness became so stable that it wrote good code, he wouldn't be afraid to look at the code; he'd be eager to look at it, right? But then if it mattered whether AI wrote good code or not, he couldn't defend his position that the way to create value with code is quantity over quality. He needs to sell the idea of something only AI can do, which means he needs the system to be made up of a lot of bad or low-quality code which no person would ever want to be forced to look at.

There's a difference between "imagination and willingness to experiment" and "blind faith and gullibility".

Wait till you meet engineers other than sw engineers. Not even sure most sw people should be called engineers since there are no real accredited standards. I specifically trained as EE in physical electronics because other disciplines at the time seemed really rigid.

There's a saying that you don't want optimists building bridges.


The big difference is that compilation is deterministic: compile the same program twice and it'll generate the same output twice. It also doesn't involve any "creativity": a compiler is mostly translating a high-level concept into its predefined lower-level components. I don't know exactly what my code compiles to, but I can be pretty certain what the general idea of the assembly is going to be.

With LLMs all bets are off. Is your code going to import leftpad, call leftpad-as-a-service, write its own leftpad implementation, decide that padding isn't needed after all, use a close-enough rightpad instead? Who knows! It's just rolling dice, so have fun finding out!


> The big difference is that compilation is deterministic: compile the same program twice and it'll generate the same output twice.

That's barely true now. Nix comes close, but builds are only bit-for-bit identical if you set a bunch of extra flags that aren't set by default. The most obvious instability is that CPU dispatch order (aka modern single computer systems are themselves distributed, racy systems) changes the generated code ever so slightly.

We don't actually care, because if one compiled version of the code uses r8 for a variable but a different compilation uses r9 for that variable, it doesn't matter because we just assume the resulting binary works the same either way. r8 vs r9 are implementation details that don't matter to humans. See where I'm going with this? If the LLM non-deterministically calls the variable fileName one day, and file_name the next time it's given the same prompt, yeah language syntax purists are going to suffer an aneurysm because one of those is clearly "wrong" for the language in use, but it's really more of an implementation detail at this point. Obviously you can't mix them, the generated code has to be consistent in which one it's using, but if compilers get to choose r8 one day and r9 the next, and we're fine with it, why is having the exact variable name that important, as long as it's being used correctly?


I’ve done builds for aerospace products where the only binary difference between two builds of the same source code is the embedded timestamp. And per FAA review guidelines, this deterministic attribute is required, or else something is wrong in the source code or build process.

I certainly don’t use all compilers everywhere, but I don’t think determinism in compilation is especially rare.


If your builds are not deterministic for the same set of inputs, you are doing something wrong - or you are the victim of a supply chain attack.

https://reproducible-builds.org/


No, some compilers aren't deterministic by design, e.g. because they compile stuff in parallel and don't take extra steps to enforce consistent ordering of things (because it doesn't matter).

The compiler is deterministic and the translation does not lose semantics. The meaning of your code is an exact reflection of what is produced.

We can tell you weren't around for the advent of compilers. To be fair, neither was I, since the UNIX C compiler came out in the early '70s and was by far not the first compiler. Modern compilers you can make that claim about, but early compilers weren't.

I've been programming since 6502/6510 assembly language and all compilers I've used were deterministic (which isn't the same thing as being bug free or producing the correct output for a given input).

Bullshit.

All compilers have bugs. Any loss of semantics during compilation would be considered a bug. In order to do that, the source and target language need to be structured and specified. I wasn't around in the 60s either, but I think that hasn't changed.

Which early compilers were nondeterministic?

This analogy has always been bad any time someone has used it. Compilers directly transform via known algorithms.

Vibecoding is literally just random probabilistic mapping between unknown inputs and outputs on an unknown domain.

Feels like saying that because I don't know how my engine works, my car could've just been vibe-engineered. People have put 1000s of hours into making certain tools work up to a given standard and spec, reviewed by many, many people.

"I don't know how something works" != "This wasn't thoughtfully designed"

Why do people compare these things.


No, it is not what assembly programmers said about compilers, because you can still look at the compiled assembly, and if the compiler makes a mistake, you can observe it and work around it with inline assembly or, if the source is available, improve the compiler. That is not the same as saying "never look at the code".

I feel like this argument would make a lot more sense if LLMs had anywhere near the same level of determinism as a compiler.

>but I also wonder if this is the same thing that assembly language programmers said about compilers

But as a programmer writing C code, you're still building out the software by hand. You're having to read and write a slightly higher level encoding of the software.

With vibe coding, you don't even deal with encodings. You just prompt and move on.


I've wondered if people who write detailed specs, are overly detailed, are in a regulated industry, or even work with offshore teams have success more quickly simply because they start with that behavior. Maybe they have a tendency to dwell before moving on, which may be slightly more iterative than someone who vibecodes straight through.

I wonder if assembly programmers felt this way about the reliability of the electrical components which their code relies upon...

I wonder if electrical engineers felt this way about the reliability of the silicon crystal lattice their circuits rely upon…

Do you understand at a molecular level how cooking works? Or do you just do some rote actions according to instructions? How do you know if your cooking worked properly without understanding chemistry? Without looking at its components under a microscope?

Simple: you follow the directions, eat the food, and if it tastes good, it worked.

If cooks don't understand physics, chemistry, biology, etc, how do all the cooks in the world ensure they don't get people sick? They follow a set of practices and guidelines developed to ensure the food comes out okay. At scale, businesses develop even more practices (pasteurization, sanitization, refrigeration, etc) to ensure more food safety. None of the people involved understand it at a base level. There are no scientists directly involved in building the machines or day-to-day operations. Yet the entire world's food supply works just fine.

It's all just abstractions. You don't need to see the code for the code to work.


That's a terrible analogy lol.

1. Chefs do learn the chemistry, at least enough to know why their techniques work.

2. Food scientist is a real job

3. The supply chain absolutely does have scientists involved in day to day operations lol.

A better analogy is just shoving the entire contents of the fridge into a pot, plastic containers and all, and assuming it'll be fine.


> Chefs do learn the chemistry, at least enough to know why their techniques work

Cooks are idiots (most are either illegal immigrants with no formal education, or substance-abusing degenerates who failed at everything else) who repeat what they're told. They think ridiculous things, like that searing a steak "seals in the juices", or that adding oil to pasta water "prevents sticking", that alcohol completely "cooks off", that salt "makes water boil faster", etc. They are the auto mechanics of food. A few may be formally educated but the vast majority are not. They're just doing what they were shown to do.

> A better analogy is just shoving the entire contents of the fridge into a pot, plastic containers and all, and assuming it'll be fine.

That would never result in a good meal. On the other hand, vibe coding is currently churning out not just working software, but working businesses. You're sleeping on the real effect this is having. And it's getting better every 6 months.

Back to the topic: most programmers actually suck at programming. Their code is full of bugs, and occasionally the code paths run into those bugs and make them noticeable, but they are always there. AI does the same thing, just faster, and it's getting better at it. If you still write code by hand in a few years you will be considered a dinosaur.


> Cooks are idiots (most are either illegal immigrants with no formal education, or substance-abusing degenerates who failed at everything else) who repeat what they're told

Jesus Christ, dude. Just because someone works with their hands doesn't mean they're stupid. Good lord. Working in a professional kitchen is an incredibly demanding and difficult job. Don't be elitist to people who work way harder than you.

Especially since some of the dumbest and most intellectually coddled failsons I know went to, like, Yale lol. Or Harvard. A lot of YC startups are like Failson Continuation School. Plenty of people are smart, but a lot of them are just rich.

> On the other hand, vibe coding is curently churning out not just working software, but working businesses

Funny story, I'm evaluating SaaS ETL products and I found one that looked great. So I spent a couple hours testing out some tinkertoy examples with the idea to ask for budget if it worked.

I kept running into small stupid documentation problems and some incredibly stupid behavior in really basic shit (like, screwing up .env files) that no developer would do and then I realized it was all AI generated.

Did it work? Kinda! Mostly! Did it immediately make me put it in the "absolutely not" pile? Sure did.

If the code I can see is that sloppy and poorly reviewed, how bad is the code I can't see? I'm for sure not giving them our sensitive data.

If you think human code is bad, you should just work with better humans. ¯\_(ツ)_/¯


>Jesus Christ, dude. Just because someone works with their hands doesn't mean they're stupid. Good lord. Working in a professional kitchen is an incredibly demanding and difficult job. Don't be elitist to people who work way harder than you.

You're making a personal comment. It's orthogonal to the point. You said cooks learn the chemistry; he says they don't, and that they're too stupid to.

As bad as that statement is, it's true. Culinary arts as an occupation has a statistically lower IQ than many other occupations. Additionally, they don't actually learn the chemistry. You sidetracked off on a tirade about someone's "elitist" character... but if you stick to the point, what you said was completely and utterly wrong.

>Funny story, I'm evaluating SaaS ETL products and I found one that looked great. So I spent a couple hours testing out some tinkertoy examples with the idea to ask for budget if it worked.

You know Ryan Dahl? Inventor of NodeJS, likely smarter, more successful, and a better coder than you says this: https://x.com/rough__sea/status/2013280952370573666

So you have a funny story, and then there are other smarter competent people saying the EXACT opposite of you. Does that ever make you pause and think? We've all seen evidence of AI fucking up. AI being stupid is a story so obvious that even the proponents of AI know AI can fuck up big time. But have you ever wondered what would make Ryan Dahl say something like that? Does what I'm saying even compute or are you just so stubbornly sure that your "funny story" invalidates everything?


> Culinary arts as an occupation has statistically lower IQ than many other occupations.

Citation very much needed lol. Again, don't be elitist about work you don't do and don't understand. Honestly, given the choice between a random pool of kitchen staff and a bunch of people with BAYC twitter profiles, I'm taking the people who can pull off a busy Sunday brunch and I'm not thinking twice about it.

> Are you just so stubbornly sure that your "funny story" invalidates everything

It wasn't actually intended to be ha-ha funny, my guy, that's just a stock phrase.

And if you're asking, do I trust my own judgement to critically evaluate claims in my own industry? Yes. Yes, I do.

If you rely on other people telling you what's good and never think for yourself, you're always just going to be a follower. It's like you've never been on a single enterprise software sales call, jeez.


>Again, don't be elitist about work you don't do and don't understand

There’s a difference between being elitist and being truthful. Don’t weaponize the word elitism and use it to attack truth.

https://brght.org/iq/jobtitle/cook/

Below average iq for cooks. So you’re wrong. Almost everything you talk about is wildly wrong and off base.

> It wasn't actually intended to be ha-ha funny, my guy, that's just a stock phrase.

lol. Did it ever occur to you i was just using the same stock phrase to reference your “funny story”? Takes a certain iq to figure that out.

> And if you're asking, do I trust my own judgement to critically evaluate claims in my own industry? Yes. Yes, I do.

Good. A smart person though wouldn’t completely trust himself because he knows no one is infallible. So he evaluates his own judgements against other judgements. Especially judgements of others smarter than them. Are you a smart person? Maybe ask yourself that question.

> If you rely on other people telling you what's good and never think for yourself, you're always just going to be a follower. It's like you've never been on a single enterprise software sales call, jeez.

lol, never asked you that and it’s the wrong comparison my guy, my dude.

Read what I wrote. It’s a call to evaluate your own statement against others who say the opposite. It’s not a call to rely on what others say. Nor is it a call to just trust everything in your own brain. I asked you to evaluate your judgements and the judgements of others smarter than you as a whole.

You’re like the guy who thinks everyone is a salesman, so you mistrust the entire world and you think everything you know and think is 100 percent true. I feel you’re scared of being wrong. Jeez. There’s nothing to be scared of in being wrong, my dude.

A smart person would think: “hey, half the population plus this guy smarter than me (Ryan Dahl, who is one of many smart people that have nothing to sell) is saying AI writes all his code now. Maybe consider his perspective alongside mine?”

Understand, my dude?


Cooks also repeatedly cook the exact same recipe designed by someone else over and over again. In our industry cooks are closest to the CPU executing machine code.

With the exception that cooks are actually less reliable (sometimes your steak comes out medium rare, sometimes well done). The human world is chaotic and unreliable, yet we wrangle it into a workable form. I think pretty soon we'll see that paralleled in the AI world, in the same ways we categorize and value human labor and businesses.

It's unintuitive, but having an llm verification loop like a code reviewer works impeccably well, you can even create dedicated agents to check for specific problem areas like poor error handling.

This isn't about anthropomorphism, it's context engineering. By breaking things into more agents, you get more focused context windows.
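As a rough sketch of what "dedicated agents per problem area" can look like (headless `claude -p` calls assumed; the review areas are only examples):

    import subprocess

    # Each reviewer gets a narrow brief, and therefore a focused context window.
    review_areas = [
        "error handling: swallowed exceptions, missing retries, silent failures",
        "input validation and injection risks",
        "obvious scalability problems such as N+1 queries",
    ]
    for area in review_areas:
        subprocess.run(["claude", "-p",
                        f"Review the latest diff only for {area}. "
                        "List concrete findings with file and line."])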

I believe gas town has some review process built in, but my comment is more to address the idea that it's all slop.

As an aside, Opus 4.5 is the first model I used that most of the time doesn't produce much slop, in case you haven't tried it. Still produces some slop, but not much human required for building things (it's mostly higher level and architectural things they need guidance on).


> it's mostly higher level and architectural things they need guidance on

Any examples you can share?


Mostly, it's not the model that is lacking but the visibility it has. Often the top level business context for a problem is out of reach, spread across slack, email, internal knowledge and meetings.

Once I digest some of this and give it to Claude, it's mostly smooth sailing, but then the context window becomes the problem. Compactions during implementation remove a lot of important info. There should really be a Claude monitoring top-level context and passing work to agents. I'm currently figuring out how to orchestrate that nicely with Claude Code MD files.

With respect to architecture, it generally makes sound decisions but I want to tweak it, often trading off simplicity vs. security and scale. These decisions seem very subtle and likely include some personal preferences I haven't written anywhere.


In my experience, it really depends on what you're building _and_ how you prompt the LLM.

For some things, LLMs are great. For others, they're absolute dog shit.

It's still early days. Anyone who claims to know what they're talking about either doesn't or what they're saying will be out of date in a month's time (including me).


The secret is that it doesn't work. None of these people have built real software that anyone outside their bubble uses. They are not replacing anyone, they are just off in their own corner building sand castles.

Just because they're one-off tools that only one person uses doesn't mean it's not "real software". I'm actually pretty excited about the fact that it's now feasible for me to replace all my BloatedShittyCommercialApps that I only use 5% of with vibe-coded bespoke tools that only do the important 5%, just for me to use. If that makes it a "sand castle" to you, fine, but this is real software and I'm seeing real benefit here.

> I'm actually pretty excited about the fact that it's now feasible for me to replace all my BloatedShittyCommercialApps that I only use 5% of with vibe-coded bespoke tools that only do the important 5%, just for me to use.

Aren't you worried that they'll work fine for 3 weeks then delete all your data when you hold them slightly different? Vibe coded software seems to have a similar problem to "Undefined Behaviour", in that just because it works sometimes doesn't mean that it will always work. And there's no limit on what it might do when it doesn't work (the proverbial "nasal demons") - it might well wipe your entire hard drive, not just corrupt its own data.

You can of course mitigate this by manually reviewing the software, but then you lose at least some of the productivity benefit.


> Aren't you worried that they'll work fine for 3 weeks then delete all your data when you hold them slightly different?

It might. It probably won't though. I don't see any code in it that deletes files. And, unlike BloatedShittyCommercialApp (and its cousin, BloatedDoEverythingOpenSourceApp), the code is going to be relatively small and if I do have doubts I can easily check to see what it's doing. I can build it quickly. I can patch it quickly. I don't have to file a bug to someone and beg him to look at it. I don't have to worry that the next release is going to break stuff I want and add stuff I don't want.

I recently moved my home theater PC from Kodi to a tiny bespoke vibed video player app, that basically just wraps libVLC with a minimal Android GUI. It's like 3000 lines of code total. I can practically keep the entire app in my head. If I need to fix something, it's 5 minutes in my dev terminal and then adb install. Ever tried to find and fix a bug in Kodi? The goddamn thing takes forever to even build, let alone debug. And that's even open source. I don't even have a remote chance of getting a bug fixed in professionally-built proprietary software.


> the code is going to be relatively small and if I do have doubts I can easily check

Continues to make an app with 150K lines.


The whole "real software" thing is a type of elitism that has existed in our field for a long time, and AI is the new battleground on which it is wielded.

> The secret is that it doesn't work.

I have 100% vibecoded software that I now use instead of a commercial implementation that cost me almost 200 USD a month (a tool for radiology dictation and report generation).


Wait, so you're a radiologist and you're using software you vibecoded to generate radiology reports for real patients? Is that, like, allowed?

Not saying it's right, but boy do I have stories about the code used in <insert any medical profession> healthcare applications. Not sure how vibecoded lines of code are any worse.

Because that code is presumably working and the vibe code is probably not?

Honestly even if this wasn't vibe-coded I'm still a bit surprised at individual radiologists being able to bring their own software to work, for things that can have such a high effect on patient outcomes.

Do you have evidence that all vibe-coded solutions don't work? Because that's what you're implying.

Only if I wanted to prove murder, not negligence.

Of course it's allowed. It's just a kind of text editor, but with support for speech-to-text and structured reports (e.g. when reporting a spine, if I say "l3 bd" it automatically inserts the description of a bulging disc in the correct place in the report). I then copy-paste it into the RIS, so there's absolutely nothing wrong or illegal about that.
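(For the curious: the macro side of it really is just a lookup from spoken shorthand to canned report text. A toy sketch of the idea, with made-up shorthands and wording rather than my actual ones, would be something like this.)

    // Toy sketch of the dictation-macro idea: spoken shorthand -> canned report phrase.
    // Shorthands and phrasing here are invented for illustration, not the real tool's.
    val spineMacros = mapOf(
        "l3 bd" to "At L3-L4 there is a disc bulge indenting the thecal sac.",
        "l5 norm" to "The L5-S1 level is unremarkable.",
    )

    fun expand(dictated: String): String =
        spineMacros[dictated.trim().lowercase()] ?: dictated

    fun main() {
        println(expand("L3 bd"))                // expands to the canned bulging-disc sentence
        println(expand("no focal abnormality")) // unmatched text passes through unchanged
    }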

Depends where in the world they are. Here in Hungary, it’s not uncommon to email [email protected]

What does that have to do with vibe-coding?

Vibe-coded radiology reports, finally the 21st century will get its own Therac-25 incident.

Yes, I'm sure that speech-to-text with very nice fluff on top will have terrible consequences. It's almost as bad as some radiologists using Word for writing reports, which is not FDA-approved (shocking, I know!)

And yet I notice you haven't mentioned publishing it and undercutting the market. You could make a lot of money out-competing the existing option if what you produced was production-grade software. I'm guessing the actual case is that you only needed a small subset of the functionality of the paid software, and the LLM stitched together a rough unpolished proof-of-concept that handled your exact specific use case. Which is still great for you! But it's not the future of coding. The world still needs real engineers to make real software that is suitable for the needs of many, and this doesn't replace that.

>The world still needs real engineers to make real software that is suitable for the needs of many, and this doesn't replace that.

I think azan_ is demonstrating that shipping products 'suitable for the needs of many' is going to have to compete with 'slopping software for the needs of one'.


The only people who think that are programmers already or programmer-adjacent. Your mother is never going to be able to use a Gas Town-like workflow to make software for her own needs, nor is she even going to want to spend her weekends trying. These tools still require a baseline minimum of technical knowledge, and a real time investment, and also a real money investment the way some people are using them. Moreover, most real software has interoperability needs. A world where everyone makes their own Twitter or WhatsApp is a world where nobody can talk to anyone else.

There is a small subset of the population who is now enabled to make proof-of-concepts with less effort than before. This in no way diminishes the need for delivering performant, secure, interoperable software at scale to serve humanity's needs.


> Your mother is never going to be able to use a Gas Town-like workflow to make software for her own needs, nor is she even going to want to spend her weekends trying.

I'm going on a tangent here but what's with this constant deprecation of mothers to make a point? There are many people here whose mothers can develop software.


I think it’s just a generalization. They could have said “your uncle Pete” without actually implying anything about anyone’s uncle named Peter.

People's mothers are statistically unlikely to be programmers, obviously. My own grandmother was a programmer, but it conveys the idea in two words rather than making up a clunky phrase to describe the exact degree of non-techiness of the hypothetical person.

What if we packaged Gas Town up in an operating system userspace, put it on rails, and gave people an interface to it?

An interface isn't enough. Even if you never look at the code, the results are going to be influenced significantly by having the vocabulary to accurately describe what you want. The less sufficient your technical vocabulary, the more ambiguous your prompts will be and the less likely it is that the Polecats will be able to deliver anything resembling your unspoken imagination. To say nothing of being able to guide the lost critters when they run into problems.

It sounds like a medical device, in which case marketing it may require FDA approval or notification. Whereas vibe-coding a one-off tool for yourself might still require validation but you're the one taking the risk and accepting liability for it.

I think the thing you're missing is that the tool doesn't need to be marketed because someone else could ask their LLM to make them a similar tool but fitting their use case.


If they're using a 100% vibe-coded tool that they've never read the code of to replace something that would require government approval, for use on real-world patients, they're probably committing medical malpractice as we speak. Let us pray that is not the case.

It doesn't matter if the tool "needs" to be marketed. There is a market of paying customers. If other people are paying $200/month, both your and their lives would be improved significantly by you offering a $100/month replacement software. For all the talk about LLMs replacing the need for packaged software, people are still paying for packaged software, and while they are, you could be making large amounts of money while saving them money. If you're altruistic, you could even release it as FOSS and save a lot of people $200/mo. Unless, of course, your vibe-coded app isn't actually remotely capable of replacing the software in question.


Jumping to the conclusion that I'm committing malpractice is completely uncalled for and offensive.

> Unless, of course, your vibe-coded app isn't actually remotely capable of replacing the software in question.

It is completely capable FOR ME. I'm not interested in publishing it because I love my job and it pays great already.

Not everything has to be monetized, buddy. It's okay to relax.

> If you're altruistic, you could even release it as FOSS and save a lot of people $200/mo. Unless, of course, your vibe-coded app isn't actually remotely capable of replacing the software in question.

My partner is a radiologist and I'd love to hear more about what you built. The engineer in me is also curious how much this cost in credits?

It CAN be cheap.

I built a clinical pharmacist "pocket calculator" kinda app for a specific function. It was like $0.60 in Claude credits, I think. Built with Flutter + Dart. It's a simple tool suite and I've only built out one of the tools so far.

Now to be fair, that $0.60 session was just the coding. I did some brainstorming in ChatGPT and generated good markdown files (claude.md, gemini.md, agents.md) before I started.


How much does renting vibecoding tools cost you?

Such tools usually cost $10-20/mo?

Using mystery vibe coded software in a tightly regulated, consequence-heavy environment, that’s so reassuring! /s

Is it _just_ speech-to-text, or god-forbid are you giving it scans and having it write reports for you too?


It's speech-to-text with structured report support. Jesus Christ, stop with the moral panic already.

FYI, I also assumed that it was doing something more dangerous, mostly because you mentioned being a radiologist as if it were relevant.

Is it calling some external API, or doing the speech-to-text locally?


No, that's not true. I now rarely write a SINGLE line of code, at work or at home. Even for simple config switches, I ask codex/gemini to do it.

You always have to review the overall diff, though, and go back to the agent with broader corrections.


> You always have to review the overall diff, though, and go back to the agent with broader corrections.

This thread is about vibe coding _without_ looking at the code.


Of course it works. I haven't looked at code for my internal development in months.

I don't know why people keep repeating this but it's wrong. It works.


It is fine to have criticisms of this, I have many, but saying that Yegge hasn't built real software is just not true.

Yegge obviously built real software in the past. He has not built real software wherein he never looked at the code, as he is now promoting.

OK, but this entire idea is very new. It's not an honest criticism to say no one has tried the new idea when they are actively doing it.

Honestly I don't get the hostility. Yegge is running an experiment. I don't think it will work, but it will be interesting and informative to watch.


The 'experiment' isn't the issue. The problem is the entire culture around it. LLM tools are being shoved into everything, LLMs are soaking up trillions in investment, engineers are being told over and over that everything has changed and this garbage is making us obsolete, and software quality is decreasing where wide LLM usage is being mandated (e.g. Microsoft). Gas Town does not give the vibe of a neutral experiment but rather looks to be a full-on delve into AI psychosis, given the way Yegge describes it.

To be clear, I think LLMs are useful technology. But the degree of increasing insanity surrounding it is putting people off for obvious reasons.


I share the frustration with the hype machine. I just don't think a guy with a blog is an appropriate target for our frustration with corporate hype culture.

> OK, but this entire idea is very new. It's not an honest criticism to say no one has tried the new idea when they are actively doing it.

Not really new. Back in the day, companies used to outsource their stuff to the lowest-bidder agencies in proverbial Elbonia, never looked at the code, and then hired another agency in a panic when the result was visibly not what was ordered. Case studies abound on TheDailyWTF from the last two decades.

Doing the same with agents will give you the same disastrous results for roughly the same money, just faster. Oh, and you can't really sue them.

Maybe it's better, who knows.


Fair point on the Elbonia comparison. But we can't sue the SQLite maintainers either, and yet we trust them with basically everything. The reason is that open source developed its own trust mechanisms over decades. We don't have anything close to that with LLMs today. What those mechanisms might look like is an open question that is getting more important as AI generated code becomes more common.

> But we can't sue the SQLite maintainers either, and yet we trust them with basically everything.

But you don't pay them any money and don't enter into a contractual relationship with them either. Thus you can't sue them. Well, you can try, of course, but.

You could sue an Elbonian company, though, for breach of contract. LLMs are the usual Elbonian quality with two middlemen, but quicker, and you have only yourself to blame when they inevitably produce a disaster.


The experiment is fine if you treat it as an experiment. The problem is the state of the industry where it's treated as serious rather than silly — possibly even by Steve himself.

> saying that Yegge hasn't built real software is just not true

I mean... I feel like it's somewhat telling that his Wikipedia page spends half its words on his abrasive communication style, and that the only thing approximating a product it mentions is a (lost) Rails-on-JavaScript port and 25 years spent developing a MUD on the side.

Certainly one doesn't get to stay a staff-level engineer at Google without writing code - but in terms of real, shipping software, Yegge's resume is a bit light for his tenure in Big Tech.


OP defines herself as a mediocre engineer. She's trying to sell you Slop Town, not engineering principles.


