But... you can ask! Ask Claude to use encapsulation, to write the equivalent of interfaces in the language you're using, to map out dependencies and duplicate features, or to maintain a dictionary of component responsibilities.
AI coding is a multiplier of writing speed, but it doesn't excuse you from planning and mapping out features.
You can have reasonably engineered code if you get models to stick to well-designed modules, but you have to tell them.
But the time I spend asking is time I could have spent writing exactly what I wanted in the first place, if I'd already done the planning to understand what I wanted. Once I know what I want, it usually doesn't take that long.
Which is why it's so great for prototyping: it can create something during the planning stage, when you haven't quite worked out what you want yet.
Also, the new Haiku. Not as smart, but lightning fast. I have it review the impact of code changes, or if I need a wide but shallow change done, I have it scan the files and create a change plan. Saves a lot of time waiting for Claude or Codex to get their bearings.
Every time I dig into this story, it's always stories of stories, all walking backward to maybe one single merchant whose word is all we have: no police trail, no court-case trail, nothing substantial. Just news agencies working over "examples and reconstructions of what might have happened," with no actual data that could be verified or falsified.
Is this something anyone has actually seen happen, or is it part of the AI hype cycle?
Not in place outside of India (or, I suppose, some US states, based on what you said?). I'm going to guess there are far more paper prescriptions than digital ones, globally.
I was consulting for an insurance company once. They even had examples of some of their own employees trying to get insurance money for broken things using the company's internal example pictures…
It's funny how every few months there's a new malicious use case that AI proponents cast unreasonable amounts of doubt on; then the problem becomes widely recognized, and AI proponents just move on to the next bastion of "yeah, but is this obvious malicious use case of my favored technology REALLY happening?"
Gigantic bot farms taking over social media
Non-consensual sexual imagery generation (including of children)
LLM-induced psychosis and violence
Job and college application plagiarism/fraud (??)
News publications churning out slop
Scams of the elderly
So don't worry: in a few months we can come back to this thread, and return fraud will be recognized as having been supercharged by generative AI. Then we can have the same conversation about insurance fraud, or some other malicious use case with obvious latent demand, and a new capability for AI models to satisfy that latent demand at far lower complexity and cost than ever before.
Then we can question whether the basic mechanics of supply and demand somehow don't apply to malicious use cases of a favored technology.
Well, yes, that's how we should navigate societal change: based on actual threats, not what-ifs. What-ifs have given us some real pieces of work before, like the DMCA, so yeah, I'm going to be overly cautious about anything that is emotionally charged instead of data-driven.
Are you adjusting your perception of the problem based on fear of a possible solution?
Anyway, our society has fuck-tons of protections against "what ifs" that are extremely good, actually. We haven't needed a real large-scale anthrax attack to understand that we should regulate anthrax as if it's capable of producing a large-scale attack, correct?
You'll need a better model than just asserting your prior conclusions by classifying problems into "actual threats" and "what ifs."
I mean, digital privacy was not a what-if when the DMCA was written; it and its problems existed long before then. You're conflating this with business-written legislation, which is a totally different problem.
Also, I guess you're perfectly fine with me developing self-replicating gray nano-goo. I mean, I haven't actually created it and eaten the earth, so we can't make laws about self-replicating nano-goo, I guess.
Yes, please go ahead. We already have laws against endangerment, just as we have laws against fraud and laws around copyright infringement. No need to cover every what-if, as I mentioned, unless unwanted behaviour falls between the cracks of the existing frameworks.
"Is it actually happening?" is literally the most important question. People are clamoring for regulations and voiding consumer protections over something for which nobody seems able to find an independently verifiable source.
Lmao no. "The estimated amount of refund fraud" plus "off-the-shelf AI can generate and edit photorealistic images" adds up to "refund fraud with AI-generated images" by default.
There are enough fraudsters out there that someone will try it, and they're dumb enough that someone will get caught doing it in a hilariously obvious way. It would take a literal divine intervention to prevent that.
Now, is there enough AI-generated fraud for anyone to give a flying fuck about it? That's a better question to ask.
Well then you'll have no trouble finding a verifiable source of it happening and proving your point. Something beyond "this person said" or "here's a potential example to show it's possible."
Well then, here's my refutation: some say this isn't happening at the scale this article claims some say it's happening.
That should convince you, by your own admission.
Besides, it's the article's responsibility to provide evidence for its points. Circular links leading back to the same handful of stories is not a "preponderance."
Maybe, but this story has been circulating for a while now, even in mainstream media, and I still haven't seen shop names, order IDs, platform statements, or anything that can be independently verified. Just "people say." Surely, if this were such a big problem, we'd have some proof to go by by now.
Mostly Gemini 3 Pro. When I ask it to investigate a bug and propose fix options (I do this mostly so I can see whether the model has loaded the right context for large tasks), Gemini immediately starts fixing things, and I just can't trust it.
Codex and Claude give a nice report, and if I see they're not considering this or that, I can tell them.
But why is it a big issue? If it does something bad, just reset the worktree and try again with a different model/agent. They're dirt cheap at $20/month, and I have four subscriptions (Claude, Codex, Cursor, Zed).
Same, I have multiple subscriptions and layer them. I use Haiku to plan and send a queue of tasks to Codex and Gemini, whose command lines can be scripted.
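A minimal sketch of what scripting that queue can look like. Everything here is an assumption for illustration: the `tasks/` directory, the one-prompt-per-file layout, and the agent invocation are all hypothetical (real agent CLIs such as Codex or Gemini take a prompt argument, e.g. `codex exec "..."` or `gemini -p "..."`, but check your installed version for the exact flags). `AGENT` defaults to `echo` so the sketch runs without any agent installed:

```shell
#!/usr/bin/env sh
# Drain a queue of prompt files through an agent CLI, one log per task.
AGENT="${AGENT:-echo}"          # swap in your agent command, e.g. AGENT="codex exec"
mkdir -p tasks logs

# Example queue entry (in practice, written out by the planning model).
printf 'List files touched by the last commit.' > tasks/001.txt

for task in tasks/*.txt; do
  [ -e "$task" ] || continue    # guard against an empty queue
  name=$(basename "$task" .txt)
  $AGENT "$(cat "$task")" > "logs/$name.log" 2>&1
done
```

The nice part of this shape is that each task's output lands in its own log, so a cheaper "first layer" model can review the logs before you commit anything.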
The issue to me is that I have no idea what the code looks like, so I need a reliable first-layer model that can summarize the current codebase state, letting me decide whether the next mutation moves the project forward or reduces technical debt. I can delegate much more that way, while Gemini's "do first" approach tends to produce many dead ends that I have to unravel.
The issue is that if it sometimes struggles with basic instruction following, it's likely making insidious mistakes in large, complex tasks that you might not have the wherewithal or time to review.
The thing about good abstractions is that you should be able to trust them in a composable way. The simpler or more low-level the building blocks, the more reliable you should expect them to be. With LLMs you can't really make this assumption.
I'm not sure you can make that assumption even when a human wrote the code. LLMs are competing with humans, not with some abstraction.
> The issue is that if it sometimes struggles with basic instruction following, it's likely making insidious mistakes in large, complex tasks that you might not have the wherewithal or time to review.
Yes, that's why we review all code even when written by humans.
I would expect significant overlap between the demographics of those who more commonly get into accidents and those who use THC. Based on nsc.org data, the majority of car accidents involve drivers 25-34 years old and occur more frequently late at night on weekends. That generally matches the profile of the stereotypical THC user. It's hard to find good numbers on THC use, though.
Remember that not all the population drives, nor are accidents randomly distributed in the population.