avereveard's comments

Eh, if there's a human on the other side, single-stream performance is going to matter to them.

But... you can ask! Ask claude to use encapsulation, or to write the equivalent of interfaces in the language you're using, to map out dependencies and duplicate features, or to maintain a dictionary of component responsibilities.

AI coding is a multiplier of writing speed, but it doesn't excuse you from planning and mapping out features.

You can have reasonably well-engineered code if you get models to stick to well-designed modules, but you need to tell them.
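
For example, something like this in the model's project instructions (a rough sketch of what a CLAUDE.md could say; the docs/components.md file name is just a placeholder):

  # Architecture rules
  - Every module exposes a small interface; callers import the
    interface, never the internals.
  - Before adding a feature, check docs/components.md for an
    existing component that already owns that responsibility.
  - After any change, update docs/components.md: one line per
    component stating what it is responsible for.
  - List new dependencies explicitly and flag anything that
    duplicates an existing feature.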


But the time I spend asking is time I could have spent writing exactly what I wanted in the first place, if I'd already done the planning to understand what I wanted. Once I know what I want, it doesn't take that long, usually.

Which is why it's so great for prototyping: it can create something during the planning stage, when you haven't quite planned out what you want yet.


Owning the means of cognition is going to be more and more important, as it allows one to scale more than linearly.

Outsiders will be tied to limited or pay-per-use access, because owning the means of cognition will be a massive extractive economy.


Also, new haiku. Not as smart, but lightning fast. I have it review the impact of code changes, or if I need a wide but shallow change done, I have it scan the files and create a change plan. Saves a lot of time waiting for claude or codex to get their bearings.
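
For example, a quick non-interactive pass (a rough Python sketch; it assumes the claude CLI's -p print mode and --model flag, "haiku" as a model alias is my guess, and FooClient/BarClient are made-up names):

  import subprocess

  # Wide-but-shallow scan with the fast model: produce a plan, change nothing.
  result = subprocess.run(
      ["claude", "-p", "--model", "haiku",
       "Scan src/ and list every file a rename of FooClient to BarClient "
       "would touch, with a short change plan. Do not edit anything."],
      capture_output=True,
      text=True,
  )
  print(result.stdout)  # hand this plan to a stronger model for the edit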

yeah, all they could do was execute code they provided, in their own compute environment: the browser.

Raymond Chen's blog comes to mind https://devblogs.microsoft.com/oldnewthing/20230118-00/?p=10... "you haven't gained any privileges beyond what you already had"


every time I dig into this story, it's always stories of stories, and they all walk back to maybe one single merchant, whose account is just his word, with no police trail or court case trail or anything substantial, with news agencies working off "examples and reconstructions of what might have happened" and no actual data that could be verified / falsified.

is this something anyone has actually seen happen, or is it part of the AI hype cycle?


I've heard that this was happening with food apps in India. I am waiting for when people figure out how to fake prescriptions.

> I am waiting for when people figure out how to fake prescriptions

How would an LLM help with that? Paper prescriptions can be copied using Word and a pen.


Image gen, not LLM help.

Word and a pen is still effort, compared to just image + prompt.


I mean, a lot of US states use an electronic system where the doctor submits prescriptions directly. Are there still many printed prescriptions?

Not in place outside India (or, I suppose, some US states, based on what you said?). I am going to guess that there are far more paper prescriptions than digital ones, globally.

scamming to get refunds has always been a thing.

I was consulting for an insurance company once. they even had examples of some of their own employees trying to get insurance money for broken things, using the company's internal example pictures….

and that says nothing about whether this is actually happening or not, so what's your point?

It's funny how every few months there's a new malicious use case that AI proponents cast unreasonable amounts of doubt on; then the problem becomes widely recognized, and AI proponents just move on to the next bastion of "ya but is this obvious malicious use case of my favored technology REALLY happening?"

Gigantic bot farms taking over social media

Non-consensual sexual imagery generation (including of children)

LLM-induced psychosis and violence

Job and college application plagiarism/fraud (??)

News publications churning out slop

Scams of the elderly

So don't worry: in a few months we can come back to this thread, and return fraud will be recognized to have been supercharged by generative AI. But then we can have the same conversation about, say, insurance fraud or some other malicious use case with obvious latent demand, where AI models can now satisfy that demand at far lower complexity and cost than ever before.

Then we can question whether basic mechanics of supply and demand don't apply to malicious use cases of favored technology for some reason.


well yes, that's how we should navigate societal change: from actual threats, not what-ifs. what-ifs gave us some real pieces of work before, like the DMCA, so yeah, I'm going to be overly cautious about anything that is emotionally charged instead of data driven.

Who is talking about legislation?

Are you adjusting your perception of the problem based on fear of a possible solution?

Anyway, our society has fuck tons of protections against "what ifs" that are extremely good, actually. We haven't needed a real large scale anthrax attack to understand that we should regulate anthrax as if it's capable of producing a large scale attack, correct?

You'll need a better model than just asserting your prior conclusions by classifying problems into "actual threats" and "what ifs."


I mean, digital privacy was not a what-if when the DMCA was written; it and its problems existed long before then. You're conflating that with business-written legislation, which is a totally different problem.

Also, I guess you're perfectly fine with me developing self-replicating gray nanogoo. I mean, I haven't actually created it and eaten the earth yet, so we can't make laws about self-replicating nanogoo, I guess.


Yes, please go ahead and do. We already have laws against endangerment, just as we have laws against fraud and laws around copyright infringement. No need to cover all what-ifs, as I mentioned, unless unwanted behaviour falls between the cracks of the existing frameworks.

That "whether this is actually happening or not" is not even a question worth asking.

No shit it's happening. Now, on what scale, and should we care?


whether it is happening is literally the most important question. people are clamoring for regulations and for voiding consumer protections over something for which nobody seems to be able to find an independently verifiable source.

Lmao no. "The estimated amount of refund fraud" + "off the shelf AI can generate and edit photorealistic images" adds up to "refund fraud with AI generated images" by default.

There are enough fraudsters out there that someone will try it, and they're dumb enough that someone will get caught doing it in a hilariously obvious way. It would take a literal divine intervention to prevent that.

Now, is there enough AI-generated fraud for anyone to give a flying fuck about it? That's a better question to ask.


Well then you'll have no trouble finding a verifiable source of it happening and proving your point. Something beyond "this person said" or "here's a potential example to showcase that it's possible".

No. The prior is so strong that it's up to you to prove that no AI fraud is happening.

Good luck.


The "some say" prior?

well then here's my refutation: some say this isn't happening at the scale this article claims some say it's happening.

that should convince you by your own admission.

besides, it's the article's responsibility to provide evidence for its points. circular links leading to the same handful of stories are not "preponderant".


The "humans are stupid in all the usual ways" prior.

You might as well be asking for proof that humans use AI to generate porn.


well, now that it hit the news it will happen more often!

maybe, but this story has been circulating for a while now, even in mainstream media, and I still haven't seen shop names, order IDs, platform statements, nothing that can be independently verified yet. just "people say". surely if this is such a big problem we'd have some proof to go by by now.

> when models were given more than eight instructions

I mean, you can call a model 8 times; this seems like they were looking for excuses.

> If users asked unrelated questions

if you have an enumerable set of business processes, why do you have a chat interface? put down a set of buttons to start each business process.

this seems like some egregious misuse of the tech, which has its problems, but hasn't had a fair chance here.

wonder if the whole project was just smoke and mirrors to justify layoffs.


Electronics are also popular because there's a higher proportion of only children, and parents have only so much time to dedicate to child play.


True, and not only for only children but also for siblings of different sexes, with large age differences, or simply with careless parents.


Mostly gemini 3 pro: when I ask it to investigate a bug and provide fixing options (I do this mostly so I can see whether the model has loaded the right context for large tasks), gemini immediately starts fixing things, and I just can't trust it.

Codex and claude give a nice report, and if I see they're not considering this or that, I can tell 'em.


fyi that happened to me with codex.

but why is it a big issue? if it does something bad, just reset the worktree and try again with a different model/agent. They are dirt cheap at $20/mo, and I have 4 subscriptions (claude, codex, cursor, zed).


Same, I have multiple subscriptions and layer them. I use haiku to plan and send a queue of tasks to codex and gemini, whose command lines can be scripted (rough sketch below).

The issue to me is that I have no idea what the code looks like, so I need a reliable first-layer model that can summarize the current codebase state, letting me decide whether the next mutation moves the project forward or reduces technical debt. I can delegate much more that way, while gemini's "do first" approach tends to result in many dead ends that I have to unravel.
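
The scripting side is nothing fancy, roughly this (a sketch; it assumes codex's non-interactive exec subcommand and gemini's -p prompt flag, and the task strings and plan.md are placeholders):

  import subprocess

  # Queue of tasks from the haiku-generated plan, fanned out to
  # CLIs that can run non-interactively.
  tasks = [
      ("codex", ["codex", "exec", "apply step 1 of plan.md to src/api/"]),
      ("gemini", ["gemini", "-p", "apply step 2 of plan.md to src/ui/"]),
  ]

  for name, cmd in tasks:
      out = subprocess.run(cmd, capture_output=True, text=True)
      # Keep each agent's report so I review summaries, not raw diffs.
      print(f"--- {name} ---\n{out.stdout}")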


The issue is that if it's struggling sometimes with basic instruction following, it's likely to be making insidious mistakes in large complex tasks that you might not have the wherewithal or time to review.

The thing about good abstractions is that you should be able to trust them in a composable way. The simpler or more low-level the building blocks, the more reliable you should expect them to be. With LLMs you can't really make this assumption.


I'm not sure you can make that assumption even when a human wrote that code. LLMs are competing with humans not with some abstraction.

> The issue is that if it's struggling sometimes with basic instruction following, it's likely to be making insidious mistakes in large complex tasks that you might not have the wherewithal or time to review.

Yes, that's why we review all code even when written by humans.


If that's the conclusion, you'd also expect 40% of the population to be using it.


I would expect there to be significant overlap between the demographics of those who more commonly get in accidents and those who use THC. Based on nsc.org, it seems like the majority of car accidents involve drivers 25-34 years old, and occur more frequently late at night on weekends. That generally matches the profile of the stereotypical THC user. It is hard to find good numbers on THC use.

Remember that not all the population drives, nor are accidents randomly distributed in the population.

https://injuryfacts.nsc.org/motor-vehicle/overview/age-of-dr... https://injuryfacts.nsc.org/motor-vehicle/overview/crashes-b...


So in other words, people with a less risk-averse personality are more likely to engage in risky behaviors

> That generally matches the profile of the stereotypical THC user

Got a source for that claim?


That sounds about right to be honest

