Hacker News | aeon_ai's comments

What’s your proposed alternative, hotshot armchair expert?

They do nothing?


Well, there is such a thing as academic institutions, whose revenue does not depend on selling AI products, just as an example.

My alternative? Nationalize the company and implement a workplace democracy to replace the executive team + board.

I trust the workers more to dictate the direction of a company than most executives.

They can't do worse.

edit: or what another commenter said, fucking academia. Public universities have done more for humanity than nearly anything to come out of SV. Surveillance capitalism, mass misery and psychosis; it's very telling what our society values when much of the world is desperately trying to ban these very same services to protect children.


You've likely paid attention to the litigation here. Regardless of what remains to be litigated, the training in and of itself has already been deemed fair use (and transformative) by Alsup.

Further, you know that ideas are not protected by copyright. The code comparison here demonstrates a relatively strong case that the expression of the idea is significantly different from that of the original code.

If it were the case that the LLM ingested the code and regurgitated it (as would be the premise of highlighting the training data provenance), that similarity would be much higher. That is not the case.


You're right, I've followed the litigation closely. I've advocated for years that "training is fair use" and I'm generally an anti-IP hawk who DEFENDS copyright/trademark cases. Only recently have I started to concede the issue might have more nuance than "all training is fair use, hard stop." And I still think Judge Alsup got it right.

That said, even if model training is fair use, model output can still be infringing. There would be a strong case, for example, if the end user guides the LLM to create works in a way that copies another work or mimics an author or artist's style. This case clearly isn't that. On the similarity at issue here, I haven't personally compared. I hope you're right.


I think “strong case” is probably reliant on a few points on the output side, and would have to be more than just author/artists style.

Style itself would be very hard to deem infringement, for obvious reasons (it's an idea). I think it's much more likely an issue when a character has derivative elements (e.g., Iron Man or Spider-Man-esque features), and where the user's prompt had explicit references to those characters (intent).

All that said, even then, on the artistic side I think it would come down to the same analysis that would apply to traditional media - AI is just a vehicle that introduces some novel risks.

Music might be more risky given the litigious nature of the industry.

Code? It’s going to be hard to claim infringement with dramatically different implementations, barring patent coverage.


> The code comparison here demonstrates a relatively strong case that the expression of the idea is significantly different from that of the original code.

Can I use one AI agent to write detailed tests based on disassembled Windows, and another to write code that passes those same function-level tests? If so, I'm about to relicense Windows 11 - eat my shorts, ReactOS!
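The two-agent scheme being joked about here can be sketched concretely. Everything below is a toy illustration of the clean-room split: the function name, the stand-in "disassembled routine," and the observed cases are all made up for the example, not taken from any real reimplementation effort.

```python
# Hypothetical sketch of the two-agent "clean room" split described above.
# Agent 1 observes a black-box function (here, a toy stand-in for a
# disassembled routine) and records only input/output pairs as tests.
# Agent 2 writes a fresh implementation against those tests alone,
# never having seen the original code.

# --- Agent 1's output: behavioral tests only, no original source ---
OBSERVED_CASES = [
    (b"", 0),
    (b"abc", 294),
    (b"hello", 532),
]

# --- Agent 2's output: an independent implementation ---
def checksum(data: bytes) -> int:
    """Sum the byte values; written only to satisfy OBSERVED_CASES."""
    return sum(data)

for inputs, expected in OBSERVED_CASES:
    assert checksum(inputs) == expected
```

Whether this split actually launders the copyright, rather than just relocating the infringement to the test-generation step, is exactly the question the comment is poking at.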


U.S. District Judge William Alsup said Anthropic made "fair use" of books, deeming it "exceedingly transformative."

"Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different."


This quote is both funny and sad. It reads like an advertisement.

This is such a comically bad take.

The use of loaded and pejorative language like "forgery" emphasizes that this is not a logical argument, but a moral one. The repeated comparisons to "true craft" reveal that the author would prefer code be regarded like artisanal cheese.

Beyond the pretension, it's head-in-the-sand to imply that the technology hasn't progressed. That's just very clearly untrue to anyone who is paying attention: longer tasks, better code, fewer errors. I'm somebody who actively despises the hype bullshit-machine that SV has turned into, but technology is an industry for pragmatists who can leverage what works. And LLMs do.

If you don't like the technology, you have every right to scream that from the mountaintops. As it stands, this just serves as no more than a rallying cry to the ignorant.


This is the most rational take. I'm a quality guy (Deming, Juran, etc.), but nothing about incorporating an LLM into my own work has lowered its quality. That isn't to say I haven't encountered slop. The difference is that, self-identifying as a craftsman, I have the ability to decide whether something stays or goes on the scrap heap. A lot of people seem to be missing that point: just because you can churn out shit doesn't mean you have to (and sorry, sunk-cost bias re: tokens isn't an excuse; that's the cost of doing business). It's a choice. AI-assisted coding is a tremendous boon to productivity, if (and I'd argue only if) you treat it like a power tool and not a genie lamp.

No, you won't be rewarded magic beans for churning out crappy dashboards any more. But if you're serious about shipping quality, nothing is stopping you here.


> you won't be rewarded magic beans for churning out crappy dashboards any more

The days of being treated like a wizard for making buttons and widgets hit an endpoint were good while they lasted.


I get the sense that OpenAI is astroturfing “outrage and hypocrisy” in this thread.

The dead internet is alive and well.


They are on X as well


Literally a feature being advertised as of today.


The most likely and capable retaliation will be cyber/info wars.

Iran has sophisticated influence operations and will likely flood social media with disinformation designed to deepen political divisions and erode trust in institutions.

This advice serves even if you don’t believe the above. Be deeply skeptical of all viral content in the coming days and weeks, especially anything designed to change your opinions, or provoke outrage/fear. Verify before sharing. Expect deepfakes. Stick to primary sources when possible.


This kind of screams desperation, but I guess that's what happens when you're a niche AI.


niche is a polite way to put it


Bot-ique Mechahitler.


No. The US needs automated weapons. China will attack Taiwan; Hamas will go on another murder rampage.


If you're not aware of what it's good at, given what very smart people are saying and doing with it, I think you're either not paying attention or aren't being intellectually honest with yourself.


Or those people aren't actually very smart, or they're caught up in the hype, or since they are very smart they exist in a mode where their experience doesn't translate to normal, everyday situations.

It seems that AI coding tools are very sensitive to codebase structure. If you work on a monolith with a relatively simple, straightforward structure, that's the happy path. A bird's nest of microservices is not. If your team has taken the time and effort to structure the codebase in a way that's amenable to AI, and you invest in the tooling, and you keep up that effort over time, then AI does seem to work. Not the "10x productivity gain" they try to sell us, but maybe >1.0x. It's not clear, though, that for the vast majority of developers AI provides any speedup whatsoever. That's the problem. If it only works for the top 5% or whatever, that addressable market is very, very small.


Instead of appealing to authority you could have given direct examples of how it's transformed your ways of working, that could've continued the conversation somewhere.


There are a lot of smart people with eggs in this basket who stand to benefit from boosting AI hard.


I've seen a lot of very rich people* say it's amazing, it's changing my life, it's going to change your lives (it's going to take away all your jobs so we don't have to pay you anymore), we're about to hit the singularity and start a new golden age with it.

I've seen some apparently-smart people say they're using it for all kinds of things and it's doing great for them.

I've seen roughly the same number of apparently-smart people say they've tried it, they've given it a really good shot, but it doesn't work well for them, and in fact, when they tried, it made them less productive.

When I've personally tried it (almost exclusively on local generation), I've found it entertaining, but not reliable enough to use for more than that. And I do not trust any of the hosted models not to take everything I feed them and monetize it, including by selling it to organizations like ICE which I find utterly reprehensible.

So while I'm not bigstrat2003, about me, at least, you're wrong: I am paying attention, and I'm being intellectually honest. I'm also evaluating it for more than just "does this make me more money in the short term?"

* Who just so happen to be heavily invested in AI companies...


And most artists using the tools are still training LoRAs for Flux, Qwen, ZIT/ZIB, etc. Nano Banana is a useful tool, but not for the best work.


This is irrelevant to the point.

Using Nano Banana does not require arcane prompt engineering.

People who have not learnt image prompt engineering probably didn't miss anything.

The irony of prompt engineering is that models are good at generating prompts.

Future tools will almost certainly simply "improve" your naive prompt before passing it to the model.

Claude already does this for code. I'd be amazed if Nano Banana doesn't.

People who invested in learning prompt engineering probably picked up useful skills for building ai tools but not for using next gen ai tools other people make.

It's not wasted effort; it's just increasingly irrelevant to people doing day-to-day BAU work.

If the API prevents you from passing a raw prompt to the model, prompt engineering at that level isn't just unnecessary; it's irrelevant. Your prompt will be transformed into an unknown internal prompt before hitting the model.
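That rewriting layer is easy to picture. This is a minimal sketch of the pattern, not any vendor's actual pipeline: the template text, `expand_prompt`, and the stand-in rewriter are all assumptions made up for illustration.

```python
# Illustrative sketch of a prompt-rewriting layer: the naive user prompt
# is expanded by a rewriter model before the real generation call.
# REWRITE_TEMPLATE and expand_prompt are assumed names, not a real API.

REWRITE_TEMPLATE = (
    "Rewrite the user's request into a detailed image prompt. "
    "Specify subject, composition, lighting, and style.\n"
    "User request: {raw}"
)

def expand_prompt(raw: str, rewriter) -> str:
    """Pass the naive prompt through a rewriter before generation."""
    return rewriter(REWRITE_TEMPLATE.format(raw=raw))

# Stand-in for a call to a rewriter model.
def fake_rewriter(meta_prompt: str) -> str:
    request = meta_prompt.split("User request: ")[1]
    return request + ", golden hour, 35mm, shallow depth of field"

print(expand_prompt("a cat on a roof", fake_rewriter))
# -> a cat on a roof, golden hour, 35mm, shallow depth of field
```

The point of the comment follows directly: if the tool always routes you through something like `expand_prompt`, hand-tuning the raw prompt stops mattering.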


> Claude already does this for code. I'd be amazed if Nano Banana doesn't.

Nano Banana is actually a reasoning model, so yeah, it kind of does, but not in the way one might assume. If you use the API you can dump the text part, and it's usually huge (and therefore expensive, which is one drawback). It can even have an "imagery thinking" process...!

