Hacker News

This is nothing but speculation written by lawyers in the format of a scientific paper to feign legitimacy. Of course those $500-an-hour nitpickers are terrified of AI, because it threatens the exorbitant income of their cartel-protected profession.




Care to actually engage with the text instead of deciding to paint the entire profession with a crappy brush?

I guess I'll start here: calling two well-known law professors "$500 an hour nitpickers" when they've been professors for 15+ years (20+ in Jessica's case) and earn nothing close to $500 an hour is not a great start.

I don't know if they are nitpickers; I've never taken their classes :)

Also, this is an op-ed, not a science paper. Which you'd know if you had bothered to read it at all.

You say elsewhere you didn't bother to read anything other than the abstract, because "you didn't need to", so besides being a totally uninformed opinion, complaining about something else being speculation when you are literally speculating on the contents of the paper is pretty ironic.

I also find it amazingly humorous given that Jessica's previous papers on IP have been celebrated by HN, in part because she roughly believes copyright/patents as they currently exist are glorified BS that doesn't help anything, and she has written many papers as to why :)


I dismiss the paper for 3 reasons:

1. It is entirely based on speculation of what is going to happen in the future.

2. The authors have a clear financial (and status based) interest in the outcome.

3. I have a negative opinion of lawyers and universities due to personal experience. (This is, of course, the weakest point by far.)

Speculation on future outcomes is not by itself a bad thing, but when that speculation is formatted like a scientific paper describing an experimental result, I immediately feel I am being manipulated by appeal to authority. And the conflict of interest of the authors is about as irrelevant as pointing out that a paper on why Oxycodone is not addictive was paid for by Purdue Pharma. Perhaps Jessica's papers on IP are respected because they do not suffer from these obvious flaws? I owe the author no deference for the quality of her previous writing nor for her status as a professor.


What do you mean "formatted like a scientific paper?"

Law review articles look like this. Scientific journals don't own the concept of an abstract, nor are law review articles pretending to be scientific research.


What does "Research paper" mean to you?

Yeah, I haven't gotten through the 40 pages myself, but skimming the material, the arguments do seem to rely on an assumption that AI will be employed in a particular manner. For example, when discussing the rule of law, they assert that AI will be making the moral judgments, and will be a black box that humans simply consult to decide what to do in criminal proceedings. But that seems like it would be the dumbest possible way to use the technology.

Perhaps that's the point of the paper: to warn us not to use the technology in the dumbest possible way.


Nah we know the punch-lines to this one.

Worries about reduced quality of work are overblown, because there's always a human operator of the AI, reviewing the text between copying and pasting (no different from StackOverflow!). Enter vibe-coding.

Worries about AI becoming malicious or Skynet are overblown. Again, it's just a text interface, so the worst it can do is to write text that says "launch the nukes". Enter agents and MCP.

It still staggers me that I occasionally read about a judge calling out a lawyer for citing non-existent cases (this far into ChatGPT's life). It was bound to happen to the first moron, but every other lawyer should have heard about it then. But it still happens.

Dumbest possible way is what we do.


> Worries about reduced quality of work are overblown, because there's always a human operator of the AI, reviewing the text between copying and pasting

Unfortunately no there is not.

> I occasionally read about a judge calling out a lawyer for citing non-existent cases (this far into chatgpt's life). It was bound to happen to the first moron, but every other lawyer should have heard about it then. But it still happens.

There you go.


3. Same, including a press that is no longer unbiased and serves as propaganda for political opinions.

One might say that deinstitutionalization is actually good for plurality of opinions (some call it a democracy). If AI causes it, I'm fine with that.


And if AI leads to a situation in which the very ability to separate factual reporting from propaganda is almost entirely destroyed for anyone besides those in control of it, will you still be fine with it then?

Pointing to a system with problems and then saying you have no issue with something that has the potential to be orders of magnitude more problematic seems an odd approach to me.


Those in control of it aren't able to distinguish factual reporting today. Remember a few months ago when all the so-called "reputable" news outlets were screaming about an alleged foiled terror attack against the UN, and it turned out to be nothing but a basic SMS fraud operation? https://www.bbc.com/news/articles/cn4w0d8zz22o

> Those in control of it aren't able to distinguish factual reporting today.

Can't tell if you're referring to media outlets or AI companies here.

I do remember this incident - it was an embarrassment for the outlets that jumped on that story, especially because the general public has come to know there is an overriding tendency towards sensationalism.

But surely this is very different from actual outright propaganda operations?


I'm talking about the media companies. AI companies aren't any better at it, but at least they don't go around sanctimoniously claiming to be the source of truth in the same way as journalists do.

And it isn't different than outright propaganda operations because it is an outright propaganda operation. If you read the link in my comment, you will see that the report is just repeating claims from the government nearly verbatim.


I'm not going to take up the mantle of trying to dissuade you from your beliefs, but needless to say, if you think CNN's sensationalism-for-views model is equivalent to the likes of Musk actively trying to dismantle Wikipedia [0] because he wants to rewrite reality (never mind what Grok is currently doing [1]), then you need to have a hard look in the mirror.

[0] https://www.wired.com/story/elon-musk-launches-grokipedia-wi... [1] https://www.bbc.com/news/articles/ce8gz8g2qnlo

P.S. feel free to "do your own research" if the above are included in your supposed propaganda operation conspiracy.


Why do you lie and say he "tried to dismantle" Wikipedia when what he actually did was start a competitor?

My apologies, I forgot how far he actually went [0]

[0] https://www.theatlantic.com/technology/archive/2025/02/elon-...

> start a competitor

Very charitable way of referencing an observably-obvious disinformation generator


Ok, that's a little better. The first link was just referring to him starting his own. I still think "dismantle" is not an accurate description for asking people not to fund it, but it's within margin of error. I'm paywalled, though, so can't read the whole thing.

"Charitable" is irrelevant to my reference because "competitor" is a term completely devoid of any indication of the quality of the product.


> The first link was just referring to him starting his own.

He's pushing a platform that uses AI to generate content riddled with far-right misinformation. The context is that he didn't like that Wikipedia now chronicles the very real fact that he made a Nazi salute. This doesn't constitute just starting an alternative; this is actively pushing an agenda of misinformation while demonizing platforms he doesn't like. He can't buy Wikipedia like he did Twitter, so he's pushing to undermine and harm it, via defunding or other means (see government threats to "investigate" while Musk was running DOGE).

> "Charitable" is irrelevant to my reference because "competitor" is a term completely devoid of any indication of the quality of the product.

I was being nice; your characterization of Musk's platform as a genuine "competitor" is BS. Every indication is that he's doing this because he wants to choose what constitutes fact and what doesn't.


Would you follow AI-generated news? Not me, and I'm sure I'm not the only one.

If AI leads to decentralisation of the press, that sounds better to me. We certainly do not need one or a few big entities that follow political tendencies.


Not if I can identify it, which I fear is going to become a harder task in the future.

> If AI leads to decentralisation of press, it sounds better to me.

Seems optimistic to me, given that the trend with pretty much everything AI since ChatGPT was announced has been to concentrate as much power as possible in the hands of a few big tech companies.

As an added example: decentralization was a big promise of crypto; at present, it's hard for me to see how that promise has been kept. I don't see how the current trend in who holds control over AI will work out any better in this regard.


Local AI exists as well. It's just hard to measure.

What's wrong with crypto decentralisation?


> Local AI exists as well. It's just hard to measure.

Yeah, but you're not going to get your news from local AI, are you? You have to connect it to the internet and have it look up news for you, but if a lot of what's found online is AI generated and there isn't a clear way to distinguish it, then how are you better off?

> Whats wrong with crypto decentralisation?

It hasn't really happened? To my knowledge, a large proportion of crypto volume goes through a handful of centralised exchanges. The traditional finance sector is also increasing its presence/hold.


I find consumption of AI generated news useless. My reaction was primarily about the decentralisation of AI tools.

I don't have numbers for crypto, but exchanges are not the only way to buy it. And they do not have the power to regulate its value. Isn't that a sign of decentralisation?


Enough people have gotten owned for using these things in court that I think the more likely response is laughing at the ignorance rather than feeling threatened.

1. Get owned in court because you used an LLM that made a poor legal argument.

2. Get owned out of court because you couldn't afford the $100K (minimum) that you have to pay to the lawyer's cartel to even be able to make your argument in front of a judge.

I'll take number 1. At least you have a fighting chance. And it's only going to get better. LLMs today are the worst they will ever be, whereas the lawyer's cartel rarely gets better and never cuts its prices.


Does it cost 100k minimum in the US to get a lawyer? Or am I misunderstanding something?

There is no "get a lawyer." You pay by the hour. And there is months to years of procedure before the judge even knows your lawsuit exists.

And the minimum to file a lawsuit comes out to $100K at standard rates? Or was that just a random number?

It's going to cost you around $100K if you're lucky, and it could be a lot more. That's what I mean. There are no exact numbers because it depends on how many hours of lawyering it takes to get through the endless process and procedure (designed by lawyers, of course) before you ever even go to court. You can't know that in advance. And if the other side has more money than you, they know it's to their advantage, so they will try to drag out the process and bleed you dry to gain leverage or even force you to drop the case.

Many lawyers work on contingency and take a set proportion of the settlement if they win instead of charging hourly.

That's assuming you are the one doing the suing and not the one getting sued. Even then, that applies only to very limited types of cases. And even then, the contingency is typically 33% (and can sometimes eat over 50%) of your damages awarded, so the cost is massive in any case.

There is the option of small claims court which is massively cheaper, but it has very low limits for damages, so it's barely worth the effort.


> LLMs today are the worst they will ever be

Just wait till you see tomorrow's, trained on the slop fabricated by today's.


Tech workers know it all, no way a non-tech job could be worth anything more than 20 dollars an hour.

On one hand, you're right that people tend to dismiss the complexities of jobs they are unfamiliar with, the IT crowd included.

On the other hand, when countries feel the need to legislate new laws enforcing the writing of documents in plain, understandable language, one doesn't need to be an expert to suspect systemic rot in those industries. It is totally valid to cry foul when even a parliament is concerned about being able to read the texts produced at $500/h.

https://www.congress.gov/bill/111th-congress/house-bill/946

https://www.legislation.govt.nz/act/public/2022/0054/latest/...


Please go to court using only ChatGPT as your legal defense. I'd love to see it; it would make for great entertainment. The judge, a little bit less so.

You can criticise the hourly cost of lawyers all you like, and it should be a beautiful demonstration to people like you that no, "high costs mean more people go into the profession and lower the costs" is not and has never been a reality. But to think that any AI could ever be efficient in a system such as common law, the most batshit insane, inefficient, "rhetoric matters more than logic" system, is delusional.


Yeah, unfortunately, it's the lawyers that are using ChatGPT.

Threatens income? It promises to reduce costs, which will lift profit.

> This is nothing but speculation

Did you read the paper?


It's written in the future tense, so I can safely call it speculation. I've read the abstract which is all I need to decide the full text is not worth my time.

Cool, then we can safely give your comments exactly the same treatment - since they are completely uninformed speculation about a paper you haven't read.

Is he incorrect that the paper is speculating about future events? I don't think it's completely uninformed either. He said that he's read the abstract, which is supposed to give you an impression of the structure of the argument. Why don't you engage with the criticism?

There is no criticism. He did not read the paper.

I read the entire paper, and his criticism is spot on. I even read through many of the references, which, in my spot checks, don't support the claims in the paper. Very disappointing work, IMHO.

Cool. Perhaps you should have criticized the paper and requested feedback instead of defending someone who did not read the paper!

I did both! I'm not concerned with defending anyone, I'm interested in truth. His criticism was sound, and your comments contribute even less to the discussion than his. Very disappointing.

> Is he incorrect that the paper is speculating about future events? I don't think it's completely uninformed either.

Most people would say this is a defense of the person, or at least a defense of the person's choice to not read the full paper. It is no fun to debate with intellectual dishonesty.


Anyone with experience reading research papers professionally will tell you that one of the responsibilities of a paper's abstract is to meaningfully convey the level of evidence and certainty backing the paper. This paper did very well at that: the abstract indicates it's more of an essay/opinion piece than a scientific one. This is blindingly obvious, and it was a simple observation that everyone for some reason dismissed not on merit, but because the person who made it hadn't read the whole paper, which for a 40-page document is an incredibly high bar that is likely not met by 90% of the people commenting here.

Anyway, I'm tired of this now.


And you must have read all 40 pages of it, right? Because if not you are a hypocrite. I claim that the Bible is the literal truth. Oh, you haven't read every word of the Bible? Your arguments against me are worthless!

I did actually read all 40 pages of it. I frequently read law journal articles, along with lots of other types of journals and papers.

I also used to maintain up to date reading lists of various areas (compiler optimization, for example) because I would read so many of the papers.

Let me give you a piece of advice:

First, gather facts, then respond.

Here you start by sarcastically asserting I wouldn't have read it, but it would generally be better to ask if I read it (fact gathering), and then devise a response based on my answer. Because your assertion is simply wrong, making the rest of it even sillier.

As for the strawman about the Bible - I'm kinda surprised you are really trying to equate not reading any part of something with not reading every part of something, and really trying to defend what you did here, instead of just owning up to it and moving on.

This speaks a lot more about you than anything else.

That said -

When you claim that everything in a book is the literal truth, you only have to find one part that is not the literal truth to prove the claim wrong. Which may or may not require reading the entire thing (if it turns out your counter-claim is wrong, you at least have to read on and find another).

In the original comment, you'll note your claim was "This is nothing but speculation" - IE all of the paper is speculation.

If we are being accurate, this would require you to read the entire thing to be able to say all of it is speculation. How else could you know?

Even if we were being nice, and treat your claim colloquially as meaning "most of it is speculation", this would still require reading some of the paper, which you didn't do either.

Perhaps you should just quit while you are behind, and learn that when you screw up, the correct thing to do is say "yeah, I screwed up, I should have read it before saying that", instead of trying to double down on it.

Doubling down like this just makes you look worse.

As an aside - I was always an avid reader, and very bored in synagogue, so I have read every word of a number of books of the Hebrew Bible because it was more interesting than paying attention to the sermons.


His criticism that the paper is speculation is spot on. Many of the references don't support the claims they are cited for. It's fascinating to me that you want to argue the poster's standing to make a criticism more than you want to actually discuss the content of the paper.

It's a particularly weird criticism given that Danny is a lawyer and has experience in the CS research community. He is especially well suited to address the claim that the authors are trying to trick people into thinking their work is a scientific paper, which is plainly ridiculous.

I'd love some clarity on that.

The linked page says this:

``` How AI Destroys Institutions

77 UC Law Journal (forthcoming 2026)

Boston Univ. School of Law Research Paper No. 5870623

40 Pages Posted: 8 Dec 2025 Last revised: 13 Jan 2026 ```

What exactly is this document? It reads like a heavily cited op-ed, but is coming out of a law school from a professor there and calls itself a "research paper". Very strange.

EDIT: I looked up the UC Law Journal, and I think I was misled because I'm not familiar with the domain. They describe themselves as:

> Since 1949, UC Law Journal, formerly known as Hastings Law Journal, has published scholarly articles, essays, and student Notes on a broad range of legal topics. With roughly 100 members, UCLJ publishes six issues each year, reaching a large domestic and international audience. Each year, one issue is dedicated to essays and commentary from our annual symposium, which features speakers and panel discussions on an area of current interest and development in the law.

So this is congruent with the Journal's normal content (it's an essay), but having the document call itself a "research paper" conjured an inflated expectation about the rigor involved in the analysis, at least for me.


> So this is congruent with the Journal's normal content (it's an essay), but having the document call itself a "research paper" conjured an inflated expectation about the rigor involved in the analysis, at least for me.

Right. And I think it is weird that people immediately leapt to this being some sort of deception by the authors, and weirder still that when a lawyer with experience in both domains clarified this, people doubled down.


Yep, I agree that jumping to the "deception" angle would be pretty far down on my list. I always admired the simplicity of HN's guideline to focus on curiosity, since it has far-reaching effects on the nature of the discourse.

> Even if we were being nice, and treat your claim colloquially as meaning "most of it is speculation", this would still require reading some of the paper, which you didn't do either.

I did read some of it. The abstract. Which is there for the specific purpose of providing readers a summary to decide whether it is worth their time to read the whole thing.

And, yeah, obviously I didn't mean literally all, because that just isn't how people talk. E.g. the authors' names are not speculation. But the central premise of the paper, "How AI Destroys Institutions," is speculative unless they provide a list of institutions that have been destroyed by AI and prove that they have been. The institutions they list, "the rule of law, universities, and a free press," have not been destroyed by AI, so the central claim of the paper is speculative. And speculation on how new tech breakthroughs will play out is generally useless, the classic example being "I think there is a world market for maybe five computers," attributed to the CEO of IBM.

Furthermore, their claim here:

> The real superpower of institutions is their ability to evolve and adapt within a hierarchy of authority and a framework for roles and rules while maintaining legitimacy in the knowledge produced and the actions taken. Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo.

This just completely contradicts any experience I have ever had with such institutions. Especially "empower individuals to take intellectual risks and challenge the status quo". Yeah. If you believe that, then I've got a bridge to sell you. These guys are some serious koolaid drinkers. Large institutions are where creativity and risk taking go to die. So yeah, not reading 40 pages by these guys.

You can tell a lot from a summary, and the premise that you have to read a huge paper before you can criticize it is just bullshit in general.



