Hacker News | jibal's comments

I realize I'm in the minority, but I side with whoever I think is right under the law, regardless of my (sometimes extreme) feelings about the parties and even about the law.

A case only reaches the Supreme Court if there is confusion over who is right under the law. The Supreme Court decision itself is not a definitive guide to which side is right under the law, as they’ve overturned themselves multiple times. So how do you decide which party to side with?

Your view on the law seems a bit alien to me. My opinions on what the rules of the law should roughly look like are largely independent of who specifically is involved in a legal dispute. Sure, I guess if Hitler were being sued and the only way to stop him was this lawsuit by Sony, I would probably concede that on balance it's better to have a slightly worse legal standard around copyright. Otherwise, I think having a law that best reflects my moral views and creates the best incentives for society in general far outweighs how I feel about the plaintiffs.

As for how I arrive at my views, it's obviously not an entirely rational process, but the rules you get from viewing property rights and self-ownership as fundamental seem to lead to the most preferable outcomes to me. If I were forced to adopt a more deontological philosophy, it's also the one with the fewest obviously absurd conclusions, though not none. From this it is, in my opinion, pretty obviously right to be skeptical of copyright law more generally (Ayn Rand would disagree), and therefore I welcome any precedent that weakens it.


I just told you: I side with whoever I think is right under the law.

And your first sentence is not remotely true--or rather, it is quite conceptually confused. Whose "confusion" are you talking about? Not mine, generally. There are of course disagreements about which side is right under the law, but often those disagreements are a result of bad faith--take just about every case Trump has ever appealed up to the SCOTUS. And many of the decisions made by the current crop of right wing ideologues on the Court are made in bad faith, especially Alito, Thomas, and Gorsuch, in that order of corruption. Many of the "disagreements" are based on bogus "textualism" and "originalism" frameworks that are applied completely ad hoc and hypocritically and were invented by conservatives solely in order to provide them with a basis for making rulings based on their ideology (the historical record is quite clear on this).

Anyway, the point was that I decide based on my view of the law, not who the parties are. Since you seem to completely miss the point, have poor reading comprehension, and are just adding muddle, I won't comment further.


The conclusion that LLMs don't reason is not a consequence of them not being able to do arithmetic, so your argument isn't valid.

Also, see https://news.ycombinator.com/newsguidelines.html

"Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

etc.


Plenty of humans can't do arithmetic. Can they also not reason?

Reasoning isn't a binary switch. It's a multidimensional continuum. AI can clearly reason to some extent even if it also clearly doesn't reason in the same way that a human would.


> Plenty of humans can't do arithmetic. Can they also not reason?

I just pointed out that this isn't valid reasoning ... it's the fallacy of denying the antecedent. No one is arguing that because LLMs can't do arithmetic, they therefore can't reason. After all, zamalek said that he can't quickly multiply large numbers in his head, but he isn't saying that therefore he can't reason.
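The invalidity of that inference pattern can be checked mechanically. A minimal sketch (the variable names are mine, not from the thread): enumerate all truth assignments and look for one where both premises of "P implies Q; not P; therefore not Q" hold while the conclusion fails.

```python
from itertools import product

# Denying the antecedent: from "P -> Q" and "not P", conclude "not Q".
# The inference is invalid: search for an assignment where both premises
# are true but the conclusion "not Q" is false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if ((not p) or q)   # premise 1: P -> Q
    and (not p)         # premise 2: not P
    and not (not q)     # conclusion "not Q" fails, i.e. Q is true
]
print(counterexamples)  # [(False, True)] -- premises true, conclusion false
```

The single counterexample (P false, Q true) is exactly the zamalek case: can't do the arithmetic, can still reason.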

> Reasoning isn't a binary switch. It's a multidimensional continuum.

Indeed, and a lot of humans are very bad at it, as is clear from the comments I'm responding to.

> AI can clearly reason to some extent

The claim was about LLMs, not AI. This is like if someone said that chihuahuas are little and someone responded by saying that dogs are tall to some extent.

LLMs do not reason ... they do syntactic pattern matching. The appearance of reasoning is because of all the reasoning by humans that is implicit in the training data.

I've had this argument too many times ... it never goes anywhere. So I won't respond again ... over and out.


> Indeed, and a lot of humans are very bad at it, as is clear from the comments I'm responding to.

This is your idea of "conversing curiously" and "editing out swipes," I suppose.

> I've had this argument too many times ... it never goes anywhere. So I won't respond again ... over and out.

A real reasoning entity might pause for self-examination here. Maybe run its chain of thought for a few more iterations, or spend some tokens calling research tools. Just to probe the apparent mismatch between its own priors and those of "a lot of humans," most of whom are not, in fact, morons.


> Don't be snarky.

ROFL

> Comments should get more thoughtful and substantive

Yes, they should, but instead we're stuck with the stochastic-parrot crowd, who log onto HN and try their best to emulate a stochastic parrot.


LLMs don't use tools. Systems that contain LLMs are programmed to use tools under certain circumstances.
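The distinction being drawn is easy to make concrete. A deliberately toy sketch, where every name (fake_model, TOOLS, system_step) is invented for illustration and is not any real API: the model only ever maps text to text; it is the surrounding program that parses the reply and actually executes a tool.

```python
import json

def fake_model(prompt):
    # Stand-in for an LLM call: text in, text out. Here it returns text
    # that merely *requests* a tool invocation.
    return json.dumps({"tool": "add", "args": [2, 3]})

# The tool registry and dispatch live in the system, not the model.
TOOLS = {"add": lambda a, b: a + b}

def system_step(prompt):
    reply = fake_model(prompt)    # the model produces text
    request = json.loads(reply)   # the *system* interprets that text
    return TOOLS[request["tool"]](*request["args"])  # and runs the tool

print(system_step("what is 2 + 3?"))  # 5
```

The model never touches the tool; remove the wrapper and all you have left is a string.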

you’re just abstracting it away into this new “systems” definition

when someone says LLMs today they obviously mean software that does more than just text; if you want to be extra pedantic you can even say LLMs by themselves can't even generate text, since they are just model files if you don't add them to a "system" that makes use of those model files, doh


> when someone says LLMs today they obviously mean ...

LLMs, if the someone is me or others who understand why it's important to be precise. And in this context, the distinction between LLM and AI mattered--not pedantic at all.

I won't respond further ... over and out.


Searle's Chinese Room is a fallacious mess ... see the works of Larry Hauser, e.g., https://philpapers.org/rec/HAUNGT and https://philpapers.org/rec/HAUSCB-2. The real importance of Searle's Chinese Room is how such extraordinarily bad argumentation has managed to persuade so many people who were open to it.

And the literature about philosophical zombies is contentious, to say the least, and much of it is also among the worst arguments in philosophy--Dennett confided in me that he thought it set back progress in Philosophy of Mind for decades, along with that monstrosity of misdirection, "the hard problem". Chalmers (nice guy, fun drunk at parties, very smart, but hopelessly deluded) once admitted to me on the Psyche-D list that his argument in The Conscious Mind that zombies are conceivable is logically equivalent to denying that physicalism is conceivable, so it's no argument against physicalism ... he said he used the argument to till the soil to make people more susceptible to his later arguments against physicalism (which I consider unethical)--all of which are bogus, like the Knowledge Argument--even Frank Jackson who originated it admits this.

Similarly, Robert Kirk, who coined the phrase "philosophical zombie" in 1974, wrote his book Zombies and Consciousness "as penance", he told me when he signed my copy.

> I don't want to do the thing where we fight on the internet.

Nor me ... I've had these "fights" too many times already and I know how they go, and I understand why people believe what they believe and why they can't be swayed, so I won't comment further ... I just want to put a dent in this "I'm a philosopher" argumentum ad verecundiam.


I would hope that philosophy would be exempt from accusations of arguments from authority. I say I don’t want to fight exactly because I don’t want to come off like a jerk because I’m arguing. If the Chinese Room is a mess, I welcome the argument, and will happily read the paper.

I’m less open to push back against philosophical zombies, as the argument seems trivially plausible, from a position of solipsism.


Philosophy may be exempt from accusations of arguments from authority--because that's a category mistake--but philosophers certainly aren't.

Hauser's papers are just a part of a large literature rejecting/refuting Searle's Chinese Room, but he has probably taken Searle more seriously than most. After Searle's well known response that waves away numerous objections, many people dismissed him as acting in bad faith. (It would have been even worse if they had known about the accusations of sexual assault. Sure, that would be ad hominem and intellectually dishonest, but we're talking about human beings, same as with arguments from authority.) See, e.g., https://www.nybooks.com/articles/1995/12/21/the-mystery-of-c... where Daniel Dennett writes:

> For his part, he has one argument, the Chinese Room, and he has been trotting it out, basically unchanged, for fifteen years. It has proven to be an amazingly popular number among the non-experts, in spite of the fact that just about everyone who knows anything about the field dismissed it long ago. It is full of well-concealed fallacies. By Searle’s own count, there are over a hundred published attacks on it. He can count them, but I guess he can’t read them, for in all those years he has never to my knowledge responded in detail to the dozens of devastating criticisms they contain; he has just presented the basic thought experiment over and over again. I just went back and counted: I am dismayed to discover that no less than seven of those published criticisms are by me (in 1980, 1982, 1984, 1985, 1987, 1990, 1991, 1993).

etc. If you've never read any of this literature yet can facilely write what you did above about Searle's discussion of the Chinese Room being "the most important work here", I don't expect you to start now ... but at least reconsider posing as a philosopher who is knowledgeable about such things.

Your reason to be less open to "push back against" (an odd formulation--the burden is on those who claim that they are conceivable, and therefore physicalism is false) philosophical zombies seems to hinge on another radical failure to understand the issue and unfamiliarity with the literature.

Philosophical zombies are completely independent of solipsism. The conceivability of zombies says that, if this is a world in which you are the sole inhabitant and you are conscious, then there is a possible world that is physically identical to this world and has the same physical laws, but the sole inhabitant (scoofy'), while physically identical to you and behaves identically, isn't conscious. That is, consciousness is not a consequence of physical laws and contingencies but is some sort of ethereal goop that accompanies physical entities. Of course Chalmers and other modern dualists don't subscribe to Descartes' substance dualism, but their attempts to formulate "process dualism" or some other nonsense solely because they need some alternative to physicalism--which they reject because they are hopelessly confused about the nature of consciousness and "qualia"--are frankly incoherent.

Maybe read Kirk's book and learn something about the subject. Here's a review that gives you a peek at what you'll find there: https://view.officeapps.live.com/op/view.aspx?src=https%3A%2...

Over and out.



That's a very odd response if you know what a type system is.

We used to get free phone calls in phone booths by sticking an unwound paper clip into the earpiece and touching the other end to the coin box.

You could do the same by wearing wool socks and shuffling around for a minute before touching the coin slot!

That doesn't work very well on a humid day outside in the summer.

And the payphones in the city I grew up in didn't operate using ground-start signalling, so the paper clip/safety pin/pull-tab/static trick didn't work there at all.

But an innocuous walkman with a cassette tape that had some red box tones on it, with a bonus of having the rest of the cassette available for music to listen to? That worked great.
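The cassette trick is easy to reconstruct in software. A sketch assuming the commonly cited ACTS coin-signal values (a dual tone of 1700 Hz + 2200 Hz, with a quarter signalled as five 33 ms bursts); the exact cadence varied by era, and these figures are from general phreaking lore, not from the thread:

```python
import math
import struct
import wave

RATE = 8000  # telephone-grade sample rate

def dual_tone(f1, f2, ms):
    """Samples of two summed sine waves lasting `ms` milliseconds."""
    n = int(RATE * ms / 1000)
    return [0.4 * (math.sin(2 * math.pi * f1 * i / RATE) +
                   math.sin(2 * math.pi * f2 * i / RATE)) for i in range(n)]

def silence(ms):
    return [0.0] * int(RATE * ms / 1000)

# A quarter: five 33 ms bursts of 1700 Hz + 2200 Hz, 33 ms apart.
samples = []
for _ in range(5):
    samples += dual_tone(1700, 2200, 33) + silence(33)

with wave.open("quarter.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit mono
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```

Record the result to tape and the walkman does the rest.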


This was in the late 1950s for me, in the San Fernando Valley, where summertime humidity was very low. But a few years later the phone company put shields in the headsets so you could no longer puncture the foil.

Fair.

I'm old enough to remember payphones being completely ubiquitous (with whole banks of them inside of each entrance for one large department store, usually with one or two more outside), but I'm not old enough to remember the 1950s. :)

I did find one old phone at a state park not too far out that could be tricked by grounding it, but that was in GTE territory instead of the Ohio Bell BOC that I was more familiar with.


Also true of people who used to be young and then became old. And trans people also have a lot to say about this.

That skit has nothing to do with Vihart ... Claude hallucinated that.

> This clearly shows that AI can think critically and reason.

No it doesn't ... Claude regurgitated human knowledge.


That's an irrelevant strawman. It tells us nothing about how to create such a system ... how to pluck it out of the infinity of TMs. It's like saying that bridges are necessarily built from atoms and adhere to the laws of physics--that's of no help to engineers trying to build a bridge.

And there's also the other side of the GP's point--Turing completeness not necessary for creativity--not by a long shot. (In fact, humans are not Turing complete.)


No, twisting it to be about how to create such a system is the strawman.

> Turing completeness not necessary for creativity--not by a long shot.

This is by far a more extreme claim than the others in this thread. A system that is not even Turing complete is extremely limited. It's near impossible to construct a system with the ability to loop and branch that isn't Turing complete, for example.

>(In fact, humans are not Turing complete.)

Humans are at least trivially Turing complete - to be Turing complete, all we need to be able to do is to read and write a tape or simulation of one, and use a lookup table with 6 entries (for the proven minimal (2,3) Turing machine) to choose which steps to follow.

Maybe you mean to suggest we exceed it. There is no evidence we can.
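The "follow a rule table by rote" picture is easy to make concrete. A minimal sketch of a Turing machine simulator, using the well-known 2-state busy beaver as the rule table rather than the (2,3) machine mentioned above (whose exact transition table is not reproduced here):

```python
# A Turing machine is just a rule table plus a tape; executing a step
# requires no ingenuity, only lookup. This table is the 2-state busy
# beaver, which provably halts after 6 steps with four 1s on the tape.
RULES = {  # (state, symbol) -> (write, head_move, next_state)
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(rules, state="A", max_steps=10_000):
    tape, head, steps = {}, 0, 0  # dict models an unbounded blank tape
    while state != "HALT" and steps < max_steps:
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())

print(run(RULES))  # (6, 4): halts after 6 steps, four 1s written
```

A person with pencil and paper can execute this loop by hand; whether that makes the person "Turing complete" is exactly what is in dispute above.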


P.S. everything in the response is wrong ... this person has no idea what it means to be Turing complete.

> all we need to be able to do is to read and write a tape or simulation of one

An infinite tape. And to be Turing complete we must "simulate" that tape--the tape head is not Turing complete, the whole UTM is.

> A system that is not even Turing complete is extremely limited.

PDAs are not "extremely limited", and we are more limited than PDAs because of our very finite nature.
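As a concrete aside on how much a sub-Turing model can still do: a pushdown automaton (one stack, no unbounded read/write tape) already recognizes every context-free language. A minimal sketch of one such recognizer:

```python
def balanced(s):
    """Pushdown-automaton-style recognizer for balanced parentheses,
    a context-free language: one stack is the only unbounded memory."""
    stack = []
    for ch in s:
        if ch == "(":
            stack.append(ch)
        elif ch == ")":
            if not stack:      # a closer with nothing to match
                return False
            stack.pop()
    return not stack           # accept only if every opener was matched

print(balanced("(()(()))"))  # True
print(balanced("(()"))       # False
```

No loop-and-branch over an unbounded tape is involved, yet the machine class is far from trivial.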


> P.S. everything in the response is wrong ... this person has no idea what it means to be Turing complete.

I know very well what it means to be Turing complete. All the evidence so far, on the other hand suggests you don't.

> An infinite tape. And to be Turing complete we must "simulate" that tape--the tape head is not Turing complete, the whole UTM is.

An IO port is logically equivalent to an infinite tape.

> PDAs are not "extremely limited", and we are more limited than PDAs because of our very finite nature.

You can trivially execute every step in a Turing machine, hence you are Turing equivalent. It is clear you do not understand the subject at even a basic level.


> You can trivially execute every step in a Turing machine, hence you are Turing equivalent. It is clear you do not understand the subject at even a basic level.

LOL. Such projection. Humans are provably not Turing Complete because they are guaranteed to halt.

