
Consider a more realistic version of the Chinese room. Bob gets a slip of paper with Chinese writing on it and slips it into a black-box "Chinese room", which produces another slip with a perfectly written Chinese answer. Bob hands that back - it is pretty obvious that Bob doesn't need to have an inkling of Chinese.

Except, in this version, there's an actual Chinese speaker, Xin, sitting inside the room. It's clear that it's Xin, not Bob, who understands Chinese.

Now let's move back to the original Chinese room argument. It's clear that it's the room that understands Chinese - or, rather, the whole system composed of the rules (the room) and their executor (the person), but not the executor by itself. It only seems absurd because our real-life experiences make us presume that, when there's a room and a person, the person must be the more sentient part. IMHO the whole argument is a philosophical sleight of hand.
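
To make the system reply concrete, here is a minimal sketch (Python; the phrase table and replies are invented purely for illustration - a real rule book would be astronomically larger) of an executor mechanically following rules it cannot read:

    # A toy "Chinese room": the rule book is just a lookup table.
    # Any "understanding" lives in the rules, not in the person applying them.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
    }

    def executor(slip: str) -> str:
        # Bob's entire job: pattern-match the incoming symbols against the
        # rule book and copy out the prescribed reply, stroke by stroke.
        return RULE_BOOK.get(slip, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(executor("你好吗？"))  # prints: 我很好，谢谢。

Nothing about the executor changes if the table grows to cover every possible conversation; the person stays interchangeable, which is exactly the intuition above.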



This is the correct answer.

However, when confronted with this "the system does understand" argument, I believe that Searle and his defenders fall back on the lack-of-qualia position. That is, they claim there is no entity that experiences anything, and that therefore the ability to demonstrate "understanding" as the room does isn't a sufficient (or perhaps even necessary) condition of consciousness.

This point is trickier to rebut, because I think it's fair to acknowledge that our ability to imagine where there would be qualia in the case of the Chinese room is more than a little limited. I think the only fair answers are either:

1. our imagination is too limited, but that doesn't allow us to conclude that there cannot be qualia

2. there are indeed no qualia associated with the Chinese Room, because consciousness is not required for full language processing.

(or both)


The problem with the lack-of-qualia argument is that qualia have no operationalizable, testable definition; they're something we have a fuzzy idea of from individual experience and attribute to other entities based on similarity of behavior, but we can't say with any authority that they do or do not exist in anything (other than that an entity that experiences qualia can attribute them to itself).


I mean, this is the limit of current state-of-the-art AI, right?

We're seeing with things like GPT-3 that an actually powerful language model needs to have a load of real-world knowledge built in. But that knowledge is all based on experiences by others. From the way it talks (and generates poetry) you can tell that it is unable to synthesize new experiences. It can't come up with a description (or metaphor) of human experience that it hasn't been taught.

To be a convincing AI, you don't just need to replicate descriptions of experience from memory, but to actually have new experiences as well.

Meaning that the Chinese room, in order to demonstrate "understanding", will also require inputs other than just text. Otherwise it can't really understand the world, because it only knows about the world from hearsay.


I’ve not heard this argument before. Isn’t saying “there is no entity that would experience anything” begging the question? If the system is capable of understanding, why wouldn’t it also be capable of experience? That is what we see in animals.


No, this is the very heart of Searle's thought experiment. His point was that you could imagine a rule-based system for language translation and conversation that clearly was not conscious. The argument wasn't about whether or not it was possible to build such a system, but about whether or not such a system must necessarily be conscious.

Searle's claim was that it was clear from his thought experiment that such a system could exist without any understanding, and definitely without consciousness. Others have disagreed with his reasoning and his conclusion.


I must have misunderstood your earlier comment then. I thought you were saying that Searle responded to the “the system does understand” argument by saying “OK, but it’s still not conscious.” Now I think you are saying his response was “no, it doesn’t understand, it merely passes the Turing test; it can’t understand because it’s clearly not conscious.” Which, again, seems to beg the question.


No, Searle's main response to the system response was to just dismiss it out of hand (at least in the responses from him that I read). Dennett was probably (for me) the most articulate of the "system responders" and I think Searle considered his POV to be a joke, more or less.



