GenAI does not Think nor Understand (hkubota.wordpress.com)
22 points by zerodev on July 11, 2024 | 19 comments


Nor do most human adults. I’m not being cheeky here; this is an extremely important discussion.

We need to make a distinction between levels of capability for independent agents with the capacity to take in information and produce a set of actions:

1. Entry-level action: Enough capacity to execute a task or set of tasks by following written instructions prepared by an expert (machines can do almost 100% of these tasks at this point)

2. Intermediate capacity: Determining what needs to be done from vague requirements, defining tasks without intermediate specifications, and completing those tasks (the current state of GPTs)

3. Expert capacity: Recognizing the context of the situation, knowing how to define the problem in a way appropriate to that context, evaluating the options and resources, and setting up structures to granularly define the set of resources needed (GPTs can do this in limited and narrow cases)

Most humans only act at the first level. This is especially true in work contexts, and almost nobody would be hired if all jobs required level 2 or 3 capacity.

What I’ve seen over 15 years of being in AI (and leadership generally) is that, as expected, the goalposts for what counts as “AI” always move up the expert chain, with no relation to the actual distribution of capabilities among existing agents (humans).

This author is measuring GPTs on being consistently and pervasively superhuman, but compared to what we measure as the average human capacity, these systems are already superhuman.

We’re already well past the point that ChatGPT is a more coherent “thinker” than all but the top 1% of all humans. So what is good enough?


> 1. Entry-level action: [...] (machines can do almost 100% of these tasks at this point)

Haha, not even close. That's only true within a very specific industrial bubble, one where you do not _expect/instruct_ people to rise to other capacities depending on the situation they face. Which, often enough, breaks or stops the whole process.

> Most humans only act at the first level.

A very suspicious, classist take. That's different from saying that everyone acts at various levels depending on the context. Some experts are not capable of the most basic cooking or house cleaning, for instance.

> We’re already well past the point that ChatGPT is a more coherent “thinker” than all but the top 1% of all humans.

[needs citation]. Especially since the tech leaders in the AI industry seem unable to stop reducing the world to a small digital box. Their thinking would definitely benefit from some philosophy and humanities.

> So what is good enough?

The question is not "what is good enough?", it's "what/who is this good for?". That's the basic question for any tech or tool you design/build/have.

So far, beyond being a compelling and funny toy, whether for text, image, video, or sound generation/manipulation, I'm not seeing real-world use cases where I would trust or relate to it more than to something a real person had done themselves.

Because on the one hand, it's not deterministic. And on the other, it's not accountable.


GenAI does not understand or think. Full stop.

Is there thinking involved in saying "2 + 2 = 4"? No, it's a 'fact'. That's how these weights work: it's essentially a very complex SQL query, only pulling out what's been put into it (the weights).
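
To make that lookup analogy concrete, here is a deliberately crude sketch in Python: a bigram table that "generates" text purely by retrieving what was stored at training time. The toy corpus and everything else here are invented for illustration; real transformers are vastly richer than a table, but the shape of the analogy is the same.

  import random
  from collections import defaultdict

  # "Training": record which word follows which in a toy corpus.
  corpus = "two plus two equals four . two plus three equals five .".split()
  table = defaultdict(list)
  for prev, nxt in zip(corpus, corpus[1:]):
      table[prev].append(nxt)  # the "weights" are just stored observations

  # "Inference": pull out what was put in; nothing new can appear.
  word, output = "two", ["two"]
  for _ in range(4):
      word = random.choice(table[word])
      output.append(word)
  print(" ".join(output))

Note that recombination can even produce "two plus two equals five": statistically plausible, semantically wrong.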

Thinking and understanding allow for the creation of new information from unrelated information, or even nothing at all. They allow for chained comprehension, e.g. "an apple fell to the ground; why?" GenAI cannot derive the concept of gravity from an apple falling without being 'trained' to.


> Nor do most human adults.

But that's not the point. LLMs are all the rage nowadays. The point is that they do not think/understand what they do, and I agree with that point. They statistically predict text but don't really understand anything, at least not the way we do.

But hold your horses: they're still quite useful and cool, just not what they're hyped/marketed as.


The question is, why does anybody care whether or not they think or understand?

It’s an artificial distinction, given that the majority of actors don’t think or understand, so thinking and understanding aren’t something that society cares about.


>The question is, why does anybody care whether or not they think or understand?

I can turn that around and ask why you do not care to know whether a tool is capable of thinking/understanding/reasoning/generalizing, etc., especially if it's a tool you want to add to your toolbox.

And I say that as a person who thinks LLMs/genAI are useful for some tasks even if I know they do not think.


Should I be asking if my Toyota Tundra is conscious? Absurd.

There are plenty of thinking people out there who aren’t expected to use those capacities, and we don’t consider that relevant, because it’s more than we generally ask people to demonstrate.

Bottom line: if you’re asking whether your serviette is conscious, something went very wrong.


Consciousness is a completely different matter and not what we were talking about.


I just did this on ChatGPT:

  a man with a wolf and a cabbage wants to cross a river/ There is a boat but the man can only bring one more item beside himself at a time. What are the steps the man needs to do in order to cross the river?
The response was:

To solve the puzzle where a man needs to cross a river with a wolf and a cabbage, while ensuring none of them are left together unsupervised (since the wolf will eat the sheep, and the sheep will eat the cabbage), follow these steps:

  1. Man takes the sheep across the river (leaving the wolf and the cabbage on the original side).
  2. Man returns alone.
  3. Man takes the cabbage across the river.
  4. Man leaves the cabbage on the other side and takes the sheep back.
  5. Man takes the wolf across the river.
  6. Man returns alone.
  7. Man takes the sheep across the river.
  8. This way, at no point are the wolf and the sheep left alone together, nor are the sheep and the cabbage left alone together.
At which point I prompted: "I didn't mention a sheep"

The response was corrected, but wrongly:

  Oops, my mistake! Let’s solve the puzzle correctly with a man, a wolf, and a cabbage:

  1. Man takes the wolf across the river (leaving the cabbage on the original side).
  2. Man returns alone.
  3. Man takes the cabbage across the river.
  4. Man leaves the cabbage on the other side and takes the wolf back.
  5. Man takes the man across the river.
  6. Man returns alone.
  7. Man takes the wolf across the river.
  8. Now, both the man and the wolf are across the river, with no danger to the cabbage.
At step 3 the problem was already solved, but it kept going.
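
For anyone who wants to reproduce this kind of test programmatically rather than in the web UI, here is a minimal sketch using the openai Python SDK (v1+); the model name is an assumption, not necessarily what was used above.

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  puzzle = ("A man with a wolf and a cabbage wants to cross a river. "
            "There is a boat but the man can only bring one more item "
            "beside himself at a time. What are the steps?")

  history = [{"role": "user", "content": puzzle}]
  first = client.chat.completions.create(model="gpt-4o", messages=history)
  print(first.choices[0].message.content)

  # Follow up exactly as in the transcript above.
  history.append({"role": "assistant",
                  "content": first.choices[0].message.content})
  history.append({"role": "user", "content": "I didn't mention a sheep"})
  second = client.chat.completions.create(model="gpt-4o", messages=history)
  print(second.choices[0].message.content)

Since sampling is non-deterministic, different runs may or may not reproduce the sheep slip.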


> However it also fails in other areas quite spectacularly:
> Anything which requires logic
> Anything which requires actual understanding

Not really the experience most people are getting from using AIs nowadays.

It actually shows a high degree of understanding, or at least executes, mostly perfectly, an internal process that produces what humans consistently perceive as "understanding" stuff.

In many scenarios it shows a high degree of logical reasoning, thinking, and pattern-detection-driven behavior. Again, whatever the internal process used, it is what humans usually think of as "logic".

Some processes are known; others are still a bit of a black box, mainly because of the non-deterministic behavior inside the models, by which I mean the reason a model produces different outputs when given the same prompt more than once.
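
For what it's worth, that particular bit of non-determinism is mostly by design rather than a mystery: the model emits a probability distribution over next tokens, and the serving stack samples from it. A toy sketch in Python (the vocabulary and logits are invented):

  import math, random

  def sample(logits, temperature=1.0):
      # Softmax with temperature: higher T flattens the distribution.
      scaled = [l / temperature for l in logits]
      m = max(scaled)
      exps = [math.exp(l - m) for l in scaled]
      probs = [e / sum(exps) for e in exps]
      return random.choices(range(len(probs)), weights=probs)[0]

  vocab = ["four", "five", "fish"]  # toy next-token candidates
  logits = [3.2, 1.1, -0.5]         # toy scores for "2 + 2 = ?"

  # Same "prompt" (same logits), run twice: the output can differ.
  for run in range(2):
      print(run, vocab[sample(logits)])

  # As temperature approaches zero, sampling collapses to argmax
  # and the same prompt yields the same answer every time.
  print("greedy:", vocab[sample(logits, temperature=1e-6)])

(Floating-point and batching effects add further, unintended non-determinism on real hardware, but temperature sampling is the big one.)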

Not everything is a mystery; most processes are known by now. They are "by-products" of emergent mathematical structures inside the model's architecture, like when you take a circle and rotate it around a point: you get a sphere.

The emergent "sphere" would be the "logic" and "reason" behind the models.


Do humans?

Are these terms well defined or just subjective "I know it when I feel it" echoes of an unresolved debate over residual beliefs in a dualistic mind/body dichotomy?

What if it turns out that a confluent sequence of sensory inputs amounts to a unique neurophysical vector that initiates a particular activation cascade in another cluster of nerves; that some outputs of this cascade have no images in the conscious domain (are not phenomenological) despite strongly informing it, resulting in what we call 'creativity'; and that all of this together defines a path back through our sensory encoding/decoding apparatus which we recognize as 'thought'?

I am not convinced that we are looking at this question through the right end of the telescope here.


That's always the main issue with any piece that uses ill-defined terms like intelligence, consciousness, self-consciousness, thinking, understanding, etc. Nobody has come close to defining them in a practical manner in decades/centuries, but then LLMs came along and suddenly lots of people are somehow absolutely sure that LLMs don't do any of this while humans/animals do.


> What if it turns out that a confluent sequence of ...

That is a useful way of looking at it. But the problem remains: how do you train on sight, sound, and touch in any sort of useful way? It takes 20 years to make an adult using our non-digital hardware (which I would consider superior for the task). If it could happen quicker, do you not think it would?


Thinking and understanding are related to logic and linking information together in useful patterns.

It would be perfectly possible and uncontroversial for a machine to be able to do that without any consciousness.

The point being made is that LLMs don't even do that.


I find these kinds of gotchas trivial and tiresome. Do we “understand” gravity? Magnetism? Other people? No. We have models. Some are predictive and/or useful. We aren’t Turing machines. We don’t observe nature directly; that’s why optical illusions work. Let’s figure out more about how these things work, iterate, and improve.


Aside from coding and customer service, what other real business use cases are there? Or is it just the smoke and mirrors of another tech hype cycle?

https://www.youtube.com/watch?v=9Z48BKTCPrc


There are many people who would like to think it does understand. And even more who just assume it does. When a baby is born it puts its fingers in its mouth. Already it knows something AI doesn't.


I'm somewhat over this debate. LLMs may not think like humans, but they sure format text to JSON well, tell me about the latest features of the C# language, and fix my syntax. Good enough.


AI-hate news has been coming fast and hard of late. It is quite unusual. Its obvious intent is to crash the market so that the authors' short positions profit.



