Hacker News

I think many humans engage in metacognitive reasoning, and that this might not be strongly represented in training data, so it probably isn't common in LLMs yet. They can still do it when prompted, though.



LLMs have zero metacognition. Don't be fooled - their output is stochastic inference and they have no self-awareness. The best you'll see is an improvised post-hoc rationalization story.
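To be concrete about what "stochastic inference" means here: at each step the model scores every token in its vocabulary and samples one from the resulting distribution. A minimal sketch (toy logits and vocabulary size are made up for illustration):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token index from a softmax over raw logit scores."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # inverse-CDF sampling: walk the cumulative distribution
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Hypothetical logits for a 4-token vocabulary
logits = [2.0, 1.0, 0.5, -1.0]
print(sample_next_token(logits))
```

The point either way: the output is a draw from a distribution, not a report on any internal deliberation.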

> The best you'll see is an improvised post-hoc rationalization story.

Funny, because "post-hoc rationalization" is how many neuroscientists think humans operate.

That LLMs are stochastic inference engines is obvious by construction, but you skipped the step where you proved that human thoughts, self-awareness and metacognition are not reducible to stochastic inference.


I'm not saying we don't do post-hoc rationalization. But self-awareness is a trait we possess to varying degrees, and reporting on a memory of a past internal state is at least sometimes possible, even if we don't always choose to do so.

You can turn all these arguments around and prove the same is true for humans. Don't be fooled by dogmatic people who spread the idea that the human mind is the pinnacle of cognition in the universe. Best to leave that to religion.

Humans may not always be that smart, but we do at least have an internal state and an awareness of that internal state - a "self-awareness".

AI most certainly has nothing of the sort, and any appearance to the contrary is the direct result of training data.


That is a bold statement that would need proof to back it up in both cases. So far it is only dogma. And unlike humans, we actually have research hints that this assumption is false for LLMs. Just because the state is not human-explainable doesn't mean it does not exist. The same is true btw for any physical "state" that may or may not exist in the human brain. Everything else is religion and metaphysics.

You're either trolling or being incredibly obtuse. LLMs are not conscious; they're guess-the-next-token algorithms. This is so dumb I can't believe I have to share a planet with people who are losing touch with such a fundamental reality.


