
> But what if our neurobiological reality includes a system that behaves something like an LLM?

It almost seems like we got inspiration from our brain to build neural networks!



It isn't clear, though. Neural networks were inspired by the brain, but transformers? It's totally plausible, but do we really think just in words?


> Neural networks were inspired by the brain, but transformers? It's totally plausible, but do we really think just in words?

LLMs may be trained on words, but under the hood transformers are not word-specific.

They operate on high-dimensional structured sequences. To make an analogy, transformers are not working on:

  Vector<Word>
but

  Vector<ContextualizedEmbedding>
where words just happen to be a handy training set we use.

And we, too, might not think in words, but I bet we do think using multi-dimensional sequences of vectors.
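
To make that concrete, here is a minimal sketch (assuming PyTorch; the vocabulary size, dimensions, and sequence length are made-up toy values). The encoder layer only ever sees a sequence of d_model-dimensional vectors; nothing about it knows or cares whether those vectors came from word tokens:

  import torch
  import torch.nn as nn

  d_model = 64
  layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)

  # "Words": token ids looked up in a toy embedding table.
  vocab = nn.Embedding(num_embeddings=1000, embedding_dim=d_model)
  word_seq = vocab(torch.randint(0, 1000, (1, 12)))  # (batch, seq, d_model)

  # "Not words": any other source of vectors, e.g. projected image patches.
  patch_seq = torch.randn(1, 12, d_model)            # same shape, no vocabulary

  out_words = layer(word_seq)    # the layer accepts both inputs...
  out_patch = layer(patch_seq)   # ...it only ever sees embedding sequences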


> It's totally plausible, but do we really think just in words?

I find that proposition totally implausible. Some people certainly report only thinking in words & having a continuous inner monologue, but I'm not one of them. I think, then I describe my thoughts in words if I'm speaking or writing or thinking about speaking or writing.


We've been reaching for the same metaphor ("that's how the brain works") with each new major technology we come up with...



