Hacker News

Why do you think it doesn't have an understanding of semantics? I think that was one of the first things to fall to LLMs: even early models interpreted the word "crashed" differently in "I crashed my car" and "I crashed my computer", and they easily conquered the Winograd schema challenge.



> even early models interpreted the word "crashed" differently in "I crashed my car" and "I crashed my computer"

That has nothing to do with semantic understanding beyond word co-occurrence.

Those two phrases consistently appear in two completely different contexts with different meanings. That's how text embeddings can be created in an unsupervised way in the first place.
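To make that concrete, here is a minimal sketch of the unsupervised idea: count which words co-occur near which, and words that share contexts end up with similar vectors. The toy corpus and window size are made up for illustration, not how any real embedding model is trained:

```python
from collections import Counter, defaultdict
from math import sqrt

# Hypothetical toy corpus, chosen to mirror the "crashed" example above.
corpus = [
    "i crashed my car on the highway",
    "i crashed my truck on the highway",
    "i crashed my computer running the program",
    "i crashed my laptop running the program",
]

WINDOW = 2  # neighbours within this distance count as "context"

# Unsupervised step: accumulate co-occurrence counts per word.
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - WINDOW), min(len(words), i + WINDOW + 1)):
            if i != j:
                cooc[w][words[j]] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm

# Words with identical contexts get identical vectors; partial overlap scores lower.
print(cosine(cooc["car"], cooc["truck"]))     # 1.0
print(cosine(cooc["car"], cooc["computer"]))  # 0.75
```

Real embedding models work from vastly larger corpora and learn dense vectors, but the signal they exploit is exactly this distributional one.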


What do you mean? Semantics are determined by distribution. https://en.wikipedia.org/wiki/Distributional_semantics



