
Gell-Mann amnesia is powerful. Hope you extrapolate from that experience!

At a technical level, they don't know because LLMs "think" in tokens, not letters (for any pre-o1 model, and maybe beyond, I'd really call it something more like "quickly associate"). Unless their training data contains a representation of each token split into its constituent letters, they are literally incapable of "looking at" a word. (I wouldn't be surprised if they'd fare better looking at a screenshot of the word!)
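A minimal sketch of why this happens, using a toy greedy longest-match tokenizer over a tiny hypothetical vocabulary (real models use learned BPE vocabularies with tens of thousands of entries, but the effect is the same): the model receives a couple of opaque token IDs, not a sequence of letters.

```python
# Toy illustration: greedy longest-match tokenization over a
# hypothetical vocabulary. Real tokenizers (BPE) differ in detail,
# but likewise emit multi-character pieces, not letters.
VOCAB = {"straw", "berry", "st", "raw", "ber", "ry"}

def tokenize(word, vocab=VOCAB):
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest remaining substring first; fall back to
        # a single character if nothing in the vocab matches.
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab or len(piece) == 1:
                tokens.append(piece)
                i = j
                break
    return tokens

print(tokenize("strawberry"))  # ['straw', 'berry']
```

The model never sees the strings `'straw'` and `'berry'` at all, only their integer IDs, so a question like "how many r's are in strawberry?" can only be answered from memorized associations about those tokens, not by inspection.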


