Hacker News

> Where in the loss function of LLM training is the relationship between their model of reality and their predicted tokens?

In the part where the loss function rewards predicting text that humans would consider a sensible completion, in a fully general sense of that goal.

"Makes sense to a human" is strongly correlated to reality as observed and understood by humans.


