Hacker News

Someone here mentioned a while ago that the labs have deliberately not trained these characteristics out of their models, because leaving them in makes LLM-generated text easier to identify, and therefore exclude, from their training corpora.
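As a toy illustration of the kind of filter such markers would enable — the marker phrases here are hypothetical, and a real pipeline would use trained classifiers rather than a phrase list:

```python
# Toy corpus filter: drop documents containing known LLM "tell" phrases.
# MARKERS is a made-up example list, not any lab's actual criteria.
MARKERS = ["delve into", "it's worth noting", "as an ai language model"]

def looks_llm_generated(text: str) -> bool:
    """Flag text containing any marker phrase (case-insensitive)."""
    lowered = text.lower()
    return any(marker in lowered for marker in MARKERS)

corpus = [
    "As an AI language model, I cannot answer that.",
    "The mill closed in 1987 after the flood.",
]
clean = [doc for doc in corpus if not looks_llm_generated(doc)]
# Only the second document survives the filter.
```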


But it's odd that these characteristics are the same across models from different labs. I find it hard to believe that researchers across competing companies are coordinating on something like that.




