
“It's not a recognized acronym, code, or term in any common context I’m aware of” is pretty similar to “I don't know what that is”. I would assume that a model could be trained to output the latter.


Right. I’ve had a lot of success using structured output to force LLMs to make Boolean choices, such as whether they can answer or not.
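A minimal sketch of the idea: supply the model a JSON schema that only admits a single Boolean field, then parse and validate its reply. The schema shape and the `can_answer` field name are hypothetical examples; real structured-output APIs (provider-specific) would accept the schema and guarantee the reply conforms.

```python
import json

# Hypothetical schema constraining the model to one Boolean field.
# Structured-output or constrained-decoding APIs typically accept
# a JSON Schema of roughly this shape.
BOOLEAN_CHOICE_SCHEMA = {
    "type": "object",
    "properties": {"can_answer": {"type": "boolean"}},
    "required": ["can_answer"],
    "additionalProperties": False,
}

def parse_boolean_choice(raw_reply: str) -> bool:
    """Parse a structured model reply and return its Boolean choice.

    Raises ValueError if the reply does not match the schema,
    so a non-conforming reply fails loudly instead of being guessed at.
    """
    data = json.loads(raw_reply)
    if set(data) != {"can_answer"} or not isinstance(data["can_answer"], bool):
        raise ValueError(f"reply does not match schema: {raw_reply!r}")
    return data["can_answer"]

# Example replies a schema-constrained model might produce:
print(parse_boolean_choice('{"can_answer": false}'))  # → False
print(parse_boolean_choice('{"can_answer": true}'))   # → True
```

The point of forcing the Boolean through a schema rather than free text is that "I don't know what that is" has many phrasings, while `{"can_answer": false}` has exactly one, which makes the downstream branch trivial.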



