From my experience working on projects where we trained models, the first attempt is usually obviously, completely broken, and it takes a lot of iteration to get to a decent state. "Trust the AI" is not a phrase anyone involved would utter. It's more like: trust that it is wrong for any edge case we haven't discovered yet. Can we constrain the possibility space any more?


Most hiring managers wouldn't make it to the end of the phrase "constrain the possibility space"


"Trust the AI" could mean uploading a resume to a website and getting a "candidate score" from somebody else's model.

Because I'll tell you, there are millions of landlords, and they blindly trust FICO when screening applicants. Maybe not as the only signal, but they do trust it without testing it for edge cases.



