
> What happens when a doctor's brain, which is also an unexplainable stochastic black box, influences your doctor to make a bad prognosis?

The doctor knows their own intent, though, whereas ChatGPT does not know its own decision-making process.

And it’s possible to ask the doctor to explain their decisions and sometimes get an honest, detailed response.



You are more correct than not, although human self-reflection is probably guesswork more often than we admit.


I agree, though I'd put high odds on most self-explanations being ChatGPT-like post-hoc reasoning, without much insight into the actual cause of a particular decision. As someone below says, the split-brain experiments seem to suggest that our conscious mind is just reeling off bullshit on the fly. Like ChatGPT, it can approximate a correct-sounding answer.


You can't trust post-action reasoning in people. Check out the split-brain experiments: your brain will happily make up reasons for performing tasks or actions.


There is also the problem of causality. Humans are remarkably good at understanding those kinds of relationships.

I used to work on a team doing NLP research related to causality. Machine learning (deep learning LLMs, rules, and traditional methods) is a long way from really solving that problem.
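To make the "rules" point concrete, here is a minimal, hypothetical sketch (plain Python, not anything from that team's actual work) of a pattern-based causal extractor. It tags any sentence matching a lexical cue as causal, which is exactly the problem: it extracts causal claims with no model of whether the relation is real.

    import re

    # Lexical cues only: "X causes Y", "X leads to Y", "X results in Y".
    CAUSAL_PATTERN = re.compile(
        r"(?P<cause>[\w ]+?)\s+(?:causes?|leads? to|results? in)\s+(?P<effect>[\w ]+)",
        re.IGNORECASE,
    )

    def extract_causal(sentence):
        """Return a (cause, effect) pair if a causal cue appears, else None."""
        m = CAUSAL_PATTERN.search(sentence)
        if m:
            return m.group("cause").strip(), m.group("effect").strip()
        return None

    # A genuine causal claim and a classic spurious correlation are
    # indistinguishable to the pattern matcher:
    print(extract_causal("Smoking causes lung cancer"))
    # ('Smoking', 'lung cancer')
    print(extract_causal("Ice cream sales lead to more drownings"))
    # ('Ice cream sales', 'more drownings')

Judging whether the second relation actually holds takes world knowledge and counterfactual reasoning, and that part stays hard across rule-based, traditional ML, and LLM approaches alike.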



