
Humans make the same mistakes when writing up reports too. I'm curious which is more reliable.

For example, I've done interviews for various media outlets, and often enough they report I said the opposite of what I actually said, for reasons similar to the ones you mentioned above.



> interviews for various media outlets

Those errors are not accidents. They start the interview with a conclusion already in mind, and whatever you say will be cut and pasted to push that narrative.


I hate when they do that. They're always out to get me. They're tricky like that. They have evil agendas. Damn them.


My suggestion is that humans are still noticeably better at this, not least because humans are able to backtrack and re-adjust their interpretations, which LLMs cannot do in the same way. An LLM's "thought process" and "output" are essentially the same thing.

Now maybe I'm wrong about this, but even if so I think it's still a risky change. We have understood human error for pretty much all of recorded history. We have a very poor understanding of computer error, to the point that there are some legal systems where computer software is assumed to be correct by default.

Courts and laws around the world are designed to cope with humans making mistakes and lying.


> Humans make the same mistakes when writing up reports too

Sure, but then again humans are all independent and biased in different ways, while the automated tools we replace them with will make the same mistakes and apply the same biases over and over again.

One is a single entity, the other is a multitude of independent entities.



