
I think the author took the wrong lesson here. I've had doctors misdiagnose me just as readily as I've had LLMs misdiagnose me - but I can sit there and plug away at an LLM in separate, unrelated contexts for hours if I'd like, and follow up its assertions with checks against primary sources. That's not to say that LLMs replace doctors, but neither is perfect, and at the end of the day you have to have your brain turned on.

The real lesson here is "learn to use an LLM without asking leading questions". The author is correct: they're very good at picking up the subtext of what you are actually asking about and shaping their responses to match. That is, after all, the entire purpose of an LLM. If you can learn to query in such a way that you avoid introducing unintended bias - asking, say, "what could cause these symptoms?" rather than "could these symptoms be X?" - and you learn to recognize when you've "tainted" a conversation and start a new one, they're marvelous exploratory (and even diagnostic) tools. But you absolutely cannot stop with their outputs - primary sources and expert input remain supreme. This should be particularly obvious to any actual experts who do use these tools on a regular basis - such as developers.



No, the lesson here is never use an LLM to diagnose you, full stop. See a real doctor. Do not make the same mistake I did.


"Don't ask LLMs leading questions" is a perfectly valid lesson here too. If you're going to ask an LLM for a medical diagnosis, you should at the very least know how to use LLMs properly.

I'm certainly not suggesting that you should ask an LLM for medical diagnoses, but still, someone who actually understands the tool they're using would likely not have ended up in your situation.


If you're going to ask an LLM for a medical diagnosis, stop what you're doing and ask a doctor instead. There is no good advice downstream of the decision to ask an LLM for a medical diagnosis.


What about the multiple people who have reported receiving incredibly useful information after asking an LLM, when doctors were useless?

Should they not have done so?

Like this guy for example, was he being stupid? https://www.thesun.co.uk/health/37561550/teen-saves-life-cha...

Or this guy? https://www.reddit.com/r/ChatGPT/comments/1krzu6t/chatgpt_an...

Or this woman? https://news.ycombinator.com/item?id=43171639

This is a real thing that's happening every day. Doctors are not very good at recognizing rare conditions.


> What about the multiple people who have reported receiving incredibly useful information after asking an LLM, when doctors were useless?

They got lucky.

This is why I wrote this blog post. I'm sure some people got lucky and an LLM gave them the right answer - and of course they go and brag about it. How many people got the wrong answer? How many of them bragged about their bad decision? This is _selection bias_. I'm writing about my embarrassing lapse of judgment because I doubt anyone else will.


That's an easy cop out!

AI saves lives? That's just selection bias.

AI gives bad advice after being asked leading questions by a user who clearly doesn't know how to use AI correctly? Then AI is terrible and nobody should ask it about medical stuff.

Or perhaps there's a more reasonable middle ground? "It can be very useful to ask AI medical questions, but you should not rely on it exclusively."

I'm certainly not suggesting that your story isn't a useful example of what can go wrong, but I insist that the conclusions you've reached are in fact mistaken.

The difference between your story and the stories of the people whose lives were saved by AI is that they generally did not blindly trust what the AI told them. It's not necessary to trust AI in order to receive helpful information from it, but it is basically necessary to blindly trust AI in order to hurt yourself with it.



