> “For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said.
I mean, if I could choose to have both a human radiologist review AND an AI review, I think I would prefer that. 3/10,000 sounds like a very good rate, but a false negative on a cancer diagnosis is life-threatening, no?
"The AI is wrong only 3:10,000 times" is a statement screaming out for the follow up question "how often are the humans wrong". Maybe 3:10,000 is astonishingly good, maybe humans are 10x or 100x better, right now I have no real way of knowing short of a literature review in a field I know nothing about.
At a certain point the false positives start creating more harm than trying to further reduce the false negatives (which is, perhaps counterintuitively, eventually true for even the most serious of risks). Whether that's the case here depends on a lot of information not in the article.
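To make that trade-off concrete: the quoted "wrong about 3 times out of 10,000" reads like a false-omission rate, i.e. the share of negative results that are actually missed cancers, and that number depends on the prevalence in the screened population as much as on the test itself. Here is a minimal sketch; every input (cohort size, prevalence, sensitivity, specificity) is an illustrative assumption, not a figure from the article.

```python
# Illustrative screening arithmetic. All numbers below are assumptions
# chosen to show the mechanics, not data from the article.

def screening_outcomes(n_screened, prevalence, sensitivity, specificity):
    """Return (false_negatives, false_positives, false_omission_rate)."""
    with_cancer = n_screened * prevalence
    healthy = n_screened - with_cancer
    false_neg = with_cancer * (1 - sensitivity)   # cancers the test misses
    false_pos = healthy * (1 - specificity)       # false alarms
    true_neg = healthy - false_pos
    fo_rate = false_neg / (false_neg + true_neg)  # wrong negatives / all negatives
    return false_neg, false_pos, fo_rate

# Hypothetical low-risk cohort: 100,000 women, 0.5% prevalence,
# 94% sensitivity, 90% specificity.
fn, fp, fo = screening_outcomes(100_000, 0.005, 0.94, 0.90)
print(f"missed cancers: {fn:.0f}, false alarms: {fp:.0f}, "
      f"wrong negatives per 10,000: {fo * 10_000:.1f}")
```

With these made-up inputs you get roughly 30 missed cancers against nearly 10,000 false alarms, and a wrong-negative rate of about 3 per 10,000. Pushing sensitivity higher to shave off a few of those 30 misses typically costs specificity, adding thousands more false alarms (callbacks, biopsies, anxiety); that is the point at which reducing false negatives can start doing more aggregate harm than good.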