Because you have no idea whether those reports are genuine, or whether the reporters coordinated elsewhere (online or off) to brigade and mass-report the content, with the intention of getting it taken down despite it breaking no rules. Sometimes the coordination isn't even necessary; it just needs to be the right target posting something online. (E.g., more than a few people have gone and reported every post by a politician they dislike for hate speech and inciting violence.)
The article well explains this downside of user reports, so I don't see what this comment adds. It does not answer my question. The article also describes problems of not acting on them, so the conclusion requires more than just finding a negative.
I'm sorry you're unable to understand my answer. Let me try saying the same thing as the article one more time, and maybe you'll be able to understand it this time.
Your question was: "Why should no content ever be taken down automatically just because a bunch of random people report it?"
It's because the random people reporting it can't be trusted to be acting honestly. Without a human in the loop, the automated system becomes a tool for cyberbullies and harms the very users you intend to protect. The conclusion, thus, is that a fully automated system will do more harm to the user base than good.
I understood the answer, it just didn't address the question properly. I think you failed to understand the problem with the answer.
> The foregone conclusion, thus, is that a fully-automated system will do more harm for the user-base than good.
This is not an answer, it is just lazy circular reasoning. "Automated takedowns should not be used because they do more harm than good." Yes, we have already established that this is your assertion; I am asking how you were able to conclude it. The harms of false reports have to be weighed against the harms of leaving reported content up, and you haven't shown that weighing.