There's a fair number of people online who hate me and repeatedly report things I do. I uploaded a YouTube video showing an XSS vulnerability that I had found, reported, and seen fixed. Someone reported the video and it got deleted; I got a red mark on my account, which put it in "bad standing". I appealed, the video was restored, and the marks were reversed.
On a different site I got a two-week ban for posting a link to a completely harmless, rule-abiding YouTube video. I appealed twice and the ban wasn't lifted. On that same site my account was deleted for posting a rickroll video; that time they listened to the appeal and reinstated it.
My Reddit account was shadowbanned. I appealed and it was reinstated.
My point is that moderation is always somewhat subjective and is usually done by low-paid workers wading through high volumes of disgusting material, so there will always be mistakes. And there are people who get mad and basically treat the report button as a dislike button. I even went to a talk by a Facebook employee on the spam team, who said there are some countries whose reports they essentially ignore entirely, because reporting innocent content is so widespread there.
You think that's bad? Try posting anything critical of what someone else says on Hacker News. Frankly, holding discussions about censorship on this platform - this very comment included - while ignoring the censorship going on right here is willful blindness.
Not necessarily ignoring; if censorship means that only certain types of voices will be heard, sometimes that's all people want to hear.
My view is that there should always be places that are censorship-free; beyond that, it's fine for other places to be censored. Most people participating on HN know that engaging in certain ways (e.g. excessive swearing, aggressiveness) will lead to being censored. As long as there remain other places to engage with people without rules, I'm also grateful for venues that are heavily moderated.
I'm often critical of what someone else says on HN, and receive a mix of up/downvotes for the effort. But I haven't yet noticed a comment deletion.
My profile has long been crapping on about censorship here, and when the topic comes up in an HN article, I usually wade in with a "not happy", but I just don't see what you're seeing.
When groupthink expresses itself as mass downvoting, and the site reduces the size of your font and/or lightens your text color until it's invisible, is that not censorship? The mob can be wrong, you know, especially when it relies on a bad source.
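The fading mechanism described above can be sketched as a small function. To be clear, this is a hypothetical illustration of score-based comment fading, not HN's actual algorithm or thresholds:

```python
def comment_color(score, floor=-4):
    """Map a comment score to a grey hex color, fading toward a white
    background as the score drops. Hypothetical sketch of HN-style
    fading; the site's real thresholds and curve are not public."""
    if score >= 1:
        return "#000000"  # fully legible
    # Interpolate from black (score 1) toward near-white at the floor,
    # clamping so scores below the floor stay at maximum fade.
    t = min(1 - score, 1 - floor) / (1 - floor)
    level = round(0xdd * t)
    return "#" + f"{level:02x}" * 3
```

By the time a comment reaches the assumed floor it renders as `#dddddd`, which on a white page is close enough to invisible to make the point.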
Locked threads, deleted threads, comments from pg requesting that particular people stop arguing, a flagging system where some people's flags have greater weight, hellbanned accounts where the person doesn't even know they've been banned (there's some poor schizophrenic guy who made an operating system called LoseThos that posts occasionally), badly editorialized submission titles.
But, that's what moderation looks like, generally. Every social structure is moderated in some form.
Well, ok then. Except... in that particular case schizophrenic people usually have a poor sense of reality, heck even words like "hellban" and "dead" may have religious connotations. Not really sure what a better solution is.
You don't seem to know much about the Terry Davis situation. When a person's posts are 50% racist slurs, there isn't much choice but to block them from posting.
The hellban lets that person still post and converse with people who actively seek him out, without the downsides of having tons of hate speech all over the front page of HN.
It's been said before and it'll be said again, schizophrenia is not a catch-all excuse for someone in a public forum.
I just feel bad for the guy. Originally, I was just pointing out it was a kind of censorship greater than downvoting that has some unsavory aspects to it. Like I wrote above, it's not clear what a better solution would be, except perhaps if I was running things I would try just deleting his accounts. Oh, and you can't reply to his posts, but whatever.
Deleting accounts will cause him to create new accounts. The positive of a hellban is that it lets the person still use their account to broadcast to those who explicitly want to hear it, so there's much less reason for the person to make a new one.
Being temporarily banned from an online forum is probably pretty low on the list of shitty things that happen to people with a diagnosis of a severe and enduring mental illness.
The interaction between community mechanics and the resulting political structures (e.g., "threaded vs. nonthreaded comment sections", "the effects of various moderation systems", various methods of giving out higher levels of privileges, etc.) has been a minor hobby of mine for ~15 years. Here's one result I've noticed: once you introduce scale into a community, in the way that only the Internet really can, you encounter certain major problems such that you end up with only two choices: literally permit everything, up to and including child porn and every other kind of skeevy thing ever, or engage in curation policies that somebody will, with a straight face, call censorship. And it's not the obvious connection you'd think, either. It's not that "removing child porn" will be called censorship (it will, but not by people that very many other people care about); it's that you end up with a certain choice, and the result will inevitably be "censorship" by somebody's accounting, and it'll probably be a reasonable claim from many points of view.
Imagine you have a landscape of statements that could be made. Color some of it black for being unacceptable by your standards, color some of it white for being perfectly acceptable. No matter how you slice it, there's some grey frontier in between. Throw a dart for every statement made by, say, 10 people in a week. A gentle smattering of darts appears. Maybe only a handful of statements will even be close to that grey line, and none of them land on it. Now throw a dart for every statement made by ten million people in a year. The landscape is no longer visible under the mass of the darts. Now tell somebody they have to go on to the field and clear out precisely and exactly the statements that are "on the dark parts". It doesn't matter how you slice it... someone's gonna scream about how their message was removed.
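The dart metaphor can be turned into a toy simulation (my own illustration, with made-up numbers: a one-dimensional landscape and an assumed 10%-wide grey band). Even when the grey frontier is narrow, the count of contested statements grows linearly with volume:

```python
import random

def grey_hits(n_statements, grey_lo=0.45, grey_hi=0.55, seed=0):
    """Throw one 'dart' per statement onto a [0, 1] landscape.

    Values below grey_lo count as clearly acceptable ('white'),
    values above grey_hi as clearly unacceptable ('black'), and
    the band in between is the grey frontier where moderation
    calls are contested.
    """
    rng = random.Random(seed)
    return sum(grey_lo <= rng.random() <= grey_hi
               for _ in range(n_statements))

# A small community: ten people posting ~10 statements in a week.
small = grey_hits(100)

# A large community (scaled down for runtime): a million statements.
large = grey_hits(1_000_000)
```

With ~10 statements only a handful land near the line; with a million, roughly a hundred thousand do, and every one of them is a judgment call someone can scream about.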
And that metaphor implicitly includes the idea that we all agree on the colors. In reality, of course, we all wildly disagree, even if there are broad patterns within certain cultures that can be seen. [1]
The only two simple solutions to that problem, "allow everything" and "allow nothing" are respectively illegal (and at least arguably unethical), and pointless (what's a community that allows 0 communication internally? I suppose one of those crazy logic puzzles about hat colors and wardens shooting prisoners. But that's about it.).
This theory predicts that all communities at scale will have a set of people complaining about how it censors. This is more than just HN or Reddit, but anything, NYTimes, Usenet, a sufficiently large GitHub project, everything that doesn't literally allow everything. I'm not aware of any counter-example....
So, given that it's impossible to have a scaled community that doesn't have people accusing it of censorship, it doesn't seem all that interesting to abstractly accuse one of it. One would have to criticize it from a more specific point of view, like, is the stuff being removed significantly at odds with what the community it represents expects? Or is there perhaps an ethical standard that you'd like to specify that is more specific than merely "shouldn't 'censor'" that you'd like to discuss? That's a serious question, BTW; I am not merely defending HN as a status-quo defender. Of course, the ethical standard will itself be up for debate. So it goes.
[1]: Standard metaphor warning: Don't argue the metaphor. It's just a metaphor. Neither the landscape, the colors, nor the darts exist, so there's no way they could "really" have three colors or "really" be nasty fractals or "really" anything else. Argue about the point it's making.
Although Wikipedia has a cabal, they try to clearly delineate how they censor material. On HackerNews, we are almost entirely in the dark: they do whatever they want with no oversight. Given that, having a conversation about censorship on Facebook, which is much more of a free speech zone than here, is hypocritical. (Also: I mostly agree with your post)
On Facebook you can't quickly create an anonymous account to say whatever you want to your 500 friends because that account wouldn't be friends with them. You end up censoring yourself because it's friends, family, coworkers, and romantic interests on there reading your posts.
Please do not confuse censorship, which is governmental tampering with free speech, with private moderation of a free message board to whose terms of service you signed voluntarily. This trivializes real censorship issues, which are abundant.
My understanding is that Paul Graham personally censored this website for multiple hours per day for years, and that now there is someone else, or a small team of others, doing it.
There is a lot about HN's policies and the site's user-hostile features that I don't like, and occasionally complain about, but neither dang nor pg have kept the highly curated and moderated nature of Hacker News a secret. Like as not, this site just isn't supposed to be about free and open discourse, but civil, intellectually stimulating content and holding back the inevitable Eternal September effect, and many of the users here agree with that and vote in kind (and when they don't, dang stuffs the ballot.)
And to dang's credit, I follow his comments now and then and most of the times he criticizes people for their comments, they do seem to be obviously over the line.
Unmoderated forums have the potential to violate US law, so none last for long. Even 4chan, while anonymous, has never been completely unmoderated: it has to comply with US law on child pornography, among other things.
The internet has a public reputation for being open and free. This has never been the case. You've always had to pay for access, and you've always been in danger of having your content taken down at somebody else's discretion. It used to be your hosting company or your ISP doing it. Now it's Facebook, Twitter, and Hacker News.