During all the recent debates regarding Parler, much was said about how Parler needed effective moderation like Twitter's. This article is an important reminder that there is a lot more room for improvement in the moderation space, and even companies that "do it right" are still making glaring mistakes.
> We’ve reviewed the content, and didn’t find a violation of our policies, so no action will be taken at this time
This is totally unacceptable. I wonder if there is some kind of legal liability that could apply here? Taken at face value, this message seems to indicate that a Twitter representative reviewed content and chose not to remove it, which sounds a lot to me like knowingly and willfully distributing CP...
Maybe it's the headline that is misleading you, or it's a symptom of our times.
In any case: it should be blatantly obvious that the Twitter representative did not know or agree that the person in the video was a minor. The insinuation that it's Twitter policy to distribute child pornography is laughable. They have absolutely nothing to gain from it.
As to liability: Section 230 is about exactly that situation: trying to limit damaging material on your platform does not create any liability even if you fail. Because the alternative, where you either allow your platform to be flooded with swastikas and pornography or get sued for every single mistake you make, is unworkable.
> Because the alternative, where you either allow your platform to be flooded with swastikas and pornography or get sued for every single mistake you make, is unworkable.
The current model isn't workable either. Social media networks and their ad-driven models have hollowed out our democracies by sowing outrage and division at every opportunity. If one believes "The Social Dilemma", these networks have a dial for tuning public opinion that can even steer election outcomes (if you subscribe to the "Russia used bots to hack the 2016 election" theory, then you must agree).
Maybe we should rethink the role of social media networks. Perhaps they monopolize too much communication to trust them to curate which voices are amplified and which are suppressed. The discourse around this issue has been really strange, with the usual critics of corporate power rallying to defend social media giants and their rights and qualifications to curate such an enormous portion of our collective speech. Perhaps we should consider these networks to be more "dumb pipes" rather than "curators", and instead we should expect these networks to provide us with our own curation/moderation mechanisms. If there really is no workable social media model--that is, if they really can't deliver some net-positive social good (or at least some smaller net harm, like sex work, drugs/alcohol, tobacco, etc), then maybe we should regulate them out of existence?
This result could be non-ideological on the part of social media and advertisers. They may just be responding to the outrage mob. Advertisers are scared of any negative attention, but social media thrives on outrage. They can maximize their earnings by choosing one side to censor, and the other to amplify outrage. They end up choosing the side that generates the most ad views from the highest-spending audience, and often this is the most irrational in the “man bites dog” sort of way.
So... I was searching for the clip about this from the Howard Stern movie, and I realized that YouTube isn’t even trying to return relevant results anymore. They’re just dangling hate bait in every result. I gave up.
This honestly just reads like paranoia. Social media isn’t sowing anything. It’s the people on the platform and the recommendation algorithms behind the platform.
Social media is absolutely seeding these behaviors by leaving content discovery to engagement algorithms that push individuals further down a specific ideology without considering any ethical factors.
For example, I consider myself a centrist, and yet whenever I log into my YouTube account all I see are far-right recommendations. I don’t have a Facebook or Twitter account, but I remember they behaved the same with their content discovery.
Read this study which goes further into the debate.
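To make "engagement algorithms" concrete, here's a toy sketch in Python (all names and numbers are invented for illustration; this is not any platform's actual system):

    # Toy feed ranker: score posts purely by predicted engagement.
    def predicted_engagement(user_interests, post):
        # crude proxy: topical match times how strongly people react to the post
        match = sum(user_interests.get(topic, 0.0) for topic in post["topics"])
        return match * post["reaction_rate"]

    def rank_feed(user_interests, posts):
        # note what's missing: no term for accuracy, balance, or user wellbeing
        return sorted(posts, key=lambda p: predicted_engagement(user_interests, p), reverse=True)

    posts = [
        {"id": "outrage-bait", "topics": ["politics"], "reaction_rate": 0.9},
        {"id": "sober-analysis", "topics": ["politics"], "reaction_rate": 0.2},
    ]
    print(rank_feed({"politics": 1.0}, posts))  # outrage-bait ranks first

The objective never asks whether the top result is good for the user, only whether they'll react to it; that's the mechanism I'm describing.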
I understand what you're saying, in that platforms need to hone their algorithms to serve actually relevant and useful content to their users, but there is also the other side, where the consumers of content need to understand what they're ultimately consuming. I can't help but bring up the countless studies showing that violent video games do not make kids more violent. Can the same be said for content consumed on social media? At what point is content "extreme" and at what point is it just content? Who's to draw that distinction? There are very obvious examples of extreme content but there are, I'm assuming, many more subtle ones that are possibly impossible to police.
As someone who supports gay rights, women's rights, and freedom of speech, I always find it funny how I am suddenly alt-right because of the freedom-of-speech part of my approach to life.
I consider myself a centrist because I find that there is a very pernicious and unacceptable part of the left fermenting in the last decade or so.
The right in America has been pulled so far right that you're likely more right-wing than you think you are. Now you're attempting to use "left" as an insult. You get right-wing media suggested. "Center" in America is basically the right that wants to act like there aren't major social issues in the country. You might want to step back and reevaluate your views. What left-wing things do you agree with?
> was pointing out the tribalistic attitude the modern left takes towards individuals who won’t consider themselves leftist
Do you not see the irony of what you're accusing me of doing, given what you're saying? I'm not a self-described leftist (though that is a right-wing internet troll thing to say); I just have my opinions.
American politics have been pulled so far to the right that the center is pretty right-wing. I don't know how you can deny that. That's all I'm saying. If you do deny it, then sure, prove me wrong: explain how you're actually center.
> American politics have been pulled so far to the right that the center is pretty right-wing. I don't know how you can deny that.
80+ million of us Americans voted in record numbers for the left leaning candidate, our biggest media platforms promote left leaning message, right wing individuals have been censored, etc.
Maybe not everybody in the United States is left leaning, but I wouldn’t call this a politically right leaning country.
Also, the only irony I see is that this thread continues. You already implied I reply like a right-wing troll. Perhaps it’s better we both disengage, as this is going nowhere.
The right has gotten more extreme than the left has. If you think you're in the middle, you're really just right-wing, because the right has gotten insane. That's my point. You're free to prove me wrong whenever by explaining your thoughts on policy, though.
Your argument is based on a perception of America favoring right-extremist ideology. Meanwhile, for every popular extreme ideology on the right there is a counter-extreme ideology on the left.
We have people advocating for extreme capitalism and people advocating for a socialist regime. Opposing citizen militias such as the Proud Boys and Antifa. Individuals who want a wall around the country and people who believe in open borders. The examples don’t end there.
We have extremes with plenty of support on both ends, and unless you can bring data indicating a concentration of individuals on one side, you cannot infer that America is becoming far right, or assume that the center is skewed to the right.
That first image shows what I'm talking about. America's left is center; the Republicans are all the way on the extreme right. If you think you're an American centrist, you're on the right.
>Antifa isn't real, that is right wing media fear mongering
You better tell that to the lefties that label themselves antifa. What isn't real is a group called Antifa, but the anti-fascist (and anti-capitalist) movement, otherwise known as antifa, is real.
tayo42, I'm sorry to say this but I genuinely think that you're out of touch with reality. Maybe you ought to get out some more and actually interact with people.
It's the individual users who create the posts, but it's the social media companies using software to select posts to display to others. 'Sow' means to spread or disperse seeds, or when used figuratively, to spread around or propagate something in general. Automated recommendation systems seem to fit the word. They're not creating these seeds, but they sure are sowing them far and wide.
I understood the point. My point is that propaganda and its distribution channels have existed for as long as human speech has. A new distribution channel isn't the problem; the problem is a society that cannot think for itself and distinguish between what is real and what is not.
Blaming the platform is scapegoating the real problem.
Moderating the platform seems like a more tractable problem than changing what are effectively hard-wired aspects of human neurology. It’s not about blame, it’s about engineering a cost efficient solution.
I don’t think people are much worse at thinking for themselves these days, and to the extent that they are, it’s probably fallout from social media. I would like for people to have stronger independent thinking skills (just like I would like us to have infallible immune systems and no proclivity towards cancer and so on) but that’s all wishful thinking. Since we can’t do very much to make our society more immune to social media, we should change social media so it doesn’t wreak havoc on our society.
I suppose my point and probably the point of the other commenter above is that social media companies utilizing content recommendation systems are not merely distribution channels, by virtue of recommending content. Those recommendations go beyond distribution.
And to be clear, I am blaming corporations for creating these platforms, not blaming the platforms themselves.
I hear you. I agree that there's an incentive for platforms to serve content you want to see and will engage with, but is it Facebook's responsibility to stop you from becoming more extreme? How do you even define extreme? There are obvious answers, but I'm sure there are a lot of less obvious ones too that are impossible to police.
Yes, Facebook has a dial and they know that they can turn that dial to make us more extreme (a byproduct of generating engagement). Personal responsibility is a lovely thing, but humanity isn’t just going to become more personally responsible overnight, so if we’re going to save our society we have to look at the options available to us in reality and not those we wish we had (specifically an extra helping of personal responsibility). We do this all over—we don’t allow the sale of many harmful addictive substances even though one’s health, finances, etc. are one's own responsibility. We also regulate casinos and tobacco and alcohol. There’s certainly no reason why we can’t regulate social media.
You don't know they have a dial; everyone is taking one sensationalized (and overly dramatic) documentary and treating it as gospel. Portraying it as a "dial" also does a complete disservice to the actual technical problem involved.
The documentary was successful in underplaying the challenges of moderating/recommending at scale and, in my opinion, scapegoating social media companies as the source of the problem when they're really a symptom.
The main focal point of The Social Dilemma, Tristan Harris, as co-founder of the Center for Humane Technology, has an obvious agenda (not saying he's wrong). Take the documentary for what it is, but throwing the baby out with the bathwater is wrong when there are obvious benefits to social media.
Regulating social media is a fool's errand imo; it's a lose-lose. Either you let ideas flow, which allows bad actors/ideas to propagate, or you create gatekeepers and censors with ever-moving goalposts. I'm not against more regulation, but do you really think the United States Congress is capable of passing legislation to effectively toe that line? Doubtful.
It's not a literal dial--it's an analogy and yes, it's oversimplified (the complexity of the implementation has no bearing on this debate), and the documentary is merely a touchstone. Lots and lots has been written about the subject with many first-hand accounts. Moreover, as discussed elsewhere, these networks have so much power that they can unilaterally influence democratic elections--at least that's certainly the necessary implication if you believe that Russia was able to indirectly influence these curation algorithms to hack the 2016 election (if Russia could manipulate these algorithms indirectly, then how much more power must Jack and Mark have given their direct access?).
> Regulating social media is a fool's errand imo; it's a lose-lose. Either you let ideas flow, which allows bad actors/ideas to propagate, or you create gatekeepers and censors with ever-moving goalposts. I'm not against more regulation, but do you really think the United States Congress is capable of passing legislation to effectively toe that line? Doubtful.
This is just a generic argument against free speech. The obvious problem is that there's no way to ensure that our censors are going to be good actors, and in particular we know with some degree of certainty that Twitter, Facebook, etc are not. Congress (or whomever) doesn't have to toe that line at all--regulating these businesses out of existence is strictly a better option than allowing them to continue poisoning our society. No doubt they deliver some value, but (1) much of that value could be realized through other means (people can still organize on web fora like they did in the brief years prior to social media proper) and (2) they certainly don't deliver enough value to justify the rapid erosion of our social and political fabric. So the worst thing we can do is continue on with the status quo.
That said, I think it's entirely reasonable that we could be more surgical about regulation. There's no reason we can't keep some of the benefits of social media while doing away with the immense costs. For one, we can require social media companies to speak an open protocol such that anyone can compete--not just ad-based businesses with established large networks. We could require their curation algorithms to be made transparent. We could require that they behave as dumb pipes, but they may afford their users mechanisms to curate their own feeds. I'm sure there are many other solutions as well, but again, we oughtn't defer action until we find the best option because we know the status quo is strictly the worst option.
It’s hard to deny that social media is an entirely new way of disseminating content. There is an algorithm that determines what you see. It’s well known how that algorithm encourages echo chambers and the internet itself changes the way we think. It’s not just “people without critical thinking have always existed”.
No, getting content shoved into your face without human review is definitely a new problem. At least before recommendation algorithms, you were either looking up something you specifically sought out, or someone took on the publishing liability to recommend you something.
You're still abdicating personal responsibility. Why is it Facebook's responsibility to keep YOU from becoming more extreme? I'm just playing devil's advocate here because the easy position right now is to blame social media.
It’s the same as asking why the government should control abuse of meth and heroin. These drugs, along with social media, exploit certain aspects of how our brains work to make us addicted and alter our state of mind.
Yet we're seeing decriminalization of drugs, needle exchanges, etc that counter your argument. The world is becoming more liberal to drugs because criminalization has created a larger problem (black markets, impure drugs, etc).
I get that your point is to have some more regulation; however, the argument is much more nuanced than "Ban All Social Media", which is the point I'm trying to get across.
I actually agree with decriminalizing drugs, but I think they should be controlled like any other prescription, or like weed is currently controlled. Specifically, portion-controlled so that the drug does not destroy a person's life. I have experimented with meth and many other drugs. Mind you, if you haven't done meth and you try to compare it to some other drug like alcohol, then you are just ignorant. It's easy to get lost in meth, and a lot of people don't have the mental will (or maybe capacity) to get out of the drug, so I have to say that this type of thing does need to be controlled in some manner.
Who is arguing for "banning all social media" in this comment chain? You're attacking a strawman.
We need high regulation of recommended content, probably by moving recommendations out of the scope of Section 230. Pointing to a sector that is decriminalized but still extremely highly regulated kind of speaks for itself. It's not like I could go to a San Francisco street corner tomorrow and start selling pot like a paperboy.
The parent comment (the original one I responded to) said that if we can't find an equitable solution, we should "regulate them out of existence". That's what I was arguing against.
Moreover, at a certain point we can either hope and dream that humanity becomes endowed with super-human personal responsibility or we can accept that this isn’t going to happen and look at our available options.
The article is from the NY Post. This makes me question it right out of the gate. And I just find it hard to believe that Twitter would want to keep something like this up, since it is illegal content. There is no incentive for them to do so, and the NY Post isn't the publication to dive into this at all.
I can place a contrarian, a nitpicker, a bully, even an algorithm, as Twitter's content manager. Twitter users submit CP complaints. Then 99.99% of the responses are:
"After careful review of the content, we have not found that this material violates Twitter's policy of no CP on our site."
Truth is in the eye of the beholder.
Thus, if the beholder is responsible for the decision and it is wrong, should they not be held accountable in the court of law? Even if it means lots of lawsuits?
> In any case: it should be blatantly obvious that the Twitter representative did not know or agree that the person in the video was a minor.
The lawsuit alleges precisely that Twitter had ample evidence to know that he was a minor. How about we let the case play out instead of assuming innocence?
The only facts being reported in this article are that someone made claims in a lawsuit.
> This is totally unacceptable. I wonder if there is some kind of legal liability that could apply here?
Yes. Both civil liability (under the fairly recent sex-trafficking exceptions to Section 230 protection) and criminal liability (Section 230 has never applied to criminal liability) are possible for the kinds of things claimed in the lawsuit.
That civil liability is possible is, of course, almost certainly a factor in why the lawsuit was filed; while you can file a lawsuit where there is no available liability, it tends to be a waste of effort.
Of course, one would also do well to take claims made in a lawsuit with some skepticism. If those didn't often turn out to be untrue, we wouldn't need courts nearly as much as we do.
> This is a pointless ad hominem that is completely irrelevant to this case.
If you're trying to base your claims on appeals to authority, the very least you should do is verify what your authority defends and stands for, and what its track record is.
Brushing off holes in your appeal to authority with empty references to ad hominem does nothing to substantiate your claim.
What is the appeal to authority? That the court can easily verify the boy's age? I'm pretty comfortable with trusting the court on that matter. There is an appeals process if they drop the ball here. It works pretty well in simple matters like that.
If you mean something different by "appeal to authority", then you're going to have to spell it out because I don't see it. But in that case, I think you're probably misunderstanding my views.
Why would they waste their time in court bringing a lawsuit that would immediately get tossed? They'd hardly be able to conceal the age of a child from the court. The court will know who the child is, even though journalists will (of course!) not be told.
They would “waste their time” because they are here to claim that Twitter is distributing porn. They are not here to win the case. By the time the case is won or lost the public will have long forgotten about it.
Moral crusaders are only interested in the immediate attention the case will bring to their cause, along with the donation from people concerned about Twitter being a corrupt den of wickedness and evil.
And yet not at all surprising. These huge companies have a reputation for being faceless because they skimp on staffing their support teams and until the so-called "tech reckoning" have operated with virtual impunity.
Exactly, aside from the civil liability already being pursued in the reported lawsuit, there should also be criminal liability for people making those editorial judgements, as far up the management chain as it goes.
This is not some algorithmic failure that no human saw; it was specifically examined by a human, with managers setting policy, and deemed acceptable to distribute.
At that point, they are willfully distributing the material, and should be held accountable.
Yes, the result may be harsh. Ideally, there would be advance notice of the potential legal jeopardy, but ignorance of the law is never an excuse, and child porn distribution is widely known to be illegal. Standard procedure: pull in the workers and get them to flip on the managers, then repeat all the way up the chain as far as it goes.
It should now be obvious to anyone that internet discussions are either moderated/edited, or will descend into toxic cesspools when any of a variety of bad actors are allowed to run unchecked.
The hosts need to be responsible for their editing decisions, which are made at far larger scale than any newspaper or broadcaster.
> This is not some algorithmic failure that no human saw; it was specifically examined by a human, with managers setting policy, and deemed acceptable to distribute.
Or, you know, the claims in the lawsuit are false.
Do you think the NYPost fabricated the existence of the lawsuit? As far as I can tell, the lawsuit really does exist.
Given the nature of the lawsuit, concerning material that is illegal to view, I don't think any journalists, whether from a fishwrapper like the NYPost or a venerable institution like the NYTimes, would be able to verify the claims made by the lawsuit.
> Do you think the NYPost fabricated the existence of the lawsuit? As far as I can tell, the lawsuit really does exist.
Straight-up fabrication is not the only way to disinform, and in fact it's a pretty shitty way to do it, since it means your disinformation collapses at the slightest scrutiny. To make fabrication effective, the fabrication needs to be laundered to give it credibility (see https://www.youtube.com/watch?v=tR_6dibpDfo), which takes time and effort.
A more effective way to disinform is to pluck out a true story that's not representative and amplify or twist it. That will seem more credible to those who give it just a little scrutiny, even if it's just as wrong as a fabrication in some ways.
In this case, if the NY Post has a bone to pick with Twitter, it could trawl through lawsuits against it until it found one that made scandalous claims, then report those as hard facts.
Suppose the NYPost has an axe to grind and went looking for a lawsuit that makes twitter look bad... so what? The lawsuit exists as they claim. That wouldn't mean they're lying; it would only mean they have a different idea of what constitutes newsworthiness than you. That wouldn't make it disinformation. Virtually every sentence in the article contains some variation of "the suit alleges." That the suit alleges these things does seem to be hard fact. I see no disinformation here.
> Suppose the NYPost has an axe to grind and went looking for a lawsuit that makes twitter look bad... so what?
Because that would be more like propaganda than journalism.
> That wouldn't mean they're lying; it would only mean they have a different idea of what constitutes newsworthiness than you. That wouldn't make it disinformation.
Lying is only one of the many ways you can mislead someone. My point was that lying is an inferior way to deceive compared to assembling true facts in a deceptive way. So the fact that the NY Post themselves didn't outright lie in this story does little to refute the idea that they're a disreputable paper.
There is no clear demarcation line between journalism and propaganda. Every journalist has biases, every last one, and every journalist aspires to report on matters they care about (e.g. "want to grind an axe on", which is just a way of saying the same thing but with a derogatory connotation.)
What matters is whether their reporting is factual or not. In this case, there seems to be little doubt that the reporting is factual; the reported lawsuit does exist and does make the reported claims.
> does little to refute the idea that they're a disreputable paper.
> What matters is whether their reporting is factual or not. In this case, there seems to be little doubt that the reporting is factual; the reported lawsuit does exist and does make the reported claims.
That matters, but it isn't the only thing that matters. Again: the fact that the NY Post themselves didn't outright lie in this story does little to refute the idea that they're a disreputable paper. The way they choose facts to report and how they arrange them can be the real source of disrepute.
I am not disputing that the NYPost is disreputable, do I need to say this more? I am saying the deservedly poor reputation of the NYPost is irrelevant in this case because the reported facts are easy to independently verify. The reputation of a newspaper counts when their reputation is what we must rely on, which is not the case here.
The NYPost's source is a public filing linked elsewhere in this discussion. If you think the NYPost has lied, point out the lie. If you think they left something important out, point it out. I'm guessing you cannot do either.
> I am saying the deservedly poor reputation of the NYPost is irrelevant in this case because the reported facts are easy to independently verify. The reputation of a newspaper counts when their reputation is what we must rely on, which is not the case here.
That's where we disagree. You're saying the only job of a paper is to report true facts (or at least try its best). I'm saying its job is to report true facts and true impressions. True facts can be used to create false impressions, and the impression created often matters the most, especially when you know your readers aren't going to read your story like a careful lawyer.
Whether any impressions somebody walks away from this article with are true or false is likely a matter of subjective opinion. Maybe they think Twitter is doing enough to moderate its platform already, or maybe not enough. None of this has bearing on whether or not the article is misinformation or disinformation. The article is factual, reporting facts (correct me if I'm wrong about that) concerning a real lawsuit.
And you don't have to be a lawyer to understand the meaning of 'the lawsuit alleges' repeated a dozen times. Or is your position that reporting allegations against tech companies is always misinformation because you think only lawyers know what allegations are?
It is a right leaning tabloid owned by News Corp. This is the paper that printed the Hunter Biden laptop story that was sourced from Rudy Giuliani. After the US Capitol riot, the editor told staff to stop writing articles whose source material is from CNN, MSNBC, The Washington Post, and The New York Times because Trump considered those outlets to be fake news.
The victim actually took steps to have content removed and Twitter failed to do so (initially). I wonder if payment vendors will give Twitter the "Pornhub treatment" and de-platform their access to financial services.
> Finally on Jan. 28, Twitter replied to Doe and said they wouldn’t be taking down the material, which had already racked up over 167,000 views and 2,223 retweets, the suit states.
> “Thanks for reaching out. We’ve reviewed the content, and didn’t find a violation of our policies, so no action will be taken at this time,” the response reads, according to the lawsuit.
The whole deplatforming thing is mostly virtue-signalling.
PornHub is an easy target and the people & companies involved in its deplatforming do not need it in any way (at least not in a way they would publicly admit - I'm sure some of them do consume its content) so it's an easy call to make and can gather significant support from certain conservative and religious circles.
Twitter on the other hand is near-essential to most brands and media outlets, so the virtue-signalling benefit from deplatforming it is minuscule compared to the loss (the people most vocal in support of PornHub's deplatforming would be the first ones out of a job), not to mention that virtue-signalling only works if you have a place to brag about your action - if you deplatform Twitter, where are you going to brag about it?
It can appear that way, sure. But it usually means broadcasting your ‘goodness’ visibly, to ingratiate yourself with certain people. It can also mean seeking to establish one’s moral superiority, creating a power imbalance for offensive or defensive purposes.
Not in my experience. That's why the distinction is made between "virtue signalling" and "actually being virtuous".
It's more akin to showing up to a date wearing fancy clothes and driving an expensive, but borrowed car. Giving the symbols of wealth while possessing none, to fool an audience.
Many of those I've seen signalling their virtue the loudest possess the least.
The reason for the growth and awareness of the phenomenon? In prior times, one had to perform virtuous actions to appear virtuous. Now, it costs nothing, takes no effort, and carries no risk; it's as simple as typing 140 characters into a phone screen.
Think of what Jeffrey Epstein was obtaining from MIT president Rafael Reif in the form of a personally signed thank-you note in exchange for donations to the MIT Media Lab. Those donations started back in 2002 with co-founder and accused co-pedophile Marvin Minsky, and they continued after Epstein's conviction on Florida state child rape charges in 2008, at a time when he was being investigated by the FBI for violations of the Mann Act during "Operation Leap Year."
Epstein's donations to science were a form of "virtue signaling" designed to help him evade prosecution on federal racketeering charges for the sex-trafficking of minors.
The guy had never earned as much as an undergraduate degree in pursuit of his two loves in life, namely "science and pussy," as he once put it to a professor whose work he was funding at the time.
I always remember that biblical passage about praying in private vs public. Such concerns about virtue signalling have been around for a long, long time.
I hope this doesn't result in Twitter banning all adult content which seems to be how all platforms handle pedophilia lawsuits/legal pressure these days.
Unfortunately, mainstream media won't pick up on it, nor will the story gain traction, because it is published in the NY Post (if it is true).
Edit: to the folks downvoting me, please show me some major outlets (NYT, BBC, etc.) reporting on this. As far as we are concerned, this kind of stuff should be news.
> Edit: to the folks downvoting me, please show me some major outlets (NYT, BBC, etc.) reporting on this. As far as we are concerned, this kind of stuff should be news.
The story is only a few hours old. "Mainstream" media often requires a level of verification that may take a few more days before you see their stories run.
I get it, it's the Post. But there are a lot of things going against the major outlets in this debate: a.) It's a federal case, so except for identities, most of the damning information against Twitter is out there. b.) Major outlets such as the NYT like to brag about the size of their newsrooms, but don't seem to have the resources to cover stories of CP on Twitter and the lackadaisical attitude of its censors. c.) Major outlets should not take days to investigate something like this and write it up, especially if all the source material is out there. Otherwise, openly biased outlets with an axe to grind, such as the Post, will pick the story up, obviously reducing the credibility of the stories they output.
Real media don't make decisions on whether to publish based on whether something also appears in the NY Post. That's a ludicrous assertion.
What's actually happening: Murdoch rags like the Post have zero editorial standards, whereas others are attempting to actually vet this story before running it.
There's more than one explanation as to why "mainstream media" might not run a story. The NY Post isn't exactly known as a bastion of credible journalism. That's why they ran the Hunter Biden laptop story when everyone else passed because they couldn't verify it.
I know that it's the Post. My point is that if the Post was able to cover it, why couldn't a bigger outlet such as NYT or the BBC spare the resources to cover it too.
Not to mention, the Hunter Biden story was mostly speculative without proof. This one is a federal case with all details out in the open.
I agree with the overall sentiment of your post, but using the Hunter Biden laptop story as an example is an unfortunate choice. There's a popular view that politics had a lot to do with who picked up the story and who didn't, as opposed to just the strength of the case itself. I would recommend picking a less controversial example that highlights the shortcomings of NY Post's journalism, such as their reporting on the Boston Bombers [1].
Or, it could be a garbage story manufactured by liars, implying that of all the outlets not running it, there's a chance they had looked into it and decided it was a garbage story manufactured by liars.
Sometimes a thing that fails, fails because it's bad or disingenuous or both. It's possible this is a garbage suit that won't go anywhere because it's a garbage suit.
I remember a story about someone getting arrested for "child porn" going through an airport when some TSA agent found a video of a man's kids taking a bath on his phone. I'm not sure what happened in that case, but it has made me a) hesitant to take similar video and photos of my kids (even though bathtime is really fun and I want memories! My parents have plenty of bathtime pics of me as a kid, and I'm glad!), and b) a lot more skeptical whenever I hear someone claim "child porn". Similar to how "registered sex offender" has lost a great deal of weight since apparently cities around the country apply this label to drunk people pissing on buildings.
This doesn't invalidate ALL such labels, it only means that the label itself is NOT ENOUGH for me to assume I know what happened. In short, "the system" has lost its credibility with me. (The same applies to other labels like "convicted felon" or "ever arrested". The ease with which these labels are applied, especially as the result of a corrupt plea bargain culture, and a society where all LEOs have a "martial law bubble" around them, ruins the effect for me. And I wish it ruined the effect for more people.)
It seems pretty clear you did not read the article, so here:
The federal suit, filed Wednesday by the victim and his mother in the Northern District of California, alleges Twitter made money off the clips, which showed a 13-year-old engaged in sex acts and are a form of child sexual abuse material, or child porn, the suit states.
The teen — who is now 17 and lives in Florida — is identified only as John Doe and was between 13 and 14 years old when sex traffickers, posing as a 16-year-old female classmate, started chatting with him on Snapchat, the suit alleges.
Doe and the traffickers allegedly exchanged nude photos before the conversation turned to blackmail: If the teen didn’t share more sexually graphic photos and videos, the explicit material he’d already sent would be shared with his “parents, coach, pastor” and others, the suit states.
Doe, acting under duress, initially complied and sent videos of himself performing sex acts and was also told to include another child in his videos, which he did, the suit claims.
Eventually, Doe blocked the traffickers and they stopped harassing him, but at some point in 2019, the videos surfaced on Twitter under two accounts that were known to share child sexual abuse material, court papers allege.
Yes, it sounds like he was forced to make actual child porn...because he shared nudes. And THAT was so unacceptable to society that he felt forced to pay the blackmail.
Oh yeah, I almost forgot about the other stuff I hear about, like teenagers getting charged with child porn for sending dick pics, or rape charges because two 16-year-olds had (consenting) sex.
All of this is insane, and if not for the systemic insanity around sex in America, this kid would not have been coercable.
This is a ridiculous argument; blackmailers can and routinely do blackmail people over matters that are legal but embarrassing. This could have happened in any country. Similar things do occur in every country.
What counts as leverage depends on the time and place. Not too long ago you might have blackmailed someone for smoking weed back in the day. Or for being born out of wedlock. Or for being gay (well, that one is still potent in a lot of places). Kids being stupid is normal, and something tells me that if this happened in, e.g., Amsterdam, the kid would have gone straight to his parents and the blackmailer would have gone to jail.
I've had this happen to me on Facebook. Found a disgusting video involving children, reported it, got a response that there's nothing wrong with it. It's like the people reviewing this stuff have only a second to decide if the content should be deleted.
OTOH, I've reported posts and had them take them down within the hour...
The issue seems to be that some posts are reviewed by Facebook staff in America, and others by Facebook staff in India. The American staff acts quickly and is very good about taking violations down. The Indian staff seems to be very laissez faire, due to the different cultural standards for content.
No, but it's pretty easy to tell based on the response they provide. In a nutshell, they write the same way as my cousins (back in India) do, rather than the way Americans do.
If a person sees it at all. Twitter and Facebook are far too large to moderate effectively with humans. Most things are caught by automated image recognition, but that doesn't work for images not yet in the database.
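For the curious, the database matching works roughly like this; a minimal sketch using the open-source imagehash library (the hash set is a hypothetical placeholder, and production systems use proprietary hashes like Microsoft's PhotoDNA rather than this):

    # Requires: pip install pillow imagehash
    from PIL import Image
    import imagehash

    # Hashes of previously identified abuse imagery, as supplied by bodies
    # like NCMEC or the IWF. Hypothetical placeholder: empty set.
    KNOWN_HASHES = set()

    def matches_known_image(path, max_distance=5):
        # A perceptual hash survives re-encoding, resizing, and minor edits,
        # unlike a cryptographic hash.
        h = imagehash.phash(Image.open(path))
        # A small Hamming distance to any known hash counts as a match.
        return any((h - known) <= max_distance for known in KNOWN_HASHES)

A brand-new image or video has no entry in the database, so nothing matches; that's exactly the gap described above.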
I'm sure a lot gets caught, but the amount that slips through is just unacceptable. Fortunately I haven't come across the kind of content you describe, but I reported ISIS propaganda on Twitter and was told it didn't break the rules. Shortly after complaining publicly and tagging @jack the tweet was removed. Could be a coincidence, I'll never know.
And yet, in the ancient times when I had a Facebook and Instagram profile for my photography hobby[1], I would wake up every day to new removed images for showing "too much skin."
[1] Nude art photography, with private parts covered with white stripes or simply not being shown at all.
I think the odds of US moderators being any sort of majority, or anything close to that, are slim to none. The savings are astronomical, and it's the same reason customer support of all kinds, even IT help, is outsourced. US mods are most likely a minority and a face.
I would assume content moderation makes Twitter absolutely nothing and actually just bleeds tens of millions of dollars annually.
They'd do their best to outsource the work somewhere that's super cheap (India, etc.), or they'd bring in international talent on H-1B visas en masse, since I believe a not-insignificant amount of their wages is subsidized by the government.
Yeah, it’s not hard to find. Since you literally made it up and just assumed you’re correct with no basis, I will leave the research as an exercise you can pursue. FB has thousands of people in the US doing it.
India has a very low cost of living. Getting paid an average wage for the country you live in is not “slave” labor. If you use that term freely for things it doesn’t apply to, people stop taking it as seriously.
NYPost is just about the worst “mainstream” publication available and anything they’ve ever done devalues the technologies which transmit it and degrades the minds that consume it. Unfortunate to see them linked & highly ranked on HN.
The problem with the NYP here is that their coverage is lazy and inept. The few details I added were the minimum that belonged in this story - and that was far from everything that was needed.
I'm not calling out the NYP here. Nearly every US news org is equally inept and lazy, no matter their ideology (real or perceived).
I acknowledge that not all orgs are like that, and often-disappointing news outlets sometimes perform absolutely brilliant journalism (e.g. the Panama Papers).
But most basic reporting is just put out there, without even minimal vetting of details. We've long deserved better.
I have to say, I don't quite understand why services find it so difficult to remove this material. This has been a long-standing problem with Twitter. They've improved a bit, but there are still problems.
Young people approach these platforms and say "here are some images of child sexual abuse that you're hosting. I know they're CSE because I'm the subject of the photos and I was <18 at the time". The platforms sometimes ignore them. Children are then stuck, not knowing how to get this "child porn"[1] taken down.
We need to help young people understand what routes are then available to them.
1) Write to the legal department. Twitter does not make this easy. Their "contact us" form doesn't have a section for "I want to report CSE material". https://help.twitter.com/en/contact-us But the post addresses are:
Twitter, Inc.
c/o Trust & Safety - Legal Policy
1355 Market Street, Suite 900
San Francisco, CA 94103
Twitter International Company
c/o Trust & Safety - Legal Policy
One Cumberland Place
Fenian Street
Dublin 2
D02 AX07
Ireland
3) Contact IWF or CEOP. IWF will generate hashes of the images and Twitter will, eventually, use those hashes to remove the images. This will take some time. https://www.iwf.org.uk/
[1] I'm only using that term because it's the Google search term.
It's a story that's so completely outside of my personal experience that I cannot evaluate it.
In 14 years of holding a Twitter account, I've never seen anything even close to such things on Twitter, and wouldn't use the platform if I did. If I were to see child pornography on Twitter, I would immediately report it to law enforcement and close my account.
i've been on twitter for the same time and last year was the first time i saw people talking about "map" which apparently means "minor attracted person"[0].
while searching around i found that there's a LOT of twitter accounts basically spewing pedophilia-related/adjacent stuff. i reported every single one of them and... nothing was done. not a single one was removed.
I have seen these MAP people, but I don't consider that child pornography.
The lawsuit in the article asserts that Twitter refused to remove a video of two extorted 13-year-olds being sexual together. That seems unambiguously child pornography.
according to one of them, twitter changed their TOS to allow people to discuss their attraction to minors on their platform[0].
people always talk about "dog whistles" -- imho this is a big one, at least for me. allowing pedophiles to discuss their attraction to minors openly on your platform is completely absurd.
There's one group of people that is universally tarred and feathered in the United States and most of the world. We never hear from them, because they can't identify themselves without putting their livelihoods and reputations at risk. That group is pedophiles. It turns out lots of them desperately want help, but because it's so hard to talk about their situation it's almost impossible for them to find it. Reporter Luke Malone spent a year and a half talking to people in this situation, and he has this story about one of them.
I remember it being a pretty remarkable episode, and kind of heartbreaking. More insightful and constructive than the usual tone of moral outrage that the subject is treated with.
From reading your article, it seems that Twitter's policy changed to allow discussion of the subject with the caveat that pedophilia could not be promoted or glorified. That seems pretty reasonable, and isn't a dog-whistle for anything.
I remember that TAL episode too. It must be awful to be one of these people. I applaud those who endure without ever molesting children and who stay away from them. I hope effective treatment becomes available soon.
I agree that having a platform to talk about it is probably a social good, again with the caveat that pedophilia isn't glorified or promoted. I think of it like a mental illness and not a moral failing by itself. Any moral failure comes from acting upon those urges
Start reading a single (non-porn-related) message from one of these creeps and your Twitter timeline will be inundated with illegal porn content in no time. Twitter is absolutely rife with that horrible stuff, and Twitter certainly does the minimum to remove any of it, as demonstrated by this article.
Why even give the benefit of the doubt? Someone reports CP, remove it, period. Of course, Twitter makes money off it...
People claim Twitter can't possibly moderate content at scale, except that Twitter makes money at that same scale. Social media can't have it both ways, especially when it comes to CP.
"People claim Twitter can't possibly moderate content at scale, except that Twitter makes money at that same scale."
I'm sorry, but is that supposed to be a logical argument? Because it doesn't actually make any sense. Twitter is a platform that allows pretty much anyone with an internet connection to post content. There were, on average, 500 million tweets posted per day last year.
So on one side you have the set of potential content creators, churning out half a billion tweets per day, and that number will almost certainly continue to steadily increase. So, as a company with a set amount of income, and who is beholden to its shareholders, what's your plan to moderate 500 million tweets per day while still turning a reasonable profit?
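For a rough sense of what that implies, here's a back-of-envelope calculation (every rate below is an assumption, not a reported figure):

    tweets_per_day = 500_000_000
    report_rate = 0.001                # assume 0.1% of tweets get reported
    reviews_per_mod_day = 400          # assume ~1 decision/minute over a ~7h shift

    reports = tweets_per_day * report_rate        # 500,000 reports per day
    mods_needed = reports / reviews_per_mod_day   # 1,250 full-time reviewers
    print(int(mods_needed))

And that's before appeals, dozens of languages, 24/7 coverage, and reviewer burnout, so the real staffing need would be some multiple of that.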
> I'm sorry, but is that supposed to be a logical argument?
Yes, because these social media companies can't have it both ways; I already addressed that. If you make money at scale, well, you are responsible for moderation at scale, period.
Your argument just excuses Twitter's bad behaviour when it comes to moderating illegal content.
And yet the App Store hasn’t banned Twitter as they did Parler. If the media started reporting about pedophilic content on Twitter and that Twitter’s TOS explicitly allowed pedophiles to discuss their attractions, would the tech gatekeepers continue to allow Twitter? Because this stuff isn’t a secret, but Twitter hasn’t been banned which makes it pretty clear that the Parler and related bans were politically motivated rather than protecting people from harmful content.
But we let Twitter get away with these things because the Blue Checks are mostly leftist or hard-left politically and they’d lose their minds if Twitter were banned from app stores.
That's because twitter actually has content moderation policies in place that they do their best to apply. They're obviously not perfect, but again, the whole reason the app store banned parler was that they had no workable moderation plan in place. Twitter does.
I'm sorry, are you claiming people posting porn on Twitter are being 'bullied' by 'malicious reporters'? When Twitter themselves can't even trace the origin of the pornographic material posted on their platform, which they are legally required to do since it's not covered by safe harbor laws in the first place? Who the hell are you kidding?
Bullying by false reporting is a common event. Sex workers are a common target for bullying. It stands to reason that bullying by claiming CP is probably not uncommon.
> Bullying by false reporting is a common event. Sex workers are a common target for bullying. It stands to reason that bullying by claiming CP is probably not uncommon.
Completely irrelevant concern. Kids' safety is more important than any concerns about 'sex workers' publishing pornographic content on Twitter. Avoiding the bullying of kids by removing revenge porn published on social media is more important than your concerns about 'sex workers' getting their porn removed.
Twitter is legally required to be able to trace the source of anything pornographic uploaded on their platform in the first place, as porn is not protected by safe harbor laws or Section 230. A sex worker didn't post that child porn video featuring the young boy Twitter refused to remove.
There are plenty of other avenues for 'sex workers' to post pornographic content on the internet; removing it from Twitter doesn't make anyone a 'victim', it protects children.
That's normal, you only find those things if you look for them. There are lots of profiles with hundreds of thousands of followers full of porn, or football, or welding memes. But you never see them. Yet, there they are.
Moderation seems to be a very slippery slope. IMHO there's no way to truly win; policies eventually trend toward either absolute free speech or blatant censorship. This is the critical flaw in social media, as either trend is problematic. "The only way to win the game is not to play."
My wife has done some work with child exploitation organizations around this exact problem. Sadly, this is not a surprise at all. I’d venture that maybe 5% of the posts they report are removed, and the offending accounts are essentially never punished.
To make matters worse, given that pedophilia and child porn is now tied up with QAnon, some of her peers have started to have their accounts banned when they report this stuff, as apparently they’re being caught up in an anti-conspiracy filter.
Folks, let's assume that the Twitter mod saw this and did not classify it as child porn. How can a twitter mod decide if it's child porn (as opposed to just 'porn')?
I am not sure there is a proper solution for this. How can they verify the age of an unknown person?
And consider the volume of false claims Twitter receives each day.
If everyone reported tweets that were CSAM just to take down a random post they disagree with, I'm sure moderators would make the occasional false negative.
This is noted in the lawsuit: the CSAM reporting functionality isn't available in app, but requires navigating to a separate webpage.
The lawsuit claims that shows negligence on the part of Twitter for how they handle these reports, but I wonder if this shows Twitter takes it seriously: brigading and mass reporting happens constantly on the internet, so pushing that reporting functionality off the application increases the friction of false reporting something especially sensitive.
They could just remove it if they can't tell, same way a bouncer will throw you out of a bar if you don't have ID.
I don't know why people are giving social media companies a pass on what goes on just because their job is hard. Nobody is holding a gun to their head and telling them that they need to run an unmanageable platform, they wanted this many users.
Maybe they do remove it if they can't tell. We don't know. Maybe they contacted the source of the video and they confirmed the age of the actors. Then what?
I didn't even know that Twitter had porn at all. It seems like it might be in their best interest to remove it altogether. There are sites that specialize in it, that I'd have to imagine are better sources for most people.
We do know, at least if we take the accuser's word as true:
> A support agent followed up and asked for a copy of Doe’s ID so they could prove it was him and after the teen complied, there was no response for a week, the family claims. (...) Finally on Jan. 28, Twitter replied to Doe and said they wouldn’t be taking down the material, which had already racked up over 167,000 views and 2,223 retweets, the suit states. “Thanks for reaching out. We’ve reviewed the content, and didn’t find a violation of our policies, so no action will be taken at this time,” the response reads, according to the lawsuit.
We still don't know what Twitter knows. It's not even clear if Twitter gave a more comprehensive reason.
That said, if we assume what the accuser said and implied is true, then Twitter was criminally negligent IMO (IANAL). But I guess we'll find out in court.
The lawsuit digs into the details of the timeline - from initial report by the plaintiff to removal of the content took nine days (with the help of law enforcement) - which makes me wonder how much merit there is to the suit, Twitter’s fumble notwithstanding.
I've noticed that Twitter is very arbitrary in terms of what they moderate and don't moderate. I've seen accounts for antifa groups or other groups engaged in criminality and violence several times, and no action has been taken on my reports. Invariably, the reports that aren't acted upon are ones involving groups that align with left-leaning political sentiments. This type of selective enforcement of rules is extremely unjust, and I am not at all surprised to see this inconsistent enforcement in this instance either.
This is critical to Twitter's crappy stance. They want to act like a neutral party to avoid liability (like common carrier status of US telco), yet they also want to censor what and who they feel like (via their ToS and being a private company). Their stance is inconsistent and this type of case may lead to resolution of that. Unfortunately I predict they will just add a few illegal things to their ToS as being twitter offenses.
Wow, I don't mean to issue an ad hominem attack... but I do not consider anything from the NY Post legitimate. I'd even venture to say the NY Post is a tabloid owned by News Corporation, and I'm surprised that this is on HackerNews.
The NY Post has a well known conservative bias, but that doesn't make their reporting illegitimate. You can disagree with their opinion pieces, but the factual reporting (in this case - that there's a lawsuit against Twitter) can be objectively checked.
Also, as a NYC'er, my take is that a lot of their opinions are quite accurate.
A better submission would have been to the Business Insider story on this that BI published yesterday [1]. BI has a much better reputation than the NY Post, and BI had the story a day before NY Post.
Saying you don't consider ANYTHING from the NY Post to be legitimate is admitting you have a personal bias against a news site because they lean one way politically. Why not read the story, see the evidence, and decide for yourself? If you only go to some place like Twitter that you find credible, then you won't ever see this story.
A lot of people are talking about the Hunter Biden laptop story giving them a bad rep, but we now know that he is in fact under investigation for tax evasion, and it could be in part due to emails found on that computer (at least the ones that were published discussed income from sources that were likely not reported). I'm not saying the case is tied to that laptop, since we don't know, but it could be.
Maybe it's tongue-in-cheek, but in all honesty I find it extremely hypocritical that the Ayatollah of Iran, disinformation news organizations (especially those based in China), and now CP are fine on Twitter, but god forbid some people on the right wing use the "#notmypresident" or "#learntocode" hashtags - both of which were extensively used by the left in 2016 without any repercussions whatsoever.
I don't mind Twitter having its policies (in fact, I support that) - but selective enforcement of said policies is the issue.
The Hunter Biden laptop story has been confirmed by witnesses and by the cryptographic signatures on the emails, while not a single shred of evidence has been found that it was a "plant". This should give the NY Post a huge boost in credibility over the media outlets that falsely claimed it was a plant. That would (sadly) make the NY Post one of the more reputable media outlets we have.
I guess in 2021, HNers think the NY Post is a legitimate media outlet straining for objectivity, rather than the tabloid rag that they are. And I guess we'll all have a highly partisan discussion over it despite the Post obviously having a huge axe to grind against Twitter, further calling into question their "reporting".
Even as someone who's glad Biden is now president, it's a bit unnerving to live in a time where a service bans the sitting United States president without batting an eye but fights legal battles to defend the distribution of child pornography on its "open platform".
I guess people posting spicy takes on government happens to be more of an active risk to society than literal child predators?
People are banned from HN all the time for being rude. Does the reason even matter though? Twitter can decide who uses its services and who doesn't as long as it doesn't violate existing laws.
You shouldn't compare a small and niche community like HN to a website like Twitter.
Twitter is the web equivalent of a public square, where everyone gets an opportunity at a speakers' corner; HN is the web equivalent of a clubhouse, or something similarly private. There are laws all around the world governing public spaces like town squares, where people are guaranteed the right to speak. Laws like these need to be adapted to fit the internet as well, asap IMO.
You might object that the town-square analogy is incorrect - that the HTTPS protocol is the real town square, and Twitter is really just another clubhouse (except a very big one) - but I disagree with that too, because of the nature of social media and platform monopoly. There will only ever really be one "youtube", one "facebook", and one "twitter", certainly now that these platforms have grown to cover the entire globe, let alone individual nations. This is because _the power of a social media platform lies in it being social_, i.e., in being the place where you find other people.
Not to sound even more "defeatist", but platforms that actually pose a threat to the established social media channels (like Parler did) get a very special kind of treatment from the "FAANG cartel", as you might have noticed.
The solution is political change, IMO, regardless of all the problems that come with policing a multinational website like Twitter.
I think the argument can be made that MeWe and Substack actually pose more of a threat to the established order because they are mainstream.
Parler isn't competing with Twitter or Facebook because the average person doesn't want to see a feed full of white supremacists discussing conspiracy theories.
It's concentrated on Parler. Are there any liberal or moderate groups using their service?
Parler's only redeeming quality is that it had all of the footage from the attack on the Capitol. It would have made a good honeypot but now it looks to be weaponized as a Russian disinformation project.
I don't really think that's a fair description of the story here. Twitter hasn't yet engaged in any legal battle; this lawsuit was filed just yesterday and they haven't responded.
They didn't ban Trump "without batting an eye" - they dickered about it for four entire years while he trampled all over their AUP, and they didn't bother banning him until he literally tried to incite a civil war. Even then, it's clear that the only reason they banned him is that it didn't work. If somehow Trump's ridiculous little revolt had succeeded in installing him as president again, Twitter would have kept him too.
Different take: they only banned him because he was headed out the door. Had Trump been re-elected, we likely would've seen 4 more years of dithering so as not to piss off a man who could make their lives very stressful.
Did you read the tweets they used as justification?
A man simply saying "I'm not going to this event" apparently lets them become mind-readers and clairvoyants, able not only to divine what he REALLY meant but also how others will interpret it in the future. Even Nostradamus didn't possess such foresight.
Yet actual, obvious child sexual exploitation is seemingly a low priority for them.
It's just interesting, their priorities. But not exactly a mystery.
The regulation must be forward-looking. Trying to respond after a social media company has harmed the public, say by hosting illegal content, is not good enough. Social media companies cannot be allowed to say "Oh, sorry Congress, we made a mistake. We have now fixed it and it won't happen again."
It might be that we need an agency like the FCC to regulate the internal operations of these companies and bring about true shared decision making.
Attempts to exclude regulatory authorities from meetings should result in criminal charges and prison time for repeat offenders.
This all sounds reasonable to me. It seems like the victim's first recourse should be through the law, not Twitter.
>but the tech giant failed to do anything about it until a federal law enforcement officer got involved, the suit states.
>A support agent followed up and asked for a copy of Doe’s ID so they could prove it was him and after the teen complied, there was no response for a week, the family claims.
>“Only after this take-down demand from a federal agent did Twitter suspend the user accounts that were distributing the CSAM and report the CSAM to the National Center on Missing and Exploited Children,” states the suit, filed by the National Center on Sexual Exploitation and two law firms.
EDIT: Here is the timeline as far as I can tell.
December 25, 2019: John Doe becomes aware of the content on Twitter.
January 2020: John Doe and family report the content for breaking Twitter policies.
January 28, 2020: Twitter decides the content doesn't break its policies.
January 30, 2020: Law enforcement contacts Twitter to remove the content. Content is removed.
I am confused. What part of "We’ve reviewed the content, and didn’t find a violation of our policies, so no action will be taken at this time" seems reasonable? Personally, this seems like the most important kind of content moderation, and it is absurd that Twitter waited to remove this content until they were contacted by law enforcement.
Certainly the best thing might have been a "both/and" approach, where the victim contacts both Twitter and law enforcement without waiting to hear back from either. But contacting Twitter directly should be the fastest way to get cp removed from their platform....
"Thanks for reaching out. We’ve reviewed the content, and didn’t find a violation of our policies, so no action will be taken at this time. If you believe there’s a potential copyright infringement, please start a new report. If the content is hosted on a third-party website, you’ll need to contact that website’s support team to report it. Your safety is the most important thing, and if you believe you are in danger, we encourage you to contact your local authorities."
If it was me, I’d have probably gone straight to Twitter too. My thought would be that they’re hosting the content, so they’d probably be able to get it removed the fastest. It also wouldn’t occur to me that an agent wouldn’t find CP to violate their terms. I’m surprised that Twitter didn’t take it down as a CYA precaution while they verified it to be CP or not.
Now then, I expect swift retribution: other companies will do the "right" thing and banish Twitter from our reach.
Remove them from Google searches, de-platform them from the App Store and cut their access to hosting services.
They have failed to moderate their platform - in fact, in this case it seems they did the contrary - so it is only just that they get their share of punishment.
Parler didn't get TOS'd because they had things fall through their moderation process; they got TOS'd because they did not have a workable moderation process in place, period. Twitter obviously does make mistakes - both AIs and humans are fallible - but they have about as good a moderation process in place as it's possible to have with the amount of message traffic they process.
Also, I'd like to point out that the only evidence WE'VE seen of this so far are a few claims made in a lawsuit.
(edited to fix spelling of "Parler")