I'm sure public support for removing Facebook from app stores wouldn't be too different from support for removing Parler. If we are principled people, and FB is failing to moderate, then any reasonable person who supported Parler's removal would support Facebook's. What would be different between the two decisions is the sheer magnitude of money on the line in Facebook's case: in capital markets, in employment, and so on.
A coworker has a lot of conservative friends. All of them were banned from FB until the 23rd. Even her daughter, who has never written any posts supporting DJT but has liked a few, was banned for the same time frame.
I've read absolutely nothing about this in the news. She even showed me a screenshot one of her friends had sent her. It said:
"Your account is restricted right now.
You're temporarily restricted from doing things like posting or commenting on groups, Pages or events until January 23 at 3:19 AM.
Dismiss"
I take this to mean they are afraid of a 2nd round of problems.
If you know someone like that, ask them what they shared beforehand, and ask yourself whether they're leaving out some of their posting history to build sympathy. I also have conservative friends, none of whom had any bans or restrictions, but they also weren't suggesting violence as a response to losing an election or sharing QAnon dreck.
I don't have a news report, but my wife, who has an Instagram account she uses to share pictures of cakes and cookies she has baked and who also likes and follows right-wing politicos, got a similar message silencing her account around the election.
Parler explicitly advertises itself as a place of little or no moderation. Facebook has something like 15,000 professional content moderators whose job is so terrible that they literally get PTSD, but Facebook still fails to catch a lot of it. Being not great at moderation and actively advertising yourself as an unmoderated forum are fairly different things.
If the negative externalities are the same in both cases, then this isn't an excuse. Just because Facebook has proportionally fewer posts that violate its policies doesn't mean it should get a free pass. The fact that up to 15,000 people have PTSD is the societal cost we pay, even if the vast majority of FB users are using the platform as expected.
Facebook gets a free pass because Facebook is an influential organization. Parler had no network of elites protecting it, because as you say, it had no other purpose beyond being 'volunteer' moderated.
If at a minimum, the attack gets us to think about the type of questions posed by The Social Dilemma, we're trending towards a better place.
No, Facebook devotes significant resources to moderation and makes good faith attempts to uphold their policies. Parler did not. That is the difference.
"If the negative externalities are the same in both cases, then this isn't an excuse."
No one is trying to rid the world of all negative externalities, only to make reasonable efforts to mitigate them. There may never be a perfectly moderated social media platform, just as there may never be a perfectly safe highway, and that is fine.
> No, Facebook devotes significant resources to moderation and makes good faith attempts to uphold their policies.
What's always missing here is that the outcome is still terrible and everyone is arguing from the premise that Facebook deserves to exist regardless.
If I maintain my rollercoaster in good faith but I just can't hire enough maintenance crew to do it well and people keep dying on it... maybe the rollercoaster doesn't deserve to be open and should be shut down.
You're right, Facebook won't get a free pass. Congress will drag Zuckerberg in for another scolding, and then things will largely remain the same. Whether or not they moderate enough is certainly a gray area.
All sorts of industries have negative externalities (e.g., fossil fuels). But are Facebook's worth it if they destabilize the democracy that allowed Facebook to grow and exist?
If it's about intent, rather than results, then it's sure a big coincidence that Parler was deplatformed by Apple, Google, and AWS all within a few days.
No one gets a free pass; that is disingenuous. These two things are not the same.
Facebook makes a good faith effort to scale content moderation. Parler did not.
But Facebook doesn't just moderate. They control algorithms that decide what posts are presented to users. I think they should lose their common carrier status because of that editorial control. It's as if the phone company prioritized evil phone calls and delayed non-evil ones: a horrible influence on conversations they are supposedly not involved in.
Can you substantiate that a little? Parler promoted itself as a champion of free speech which is not the same as saying "little to no moderation". They did have a moderation system in place.
I do not know how many users they have per reviewer, do you?
Parler's slogan was "the world's premier free speech platform." They had no automated content scanning. They reportedly employed no moderators. According to Amazon's response to Parler's lawsuit, Amazon had spent almost two months on repeated calls with them, asking them to take down several hundred specifically listed messages containing death threats and the like, which Parler simply declined to do.
Parler reportedly had a community "jury pool" system in which Parler users could volunteer to review reports for bad content and could decide whether to leave the messages up or not.
All this talk of Facebook being a paragon of skillful moderation while Parler was a Wild West of loose policy sounds a lot like comparing an acorn to an oak tree. Of course Parler didn't have the same moderation tools; they weren't a multi-billion-dollar company with a supranational-scale user base and well over a decade of growth. They were a small platform attempting to scale. What did Facebook moderation look like a few years after their launch? Or Reddit's? They were just as cavalier as Parler.
Why do Apple and Google, as a duopoly, get to determine what businesses must and must not do in order to exist? (From a practical perspective, not a legal one.) For me that's the real question.
Because Apple and Google produced viable platforms that people want. Microsoft, Blackberry, and Palm all attempted to do something similar, and the market didn't coalesce around their offerings. Even Amazon tried with kindle/fire. Part of the reason this happens is that developers only have so much ability to diversify, so this is a natural marketplace where only a few big players can survive.
The Internet of the 1990s is still alive, by the way. Anyone can put their message up on a website provided by a host of their choice.
Do you think it had anything to do with buying up the competitors and technology that propelled them to dominance, as well as using their web of other services as an advantage? Even if everything was fair, do you think free-market theory cares about how they arrived at a duopoly?
"If we are principled people, and FB is failing to moderate, then any reasonable person who supported Parler's removal would support Facebook's."
Is Facebook failing to moderate, or failing at moderation? If the standard is perfect moderation, there is no social media. If the standard is a good-faith effort at moderation, Facebook should be tolerated (if not compelled to do better) and Parler should be punished (unless they make good-faith efforts to do better).
Is Facebook actually moderating in good faith though?
Consider that divisive, offensive, and false content is guaranteed to generate engagement and thus contribute to their bottom line, while content that doesn't have these traits is less likely to do so. So they're already starting off on the wrong foot here, when their profits directly correlate with their negative impact on society.
Consider that there is plenty of bad content that violates their community standards on Facebook and such content doesn't even try to hide itself and is thus trivially detectable with automation: https://krebsonsecurity.com/2019/04/a-year-later-cybercrime-...
Is Facebook truly moderating in good faith, or are they only moderating when the potential PR backlash from the bad content getting media attention is greater than the revenue from the engagement around said content? I strongly suspect the latter.
Keep in mind that moderating a public forum is mostly a solved problem; people have done so (often benevolently) for decades. The social media companies' pleas that moderation is impossible at scale are bullshit: it's only impossible because they're trying to have their cake and eat it too. When the incentives are aligned, moderation is a solved problem.
I suspect the bar for acceptable moderation will always be just a hair below what facebook, twitter, and youtube can manage.
Every time they fail again, they'll be hauled in front of Congress to explain how they'll rub a little AI on it. It'll become just a little more expensive to compete.
People keep saying Parler intentionally did not moderate. But every actual source I've seen says that they were trying to moderate but lacked the manpower to do so because the platform grew too big too fast.
I'd be interested if anyone can share anything indicating a refusal to moderate.
My two cents: Facebook has an algorithm they use to decide what posts are presented to a user. They should therefore lose their Section 230 common carrier status. They are the ones deciding to put toxic and divisive information in front of users to drive engagement, instead of simply sharing posts in chronological order and letting users control all the filtering.
That's not what section 230 says today, but there's a very interesting debate to be had about what its inevitable replacement should say tomorrow. Ranking posts according to some unexplainable algorithm which includes things like keyword extraction, often "selfishly" to favor engagement, has proven to be far from benign. I think it's quite reasonable to say that as a platform exerts more of this control it should also assume more responsibility. If you're not prepared to take on that responsibility, stick to strict chronological order and/or user defined priorities.
I don't particularly like it when Facebook (for example) buries content from my actual friends and family beneath posts that it thinks might be more engaging. They're usually wrong, BTW; the moment I recognize it as an algorithmic promotion I scroll right past as quick as I can. They certainly shouldn't be showing me stuff from pages and groups I never expressed an interest in. If I want to find new sources I'll ask. If they do those things, they are acting as editors and publishers, and should be treated as such. There are still problems to be solved around groups people have already joined and ads and privacy, but if they'd at least stop pulling every user toward more extreme content - effectively recruiting for the worst of the worst - that would be positive.
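To make the chronological-versus-ranked distinction being argued here concrete, here's a minimal sketch in Python. The Post fields, the engagement score, and both functions are invented for illustration; this is not anyone's actual feed code, just the shape of the two behaviours being contrasted.

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        created_at: float            # unix timestamp
        predicted_engagement: float  # hypothetical model score in [0, 1]

    def chronological_feed(posts):
        # "Dumb pipe" behaviour: newest first, no editorial judgement.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)

    def engagement_ranked_feed(posts):
        # Editorial behaviour: whatever the model predicts will keep you
        # scrolling floats to the top, regardless of recency or whether
        # you ever asked to see it.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

    posts = [
        Post("a friend", created_at=1_610_000_000, predicted_engagement=0.2),
        Post("outrage page", created_at=1_609_000_000, predicted_engagement=0.9),
    ]
    print([p.author for p in chronological_feed(posts)])      # ['a friend', 'outrage page']
    print([p.author for p in engagement_ranked_feed(posts)])  # ['outrage page', 'a friend']

The argument above is essentially that only the first function is "dumb pipe" behaviour; once a platform runs the second, it is exercising editorial judgement.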
Thanks for that clarifying link! I still wonder about this, though. Relevant section (c)(1) says:
>No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
I'm hung up on the words "provided by". Facebook's algorithm controls what is presented to each user. They are providing a view of some posts, and not others. Facebook is creating the wall for each user, right?
Or would all this still be considered moderation, allowing them to do what they want? Section (c)(2)(B) mentions not being liable for allowing users to control what content is accessed, but doesn't mention when the provider makes decisions like this.
At the extreme, could Facebook use their secret algorithm to promote all posts saying "stolen election" to all Republicans, demote contrary posts, and still claim Section 230 protection because they didn't create the posts, even though they can choose whatever they want to go viral among millions of posts?
This is a political thing above all else. It's indirect bribery of the new party in power. In return, antitrust cases (which have already been formed internally) will likely be overlooked, or a call from the top or new appointments will change the calculus. This is how Washington works.
Facebook is much bigger, so the number of upset users will be greater as well if it is removed. Plus all the big tech CEOs have their own little club, so I doubt Apple would mess with Facebook like that.
I think it is also fair to point out that FB does have a review process, policies, and despite not being able to keep up with the volume, has tried to keep the most incendiary behavior off its platform. Meanwhile, Parler has refused to do any of these things as a matter of principle. As much as I hate on FB, I think there is a clear distinction here.
A decent distinction to make is that Facebook, on the face of it, tries to moderate; they're just bad at it and make decisions a lot of people aren't happy with. Parler's moderation system, by contrast, was almost entirely pro forma, relying largely on showing reports to a panel of random users.
I'm not sure how hard FB actually tries to moderate extremist political content.
They're pretty damn good at quickly taking down child porn and copyrighted movies/music, because those are areas where big money & potential criminal liability are on the line.
In contrast, nobody's forcing them to censor political extremism, and the usual "engagement" metrics that they and their advertisers track would likely reward that content.
In the last few days, they've shut down thousands of groups and hundreds of thousands of accounts for sharing QAnon conspiracies, which strongly suggests to me that they've had the technical ability to do that for quite a while.
They're not "bad at" moderation, they just choose to moderate certain things and not others.
Most people here could hack together some basic keyword searches for questionable content in a day at the outside. I think we can assume that Facebook already has the tools to run those searches at scale and act on the results.
So I don't think there's any good reason to disagree with you.
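As a rough illustration of what those "basic keyword searches" might look like, here is a toy sketch in Python. Everything in it is invented (the phrase list, the post samples, the review queue), and real tooling would layer classifiers, human review, and appeals on top of anything this crude.

    import re

    # Illustrative phrase list only; a real system would be far larger and
    # maintained by policy teams, not hard-coded.
    FLAGGED_PHRASES = [
        r"\bstop the steal\b",
        r"\bcivil war\b",
    ]
    PATTERNS = [re.compile(p, re.IGNORECASE) for p in FLAGGED_PHRASES]

    def flag_for_review(post_text: str) -> bool:
        """Return True if the post should go into a human review queue."""
        return any(p.search(post_text) for p in PATTERNS)

    sample_posts = [
        "Here's a photo of the cake my wife baked",
        "They stole it. Stop the Steal, see you there",
    ]
    review_queue = [text for text in sample_posts if flag_for_review(text)]
    print(review_queue)  # ['They stole it. Stop the Steal, see you there']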
At best, Facebook has prioritised "engagement" - i.e. ad revenue - over unacceptable extremism. At worst Facebook is knowingly complicit in the politics and in the polarisation that is being generated.
It would be impossible to know which of those is true without access to internal records. But there should at least be an investigation asking these questions.
And not just of Facebook, but of all the social tech and media giants.
With site moderation it's basically a guarantee that you'll wind up with an extreme echo chamber: people self-select into or out of the site based on its content. Unless your user group is extremely broad-based and siloing works well enough that people aren't driven off the site by extremists, the group of moderators you select from is inherently pretty OK with the site's content. It has a chance to work in the real world, where the same self-selection effect is moderated by other factors.
How is it different from real life? Look at the county-by-county map of the past couple of presidential elections. You'll see that there is very much a delineation between people with different ideals, resulting in echo chambers. We see this in the stereotypes country folks and city folks hold of each other.
In a court case there's a whole trial to present the evidence and establish how the law is supposed to be interpreted, which you can't really replicate in Parler's attempted moderation system. Also, the pomp and dressing of state and law do a lot to change how people act. One of the big questions any prosecutor will ask is whether you will judge solely on the law, and they will very quickly strike you if you indicate that you won't or that you know anything about jury nullification.
Everyone should know about jury nullification. I feel it's a violation of a right to a fair trial if the jury doesn't understand all the options, including that one.
The questions don't really mean much. People could honestly answer that they will apply the law, but how can they if their understanding of it is flawed, especially since that question takes place before the judge educates the jury on the law?
Prosecutors would really rather you not, because it has the chance to completely screw their case, and they already put a lot of effort into maintaining conviction records. Also, it's one of those things where it's not officially an option; there's just no punishment available to prevent it.
It does mean something: you can say yes or no to "will you rule based on the law and the evidence presented in the case," and you don't have to know the law to agree to do that. It's not phrased exactly like that either; it's a series of questions. [0] #15, for example, is basically a question directly about nullification, and 13 and 14 are also around the same subject.
Facebook is not failing to moderate. Moderation is hard and everyone knows that, but Facebook has been improving their moderation techniques and policies for years. Unlike Parler and Gab, Facebook is actually putting in the effort to moderate and was not created to be a safe haven for terrorists who were banned from other platforms.