Not to defend FB for their relentless self-interest, but blaming them for negative outcomes in the world just sounds like the same chorus that blamed heavy metal and video games in the past. FB doesn't put bad thoughts in people's heads. It just gives people with bad thoughts a thing to look at. To use the same (possibly reductive) arguments, we did a lot of horrible things to each other long before the advent of the personal computer.
I agree that heavy metal and video games were the targets of psychological projection and deflection. I don't believe FB gets quite the same pass; it is a very different animal. This animal can manipulate what different people see based on their interactions, heavily influencing public thought and action through psychological interaction and feedback loops. These feedback loops can amplify emotions at the direction of human-created algorithms. I see it as a system that learns to adapt to its audience, tuned to each individual's strengths and weaknesses, hopes and fears, likes and dislikes, beliefs and disbeliefs, and so on.
Video games, at least thus far, feed specific narratives that are mostly consistent across a given platform and do not change dramatically based on an individual user's psychological state, political alignment, tribe, or profile. Heavy metal is a fixed piece of artwork open to interpretation by the general audience. Heavy metal does not change based on the listener's feedback, live concerts aside. Your emotional response to heavy metal will not influence how the heavy metal responds and adapts to you. I am playing some heavy metal right now, and what I am experiencing will not change based on how I react to it, nor will it change based on current political issues or my political leanings.
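To make the feedback loop concrete, here is a deliberately simplified sketch. This is my own illustration, not Facebook's actual ranking code: the posts, the engagement model, and the update rule are all invented for the example. The only point is that a loop rewarding whatever gets a reaction will, over time, amplify the most emotionally charged content without anyone ever deciding that it should.

```python
# A toy engagement-driven feedback loop (hypothetical illustration,
# not Facebook's actual ranking code). Posts that draw reactions get
# ranked higher, which gets them shown more, which draws more reactions.
import random

random.seed(42)  # make the run reproducible

posts = [
    {"text": "calm news summary",  "emotional_charge": 0.1, "weight": 1.0},
    {"text": "outrage-bait rumor", "emotional_charge": 0.9, "weight": 1.0},
]

def engaged(post):
    # Assumption baked into the toy model: more emotionally charged
    # content is more likely to provoke a reaction.
    return random.random() < post["emotional_charge"]

for _ in range(1000):
    # Show the currently highest-weighted post...
    shown = max(posts, key=lambda p: p["weight"])
    # ...and feed the user's reaction straight back into the ranking.
    if engaged(shown):
        shown["weight"] *= 1.01  # reward content that got a reaction
    else:
        shown["weight"] *= 0.99  # demote content that didn't

for p in posts:
    print(f"{p['text']}: weight={p['weight']:.2f}")
```

After a thousand rounds the outrage post's weight has run away while the calm one's has decayed. Nothing in the loop inspects what the content actually says, only whether it got a reaction; that is the sense in which the system tunes itself to each audience.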
That's certainly a valid point, but a key theme of the article is that Facebook seems deliberately to have neglected non-US and non-European territories when it comes to seeking out and removing problematic and inflammatory posts. A couple of extracts:
> In many of the world’s most fragile nations, a company worth hundreds of billions of dollars hasn’t invested enough in the language- and dialect-specific artificial intelligence and staffing it needs to address these problems. Indeed, last year, according to the documents, only 13 percent of Facebook’s misinformation-moderation staff hours were devoted to the non-U.S. countries in which it operates, whose populations comprise more than 90 percent of Facebook’s users.
> In 2019, the human-rights group Avaaz found that Bengali Muslims in India’s Assam state were “facing an extraordinary chorus of abuse and hate” on Facebook: Posts calling Muslims “pigs,” “rapists,” and “terrorists” were shared tens of thousands of times and left on the platform because Facebook’s artificial-intelligence systems weren’t built to automatically detect hate speech in Assamese, which is spoken by 23 million people.
The article also points to FB apparently caving in to political considerations (specifically abroad/non-US):
> “Time and again,” the memo quotes a Facebook researcher saying, “I’ve seen promising interventions … be prematurely stifled or severely constrained by key decisionmakers—often based on fears of public and policy stakeholder responses.”
> Among the consequences of that pattern, according to the memo: The Hindu-nationalist politician T. Raja Singh, who posted to hundreds of thousands of followers on Facebook calling for India’s Rohingya Muslims to be shot—in direct violation of Facebook’s hate-speech guidelines—was allowed to remain on the platform despite repeated requests to ban him, including from the very Facebook employees tasked with monitoring hate speech.
The article ends by repeating a point made earlier, that FB in the US is the platform at its best:
> But the Facebook we see is the platform at its best. Any solutions will need to apply not only to the problems we still encounter here, but also to those with which the other 90 percent of Facebook’s users struggle every day.
When FB "caves to political pressure" you can rightly say FB could do better, but it does belie the point that there are governments actively seeking to distort information and using their political and economic power to force private companies to obey or be punished. And it's not just foreign governments, we give FB a hard time for not spending enough resources to combat misinformation shared on their platform, but when "professional" news organizations invest resources in creating misinformation it's protected by the first amendment. I understand the legal bind, but it seems like a double standard.
Those are pretty different things, and unlike heavy metal et al., we can see the results of their impact.
There are studies from Facebook itself demonstrating that they know about the impact, and that they occasionally turned it off when things got too tough (the 2020 election cycle). There were no such studies pointing at heavy metal, only Tipper Gore and other morons.
> It just gives people with bad thoughts a thing to look at.
The word "just" is doing a lot of work, here. By your line of reasoning, propaganda isn't anything anyone should worry about.
Not to go full Godwin, but the Nazi regime is an instructive analogy, here. By your reasoning, all the Nazis did was "just" promote a lot of anti-Jewish propaganda. Would you really go on to say "Blaming the Nazis for negative outcomes in the world is just like blaming violent video games. All they did was put the messages out there, then the people with the bad thoughts did the mass murder..."?
And to preempt the objection that Facebook isn't intentionally distributing material that, say, promotes ethnic violence: while Facebook corporate obviously does not have that as an official policy, their algorithm is doing precisely that by actively promoting these types of material. Facebook's own internal research has shown that it can and does steer individuals toward increasingly extreme content.
So if we agree that a) Facebook's systems steer individuals toward violent or extremist content (as proven by their own research), b) propaganda is a tool that works to steer public opinion and drive human behaviour (as proven by historical precedents like the Nazi regime), c) extremist content serves as effective propaganda (well-trodden ground in research on extremism), and d) Facebook knew all these things and failed to curtail what was going on (as now revealed in these internal documents), then I don't see how you can possibly defend Facebook, here.
I think regular users have to take at least part of the blame here. There were enough warnings against being too open on such a platform. And if you are, you might need a PR agency.