It was much easier and healthier for me to just leave Twitter (never gonna call it the other name). In fact, I’ve left nearly all of social media. It would be nice to get away from all algorithmic content feeds. Maybe someday.
Social media apps have downsides, but I find the algorithmic short videos on Facebook ("reels") quite incredible. The algorithm seems to suggest mostly science: niches I would never have known about, let alone taken an active interest in, had it not suggested them.
I suspect experiences using social media apps differ wildly from person to person.
Fun and useful are different things though. I use X for 'news', debate, and updates on anything global; YouTube for specific tech tutorials and 'junk' viewing (e.g. something light-hearted while eating); and insta/fb reels for filling in idle gaps (e.g. while waiting in a queue). I gave up legacy media about 3-4 years ago, and although Twitter is trash, I'd take it over mainstream media any day of the week (same with Bluesky and Mastodon, but I simply haven't needed those since they're approximately equivalent to X).
I guess people like me have zero interest in global news, debate and updates on anything global. Or rather, not zero interest, but zero time for it after filling the day with other (more important to me) things.
Yup. The variance of truthiness on X is probably the highest of all platforms. Meaning you get the most accurate news if you can filter/discern well enough, at the risk of being brainwashed if you can't. Since the alternatives also brainwash, X is more or less Pareto optimal.
>The variance of truthiness on X is probably the highest of all platforms. Meaning you get the most accurate news if you can filter/discern well enough
Absolute nonsense. Believing X is a simple distribution or sample of reality is bonkers.
There is a heavy Dunning-Kruger effect where people think they are very discerning and sophisticated in how they filter, but gradually descend into depths of misinformation without realizing.
Grandparents sharing nonsense on Facebook also think themselves highly informed — after all they’ve spent the past ten years in retirement “doing their own research”.
I followed 1000 scientists on X last year. Quantum mechanics, particle physics, general relativity, astrophysics and cosmology, molecular bio, chemistry, math, microbiology, environment, linguistics, psychology, the Middle East, Islam, French. Oh and work stuff: programming languages, compilers, AI, venture capitalists of all stages. Where else can you find this? The main issue with X is that it is a mirror. It puts us in a room with people like ourselves. If you like to argue, it will send you a lot of bait. I just stopped doing that. Muted a few keywords (nazi, greta, trump, elon, gun). And read a lot of paper summaries about black holes! (Oh, and unfollowed most of the VCs, that was terrible.)
Consider an analogy with a knife. If someone quits using knives simply because they had previously cut themselves, and someone says: “wait, there’s at least one useful application for these things”, and demonstrates the useful application, that could be enough for that person to discover how useful knives can be when used right.
Exact same goes for any tool that can cause both harm and benefit, like social media apps.
I was just making fun of those desperate engagement-bait Facebook short videos of people just about to get kicked by a camel or whatever, looping right before whatever they're baiting actually happens.
I only ever used Twitter for news because it was where all of the big names were, but I stopped using it entirely when it took a massive dive after the recent sale. I checked back a couple of days ago, and the top of my feed was some BS take on a subject (I forget what), and _every_ _single_ response was from a blue-tick agreeing with it. I immediately knew I could close it and not come back unless I want to click through to a specific tweet for whatever reason.
It pains me, but I'm leaning in this direction too. I found the idea of a digital town square with a commitment to maximum truth appealing, but seeing what Twitter has become, it's hard not to feel this was naive of me.
To me the big blow was seeing the response to what’s happening in Gaza and what narratives people and algorithms in combination end up promoting. The thoughtful, balanced, humanistic view gets approximately zero traction, while completely untruthful propaganda (on both sides) has enormous reach.
Maybe it’s an insurmountable problem. Human defense mechanisms in combination with algorithms will always push people to tribalism and cheering for atrocities. I hope not, but seems like it.
I guess you could take the OPs approach of filtering out all the propaganda and keep contributing. But then you are effectively working for a propaganda machine free of charge, helping create value that will draw others in to be subjected to the propaganda.
It's often not too hard to determine what's true if you put in the effort. For example, you have big accounts like John Spencer spreading claims that Gaza's population growth is completely unaffected by the "war", with thousands of likes and hundreds of thousands of views [1].
If you need an example on the other side, you can take this popular post claiming the German foreign minister said Israel has a right to target civilians [2], when in fact she said Israel has a right to target civilian infrastructure if it is being used for military purposes.
Community notes do not appear because the algorithm requires agreement from people who typically disagree, which I doubt will ever happen in a military conflict.