Seems like many are just feigning lack of awareness, imo. Casual observation shows a pattern of looking the other way when bad things are happening to people they don’t like.
I also think there’s some amount of intellectualization about these issues, for varying reasons. In my own social group, the most common one I observe is that it’s an academic exercise to those who aren’t part of an affected group. I don’t say that from a place of judgement; it makes perfect sense that in those cases a lack of lived experience would make it academic. For others it’s reality, and that’s why it’s important to address these kinds of issues.
Car rental agencies and airlines come to mind as well. I was almost unable to purchase a ticket for my better half after we got married as we were slow to process paperwork for name changes on our credit cards. We were able to clear the matter up, with some effort, but …
Trash mods really do hurt that place. I reported a physical assault, some punk kid with an airsoft taking potshots at pedestrians, to my local sub to warn others. Turned out the guy had been at it for weeks all over town.
Three months later, somebody didn’t like a political comment I made, went through my history, reported that earlier post, and voila. Banned for threatening violence because I warned people about a criminal’s behavior. Heh.
I was raised on tech. A third generation computer user, started writing software at seven under my father’s guidance. A Luddite I am not, but this doesn’t bode well for our future. YouTube is basically an indoctrination engine for white nationalism. It’s more or less what the right claims the American higher education system is for the left, only there’s no conspiracy fantasy to it.
The actual findings, as reported in the very link you post:
> "We found that YouTube's recommendation algorithm does not lead the vast majority of users down extremist rabbit holes, although it does push users into increasingly narrow ideological ranges of content in what we might call evidence of a (very) mild ideological echo chamber," the academics disclosed in a report for the Brookings Institution.
> "We also find that, on average, the YouTube recommendation algorithm pulls users slightly to the right of the political spectrum, which we believe is a novel finding."
So, about as close to being an "indoctrination engine for white nationalism" as a librarian that recommends books you like. And I am saying this as someone who reads Jacobin and watches any interview with Chomsky I can find.
So the indoctrination isn’t obvious? So it’s subtle? That makes it more pernicious, in my eyes.
I never said anything about a vast majority. To indoctrinate doesn’t mean to convert an entire population, or even a percentage thereof. On the contrary, it refers to a process of teaching a person or group (of any size) to accept a set of beliefs uncritically. It doesn’t specify to what degree beliefs have to change, nor how rapidly or severely.
YT recommends Fox, Shapiro, et al to kids watching anime, to adults whose sole interests are cat videos and programming tutorials. A bit different than a librarian suggesting books one might like.
And what happens if the librarian sees I’ve been checking out the likes of Mein Kampf, and makes recommendations based on that? Does indoctrination through multiple channels cancel itself out, or something? I’m not sure what point you’re trying to make there, but it sounds a lot like “bad things can happen in other places, so it’s acceptable if YouTube does bad things too.”
One should consider the effect on those already radicalized in addition to the indoctrination of the non-radicalized when seeking to understand the political ramifications of such bias in algorithms. It’s not like they exist in a vacuum, after all.
edit: Add to that, on the topic of librarians, the decentralized nature of libraries and librarians ensures any effect of a single librarian will be limited to a local area. Don’t think we can say the same for YouTube algorithms.
My point is this: recommending related videos is not indoctrination, even if the content is political. If I'm watching Shapiro and YT recommends Fox, this is not indoctrination (same as, if I'm watching Young Turks and YT recommends Majority Report, it's not indoctrinating me).
Now, if I'm watching Anime and YT recommends Shapiro, I can agree that's closer to indoctrination. However, if it only happens like 2 times for every 10M watches of anime, and then 1 time for every 10M it's recommending Young Turks, then it's not really a significant force in this area; and it is only pushing slightly to the right - and I believe this is the sort of thing that the study found. So coming back to your first quote:
> So the indoctrination isn’t obvious? So it’s subtle? That makes it more pernicious, in my eyes.
No, that is not what the study found. It found that political recommendations for right-leaning content are slightly more common than those for left-leaning content.
I don’t need to reword the findings to make them support my assertion.
Again those findings: “We also find that, *on average*, the YouTube recommendation algorithm pulls users slightly to the right of the political spectrum”.
The whole “on average” nullifies the notion that occasionally recommending Young Turks to kids watching Anime once in a while somehow makes up for the fact that they push OANN or Newsmax even harder. That’s like saying I took one step forward so you should ignore the two steps I took backward.
Also you are ignoring the implications further down the line. If YouTube pulls neutral to the right, then it likely pushes those already right even further in that direction.
Are you familiar with the concept of network effect?
> So the indoctrination isn’t obvious? So it’s subtle? That makes it more pernicious, in my eyes.
>> No, that is not what the study found
“In my eyes” isn’t analogous to “that’s what the study found”, FYI.
> The whole “on average” nullifies your assertion that they recommend Young Turks to kids watching Anime as much as they do OANN or Newsmax.
I didn't say that they do it "as much", I specifically suggested they may do it half as often. But, per the study, they DO do it - otherwise, this would not have been a "slight" bias, it would have been a whopping huge bias.
What I meant to say was that the assertion that occasionally recommending Young Turks somehow mitigates the right-leaning bias of the site, as suggested by the statement “then it's not really a significant force in this area”, is false. The site has a demonstrable right-wing bias.
Elections can be and are decided by a few thousand, or even a few hundred, votes in battleground states. As such, the argument that the effect is negligible rings false to me.
Even on a much smaller scale, the algorithm is incentivised to radicalise you. A few years ago I would watch videos of helicopters with my 4-5 year old son because he loved helicopters and enjoyed watching them lift things, cut trees, put out fires, etc.
Then the suggested videos started including helicopter crash compilations and he was super keen to see those and lost interest in the more "vanilla" helicopter videos. That was the end of that avenue of entertainment and it's only now he's 11-12 that he's getting some limited access to youtube again.
I’ve only had time for a cursory glance at this writing, but let me thank you for sounding the horn on Web 3.0. It was bad enough adding Ajax calls to websites and calling it Web 2.0. At least that had something to do with HTTP, ECMAScript, HTML, and web-related tech.
Can anybody demonstrate a legitimate use of deepfake software? Has it ever been used to facilitate a socially positive or desirable outcome? While I recognize my experiences are far from definitive, I hazard most would be hard pressed to name anything positive that came out of deepfake technology.
edit: I’ll take your knee-jerk DV, and any others, as an admission of an inability to speak to positive utility of this technology.
Edit: this comment is referring to deepfakes more broadly, and is not a commentary on the validity of the source linked here. I can't speak to the reputability of the community developing this, or how it has been used so far.
--
I'm a fairly visual and imaginative person, and it's pretty easy for me to come up with some very useful applications. No hostility intended, genuinely sharing my thoughts:
1. CGI for video editing - lower the bar of entry to de-age actors, or use a stand-in. Actor can't make it to a shoot that day? No worries, replace their face in post easily.
2. Identity protection - on a cold call with someone who reached out to you, when you're not sure whether they're safe or dangerous, it could be a good way to protect yourself.
3. Social media content for clients - essentially become a fake avatar for hire, customizing your narrator for any video or brand. Video call centers with fake video (they already have voice modifiers and fake names), enhanced VTuber-type applications (virtual avatars for streaming).
4. Unexpected outcomes: for example Holly Herndon created (and sold) access to an AI replica of her singing voice (n1), and I could see artists selling or renting access to their faces.
Obviously this can and will be used maliciously, but I personally could see myself using it for more positive reasons.
First, let me thank you for a thoughtful riposte! I do appreciate that. My question was an honest one and, I imagine, not the easiest to conceive an answer to. I genuinely appreciate your taking the time to share your thoughts.
With that said, almost every use-case cited was about financial or monetary gain, whereas I enquired about social utility and value.
That dishonesty, i.e. the creation of a fake avatar, is cited as being of social utility strikes me as a reach. I don’t see how adding more dishonesty and facades to the world adds social value, but then I may just be of limited imagination.
I would appreciate being able to morph my appearance and voice on video calls, into something more reflective of my identity (facial structure, hair color, cat ears) than the body I was provided by circumstances.
No doubt, but they wanted a non-capitalistic reason why this technology isn’t The Debil.
And having subtle clues from watching a person’s face while they’re proposing whatever action to overthrow the patriarchy is more convincing than some random person wearing a Guy Fawkes mask talking about revolution for the umpteenth time.
#2 sounds really interesting! I’m not sure of the psychological ramifications, but I can’t imagine they’d be much different from any other sort of prosthesis, save for an inability to actually touch it.
I could see it being used in AR to conceal identity to facilitate more equitable medical outcomes, I suppose.
Thank you again for the input! I was honestly at a loss for positive applications outside of financial gain.
I haven’t seen any ads driven by deepfake, or at least I don’t think I have. That advertising bit does sound rather obnoxious though!
Thanks for encouraging productive discussion! Your original question made me come up with #2 - I couldn't find active development on that specific concept, but I found something pretty amazing.
"Deepfake therapy" lets therapists simulate the presence of dead or non-cooperative people [1]. A study showed positive results when sexual violence victims could safely discuss with a deepfaked version of their abuser [2].
That is pretty neat, any sort of art does add cultural and social utility to a degree. Thanks for the heads up, because just about every mention I’ve seen published on the topic is more or less a horror story. I wasn’t being facetious in my query. Thanks again for the input!
What is the boundary between "deepfake" and "photoshop" (i.e. regular human "fake" or edit?)
I suspect it's going to become popular for both consensual-deepfake of oneself (PR, magazines, actors, pop stars, any form of public speaker) and "bought out" deepfake (actors selling out their image rights and then losing creative control; dead actors, etc.)
The political-deepfake is really going to accelerate the debate over how much free speech permits you to just lie about people, though.
The number of man-hours that would be necessary to plausibly fake even a short film in Photoshop, if I had to guess. It strikes me as analogous to owning sidearms versus BMGs and rocket launchers. One of these tools makes doing bad things far easier.
Another analogy. Say somebody makes some hacking kit. Say it uses zero-day exploits to compromise Windows, Mac, and Linux. Would any of us take issue with that? Would it be a different story if it was made into a push-button tool like WinNuke was in the 1990s? Or automated to the extent that somebody who can make a Word doc could employ it against your systems? Is there really no feasible line of distinction here, in your eyes?
The social good of deepfake technology will be the destruction of the unwarranted power which has been given to image, and which the Internet has amplified.
Think about it: people choose to trust or not trust based on a face. When deepfaking becomes a tool easily available to every average joe, appearance will lose some of its power. People will lose their irrational trust in faces.
The technology isn't just deepfaking; deepfaking is one capability of techniques that do more general object/person replacement. It's such a small step from techniques like digital de-aging to a full fake face that working on one makes the others possible, and trying to ban one will have unintended consequences for the others.
I've been playing TTRPGs via videochat with my friends since the pandemic, and I've often thought about setting up video avatars for our characters. It would be especially cool for the DM to be able to switch personas on the fly, and for players to have their characters appear in the video chat.
I recently realized you can bypass cookie consent prompts by toggling reader mode. That led me to wonder how the EU plans to enforce compliance if JavaScript isn’t enabled, for example. It makes the legal obligations of sites more or less impossible to fulfill under some limited circumstances.
I’m rarely impressed anymore, for whatever reason, but even in the midst of a miserable bout of virus-induced malady, I have to stand in awe of this.
I can’t even fathom the skill required to move a piece of equipment that large in a straight line let alone produce precise graphics with its flight path.
I must be having a hard time waking up today, or something, because I can’t wrap my brain around why a chat app would need 3D acceleration.
The first chat sites on the web, e.g. Bianca’s, Poolside, and Talker, were built using more primitive versions of the same interface tools used by Electron, and while they didn’t have the functionality of video and audio chats, they also didn’t break your GPU driver.
> I can’t wrap my brain around why a chat app would need 3D acceleration.
A big part of discord's target community is gaming/gamers. One of the features it provides is the ability to livestream the game you are playing and view streams of games others are playing.
Definitely not my area, but I wouldn't be surprised if streaming some game with an overlay would require 3d acceleration to not suck.
Streaming functionality isn’t chat, though. AMD gives you the option not to install streaming functionality when you install their drivers, and I’m pretty sure they target gamers too.
It reminds me of the age of Windows bloatware.
Trying to pile all the ingredients on a single piece of bread does not a good sandwich make. On the contrary, it leaves me with an impression that they have to rely on the brand recognition and snowball effect of previous products.
I would probably hold their software in higher regard if it wasn’t just one monolithic multi-tool. They don’t even have to push multiple apps, but could use a modular architecture at the app level with plug-ins to support non-core functionality.
Discord is not a "chat app". That's a pretty narrow view on how it is both used and vended. It's a communications app, at a minimum, and people use it for audio, video, chat, events, and a bunch of other stuff that may require hardware acceleration.
You’re the person who has decided that discord is a “chat app”. Discord themselves say:
> Discord was started to solve a big problem: how to communicate with friends around the world while playing games online. [1]
It’s not too much of a stretch to see that they would see streaming as a logical part of their core functionality as a result.
That said, maybe it’s not the app for you. I totally get why someone would want a single-purpose tool - that’s generally the way I go, and I’m not crazy about Discord personally, having used it a lot, written bots for it, etc. But it’s not reasonable to criticise them for making decisions different from the ones you would make when they are going for a goal that you don’t share.
Video and game streaming is one of its core features.
It is not a good user or server owner experience to have to manage multiple applications depending on feature. You can turn off 3d acceleration, if I understand correctly.
The microkernel would like a word with you, good sir. As would the Unix toolchain, the Google suite, iWork, and Microsoft Office. If it were modular, you wouldn’t need separate apps per se.
On the other side of the multiple-app coin, Microsoft doesn’t build out PowerPoint, Word, and Excel as a single monolithic Office app.
And I’m pretty sure that’s among the most commercially successful consumer software in the history of computing.