
I don’t think anyone can claim that it’s not the user’s fault. The question is whether it’s the machine’s fault (and the creator and administrator - though not operator) as well.


The article claims Grok was generating nude images of Taylor Swift without being prompted, and that there was no way for the user to take those images down.

I don't know how common this is, or what the prompt was that inadvertently generated nudes. But it's at least an example where you might not blame the user.


Yeah, but “without being asked” here means the user has to confirm they are 18+, choose to enable NSFW video, select “spicy” in Grok’s video generation settings, and then prompt “Taylor Swift celebrating Coachella with the boys”. The prompt itself seems fine, but the rest of it is clearly “enable adult content generation”.

I know they said “without being prompted” here, but if you click through you’ll see what the person actually selected (“spicy” is not the default; it is age-gated and opt-in via the NSFW wall).


Nice, thanks for the details!

Very weird for Taylor Swift...


Yes, the reporter should not be generating porn of her. Pretty unethical.

Let’s not lose sight of the real issue here: Grok is a mess from top to bottom, run by an unethical, fickle Musk. It is the least reliable LLM of the major players, and Musk’s constant fiddling with it so it doesn’t stray too far from his worldview invalidates the whole project as far as I’m concerned.

Isn't it a strict liability crime to possess it in the US? So if AI-generated apparent CSAM counts as CSAM legally (not sure on that), then merely storing it on their servers would make X liable.


You are only liable if you know - or should know - that you possess it. You can help someone out by mailing their sealed letter containing CSAM and be fine, since you have no reason to suspect the sealed letter contains anything illegal. X can store CSAM so long as they have no reason to think it is illegal.

Note that things change. In the early days of Twitter (pre-X) they could get away with not thinking about the issue at all. As technology to detect CSAM marches on, they need to use it (or justify why it shouldn't be used - too many false positives?). As a large platform they need to push the state of the art in such detection. At no point do they need perfection - but they need to show they are doing their reasonable best to stop this.
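To make "use the technology" concrete: the standard approach is matching uploads against a database of hashes of known material. A minimal sketch, assuming a hypothetical hash list (real systems use perceptual hashes like PhotoDNA that survive resizing and re-encoding; plain sha256 here is only to keep the example simple):

    import hashlib

    # Hypothetical hash list -- in practice a perceptual-hash database
    # supplied by a clearinghouse, since an exact sha256 misses any
    # resized or re-encoded copy.
    KNOWN_BAD_HASHES: set[str] = set()

    def is_known_csam(image_bytes: bytes) -> bool:
        return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

    def handle_upload(image_bytes: bytes) -> str:
        # Block and report matches, store everything else.
        if is_known_csam(image_bytes):
            return "blocked_and_reported"
        return "stored"

Hash matching only catches known images, though; generated content needs classifiers, which is where the false-positive tradeoff above actually bites.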

The above is of course my opinion. I think the courts will go in a similar direction, but time will tell...


> You are only liable if you know - or should know - that you possess it.

Which he does, and he responded with “I will blame and punish users.” Which, yeah, you should, but you also need to fix your bot. He certainly has no issue doing that when Grok outputs claims/arguments that make him look bad or otherwise engages in what he considers “wrongthink,” but suddenly when there are real, serious consequences he gets to hide behind “it’s just a user problem”?

This is the same thing YouTube and social media companies have been getting away with for so long. They claim their algorithms will take care of content problems, then when they demonstrably fail they throw their hands up and go “whoops! Sorry, we are just too big for real people to handle all of it, but we’ll get it right this time.” Rinse, repeat.


Blame and punish should be a part of this. However, that only works if you can find who to blame and punish. We also should put guard rails on so people don't make mistakes. (Generating CSAM should not be an easy mistake to make when you don't intend it, but in other contexts someone may accidentally ask for the wrong thing.)
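As a sketch of the guard-rail idea (the blocked-term list and generate() here are hypothetical stand-ins; production systems use trained safety classifiers on both the prompt and the output rather than keyword lists):

    BLOCKED_TERMS = {"nude", "undressed", "explicit"}  # illustrative only

    def generate(prompt: str) -> str:
        # Stand-in for the actual image/video generation call.
        return f"generated({prompt})"

    def guarded_generate(prompt: str, nsfw_enabled: bool) -> str:
        # Refuse prompts that trip the filter unless the user has
        # explicitly opted in to NSFW generation.
        lowered = prompt.lower()
        if not nsfw_enabled and any(term in lowered for term in BLOCKED_TERMS):
            raise ValueError("prompt refused by guard rail")
        return generate(prompt)

The point is that the check sits in front of the model, so an accidental request fails closed instead of producing something the user never meant to ask for.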


That’s what I’m saying ultimately.


I think platforms that host user-generated content are (rightly) treated differently. If I posted a base64 encoding of CSAM in this comment, it would be unreasonable to shut down HN.

The questions then, for me, are:

* Is Grok considered a tool for the user to generate content for X, or is Grok/X considered similar to a vendor relationship?

* Is X more like Backpage (not protective enough) than other platforms?

I’m sure this is going to court, at least for revenge porn stuff. But why would anyone do this to their platform? Crazy. X/Twitter is full of this stuff now.


I don't think you can argue yourself out of "The Grok account is owned and operated by Twitter". On no planet is what it outputs user-generated content, since the content does not originate from the user; at most, they requested some content from Twitter and Twitter provided it.


There are still a lot of unanswered questions in that area regarding generated content. Whether the law deems it CSAM depends on whether the image depicts a real child, and even that is ambiguous: was it wholly generated, or augmented? Also, is it "real" if it's a model trained on real images?

Some of these things are going into the ENFORCE Act, but it's going to be a muddy mess for a while.


Grok loves to make things lewd without asking first.


Musk pretends he made Vision, but what he made was Great Value Ultron.



