
That would be admitting legal liability for the capabilities of their model, no?


They already censor Grok when it suits them.


Yep. "Oh grok is being too woke" gets musk to comment that they'll fix it right away. But turn every woman on the platform into a sex object to be the target of humiliation? That's just good fun apparently.


And when it's CSAM, suddenly they "only provide the tool" and bear no responsibility for the output.


I'd even say that focusing the discussion on CSAM risks missing critical stuff. If Musk manages to make this story exclusively about child porn, and gets to declare victory after taking basic steps to address that without tackling the broader problem of the revenge porn button, then we are still in a nightmare world.

Women should be able to exist in public without constantly having porn made of their likeness and distributed right next to their activity.


Exactly this. It's an issue of patriarchy and the domination of women and children; CSAM is far too narrow.


What does that have to do with what I said?


If censoring Grok's output means legal liability (your question), then that legal liability already exists anyway.


But that’s not my question, nor my read of their position.

I replied to:

> They don’t seem to have taken even the most basic step of telling Grok not to do it via system prompt.

“It” being “generating CSAM”.

I was not attempting to comment on some random censorship debate, but to point out that CSAM is a pretty specific thing.

With pretty specific legal liabilities, depending on the region!


Directed negligence isn't much better, especially morally.


You always have liability. If you put protections in place, you can tell the court that you see the problem and are trying to prevent it. It often becomes easier to get out of liability if you can show the court you did your best to prevent this. Courts don't like it when someone is blatantly unaware of things; ignorance is not a defense if "a reasonable person" would have been aware of it. If this were the first AI in 2022, you could say "we never thought about that" and maybe get by, but by 2025 you need to tell the court "we are aware of the issue, and here is why we think we had reasonable protections that the user got around."

See a lawyer for legal details of course.



