Granting that I think X should have stronger content policies and technological interventions against bad behavior as a matter of business, I do think that the X Safety team's position[0] is the only workable legal standard here. Any sufficiently useful AI product will _inevitably_ be usable, at minimum via subversion of its safety controls, to violate current (or future!) laws, so I don't see how it's viable to prosecute legal violations at the level of the AI model or tool developers, especially if the platform itself is still moderating the actually illegal content. Obviously X is playing much looser with its safety controls than its competitors, but at that point we're debating degrees rather than principles.
[0]
> Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.