Hacker News

This is a big, sensitive topic. Last time I researched it, I was surprised at how many things I had assumed were just moralistic hand-wringing turned out to be well-evidenced interventions. Considering my ignorance, I will not write a lengthy response, as I am wont to.

I will, instead, speak to what I know. Many models are heavily overfit on actual people's likenesses. Human artists can select non-existent people from the space of possible visages. Generative models of this kind could, in principle, do the same: their latent spaces contain many points that correspond to no real person. In practice, however, diffusion models working from text prompts are heavily biased towards reproducing examples resembling their training set, in a way that no prompting can counteract. Real people will end up depicted in AI-generated CSAE imagery, in a way that human artists can avoid.

There are problems with entirely fictional, human-made depictions of child sexual exploitation (which I'm not discussing here), and AI-generated CSAE imagery is at least as bad as that.
