• Is there an actual case for outlawing this that isn't based on moral panic? Wouldn't you actually want people to generate those images with AI so they are less incentivized to pay for the real stuff?

    As long as no actual CSAM is needed in the training data and the generated images are different enough from any real person (both of which seem technically feasible), that seems like a good thing.

    Or is there any indication that the availability of CSAM actually increases the likelihood that people will act on it later?

    • We don't have (and I doubt we ever will have) tools that can distinguish real images from AI-generated ones with guaranteed 100% accuracy, i.e., zero false negatives and zero false positives.

      Given that, I don't see how you can allow AI-generated CSAM without effectively making "real" CSAM unprosecutable.

      • So you think that currently, until this law takes effect, CSAM is effectively unprosecutable because people can just claim they generated the image with AI?
  • We really need a way to pass laws faster; 2026 is going to be an insane year for multimodal models, and legislation is simply not keeping up.
  • I don’t understand why it needs to be banned. If it is artificial, whether a story someone wrote, an animation someone drew, or a photo-realistic AI-generated image, it’s just not real. No harm is done to any victim. It feels like a moralistic crusade, adjacent to age-verification laws that are just backdoor porn bans (as freely admitted by the conservatives who support such laws).

    The bigger issue is that these kinds of bans feel a lot more like banning speech than banning a real crime, and the precedent they set can end up being used in far-reaching ways. That’s how it always is.