• yetAnotherUser@feddit.de
    1 year ago

    You absolutely do not need real CSAM in the dataset for an AI to detect it.

    It’s pretty genius actually: just as you can make the AI create an image from prompts, you can get prompts back out of an existing image.

    An AI detecting CSAM would have to be trained on nudity and on children separately. If an image-to-prompts conversion results in “children” AND “nudity”, it is very likely the image was of a naked child.

    This has a high false positive rate, because non-sexual nude images of children, which quite a few parents have (like images of their child bathing), would be flagged by this AI. However, the false negative rate is incredibly low.

    It therefore suffices for an upload filter for social media but not for reporting to law enforcement.
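    The “children AND nudity” combination described above can be sketched as a simple post-processing step over the per-label scores an image-to-tags model might return. Everything here is hypothetical for illustration: the label names, the threshold, and the shape of the scores dict are assumptions, not any real tagger’s API.

    ```python
    def flag_image(label_scores, threshold=0.8):
        """Flag an image if a (hypothetical) tagging model assigns high
        confidence to BOTH the 'child' and 'nudity' labels."""
        return (label_scores.get("child", 0.0) >= threshold
                and label_scores.get("nudity", 0.0) >= threshold)

    # Example scores as an image-to-tags model might produce them:
    print(flag_image({"child": 0.95, "nudity": 0.91}))  # True  -> flagged
    print(flag_image({"child": 0.97, "nudity": 0.02}))  # False -> ordinary photo of a child
    print(flag_image({"child": 0.03, "nudity": 0.90}))  # False -> adult nudity
    ```

    Note that this AND rule is exactly why the false positive rate is high: a non-sexual bathing photo would also score high on both labels, which is why the comment limits it to upload filtering rather than law-enforcement reporting.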

    • LeylaaLovee@lemmy.blahaj.zone
      1 year ago

      This dude isn’t even whining about the false positives, they’re complaining that it would require a repository of CP to train the model. Which yes, some are certainly being trained with the real deal. But with law enforcement and tech companies already having massive amounts of CP for legal reasons, why the fuck is there even an issue with having an AI do something with it? We already have to train mods on what CP looks like; there is no reason it’s more moral to put a human through this than a machine.