
Stanford Report Fuels AI Ethics Debate

Key Takeaway
Stanford found over 1,000 verifiable instances of child sexual abuse material (CSAM) in the LAION-5B dataset, sparking criticism and debate around content moderation in open-source AI.

Summary
The report from the Stanford Internet Observatory details the methodology used to identify CSAM in LAION-5B, including the use of unsafe-content classifiers, PhotoDNA hashes,…