Report: Twitter Fails to Block All Child Pornography

The social-media website has apparently failed to block images of child sexual abuse, with researchers detecting several dozen known images of illegal pornographic material on the platform from March through May.

Twitter screen (photo: Unsplash)

Though Twitter appeared to correct the problem, it has imposed new fees for access to the application programming interface (API) that researchers use to monitor how well the platform blocks child pornography, The Wall Street Journal reported.

The Wall Street Journal’s report was based on research by the Stanford Internet Observatory, which studied child-protection issues across multiple social-media platforms. Using a computer program, the researchers analyzed a data set of about 100,000 tweets collected from March 12 to May 20 and found more than 40 images on Twitter flagged as CSAM (child sexual abuse material) in the databases that companies use to screen content.

“This is one of the most basic things you can do to prevent CSAM online, and it did not seem to be working,” David Thiel, chief technologist at the Stanford Internet Observatory and report co-author, told The Wall Street Journal.

Thiel said it was “a surprise” to get any hits on “a small Twitter dataset.” The researchers used PhotoDNA, a digital image-signature technology, together with their own software to scan for the images; they did not view the images themselves.
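
The hash-matching approach the researchers describe can be illustrated with a short sketch. PhotoDNA itself is a proprietary Microsoft technology that computes a perceptual signature robust to resizing and re-encoding; as a stand-in, the hypothetical Python sketch below uses an exact cryptographic hash checked against a placeholder list of known-bad digests, which captures the screening idea without reproducing PhotoDNA.

```python
import hashlib

# Hypothetical set of known-bad digests, standing in for the industry
# hash databases the researchers matched against (values are placeholders).
KNOWN_BAD_DIGESTS = {
    "placeholder-digest-1",
    "placeholder-digest-2",
}

def digest_media(raw_bytes: bytes) -> str:
    """Hash the raw media bytes without decoding or displaying them,
    mirroring how the researchers avoided viewing any matched images."""
    return hashlib.sha256(raw_bytes).hexdigest()

def is_flagged(raw_bytes: bytes) -> bool:
    """Return True if the media's digest appears in the known-bad set."""
    return digest_media(raw_bytes) in KNOWN_BAD_DIGESTS
```

Unlike this exact-match sketch, a perceptual signature such as PhotoDNA also matches near-duplicates of known images, which is why this kind of screening is considered the baseline defense Thiel describes.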

Twitter has previously said it uses PhotoDNA and other tools to detect CSAM, but it did not comment to The Wall Street Journal on whether it still uses PhotoDNA. The Stanford researchers said Twitter told them it has found some false positives in the CSAM databases, which the platform’s operators manually filter out, and that researchers might continue to see such false positives.

The platform has touted its efforts to combat child sexual exploitation. It reported suspending about 404,000 accounts in January for creating or engaging with CSAM.

Research on Twitter depends on access through its API. Twitter now charges for that access, which could make analysis of the platform unaffordable for researchers, The Wall Street Journal reported. The Stanford Internet Observatory has stopped using Twitter’s enterprise-level API because of the new costs.
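
For context, the kind of programmatic access at issue can be sketched as follows. This assumes Twitter’s v2 recent-search endpoint and a bearer token issued under one of the paid access tiers; the token and query below are placeholders.

```python
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def fetch_recent_tweets(query: str, max_results: int = 100) -> dict:
    """Fetch one page of recent tweets matching `query` via the v2 API."""
    resp = requests.get(
        SEARCH_URL,
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"query": query, "max_results": max_results},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

At research volumes (the Stanford study analyzed about 100,000 tweets plus attached media), the paid access tiers are what make this kind of monitoring costly.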

The observatory, based at Stanford University, aims to study abuse of the internet in real time. In March, Elon Musk, the owner of Twitter, accused the observatory of being a “propaganda machine” for its work on content moderation during the 2020 U.S. election.

The National Center on Sexual Exploitation (NCOSE), which advocates against sexual abuse and the public harms of pornography, placed Twitter on its 2023 “Dirty Dozen” list, which aims to spotlight major mainstream entities that facilitate, enable or profit from sexual abuse and exploitation. The NCOSE Law Center is representing two plaintiffs who, as teenage boys, were groomed by an abuser into sending sexually explicit videos of themselves. Compilations of the illegal material were then posted and shared on Twitter.

Citing the technology news site TechDirt, the NCOSE said: “Most experts agree that Musk’s actions since purchasing Twitter have so far served to make the crime of child sexual exploitation worse.”