Addressing Child Exploitation on Federated Social Media

New report finds an increasingly decentralized social media landscape offers users more choice, but poses technical challenges for addressing child exploitation and other online abuse.

For the first time in nearly two decades, the social media landscape is undergoing a remarkable shift. Amid rising dissatisfaction with the largest social media companies, decentralized social networks have gained significant attention and many millions of new users.

The Fediverse, a decentralized social network of interconnected spaces that are each independently managed with unique rules and cultural norms, has seen a surge in popularity. Mastodon is one of the most well-known decentralized projects, and even Meta’s Threads has announced future support for ActivityPub, the technical protocol that powers the Fediverse.

Decentralization has many potential advantages for users seeking greater choice and control over their data and social preferences, but it also poses significant challenges for online trust and safety.

Each instance in a federated social media network has its own content policy, with volunteer administrators typically moderating content and enforcing guidelines. The Fediverse offers few technical measures, and has few dedicated experts, to set rules and handle content moderation for imagery of violence or self-harm, child abuse, hate speech, terrorist propaganda, or misinformation.

In a new report, Stanford Internet Observatory researchers examine the challenges of combating child sexual exploitation on decentralized social media and offer new findings and recommendations for addressing child safety issues on the Fediverse.

Analysis over a two-day period found 112 matches for known child sexual abuse material (CSAM), in addition to nearly 2,000 posts that used the 20 most common hashtags indicating the exchange of abuse material. The researchers reported the CSAM matches to the National Center for Missing and Exploited Children.

The report finds that child safety challenges span decentralized social media networks and require a collective response. Current tools for addressing child sexual exploitation and abuse online—such as PhotoDNA and mechanisms for detecting abusive accounts or recidivism—were developed for centrally managed services and must be adapted for the unique architecture of the Fediverse and similar decentralized social media projects.
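To illustrate what hash-based detection might look like if adapted to a federated setting, the sketch below shows a hypothetical moderation hook that an instance could run before storing or federating uploaded media. The function names and the known-hash list are invented for this example; production systems rely on perceptual hashing services such as PhotoDNA, which can match re-encoded or lightly edited copies and are not publicly available, rather than the exact cryptographic hashes used here.

```python
import hashlib

# Hypothetical example: a set of SHA-256 digests of known abuse material,
# as might be distributed to vetted platforms by a clearinghouse.
# Real deployments use perceptual hashes (e.g., PhotoDNA), which tolerate
# re-encoding and small edits; exact digests only catch identical files.
KNOWN_HASHES: set[str] = set()  # populated from a vetted hash list


def media_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of an uploaded media file."""
    return hashlib.sha256(data).hexdigest()


def should_block_upload(data: bytes) -> bool:
    """Flag an upload whose digest appears in the known-hash list.

    An instance could call this before accepting media or federating a
    post, routing matches to a human moderator and a report to NCMEC.
    """
    return media_digest(data) in KNOWN_HASHES
```

Even this minimal check highlights the architectural gap the report describes: on a centralized platform one operator maintains the hash list and the reporting pipeline, whereas in the Fediverse each independently run instance would need access to vetted hash lists and its own reporting workflow.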

Read More


Registration Open for the 2023 Trust and Safety Research Conference

Tickets are on sale for the Stanford Internet Observatory's Trust and Safety Research Conference, to be held September 28-29, 2023. Lock in early bird prices by registering before August 1.

Fake Profiles, Real Children

A Look at the Use of Stolen Child Imagery in Social Media Role-Playing Games

New report finds generative machine learning exacerbates online sexual exploitation

The Stanford Internet Observatory and Thorn find that rapid advances in generative machine learning make it possible to create realistic imagery that facilitates child sexual exploitation.