The Feasibility of Algorithmic Detection and Decentralised Moderation for Protecting Women from Online Abuse

01/17/2023
by Sarah Barrington, et al.

Online abuse is an increasingly prevalent issue in modern society, with 41 percent of Americans reporting in 2021 that they had experienced online harassment in some capacity. People who identify as women, in particular, can be subjected to a wide range of abusive behavior online, with gender-specific experiences cited broadly in recent literature across fields such as blogging, politics, and journalism. In response to this rise in abusive content, platforms have largely employed "individualistic moderation" approaches, which aim to protect users from harmful content by screening and managing individual interactions or accounts. Yet previous work by the author of this paper has shown that, for women in particular, these approaches are often ineffective, failing to protect users from multidimensional abuse that spans prolonged time periods, different platforms, and varying interaction types. In recognition of the growing complexity of content moderation, platforms are beginning to outsource it to users in a new, decentralized approach. The goal of this research is to examine the feasibility of using multidimensional abuse indicators in a Twitter-based moderation algorithm designed to protect women from female-targeted online abuse. This research outlines three indicators of multidimensional abuse, explores how these indicators can be extracted as features from Twitter data, and proposes a technical framework for deploying an end-to-end moderation algorithm using these features.
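To make the feature-extraction step concrete, the sketch below shows how per-target indicators of this kind might be computed from raw interaction records. The abstract does not specify the paper's actual feature set, so the three indicators here (temporal persistence, interaction-type diversity, and sender concentration), along with the field names and schema, are illustrative assumptions only.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Interaction:
    """One flagged interaction directed at a target account.
    Field names are hypothetical, not the paper's actual schema."""
    sender_id: str
    kind: str            # e.g. "reply", "mention", "quote", "dm"
    timestamp: datetime

# Assumed vocabulary of interaction types; the real taxonomy may differ.
KNOWN_KINDS = {"reply", "mention", "quote", "dm"}

def extract_features(interactions: list[Interaction]) -> dict[str, float]:
    """Compute three illustrative multidimensional-abuse indicators
    for a single target from their flagged interactions."""
    if not interactions:
        return {"persistence_days": 0.0,
                "type_diversity": 0.0,
                "sender_concentration": 0.0}

    times = sorted(i.timestamp for i in interactions)
    # Temporal persistence: how long the abuse spans, in days,
    # capturing abuse over prolonged time periods.
    persistence_days = (times[-1] - times[0]).total_seconds() / 86400

    # Interaction-type diversity: fraction of known interaction kinds
    # observed, capturing abuse across varying interaction types.
    kinds = {i.kind for i in interactions}
    type_diversity = len(kinds & KNOWN_KINDS) / len(KNOWN_KINDS)

    # Sender concentration: share of interactions from the single most
    # active sender, distinguishing one persistent abuser from a pile-on.
    counts = Counter(i.sender_id for i in interactions)
    sender_concentration = max(counts.values()) / len(interactions)

    return {"persistence_days": persistence_days,
            "type_diversity": type_diversity,
            "sender_concentration": sender_concentration}
```

A downstream moderation algorithm of the kind the paper proposes could consume a feature vector like this per target, though how the end-to-end framework scores or acts on these features is beyond what the abstract describes.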
