Reporting, Reviewing, and Responding to Harassment on Twitter

05/13/2015
by J. Nathan Matias et al.

When people experience harassment online, from individual threats or invective to coordinated campaigns of harassment, they can report the harassers and their content to the platform where the harassment occurred. Platforms then evaluate harassment reports against their terms of use and other policies to decide whether to remove content, take action against the alleged harasser, or take no action at all. On Twitter, harassing accounts can be deleted entirely, suspended (with content made unavailable pending appeal or specific changes), or sent a warning. Some platforms, including Twitter and YouTube, grant authorized reporters or trusted flaggers special privileges to identify and report inappropriate content on behalf of others. In November 2014, Twitter granted Women, Action, and the Media (WAM!) this authorized reporter status. Over three weeks, WAM! reviewers assessed 811 incoming reports of harassment and escalated 161 of them to Twitter, ultimately seeing Twitter carry out 70 account suspensions, 18 warnings, and one account deletion. This document presents findings from that three-week project, drawing on both quantitative and qualitative methods. Findings focus on the people reporting and receiving harassment, the kinds of harassment that were reported, Twitter's responses to harassment reports, the process of reviewing harassment reports, and challenges for harassment reporting processes.
