Modeling Islamist Extremist Communications on Social Media using Contextual Dimensions: Religion, Ideology, and Hate

08/18/2019
by Ugur Kursuncu, et al.

Terror attacks have been linked in part to online extremist content. Although tens of thousands of Islamist extremism supporters consume such content, they are a small fraction relative to peaceful Muslims. Efforts to contain the ever-evolving extremism on social media platforms have remained inadequate and mostly ineffective. Divergent extremist and mainstream contexts challenge machine interpretation, with a particular threat to the precision of classification algorithms. Our context-aware computational approach to the analysis of extremist content on Twitter breaks down the persuasion process behind online radicalization into building blocks that acknowledge the inherent ambiguity and sparsity likely to challenge both manual and automated classification. We model this process using a combination of three contextual dimensions -- religion, ideology, and hate -- each elucidating a degree of radicalization and highlighting independent features to render them computationally accessible. We utilize domain-specific knowledge resources for each of these contextual dimensions, such as the Qur'an for religion, the books of extremist ideologues and preachers for political ideology, and a social media hate speech corpus for hate. Our study makes three contributions to reliable analysis: (i) development of a computational approach rooted in the contextual dimensions of religion, ideology, and hate that reflects strategies employed by online Islamist extremist groups, (ii) an in-depth analysis of relevant tweet datasets with respect to these dimensions to exclude likely mislabeled users, and (iii) a framework for understanding online radicalization as a process to assist counter-programming. Given the potentially significant social impact, we evaluate the performance of our algorithms to minimize mislabeling; our approach outperforms a competitive baseline by 10.2%.
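
The abstract describes grounding each contextual dimension in a domain-specific corpus and using the resulting dimension-level signals to analyze tweets. Below is a minimal sketch of one way such per-dimension features could be computed, assuming separate word embeddings trained on each corpus and cosine similarity to corpus centroids as the scoring function; the library (gensim), hyperparameters, and function names are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch (assumptions, not the authors' exact method): score a
# tweet against the three contextual dimensions -- religion, ideology, hate --
# by training one word-embedding model per domain corpus and measuring cosine
# similarity between the tweet embedding and each corpus centroid.

import numpy as np
from gensim.models import Word2Vec  # gensim 4.x API assumed

DIMENSIONS = ["religion", "ideology", "hate"]

def train_dimension_models(corpora):
    """corpora: dict mapping dimension name -> list of tokenized documents."""
    return {
        dim: Word2Vec(sentences=docs, vector_size=100, window=5, min_count=2)
        for dim, docs in corpora.items()
    }

def embed(tokens, model):
    """Average the embeddings of in-vocabulary tokens; zero vector if none."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

def dimension_centroids(corpora, models):
    """Mean embedding of each dimension's corpus, used as its reference vector."""
    return {
        dim: np.mean([embed(doc, models[dim]) for doc in corpora[dim]], axis=0)
        for dim in DIMENSIONS
    }

def dimension_features(tweet_tokens, models, centroids):
    """Return a 3-d vector: tweet similarity to religion, ideology, and hate."""
    feats = []
    for dim in DIMENSIONS:
        t = embed(tweet_tokens, models[dim])
        c = centroids[dim]
        denom = np.linalg.norm(t) * np.linalg.norm(c)
        feats.append(float(t @ c / denom) if denom else 0.0)
    return np.array(feats)
```

In practice, scores like these would be combined with other user- and content-level features before classification; the abstract does not specify the exact feature construction.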
