Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest

by Basileal Imana, et al.

Relevance estimators are algorithms used by major social media platforms to determine what content is shown to users and the order in which it is presented. These algorithms aim to personalize each user's experience, increasing engagement and, therefore, platform revenue. However, at the scale at which these platforms operate, there are concerns that relevance-estimation and personalization algorithms are opaque and can produce outcomes that are harmful to individuals or society. Legislation has been proposed in both the U.S. and the E.U. that would mandate auditing of social media algorithms by external researchers. But auditing at scale risks disclosure of users' private data and platforms' proprietary algorithms, and thus far there has been no concrete technical proposal that can provide such auditing. Our goal is to propose a new method for platform-supported auditing that can meet the goals of this proposed legislation. The first contribution of our work is to enumerate the challenges and the limitations of existing auditing methods for implementing these policies at scale. Second, we suggest that limited, privileged access to relevance estimators is the key to enabling generalizable platform-supported auditing of social media platforms by external researchers. Third, we show that platform-supported auditing need not risk user privacy nor disclosure of platforms' business interests by proposing an auditing framework that protects against these risks. For a particular fairness metric, we show that ensuring privacy imposes only a small constant-factor increase (6.34x as an upper bound, and 4x for typical parameters) in the number of samples required for accurate auditing. Our technical contributions, combined with ongoing legal and policy efforts, can enable public oversight of how social media platforms affect individuals and society by moving past the privacy-vs-transparency hurdle.
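To make the constant-factor claim concrete, the following minimal sketch applies the two factors quoted in the abstract (6.34x upper bound, 4x for typical parameters) to a hypothetical baseline sample count. The baseline value of 10,000 is an arbitrary illustration, not a figure from the paper, and the rounding convention is our own.

```python
# Illustrative only: the 6.34x and 4x factors come from the abstract;
# the baseline sample count below is a made-up example value.
UPPER_BOUND_FACTOR = 6.34   # worst-case blow-up from ensuring privacy
TYPICAL_FACTOR = 4.0        # blow-up for typical parameters

def samples_with_privacy(baseline: int, factor: float) -> int:
    """Samples needed once the privacy-preserving audit mechanism is
    applied, given a baseline requirement without privacy protection.
    Rounded to the nearest whole sample for illustration."""
    return round(baseline * factor)

baseline_samples = 10_000  # hypothetical non-private requirement
worst_case = samples_with_privacy(baseline_samples, UPPER_BOUND_FACTOR)
typical = samples_with_privacy(baseline_samples, TYPICAL_FACTOR)
```

Even in the worst case, the overhead is a fixed multiplicative constant, so the auditing approach scales with the same asymptotic sample complexity as a non-private audit.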




