Fairness-Preserving Text Summarization

10/22/2018
by Abhisek Dash et al.

As the amount of textual information grows rapidly, text summarization algorithms are increasingly being used to provide users with a quick overview of the information content. Traditionally, summarization algorithms have been evaluated only on how well they match human-written summaries (as measured by ROUGE scores). In this work, we propose to evaluate summarization algorithms from a completely new perspective. Considering that an extractive summarization algorithm selects a subset of the textual units in the input data for inclusion in the summary, we investigate whether this selection is fair. Specifically, if the data to be summarized come from (or cover) different socially salient groups (e.g., men or women, Caucasians or African-Americans), different political groups (e.g., Republicans or Democrats), or different news media sources, then we check whether the generated summaries fairly represent these different groups or sources. Our evaluation over several real-world datasets shows that existing summarization algorithms often represent the groups very differently from their distributions in the input data. More importantly, some groups are frequently under-represented in the generated summaries. To reduce such adverse impacts, we propose a novel fairness-preserving summarization algorithm, 'FairSumm', which produces high-quality summaries while ensuring fairness. To our knowledge, this is the first attempt to produce fair summarization, and it is likely to open up an interesting research direction.
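To make the fairness criterion concrete, the following is a minimal Python sketch (not the paper's code) of the check described above: it compares each group's share of an extractive summary against that group's share of the input pool of textual units. The function names, the (text, group) tuple representation, and the example data are all illustrative assumptions.

```python
from collections import Counter

def group_shares(units):
    """Fraction of textual units belonging to each group.

    `units` is a list of (text, group) tuples; an assumed representation.
    """
    counts = Counter(group for _, group in units)
    total = len(units)
    return {g: c / total for g, c in counts.items()}

def representation_gap(input_units, summary_units):
    """Per-group difference between summary share and input share.

    A negative value means the group is under-represented in the summary
    relative to its presence in the input data.
    """
    in_shares = group_shares(input_units)
    out_shares = group_shares(summary_units)
    return {g: out_shares.get(g, 0.0) - share for g, share in in_shares.items()}

# Hypothetical example: 8 textual units split evenly between groups A and B,
# but the 3-unit summary keeps two units from A and only one from B.
pool = [("unit1", "A"), ("unit2", "A"), ("unit3", "A"), ("unit4", "A"),
        ("unit5", "B"), ("unit6", "B"), ("unit7", "B"), ("unit8", "B")]
summary = [("unit1", "A"), ("unit2", "A"), ("unit5", "B")]

print(representation_gap(pool, summary))
# A: ~ +0.17, B: ~ -0.17 -> group B is under-represented in the summary.
```

Under this view, a fairness-preserving summarizer such as FairSumm would aim to keep these per-group gaps close to zero while still optimizing summary quality.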
