Statistical Discrimination in Ratings-Guided Markets
We study statistical discrimination against individuals based on payoff-irrelevant social identities in markets where ratings and recommendations facilitate social learning among users. Although ratings and recommendation algorithms promise to be fair and free of human bias and prejudice, we identify a vulnerability of ratings-based social learning to discriminatory inferences about social groups. In our model, users' equilibrium attention decisions may cause data to be sampled differentially across groups, so that differential inferences about individuals can emerge based on their group identities. We explore policy implications both for regulating trading relationships and for algorithm design.
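The sampling mechanism described above can be illustrated with a minimal simulation sketch (not the paper's model; all parameters here are hypothetical): two groups of sellers have identical true quality, but one group receives less attention and therefore fewer ratings, so a standard shrinkage estimate of quality stays closer to the market-wide prior for the under-sampled group, producing systematically different inferences from identical fundamentals.

```python
import random
import statistics

random.seed(0)

TRUE_MEAN = 0.7    # identical true quality for both groups (assumed value)
PRIOR_MEAN = 0.5   # market-wide prior belief about quality (assumed value)
NOISE_SD = 0.3     # noise in individual ratings (assumed value)

def posterior_mean(ratings, prior_mean=PRIOR_MEAN, prior_weight=5):
    """Simple shrinkage estimator: with few ratings, the estimate
    stays close to the prior rather than the seller's true quality."""
    n = len(ratings)
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)

def simulate_group(n_sellers, ratings_per_seller):
    """Estimate quality for each seller in a group from noisy ratings."""
    estimates = []
    for _ in range(n_sellers):
        ratings = [TRUE_MEAN + random.gauss(0, NOISE_SD)
                   for _ in range(ratings_per_seller)]
        estimates.append(posterior_mean(ratings))
    return estimates

# Group A attracts more user attention, hence more ratings per seller;
# group B is under-sampled despite identical underlying quality.
est_a = simulate_group(1000, 20)
est_b = simulate_group(1000, 2)

print("avg estimate, group A:", round(statistics.mean(est_a), 3))
print("avg estimate, group B:", round(statistics.mean(est_b), 3))
```

Because ratings for group B are scarce, its average estimated quality is pulled toward the prior and ends up below group A's, even though both groups have the same true quality; this is one way differential sampling alone can generate group-based differences in inference.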