Truth Serums for Massively Crowdsourced Evaluation Tasks

07/25/2015
by Vijay Kamble, et al.

A major challenge in crowdsourced evaluation tasks, such as labeling objects or grading assignments in online courses, is eliciting truthful responses from agents in the absence of verifiability. In this paper, we propose new reward mechanisms for such settings that, unlike many previously studied mechanisms, impose minimal assumptions on the structure and knowledge of the underlying generating model, can account for heterogeneity in the agents' abilities, require no extraneous elicitation from them, and allow their beliefs to be (almost) arbitrary. These mechanisms have the simple and intuitive structure of an output agreement mechanism: an agent gets a reward if her evaluation matches that of her peer. Unlike the classic output agreement mechanism, however, this reward is not the same across evaluations; it is inversely proportional to an appropriately defined popularity index of each evaluation. The popularity indices are computed by leveraging the existence of a large number of similar tasks, a typical characteristic of these settings. Experiments on MTurk workers demonstrate higher efficacy (with a p-value of 0.02) of these mechanisms in inducing truthful behavior compared to the state of the art.
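As a rough illustration of this payment rule (a sketch, not the paper's exact formulation), the snippet below assumes the popularity index of an answer is simply its empirical frequency across a pooled batch of similar tasks, and pays an agent 1/popularity when her answer agrees with that of a randomly matched peer on the same task.

```python
from collections import Counter
from typing import Dict, Hashable, List


def popularity_indices(all_answers: List[Hashable]) -> Dict[Hashable, float]:
    """Empirical frequency of each answer across a large batch of similar tasks.

    This is one plausible choice of 'popularity index'; the paper's precise
    definition may differ.
    """
    counts = Counter(all_answers)
    total = sum(counts.values())
    return {ans: c / total for ans, c in counts.items()}


def reward(agent_answer: Hashable,
           peer_answer: Hashable,
           popularity: Dict[Hashable, float]) -> float:
    """Output-agreement payment scaled inversely with the answer's popularity.

    Zero if the agent and her peer disagree; otherwise the reward grows as
    the matched answer becomes rarer across the pool of similar tasks.
    """
    if agent_answer != peer_answer:
        return 0.0
    return 1.0 / popularity[agent_answer]


# Hypothetical usage: answers pooled from many similar labeling tasks.
pooled = ["cat", "cat", "dog", "cat", "bird", "dog", "cat", "bird"]
pop = popularity_indices(pooled)
print(reward("bird", "bird", pop))  # rare agreement  -> large reward (4.0)
print(reward("cat", "cat", pop))    # common agreement -> small reward (2.0)
print(reward("cat", "dog", pop))    # disagreement     -> 0.0
```

The inverse-popularity scaling is what distinguishes this from plain output agreement: matching on a rare answer is rewarded more than matching on a common one, which removes the incentive to herd on the most popular response.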
