The Flaws of Policies Requiring Human Oversight of Government Algorithms
Policymakers around the world are increasingly considering how to prevent government uses of algorithms from producing injustices. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. Despite the widespread turn to human oversight, these policies rest on an uninterrogated assumption: that people are able to oversee algorithmic decision-making. In this article, I survey 40 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a more stringent approach for determining whether and how to incorporate algorithms into government decision-making. First, policymakers must critically consider whether it is appropriate to use an algorithm at all in a specific context. Second, before deploying an algorithm alongside human oversight, agencies or vendors must conduct preliminary evaluations of whether people can effectively oversee the algorithm.