Zombies in the Loop? People are Insensitive to the Transparency of AI-Powered Moral Advisors
Departing from the common assumption that AI must be transparent to be trusted, we find that users trustfully accept ethical advice from transparent and opaque AI-powered algorithms alike. Even when transparency reveals information that warns against the algorithm, they continue to follow its advice. We conducted online experiments in which participants took the role of decision-makers receiving AI-powered advice on an ethical dilemma. We manipulated the information disclosed about the algorithm to study its influence. Our findings suggest that AI is overtrusted rather than distrusted, and that users need digital literacy to benefit from transparency.