Mediation Challenges and Socio-Technical Gaps for Explainable Deep Learning Applications

07/16/2019
by Rafael Brandão, et al.

The presumed data owners' right to explanations brought about by the General Data Protection Regulation in Europe has shed light on the social challenges of explainable artificial intelligence (XAI). In this paper, we present a case study with Deep Learning (DL) experts from a research and development laboratory focused on delivering industrial-strength AI technologies. Our aim was to investigate the social meaning (i.e., meaning to others) that DL experts assign to what they do, given a richly contextualized and familiar domain of application. Using qualitative research techniques to collect and analyze empirical data, our study showed that participating DL experts did not spontaneously engage in considerations about the social meaning of the machine learning models they build. Moreover, when explicitly prompted to do so, these experts expressed the expectation that, in real-world DL applications, mediators will be available to bridge the gap between the technical meanings that drive DL work and the social meanings that AI technology users assign to it. We concluded that the research incentives and values currently guiding the participants' scientific interests and conduct are at odds with those required to face some of the scientific challenges involved in advancing XAI, and thus in responding to the alleged data owners' right to explanations or to similar societal demands emerging from current debates. As a concrete contribution to mitigating what seems to be a more general problem, we propose three preliminary XAI Mediation Challenges with the potential to bring together the technical and social meanings of DL applications, and to foster much-needed interdisciplinary collaboration between AI and Social Sciences researchers.


