Explainability Auditing for Intelligent Systems: A Rationale for Multi-Disciplinary Perspectives

08/05/2021
by   Markus Langer, et al.

National and international guidelines for trustworthy artificial intelligence (AI) consider explainability to be a central facet of trustworthy systems. This paper outlines a multi-disciplinary rationale for explainability auditing. Specifically, we propose that explainability auditing can ensure the quality of a system's explainability in applied contexts and can serve as the basis for certification, a means of communicating whether systems meet certain explainability standards and requirements. Moreover, we emphasize that explainability auditing needs to take a multi-disciplinary perspective, and we provide an overview of four perspectives (technical, psychological, ethical, legal) and their respective benefits with respect to explainability auditing.
