Explainability Case Studies
Explainability is one of the key ethical concepts in the design of AI systems. However, attempts to operationalize this concept have so far tended to focus on approaches such as new software for model interpretability or guidelines with checklists. Existing tools and guidance rarely incentivize the designers of AI systems to think critically and strategically about the role of explanations in their systems. We present a set of case studies of a hypothetical AI-enabled product, which serves as a pedagogical tool to empower product designers, developers, students, and educators to develop a holistic explainability strategy for their own products.