Artificial intelligence (AI), and in particular machine learning algorithms, are of increasing importance in many application areas, but interpretability and understandability, as well as responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing human trust in these systems, are still largely missing. Big industrial players, including Google, Microsoft, and Apple, have become aware of this gap and have recently published their own guidelines for the use of AI in order to promote fairness, trust, interpretability, and other goals. Interactive visualization is one of the technologies that may help to increase trust in AI systems. During the seminar, we discussed the requirements for trustworthy AI systems as well as the technological possibilities that interactive visualizations provide for increasing human trust in AI.
The increasing use of artificial intelligence (AI) technologies across application domains has prompted our society to pay closer attention to AI's trustworthiness, fairness, interpretability, and accountability. To foster trust in AI, it is important to consider the potential of interactive visualization and how it can help build trust in AI systems. This manifesto discusses the relevance of interactive visualization and makes the following four claims: i) trust is not a technical problem, ii) trust is dynamic, iii) visualization cannot address all aspects of trust, and iv) visualization is crucial for human agency in AI.
An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks
(2021)
Decision explanations for machine learning black-box models are often generated by applying Explainable AI (XAI) techniques. However, many proposed XAI methods produce unverified outputs, and evaluation and verification usually rely on human visual interpretation of individual images or texts. In this preregistration, we propose an empirical study and benchmark framework for applying attribution methods developed for image and text data to time series. We present a methodology that automatically evaluates and ranks attribution techniques on time series, using perturbation methods to identify reliable approaches.
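To illustrate the perturbation idea behind such a benchmark (a minimal sketch, not the authors' actual framework), the following Python fragment perturbs the time steps an attribution method marks as most important and measures the resulting drop in model confidence. Here, model is assumed to map a 1-D NumPy array (one series) to a scalar class confidence, and each entry in methods to return a per-time-step attribution array; all names are hypothetical.

import numpy as np

def perturbation_score(model, x, attribution, k=10, baseline=0.0):
    # Confidence of the model on the unmodified series.
    original_conf = model(x)
    # Indices of the k time steps with the largest absolute attribution.
    top_k = np.argsort(np.abs(attribution))[-k:]
    # Replace the supposedly important steps with an uninformative value.
    perturbed = x.copy()
    perturbed[top_k] = baseline
    # A faithful attribution should cause a large confidence drop.
    return original_conf - model(perturbed)

def rank_attribution_methods(model, series, methods, k=10):
    # Average the confidence drop over a set of series per method;
    # a higher mean drop ranks the method as more reliable.
    scores = {
        name: np.mean([perturbation_score(model, x, attribute(model, x), k)
                       for x in series])
        for name, attribute in methods.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

Variants of this scheme, such as substituting noise or the series mean instead of zeros, or perturbing the least-attributed steps as a sanity check, fit the same interface and sketch the kind of design space such a benchmark can explore.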