TY - CPAPER
U1 - Conference publication
A1 - Oelke, Daniela
A1 - Keim, Daniel A.
A1 - Chau, Polo
A1 - Endert, Alex
T1 - Interactive Visualization for Fostering Trust in AI
T2 - Dagstuhl Reports
N2 - Artificial intelligence (AI), and in particular machine learning algorithms, are of increasing importance in many application areas, but interpretability and understandability, as well as responsibility, accountability, and fairness of the algorithms' results (all crucial for increasing human trust in these systems) are still largely missing. Big industrial players, including Google, Microsoft, and Apple, have become aware of this gap and have recently published their own guidelines for the use of AI in order to promote fairness, trust, interpretability, and other goals. Interactive visualization is one of the technologies that may help to increase trust in AI systems. During the seminar, we discussed the requirements for trustworthy AI systems as well as the technological possibilities provided by interactive visualizations for increasing human trust in AI.
KW - accountability
KW - artificial intelligence
KW - explainability
KW - fairness
KW - interactive visualization
KW - machine learning
KW - responsibility
KW - trust
KW - understandability
Y1 - 2021
SN - 2192-5283
DO - https://doi.org/10.4230/DagRep.10.4.37
VL - 10
IS - 4
SP - 37
EP - 42
PB - Schloss-Dagstuhl - Leibniz Zentrum für Informatik
ER -