The increasing use of artificial intelligence (AI) technologies across application domains has prompted our society to pay closer attention to AI’s trustworthiness, fairness, interpretability, and accountability. To foster trust in AI, it is important to consider the potential of interactive visualization and how such visualizations can help build trust in AI systems. This manifesto discusses the relevance of interactive visualizations and makes the following four claims: i) trust is not a technical problem, ii) trust is dynamic, iii) visualization cannot address all aspects of trust, and iv) visualization is crucial for human agency in AI.
Artificial intelligence (AI), and in particular machine learning algorithms, is of increasing importance in many application areas, but interpretability and understandability, as well as responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing humans' trust in these systems, are still largely missing. Big industrial players, including Google, Microsoft, and Apple, have become aware of this gap and have recently published their own guidelines for the use of AI in order to promote fairness, trust, interpretability, and other goals. Interactive visualization is one of the technologies that may help to increase trust in AI systems. During the seminar, we discussed the requirements for trustworthy AI systems as well as the technological possibilities that interactive visualizations provide to increase human trust in AI.
The use of artificial intelligence continues to impact a broad variety of domains, application areas, and people. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing humans' trust in these systems, are still largely missing. The purpose of this seminar is to understand how these components factor into a holistic view of trust. Further, this seminar seeks to identify design guidelines and best practices for building interactive visualization systems that calibrate trust.
Machine learning (ML) models are increasingly used for predictive tasks, yet traditional data-based models relying on expert knowledge remain prevalent. This paper examines the enhancement of an expert model for thermomechanical fatigue (TMF) life prediction of turbine components using ML. Using explainable artificial intelligence (XAI) methods such as Permutation Feature Importance (PFI) and SHAP values, we analyzed the patterns and relationships learned by the ML models. Our findings reveal that ML models can be trained on TMF data, but integrating domain knowledge remains crucial. The study concludes with a proposal to further refine the expert model using insights gained from ML models, aiming for a synergistic improvement.
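As a rough illustration of the two XAI methods named above, the following minimal Python sketch computes permutation feature importance and SHAP values for a generic regression model. The feature names, the random-forest regressor, and the synthetic target are placeholder assumptions for illustration, not the paper's actual TMF model or data.

```python
# Minimal sketch: permutation feature importance (PFI) and SHAP values for a
# regression model. Features and target are hypothetical stand-ins for TMF data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "strain_range": rng.uniform(0.2, 1.0, 500),      # hypothetical input features
    "max_temperature": rng.uniform(600, 950, 500),
    "hold_time": rng.uniform(0, 300, 500),
})
# Synthetic stand-in for TMF life; the real target would come from test data.
y = 1e4 * X["strain_range"] ** -1.5 * np.exp(-X["max_temperature"] / 400)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# PFI: how much the test score drops when one feature is randomly shuffled.
pfi = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(X.columns, pfi.importances_mean):
    print(f"{name}: {imp:.3f}")

# SHAP: per-sample additive attributions for the same model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```

Comparing which features dominate in both views is one way to check whether the ML model has picked up relationships that agree with the expert model.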
Preprint: Visual Explanations with Attributions and Counterfactuals on Time Series Classification
(2023)
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods at varying abstraction levels. XAI techniques at a global level explain model behavior, while at a local level they explain individual sample predictions. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals for time series classification. In particular, we adapt local XAI techniques (attributions) that were developed for traditional data types (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model's decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification at both the global and local level. We continuously collected and incorporated expert user feedback, as well as insights based on the experts' domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into the explanations. Lastly, we present three use cases verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, and (3) identify the reasons for misclassifications.
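The following minimal Python sketch illustrates the core of the described global overview: local attributions are computed for every sample, and the resulting attribution vectors are projected to two dimensions. The toy 1D-CNN classifier, Captum's IntegratedGradients, and t-SNE are illustrative assumptions; the paper's actual attribution and projection techniques may differ.

```python
# Sketch: per-sample attributions on time series, projected to 2D for a global view.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients
from sklearn.manifold import TSNE

class TinyTSClassifier(nn.Module):          # hypothetical 1D-CNN time series classifier
    def __init__(self, length=128, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, n_classes))

    def forward(self, x):                   # x: (batch, 1, length)
        return self.net(x)

model = TinyTSClassifier().eval()
X = torch.randn(200, 1, 128)                # placeholder time series dataset
preds = model(X).argmax(dim=1)

# Local explanation per sample: attribution of each time point to the prediction.
ig = IntegratedGradients(model)
attributions = ig.attribute(X, target=preds)            # shape (200, 1, 128)

# Global overview: project all attribution vectors to 2D; clusters can hint at
# model strategies, behavior trends, and decision boundaries.
emb = TSNE(n_components=2, random_state=0).fit_transform(
    attributions.reshape(len(X), -1).detach().numpy())
print(emb.shape)                                         # (200, 2)
```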
An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks
(2021)
Decision explanations of machine learning black-box models are often generated by applying explainable AI (XAI) techniques. However, many proposed XAI methods produce unverified outputs, and evaluation and verification are usually carried out through human visual inspection of individual images or texts. In this preregistration, we propose an empirical study and benchmark framework for applying attribution methods for neural networks, originally developed for image and text data, to time series. We present a methodology to automatically evaluate and rank attribution techniques on time series using perturbation methods in order to identify reliable approaches.
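A minimal sketch of the perturbation idea, assuming a generic classifier callable and NumPy arrays: occlude the top-k attributed time points per sample and compare the resulting accuracy drop against a random-occlusion baseline. The function names and the zero-masking strategy are illustrative choices, not the benchmark's exact protocol.

```python
# Sketch: perturbation-based evaluation of attribution methods on time series.
import numpy as np

def perturbation_score(model_predict, X, y, attributions, k=10):
    """Accuracy after occluding the top-k attributed time points per sample.

    model_predict: callable (n, length) -> predicted labels (n,)
    X:             array (n, length) of time series
    attributions:  array (n, length) of relevance scores, same shape as X
    """
    X_pert = X.copy()
    for i in range(len(X)):
        top = np.argsort(attributions[i])[-k:]          # most relevant time points
        X_pert[i, top] = 0.0                            # occlude with zeros
    return (model_predict(X_pert) == y).mean()

def random_baseline(model_predict, X, y, k=10, seed=0):
    rng = np.random.default_rng(seed)
    rand_attr = rng.random(X.shape)                     # random "relevance" scores
    return perturbation_score(model_predict, X, y, rand_attr, k=k)

# A reliable attribution method should yield a clearly lower score (i.e., a larger
# accuracy drop) than the random baseline; methods can then be ranked by this gap.
```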
The automatic extraction of product and manufacturing information (PMI) from technical CAD drawings is a prerequisite for manufacturing and quality control in production. Due to the particular style of CAD drawings and the limited availability of training and test data, digitizing CAD drawings from raster images remains a challenge for optical character recognition (OCR) software. This paper presents a novel deep-learning-based framework that addresses this problem by localizing and recognizing geometric dimensioning and tolerancing (GD&T) annotations as well as dimensions in CAD drawings. The framework consists of a central localization module and several downstream pipelines for individual classes of PMI. The performance of the localization module, the text-line recognition network, and the individual pipelines is evaluated on real-world datasets and compared with that of the OCR engine Tesseract.
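As a hedged sketch of the two-stage pipeline and the Tesseract comparison, the code below crops hypothetical localization results from a rasterized drawing and runs each crop through pytesseract as the OCR baseline; `localize_pmi` and `read_pmi_with_tesseract` are placeholders for illustration, not the paper's actual modules.

```python
# Sketch: localize PMI regions on a rasterized CAD drawing, then OCR each crop
# with the Tesseract baseline. The localization step is a hypothetical stand-in
# for the paper's deep-learning localization module.
from PIL import Image
import pytesseract

def localize_pmi(drawing: Image.Image) -> list[tuple[int, int, int, int]]:
    """Hypothetical placeholder: return (left, top, right, bottom) boxes of
    detected PMI regions. The actual framework uses a trained detector."""
    raise NotImplementedError

def read_pmi_with_tesseract(path: str) -> list[str]:
    drawing = Image.open(path).convert("L")              # grayscale raster image
    texts = []
    for box in localize_pmi(drawing):
        crop = drawing.crop(box)
        # Single-line page segmentation (--psm 7) for isolated dimension text.
        texts.append(pytesseract.image_to_string(crop, config="--psm 7").strip())
    return texts
```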