
Preprint: Visual Explanations with Attributions and Counterfactuals on Time Series Classification

  • With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods on varying abstraction levels. XAI techniques on a global level explain model behavior and on a local level explain sample predictions. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals on time series classification. In particular, we adapt local XAI techniques (attributions) that were developed for traditional datasets (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model's decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels. We continuously collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases, verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, and (3) identify the reasons for misclassifications.
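The abstract's global-overview step — computing local attributions for every sample and projecting them to two dimensions — can be sketched minimally. This is an illustrative assumption, not the paper's implementation: a linear classifier stands in for the time series model, gradient×input serves as one possible local attribution method, and PCA stands in for the (unspecified) projection technique.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic two-class time series: class 1 has a bump in the middle.
n_samples, length = 200, 50
X = rng.normal(size=(n_samples, length))
y = rng.integers(0, 2, size=n_samples)
X[y == 1, 20:30] += 2.0

# A linear classifier as a stand-in for the time series model.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Local attribution per sample: gradient x input (exact for a linear model).
attributions = X * clf.coef_  # shape (n_samples, length)

# Project all local explanations to 2D for a global overview,
# where clusters hint at model strategies and decision boundaries.
projection = PCA(n_components=2).fit_transform(attributions)
print(projection.shape)
```

Plotting `projection` colored by predicted class would give the kind of global explanation map the workflow describes, with each point representing one sample's local explanation.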

Metadata
Document Type: Article (unreviewed)
Citation link: https://opus.hs-offenburg.de/8435
Bibliographic Information
Title (English):Preprint: Visual Explanations with Attributions and Counterfactuals on Time Series Classification
Author: Udo Schlegel, Daniela Oelke, Daniel A. Keim, Mennatallah El-Assady
Year of Publication: 2023
Date of First Publication: 2023/07/14
First Page: 1
Last Page: 14
DOI: https://doi.org/10.48550/arXiv.2307.08494
Language: English
Content Information
Institutes: Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (ab 04/2019)
Collections of the Offenburg University: Bibliografie
Tags: Deep Learning; Explainable AI; Time Series Classification; Visual Analytics
Formal Information
Relevance for "Jahresbericht über Forschungsleistungen": Not relevant
Open Access: Diamond Open Access
Licence: Creative Commons CC BY (Attribution) 4.0 International
arXiv Id: http://arxiv.org/abs/2307.08494