Bachelor's thesis, 2021
89 pages
1. Introduction
1.1. Topic and Related Work
1.2. Research Approach: Design Science
1.2.1. Aims and Objectives
1.2.2. Research Questions
1.3. Thesis Outline and Research Methods
2. Background
2.1. Time Series Data
2.2. Basic Understanding of Artificial Intelligence, Machine Learning, and Classification
2.3. Convolutional Neural Network (CNN)
2.4. Explainable Artificial Intelligence (XAI)
2.5. Counterfactual Explanations
2.6. ECG Signal Data
2.7. Openly-accessible ECG Datasets
2.7.1. ECG200
2.7.2. ECG5000
2.7.3. PTB
2.7.4. PTB-XL
3. Native Guide: A Counterfactual Explanation Technique
3.1. Reference Method
3.1.1. Learn or Load Classifier
3.1.2. Class-Activation-Map (CAM)
3.1.3. Finding the Native Guide
3.1.4. Perturbation
3.2. Investigation and Observation of the Method
3.2.1. Comparison of Classifiers
3.2.2. ECG Signal Strength and Wavelength
3.2.3. Swapped Subsequence-Length
3.2.4. Data Quantity, Length and Variety
3.2.5. Different Decision Boundaries
3.3. Experimental Approaches for Optimization
3.3.1. Normalization and Synchronization
3.3.2. Swapping Points instead of Subsequences
3.3.3. Shifted Decision Boundary
4. Evaluation: Expert Interview
4.1. Goal, Structure and Approach
4.2. Expert Background
4.3. Interview Results
4.3.1. Usage of ECG
4.3.2. ECG Data Quality
4.3.3. General Attitude towards Counterfactuals
4.3.4. Plausibility of Counterfactuals
4.3.5. Improvement Ideas
4.3.6. Possible Use-Cases
5. Discussion
6. Conclusion and Future Work
This thesis aims to investigate and optimize the "Native Guide" technique, a generative instance-based method for creating counterfactual explanations for time series data, specifically focusing on electrocardiogram (ECG) classification. The primary objective is to enhance the plausibility and utility of these explanations for medical professionals, evaluating the method through both technical experiments and expert interviews with cardiologists.
2.5. Counterfactual Explanations
What might our life be like if we had made key choices differently? What if we had moved to another city, attended a different university, or chosen to have no kids? It is common to ask questions like these occasionally. In fact, such questions are counterfactuals [6].
We will now take a look at the logical understanding and intuition behind counterfactuals. Factual conditionals state that if one fact is true, then so is another (p ⇒ q) [42]. A factual conditional in natural language would be: “If I wash my hands for 30 sec, then they get clean.”
When the condition is known to be true, it becomes a fact, and we can construct a counterfactual: “If I had not washed my hands for 30 sec, they would not have got clean.” That is just one possible counterfactual. Another variation is: “If I had not washed my hands for 30 sec, they would have got clean.”
The next variation is not possible, because it contradicts the initial assumption, and therefore it is not a valid counterfactual: “If I had washed my hands for 30 sec, they would not have got clean.” Finally, we could also alter the “30 sec” and obtain infinitely many different counterfactuals, such as: “If I had washed my hands for 10 sec, they would have got clean.”
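The same what-if logic underlies counterfactual explanations in machine learning: find a minimal change to the input that flips the model's prediction. As an illustrative sketch only (the toy threshold classifier and the brute-force search below are assumptions for demonstration, not the Native Guide method itself):

```python
def hands_get_clean(wash_seconds: float) -> bool:
    """Toy 'classifier': hands get clean iff washed at least 20 sec (assumed threshold)."""
    return wash_seconds >= 20.0

def counterfactual(wash_seconds: float, step: float = 1.0) -> float:
    """Brute-force search for the closest input that flips the prediction,
    trying ever-larger perturbations in both directions."""
    original = hands_get_clean(wash_seconds)
    delta = step
    while True:
        for candidate in (wash_seconds - delta, wash_seconds + delta):
            if hands_get_clean(candidate) != original:
                return candidate
        delta += step

# "If I had washed my hands for 19 sec instead of 30 sec,
#  they would not have got clean."
print(counterfactual(30.0))  # 19.0
```

The nearest flipping input corresponds to the smallest plausible edit of the factual scenario, which is exactly the proximity criterion that counterfactual explanation methods for time series try to optimize.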
1. Introduction: This chapter introduces the motivation behind explainable AI for ECG classification and outlines the research questions and design science approach of the thesis.
2. Background: Provides the fundamental concepts of time series data, deep learning, explainable AI, and specific details on ECG signals and available datasets.
3. Native Guide: A Counterfactual Explanation Technique: Describes the technical implementation of the Native Guide method and presents experimental optimizations including normalization and decision boundary shifts.
4. Evaluation: Expert Interview: Details the setup and findings of interviews conducted with cardiologists to assess the plausibility and potential clinical use cases of the generated counterfactuals.
5. Discussion: Reflects on the challenges of the Native Guide method, specifically regarding proximity and plausibility, and discusses the implications of data quality and expert feedback.
6. Conclusion and Future Work: Summarizes the thesis findings and suggests future research directions, such as improved diversity criteria and testing with clinical trainees.
Artificial Intelligence, Machine Learning, Deep Learning, Counterfactual Explanations, Electrocardiogram, ECG, Explainable AI, XAI, Time Series Classification, Native Guide, Neural Networks, Healthcare, Signal Processing, Data Augmentation, Expert Interview
The research focuses on making deep learning-based ECG classification more interpretable by generating counterfactual explanations using the "Native Guide" method.
The thesis spans across Artificial Intelligence, Machine Learning (specifically deep learning for time series), Explainable AI (XAI), and medical cardiology.
The goal is to develop and evaluate a technique that can reliably explain why an ECG classification model makes certain predictions, increasing trust and providing clinical insights.
The thesis follows the Design Science research methodology, creating an artifact (the improved Native Guide method) and evaluating it through technical investigations and problem-centered expert interviews.
The main part encompasses the theoretical background, the detailed explanation of the Native Guide algorithm, experimental optimizations (like data normalization and boundary shifting), and the qualitative evaluation by cardiologists.
Key terms include Native Guide, counterfactual explanations, ECG classification, explainable AI, and time series data.
The author introduces normalization and synchronization of ECG signals to ensure the generated counterfactuals remain plausible and comparable to the query signals.
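The normalization step summarized above can be sketched with a standard z-normalization, a common preprocessing choice for time series (the exact scheme used in the thesis may differ):

```python
import numpy as np

def z_normalize(signal: np.ndarray) -> np.ndarray:
    """Scale a 1-D signal to zero mean and unit variance so that
    query and counterfactual signals are directly comparable."""
    std = signal.std()
    if std == 0:
        # Constant signal: only centering is possible.
        return signal - signal.mean()
    return (signal - signal.mean()) / std

# Example: a short (hypothetical) ECG amplitude sequence
ecg = np.array([0.1, 0.5, 1.2, 0.4, 0.0])
norm = z_normalize(ecg)
```

After normalization, differences between a query signal and its counterfactual reflect shape rather than amplitude or offset, which is what makes the comparison meaningful.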
Generally, the experts found the approach promising for medical education and training, though they emphasized the need for high-quality, accurately labeled training data for the system to be clinically useful.
Shifting the decision boundary helps balance the trade-off between the risk of generating wrongly classified counterfactuals and the proximity of the counterfactual to the original data.
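A shifted decision boundary of this kind can be sketched as requiring the classifier's confidence in the counterfactual class to clear 0.5 plus a margin (the margin value here is illustrative, not taken from the thesis):

```python
import numpy as np

def crosses_shifted_boundary(probs: np.ndarray, target_class: int,
                             margin: float = 0.1) -> bool:
    """Accept a candidate counterfactual only if the predicted probability
    of the target class exceeds the shifted boundary 0.5 + margin.
    A larger margin lowers the risk of mislabeled counterfactuals,
    but pushes the accepted counterfactual further from the original signal."""
    return bool(probs[target_class] > 0.5 + margin)

# A candidate barely over 0.5 is rejected with a margin of 0.1,
# but accepted at the plain 0.5 boundary.
probs = np.array([0.48, 0.52])
crosses_shifted_boundary(probs, target_class=1)              # False
crosses_shifted_boundary(probs, target_class=1, margin=0.0)  # True
```

Tuning the margin is therefore a direct knob on the proximity/validity trade-off described above.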

