Publications of the IMLA - Institute for Machine Learning and Analytics (72 entries): 45 conference proceedings, 16 unreviewed articles, 5 reviewed articles, 2 books, 2 book chapters, 1 doctoral thesis, and 1 report; 49 of the entries are Open Access.
This contribution explains fundamental aspects and methods of data science. Following the CRISP-DM process model, the Data Understanding and Data Preparation phases mainly call for techniques of data selection, data preprocessing, and exploratory data analysis. In Modeling, the central task of data science, supervised and unsupervised methods as well as reinforcement learning can be distinguished. The evaluation of a model's quality by means of quality measures is then discussed. The contribution closes with an outlook on further topics such as cognitive computing.
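The quality measures mentioned for the evaluation phase can be made concrete with a small sketch. The function and the label vectors below are illustrative, not taken from the contribution; this assumes the usual binary-classification definitions of accuracy, precision, recall, and F1.

```python
# Hedged sketch: common quality measures for evaluating a binary classifier,
# as used in the Evaluation phase of CRISP-DM. The labels are made-up data.

def quality_measures(y_true, y_pred):
    """Return accuracy, precision, recall and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
acc, prec, rec, f1 = quality_measures(y_true, y_pred)
```

With one false positive and one false negative out of eight samples, all four measures come out to 0.75 here.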
Machine Learning als Schlüsseltechnologie für Digitalisierung: Wie funktioniert maschinelles Lernen? (Machine learning as a key technology for digitalization: how does machine learning work?)
(2019)
Apache Hadoop is a well-known open-source framework for storing and processing huge amounts of data. This paper shows the usage of the framework within a project of the university in cooperation with a semiconductor company. The goal of this project was to supplement the existing data landscape by the facilities of storing and analyzing the data on a new Apache Hadoop based platform.
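The storing-and-analyzing workflow described above is typically expressed as MapReduce jobs. A minimal sketch of the classic word-count pattern, assuming nothing about the project's actual jobs: in a real Hadoop Streaming job the mapper and reducer would read from stdin and write to stdout, but here they are plain Python functions so the logic is runnable on its own.

```python
# Hedged sketch: the word-count MapReduce pattern as used with Hadoop
# Streaming. Function names are illustrative, not from the project.
import itertools

def mapper(lines):
    """Emit (word, 1) pairs -- Hadoop would shuffle/sort these by key."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Sum the counts per word; Hadoop delivers the pairs grouped by key,
    which sorting emulates here."""
    for word, group in itertools.groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

counts = dict(reducer(mapper(["Hadoop stores data", "Hadoop processes data"])))
```

The split into a stateless mapper and a per-key reducer is what lets Hadoop distribute the computation across the cluster that stores the data.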
Einleitung (Introduction)
(2019)
Data Science
(2019)
Like no other term, data science currently stands for the analysis of large amounts of data with analytical concepts from machine learning and artificial intelligence. Now that big data has entered public awareness, and in particular has become available in companies, technologies and methods are needed for analysis wherever classical business intelligence reaches its limits.
This book offers a comprehensive introduction to data science and its practical relevance for companies. It also addresses the integration of data science into an existing business-intelligence ecosystem. Its various contributions explain task areas and methods as well as role and organizational models which, in interplay with concepts and architectures, shape data science. Beyond the fundamentals, topics covered include:
- Data science and artificial intelligence
- Conception and development of data-driven products
- Deep learning
- Self-service in the data science environment
- Data privacy and questions of digital ethics
- Customer churn with Keras/TensorFlow and H2O
- Profitability considerations in the selection and development of data science
- Predictive maintenance
- Scrum in data science projects
Numerous use cases and practical examples provide insights into current experience with data science projects and allow readers to transfer them directly to their daily work.
This paper describes the concept and some results of the project "Menschen Lernen Maschinelles Lernen" (Humans Learn Machine Learning, ML2) of the University of Applied Sciences Offenburg. It brings together students from different degree programs and practitioners from companies around the topic of machine learning. A mixture of blended learning and practical projects ensures a tight coupling of machine learning theory and application. The paper details the phases of ML2 and presents two successful example projects.
Diffracted waves carry high-resolution information that can help interpret fine structural details at a scale smaller than the seismic wavelength. Because of the low signal-to-noise ratio of diffracted waves, it is challenging to preserve them during processing and to identify them in the final data. The traditional approach is therefore to pick the diffractions manually. However, this task is tedious and often prohibitive, so current attention has turned to domain adaptation. Such methods aim to transfer knowledge from a labeled domain to train the model and then infer on the real, unlabeled data. In this regard, it is common practice to create a synthetic labeled training dataset, followed by testing on unlabeled real data. Unfortunately, this procedure may fail because of the gap between the synthetic and the real distribution: synthetic data quite often oversimplifies the problem, and consequently transfer learning becomes a hard and non-trivial procedure. Furthermore, deep neural networks are highly sensitive to cross-domain distribution shift. In this work, we present a deep learning model that builds a bridge between both distributions, creating a semi-synthetic dataset that fills the gap between synthetic and real domains. More specifically, our proposal is a feed-forward, fully convolutional neural network for image-to-image translation that allows inserting synthetic diffractions while preserving the original reflection signal. A series of experiments validates that our approach produces convincing seismic data containing the desired synthetic diffractions.
Generative convolutional deep neural networks, e.g. popular GAN architectures, rely on convolution-based up-sampling methods to produce non-scalar outputs like images or video sequences. In this paper, we show that common up-sampling methods, known as up-convolution or transposed convolution, cause the inability of such models to reproduce the spectral distributions of natural training data correctly. This effect is independent of the underlying architecture, and we show that it can be used to easily detect generated data like deepfakes with up to 100% accuracy on public benchmarks. To overcome this drawback of current generative models, we propose to add a novel spectral regularization term to the training optimization objective. We show that this approach not only allows training spectrally consistent GANs that avoid high-frequency errors; a correct approximation of the frequency spectrum also has positive effects on the training stability and output quality of generative networks.
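The spectral analysis behind this detection idea can be sketched compactly: reduce an image's 2D power spectrum to a 1D profile by averaging over rings of (roughly) equal spatial frequency; up-sampling artifacts then show up as distortions at the high-frequency end of the profile. The integer ring binning below is a common simplification, not the paper's exact pipeline.

```python
# Hedged sketch: azimuthally averaged 1D power spectrum of an image, the
# kind of frequency profile on which spectral deepfake detectors operate.
import numpy as np

def radial_power_spectrum(img):
    """Mean spectral power per integer frequency ring of a 2D image."""
    f = np.fft.fftshift(np.fft.fft2(img))      # center the zero frequency
    power = np.abs(f) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx).astype(int)   # ring index for every pixel
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts                        # average power per ring

rng = np.random.default_rng(0)
profile = radial_power_spectrum(rng.standard_normal((32, 32)))
```

A classifier (or a simple threshold) on such profiles is enough to separate real from generated images when the generator distorts the high-frequency statistics.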
The term attribute transfer refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, like changing the hair color, adding a smile, or enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator.
In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve a semantic change (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. By removing and regenerating only parts of images, our Attribute Transfer Inpainting Generative Adversarial Network (ATI-GAN) is able to use local context information to focus on the attributes while keeping the background unmodified, which yields visually convincing results.
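The inpainting formulation has a simple core: only the masked region is regenerated, and everything outside the mask is copied from the input, so the background is untouched by construction. A minimal sketch with illustrative array names (the real network operates on learned features, not raw pixels like this):

```python
# Hedged sketch: the masked composition at the heart of inpainting-style
# attribute transfer. Pixels inside the mask come from the generator,
# pixels outside it are kept from the original image.
import numpy as np

def compose(original, generated, mask):
    """Blend: take `generated` where mask == 1, keep `original` elsewhere."""
    return mask * generated + (1.0 - mask) * original

original = np.zeros((4, 4))            # stand-in for the input image
generated = np.ones((4, 4))            # stand-in for the generator output
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                   # region to be altered (e.g. the mustache)
out = compose(original, generated, mask)
```

Because the composition is an identity outside the mask, no reconstruction loss is needed to keep the background stable; it simply cannot change.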
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they remain very challenging to train. This is caused in particular by their highly non-convex optimization space, which leads to a number of instabilities. Among them, mode collapse stands out as one of the most daunting. This undesirable event occurs when the model can only fit a few modes of the data distribution while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues and show that mode collapse is related to convergence towards sharp minima. In particular, we observe how the eigenvalues of the generator are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
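The second-order signal used in this analysis, the largest Hessian eigenvalue, is typically estimated without ever forming the Hessian, via power iteration on Hessian-vector products. A minimal sketch on a toy quadratic loss L(w) = ½ wᵀAw, whose Hessian is exactly A so the estimate can be checked; in a GAN, the products would come from automatic differentiation instead. Function names are illustrative.

```python
# Hedged sketch: power iteration with Hessian-vector products, the standard
# way to probe the sharpness (top eigenvalue) of a loss surface.
import numpy as np

def top_eigenvalue(hvp, dim, iters=100, seed=0):
    """Estimate the largest eigenvalue using only the product v -> H v."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = hvp(v)
        v = hv / np.linalg.norm(hv)    # keep the iterate normalized
    return v @ hvp(v)                  # Rayleigh quotient at convergence

A = np.diag([4.0, 1.0, 0.5])           # Hessian of the toy quadratic loss
lam = top_eigenvalue(lambda v: A @ v, dim=3)
```

Tracking this quantity during training is what reveals the correlation between sharp minima of the generator and the onset of mode collapse.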