Deep generative models have recently achieved impressive results in many real-world applications, successfully generating high-resolution and diverse samples from complex datasets. As a consequence, fake digital content has proliferated, raising concern and spreading distrust in image content, and leading to an urgent need for automated ways to detect such AI-generated fake images.
Although many face-editing algorithms appear to produce realistic human faces, closer examination reveals artifacts in certain domains that are often hidden from the naked eye. In this work, we present a simple way to detect such fake face images, so-called DeepFakes. Our method is based on a classical frequency-domain analysis followed by a basic classifier. Compared to previous systems, which need to be fed large amounts of labeled data, our approach shows very good results using only a few annotated training samples and even achieves good accuracy in fully unsupervised scenarios. For the evaluation on high-resolution face images, we combined several public datasets of real and fake faces into a new benchmark: Faces-HQ. Given such high-resolution images, our approach reaches a perfect classification accuracy of 100% when trained on as few as 20 annotated samples. In a second experiment, on the medium-resolution images of the CelebA dataset, our method achieves 100% accuracy in a supervised setting and 96% in an unsupervised one. Finally, on the low-resolution video sequences of the FaceForensics++ dataset, our method achieves 91% accuracy in detecting manipulated videos.
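A classical frequency-domain analysis of the kind described above is often implemented as a 2D Fourier transform followed by an azimuthal (radial) average of the power spectrum, yielding a 1D frequency profile that a basic classifier can separate. The sketch below illustrates that idea; the function names and the exact normalization are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def azimuthal_average(power_2d):
    """Radially average a centered 2D power spectrum into a 1D profile."""
    h, w = power_2d.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    # integer radius of each pixel from the spectrum center
    r = np.hypot(y - cy, x - cx).astype(int)
    sums = np.bincount(r.ravel(), weights=power_2d.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

def spectral_features(image):
    """1D frequency profile of a grayscale image (H x W array)."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    power = np.log1p(np.abs(spectrum))  # log scale compresses the dynamic range
    return azimuthal_average(power)
```

The resulting low-dimensional profile can then be fed to any simple classifier (e.g. a logistic regression or SVM), which is what makes the method workable with very few labeled samples.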
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g. Octave Convolutions), we propose a novel convolution scheme that stabilizes training and reduces the likelihood of mode collapse. The basic idea of our approach is to split convolutional filters into additive high- and low-frequency parts, while shifting weight updates from low to high frequencies during training. Intuitively, this forces GANs to learn coarse, low-frequency image structures before descending into fine, high-frequency details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
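The additive low/high split described above can be sketched numerically: low-pass the kernel to get its coarse part, take the residual as the high-frequency part, and blend the two with a schedule parameter that grows during training. The box-blur low-pass and the linear schedule here are illustrative assumptions, not the paper's exact filter or schedule.

```python
import numpy as np

def split_kernel(kernel):
    """Split a conv kernel into additive low- and high-frequency parts.

    Assumption: a simple 3x3 box blur serves as the low-pass filter."""
    box = np.ones((3, 3)) / 9.0
    pad = np.pad(kernel, 1, mode='edge')
    low = np.zeros_like(kernel)
    for i in range(kernel.shape[0]):
        for j in range(kernel.shape[1]):
            low[i, j] = np.sum(pad[i:i + 3, j:j + 3] * box)
    high = kernel - low        # residual = high-frequency part
    return low, high

def scheduled_kernel(kernel, t):
    """Blend the parts: t=0 (early training) keeps only the low-frequency
    part, t=1 (late training) restores the full kernel."""
    low, high = split_kernel(kernel)
    return low + t * high
```

Because `low + high` reconstructs the original kernel exactly, the scheme changes only how much of the high-frequency detail is active at each training stage, which is why it composes with other stabilization methods.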
Machine Learning as a Key Technology for Digitalization: How Does Machine Learning Work?
(2019)
Apache Hadoop is a well-known open-source framework for storing and processing huge amounts of data. This paper describes the use of the framework in a project of the university in cooperation with a semiconductor company. The goal of the project was to supplement the existing data landscape with facilities for storing and analyzing data on a new Apache Hadoop-based platform.
This contribution explains fundamental aspects and methods of data science. Following the CRISP-DM process model, the Data Understanding and Data Preparation phases mainly involve techniques for data selection, data preprocessing, and exploratory data analysis. In Modeling, the core task of data science, one can distinguish supervised and unsupervised methods as well as reinforcement learning. The evaluation of a model's quality using quality measures is then discussed. The contribution closes with an outlook on further topics such as Cognitive Computing.
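As a small illustration of the quality measures mentioned in the abstract, the following sketch computes accuracy, precision, and recall from binary predictions; the helper names are hypothetical, and the abstract does not specify which measures are covered.

```python
def confusion_counts(y_true, y_pred):
    """Count TP, FP, FN, TN for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def quality_measures(y_true, y_pred):
    """Accuracy, precision, and recall from the confusion counts."""
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall
```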