- Smart Cities und Big Data (2019)
- Sharing Economy (2019)
- Unternehmerische Resilienz (2019)
Deep generative models have recently achieved impressive results for many real-world applications, successfully generating high-resolution and diverse samples from complex datasets. As a result, the proliferation of fake digital content has raised growing concern and spread distrust in image content, creating an urgent need for automated ways to detect these AI-generated fake images.
Although many face editing algorithms seem to produce realistic human faces, upon closer examination they exhibit artifacts in certain domains that are often hidden from the naked eye. In this work, we present a simple way to detect such fake face images - so-called DeepFakes. Our method is based on a classical frequency-domain analysis followed by a basic classifier. Compared to previous systems, which need to be fed large amounts of labeled data, our approach shows very good results using only a few annotated training samples and even achieves good accuracies in fully unsupervised scenarios. For the evaluation on high-resolution face images, we combined several public datasets of real and fake faces into a new benchmark: Faces-HQ. Given such high-resolution images, our approach reaches a perfect classification accuracy of 100% when trained on as few as 20 annotated samples. In a second experiment, on the medium-resolution images of the CelebA dataset, our method achieves 100% accuracy in the supervised setting and 96% in an unsupervised setting. Finally, on the low-resolution video sequences of the FaceForensics++ dataset, our method achieves 91% accuracy in detecting manipulated videos.
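The frequency-domain feature extraction described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the abstract does not spell out the exact transform or feature, so the choice of a 2-D Fourier transform with an azimuthally averaged log-magnitude spectrum is an assumption, and all function names here are illustrative.

```python
import numpy as np


def azimuthal_average(magnitude):
    """Radially average a 2-D magnitude spectrum into a 1-D profile.

    Entry i is the mean spectral magnitude over all frequency bins at
    integer radius i from the centre of the shifted spectrum.
    """
    h, w = magnitude.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(x - cx, y - cy).astype(int)
    sums = np.bincount(r.ravel(), weights=magnitude.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)


def spectral_features(image):
    """1-D frequency feature vector for a grayscale image (assumed pipeline)."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))           # centre low frequencies
    magnitude = 20 * np.log(np.abs(spectrum) + 1e-8)          # log power spectrum
    return azimuthal_average(magnitude)


# Demo on a random 64x64 "image": one feature per integer radius.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
feats = spectral_features(img)
print(feats.shape)
```

The resulting low-dimensional profile could then be fed to any basic classifier (or clustered for the unsupervised case), which is consistent with the abstract's claim that very few labeled samples suffice.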