Synthesizing voices with the help of machine learning techniques has made rapid progress in recent years. Given the current increase in the use of conferencing tools for online teaching, we ask how easy it would be, in terms of required data, hardware, and skill set, to create a convincing voice fake. We analyse how much training data a participant (e.g. a student) would actually need to fake another participant's voice (e.g. a professor's). We provide an analysis of the existing state of the art in creating voice deep fakes and apply both the identified and our own optimization techniques to two different voice data sets. A user study with more than 100 participants shows how difficult it is to tell real and fake voices apart (on average, only 37% of participants recognized a professor's fake voice). From a longer-term societal perspective, such voice deep fakes may lead to disbelief by default.
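The central question above, how much training data an attacker needs, starts with a mundane step: tallying how much recorded speech of the target speaker is actually available (e.g. from recorded lectures). The following is a minimal, hypothetical sketch of such a tally using only the Python standard library; the function name and the assumption that recordings are uncompressed WAV files are ours, not the paper's.

```python
import wave
from pathlib import Path


def total_audio_minutes(directory: str) -> float:
    """Sum the duration, in minutes, of all .wav files in a directory.

    Hypothetical helper: it only measures how much target-speaker audio
    is on hand, the raw input an attacker would have to work with.
    """
    total_seconds = 0.0
    for path in Path(directory).glob("*.wav"):
        with wave.open(str(path), "rb") as wav_file:
            # Duration of one file = number of frames / sample rate.
            total_seconds += wav_file.getnframes() / wav_file.getframerate()
    return total_seconds / 60.0
```

Running this over a folder of downloaded lecture recordings gives a first estimate of whether the available material reaches the amount of training data the study identifies as sufficient for a convincing fake.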