
Deep Dream for Sound Synthesis

  • In 2015, Google engineer Alexander Mordvintsev presented DeepDream as a technique for visualising the feature analysis capabilities of deep neural networks trained on image classification tasks. For a brief moment, this technique enjoyed some popularity among scientists, artists, and the general public because of its capability to create seemingly hallucinatory synthetic images. But soon after, research moved on to generative models capable of producing more diverse and more realistic synthetic images. At the same time, the means of interaction with these models have shifted away from direct manipulation of algorithmic properties towards a predominance of high-level controls that obscure the models' internal workings. In this paper, we present research that returns to DeepDream to assess its suitability as a method for sound synthesis. We consider this research to be necessary for two reasons: it tackles a perceived lack of research on musical applications of DeepDream, and it addresses DeepDream's potential to combine data-driven and algorithmic approaches. Our research includes a study of how the model architecture, the choice of audio datasets, and the method of audio processing influence the acoustic characteristics of the synthesised sounds. We also look into the potential application of DeepDream in a live-performance setting. For this reason, the study limits itself to models consisting of small neural networks that process time-domain representations of audio. These models are resource-friendly enough to operate in real time. We hope that the results obtained so far highlight the attractiveness of DeepDream for musical approaches that combine algorithmic investigation with curiosity-driven and open-ended exploration.
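
To make the mechanism described in the abstract concrete: DeepDream performs gradient ascent on the input itself, nudging it to amplify the activations of a trained network's internal layers. Below is a minimal sketch of how this could look for time-domain audio in PyTorch. The toy network SmallAudioNet, the activation-norm objective, and all hyperparameters are illustrative assumptions and do not reflect the models or parameters used in the paper.

    # DeepDream-style gradient ascent on an audio buffer (illustrative sketch).
    import torch
    import torch.nn as nn

    class SmallAudioNet(nn.Module):
        """Toy stand-in for a small network that processes time-domain audio."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=64, stride=4, padding=32), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=32, stride=4, padding=16), nn.ReLU(),
            )
        def forward(self, x):
            return self.features(x)

    def deep_dream(model, audio, steps=50, lr=0.01):
        """Iteratively modify `audio` to amplify the model's layer activations."""
        for p in model.parameters():
            p.requires_grad_(False)          # only the input is optimised
        audio = audio.clone().detach().requires_grad_(True)
        for _ in range(steps):
            loss = model(audio).norm()       # maximise activation energy
            loss.backward()
            with torch.no_grad():
                # normalised gradient step keeps the update scale stable
                audio += lr * audio.grad / (audio.grad.abs().mean() + 1e-8)
                audio.clamp_(-1.0, 1.0)      # keep samples in a valid range
                audio.grad.zero_()
        return audio.detach()

    model = SmallAudioNet().eval()
    buffer = torch.randn(1, 1, 16384) * 0.01  # one block of low-level noise
    dreamed = deep_dream(model, buffer)

Because the network is small and operates directly on sample blocks, each ascent step is cheap, which is consistent with the abstract's point that such models can run in real time.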



Metadata
Document Type: Conference Proceeding
Conference Type: Conference article
Citation link: https://opus.hs-offenburg.de/8629
Bibliographic Information
Title (English): Deep Dream for Sound Synthesis
Conference: Generative Art Conference (26th, 11th to 13th December 2023, Rome, Italy)
Author: Daniel Bisig, Ephraim Wegner
Year of Publication: 2023
Place of publication: Rome
Publisher: Domus Argenia Publisher
First Page: 82
Last Page: 96
Parent Title (English): XXVI Generative Art Conference - GA2023
Editor: Celestino Soddu, Enrica Colabella
ISBN: 978-88-96610-45-9
URL: https://artscience-ebookshop.com/GA2023_E-Book.pdf
Language: English
Content Information
Institutes: Fakultät Medien (M) (from 22.04.2021)
Collections of the Offenburg University: Bibliografie
Tag: Deep Learning; Sound Synthesis
Formal Information
Relevance for "Jahresbericht über Forschungsleistungen": Conference contribution: h5-index < 30
Open Access: Bronze
Licence: Creative Commons - CC BY-NC-SA - Attribution - NonCommercial - ShareAlike 4.0 International