This thesis examines how music and sound effects in video games influence players' emotions and their opinion of a game. To answer this question, a short introduction to the development of video game music and to emotion research is followed by a study in which participants play the game Pinstripe with or without music and then complete a questionnaire; their pulse is also measured while they play. Relationships between the Big Five personality traits, Bartle's player types, and game genres are examined as well. The evaluation shows that although sound does not increase enjoyment, interest in the story, or the motivation to keep playing, it does make the game more immersive. The pulse measurements likewise show no clear differences between the groups. Women and frequent players, however, appear to be more strongly influenced by sound. The thesis closes with an overview of possible follow-up studies and the future development of video game music.
Much of the research in audio-based machine learning has focused on recreating human speech via feature extraction and imitation, commonly known as deepfakes. This state of affairs has prompted investigation into other areas, such as recognising recording devices, and potentially speakers, by analysing sound files alone. Segregation and feature extraction are at the core of this approach.
This research focuses on determining whether a recorded sound can reveal the device with which it was captured. Each microphone manufacturer and model, along with the imperfections of an individual unit, can leave subtle but compounding traces on a recording, whether in its noise profile or in the microphone's response and sensitivity while recording. By studying these slight perturbations, it proved possible to distinguish between microphones based solely on the sounds they recorded.
After the recording, pre-processing, and feature extraction phases were completed, the prepared data was fed into several different machine learning algorithms, with results ranging from 70% to 100% accuracy, showing Multi-Layer Perceptron and Logistic Regression to be the most effective for this type of task.
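The classification step described above can be illustrated with a minimal sketch. The feature values, class labels, and model settings below are hypothetical stand-ins for the study's actual extracted features (which live in the project's code repository); the sketch only shows the general shape of feeding per-recording feature vectors into a Multi-Layer Perceptron and a Logistic Regression classifier with scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Simulate two microphones: identical 20-dimensional feature vectors per
# recording, except each device adds a slightly different offset and noise
# level (an illustrative assumption, not the study's real data).
n, d = 200, 20
mic_a = rng.normal(loc=0.0, scale=1.0, size=(n, d))
mic_b = rng.normal(loc=0.3, scale=1.1, size=(n, d))
X = np.vstack([mic_a, mic_b])
y = np.array([0] * n + [1] * n)  # 0 = mic A, 1 = mic B

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Train both classifier families mentioned in the abstract and compare
# their held-out accuracy.
results = {}
for model in (LogisticRegression(max_iter=1000),
              MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                            random_state=0)):
    results[type(model).__name__] = model.fit(X_tr, y_tr).score(X_te, y_te)

print(results)
```

With real acoustic features the separation between devices would come from the measured noise and response differences rather than an injected offset, but the pipeline (split, fit, score, compare models) is the same.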
This was further extended to distinguishing between two microphones of the same make and model. That identical models can be told apart suggests that the small deviations in their manufacturing process are enough to uniquely distinguish individual units and potentially target the individuals using them. This does not, however, take into account any form of compression applied to the sound files, as compression may alter or degrade some or most of the distinguishing features that this experiment relies on.
Building on prior research in the area, such as the work by Das et al., in which different acoustic features were explored and assessed for their ability to uniquely fingerprint smartphones, more concrete results, along with the methodology by which they were achieved, are published in this project's publicly accessible code repository.