When shopping online, it is usually not possible to examine products the way one can when shopping offline. With augmented reality (AR), a product can not only be viewed in detail but also placed at home in the real environment. Such an AR application provides stimuli that can affect users, their purchase decision, and their word-of-mouth intention. In this work, we assume that when viewing a product in AR, not only affective internal states but also cognitive perception processes influence purchase decision and word-of-mouth intention. While positive affective reactions have already been studied in the context of AR, this paper also describes inner cognitive perception processes, using the construct of AR authenticity. To test these assumptions, a study was conducted with 155 participants. The results show that both purchase intention and word-of-mouth intention are influenced by the constructs of positive affective reactions and AR authenticity.
This paper describes the authors' first experiments in creating an artificial dancer whose movements are generated through a combination of algorithmic and interactive techniques with machine learning. This approach is inspired by the time-honoured practice of puppeteering. In puppeteering, an articulated but inanimate object seemingly comes to life through the combined effects of a human controlling select limbs of a puppet while the rest of the puppet's body moves according to gravity and mechanics. In the approach described here, the puppet is a machine-learning-based artificial character that has been trained on motion capture recordings of a human dancer. A single limb of this character is controlled either manually or algorithmically, while the machine-learning system takes over the role of physics in controlling the remainder of the character's body. Rather than imitating physics, however, the machine-learning system generates body movements that are reminiscent of the particular style and technique of the dancer who was originally recorded to acquire the training data. More specifically, the machine-learning system operates by searching for body movements that are not only similar to the training material but that it also considers compatible with the externally controlled limb. As a result, the character playing the role of the puppet is no longer passively responding to the puppeteer but makes movement decisions on its own. This form of puppeteering establishes a dialogue between puppeteer and puppet in which both improvise together, and in which the puppet exhibits some of the creative idiosyncrasies of the original human dancer.
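The search described above — finding recorded body movements that are both similar to the training material and compatible with the externally controlled limb — can be sketched as a simple combined-distance lookup over a motion database. The tiny database, the pose representation, and the scoring weights below are all illustrative assumptions, not the authors' actual model:

```python
import math

# Hypothetical stand-in for the motion capture recordings: each entry
# pairs the value of the controllable limb with the rest of the body pose.
database = [
    (0.0, [0.1, 0.2]),
    (0.5, [0.4, 0.1]),
    (1.0, [0.9, 0.8]),
    (1.5, [1.2, 1.1]),
]

def dist(a, b):
    # Euclidean distance between two body-pose vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def next_pose(controlled_limb, previous_body):
    # Score each recorded pose by (1) compatibility with the externally
    # controlled limb and (2) similarity to the character's current body,
    # mirroring the "similar to training AND compatible with the limb" search.
    def score(entry):
        limb, body = entry
        return abs(limb - controlled_limb) + dist(body, previous_body)
    return min(database, key=score)[1]
```

A generative model over poses would replace the exhaustive lookup in practice; the sketch only illustrates the two-part objective.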
Strings P
(2021)
Strings is an audiovisual performance for an acoustic violin and two generative instruments, one for creating synthetic sounds and one for creating synthetic imagery. The three instruments are related to each other conceptually, technically, and aesthetically by sharing the same physical principle, that of a vibrating string. This submission continues the work the authors previously published at xCoAx 2020. It briefly summarizes the previous publication and then describes the changes that have been made to Strings. The P in the title emphasizes that most of these changes have been informed by experiences collected during rehearsals (in German: Proben). These changes have helped Strings progress from a predominantly technical framework to a work that is ready for performance.
Activities for rehabilitation and prevention are often lengthy and associated with pain and frustration. Their playful enrichment (hereafter: gamification) can counteract this, resulting in so-called “exergames”. However, in contrast to games designed solely for entertainment, the increased motivation and immersion in gamified training can lead to a reduced perception of pain and thus to health deterioration. It is therefore necessary to monitor activities continuously. Yet only an AI-based system able to generate autonomous interventions could free up the therapists’ costly time and allow better training at home. An automated adjustment of the movement training’s difficulty, as well as individualized goal setting and control, is essential to achieve such autonomy. This article’s contribution is two-fold: (1) we portray the potential of gamification in the health area, and (2) we present a framework for smart rehabilitation and prevention training that allows autonomous, dynamic, and gamified interactions.
In the field of network security, the detection of possible intrusions is an important task for preventing and analysing attacks. Machine learning has been adopted as a particular supporting technique in recent years. However, the majority of related published work uses post-mortem log files and fails to address the real-time capabilities required of network data feature extraction and machine-learning-based analysis [1-5]. We introduce the network feature extractor library FEX, which is designed to allow real-time feature extraction from network data. The library computes 83 statistical features based on reassembled data flows. The introduced Cython implementation allows processing individual packets within 4.58 microseconds. Based on the features extracted by FEX, existing intrusion detection machine learning models were examined with respect to their real-time capabilities. An identified Decision-Tree Classifier model was further optimised by transpiling it into C code. This reduced the prediction time for a single sample to 3.96 microseconds on average. Based on the feature extractor and the improved machine learning model, an IDS was implemented that supports a data throughput between 63.7 Mbit/s and 2.5 Gbit/s, making it a suitable candidate for a real-time, machine-learning-based IDS.
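Transpiling a trained decision tree means unrolling its learned splits into flat conditional code, so prediction is a handful of comparisons with no model object or interpreter overhead. The sketch below is a hypothetical hand-written stand-in for such transpiled output (the feature indices, thresholds, and labels are invented for illustration, not taken from the paper's model), together with a simple per-sample timing loop:

```python
import time

def predict(f):
    # f: list of statistical flow features; each trained split becomes
    # a plain if/else on a feature index, which is what transpiling a
    # Decision-Tree Classifier into C achieves (indices are illustrative).
    if f[0] <= 120.5:          # e.g. mean packet size
        if f[3] <= 0.02:       # e.g. inter-arrival-time variance
            return 0           # benign
        return 1               # attack
    if f[7] <= 4400.0:         # e.g. bytes per second
        return 0
    return 1

sample = [80.0, 1.2, 0.5, 0.01, 3.0, 0.0, 7.0, 5000.0]

# Average many predictions to estimate the per-sample cost.
n = 100_000
t0 = time.perf_counter()
for _ in range(n):
    label = predict(sample)
t1 = time.perf_counter()
print(label, (t1 - t0) / n * 1e6, "microseconds per sample")
```

A C version of the same unrolled conditionals removes the Python call overhead as well, which is the effect the transpilation step exploits.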
The transition from college to university can have a variety of psychological effects on students, who need to cope with daily obligations by themselves in a new setting, which can result in loneliness and social isolation. Mobile technology, specifically mental health apps (MHapps), has been seen as a promising solution to assist university students facing these problems; however, there is little evidence around this topic. My research investigates how a mobile app can be designed to reduce social isolation and loneliness among university students. The Noneliness app is being developed to this end; it aims to create social opportunities through a quest-based gamified system in a secure and collaborative network of local users. Initial evaluations with the target audience provided evidence on how an app should be designed for this purpose. These results are presented, along with how they helped me plan the further steps toward my research goals. The paper is presented at the MobileHCI 2020 Doctoral Consortium.
Loneliness, an emotional distress caused by the lack of meaningful social connections, has been increasingly affecting university students who need to deal with everyday situations in a new setting, especially those who have come from abroad. Currently, there is little work on digital solutions to reduce loneliness. This work therefore describes general design considerations for mobile apps in this context and outlines a potential solution. The mobile app Noneliness is used to this end: it aims to reduce loneliness by creating social opportunities through a quest-based gamified system in a secure and collaborative network of local users. The results of initial evaluations with the target audience are described. These results informed a user interface redesign as well as a review of the features and the gamification principles adopted.
The paper describes the implementation of practical laboratory settings in a virtual environment. With the entry of VR glasses into the mass market, there is a chance to establish educational and training applications for displaying some teaching materials and practical works. Therefore our project focuses on the realization of virtual experiments and environments, which gives users a deep insight into selected subfields of Optics and Photonics. Our goal is not to substitute the hand on experiments rather to extend them. By means of VR glasses, the user is offered the possibility to view the experiment from several angles and to make changes through interactive control functions. During the VR application, additional context-related information is displayed. By using object recognition, the specific graphics and texts for the respective object are loaded and supplemented at the appropriate place. Thus, complex facts are supported in an informative way. The prototype is developed using the Unity Engine and can thus be exported to different platforms and end devices. Another major advantage of virtual simulations to the real situation is the high degree of controllability as well as the easy repeatability. With slight modifications, entire experiments can be reused. Our research aims to acquire new knowledge in the field of e-learning in association with VR technology. Here we try to answer a core question of the compatibility of the individual media components.