In 2015, Google engineer Alexander Mordvintsev presented DeepDream as a technique to visualise the feature-analysis capabilities of deep neural networks trained on image classification tasks. For a brief moment, this technique enjoyed some popularity among scientists, artists, and the general public because of its capability to create seemingly hallucinatory synthetic images. But soon after, research moved on to generative models capable of producing more diverse and more realistic synthetic images. At the same time, the means of interaction with these models have shifted away from direct manipulation of algorithmic properties towards a predominance of high-level controls that obscure the model's internal workings. In this paper, we present research that returns to DeepDream to assess its suitability as a method for sound synthesis. We consider this research necessary for two reasons: it tackles a perceived lack of research on musical applications of DeepDream, and it addresses DeepDream's potential to combine data-driven and algorithmic approaches. Our research includes a study of how the model architecture, choice of audio datasets, and method of audio processing influence the acoustic characteristics of the synthesised sounds. We also look into the potential application of DeepDream in a live-performance setting. For this reason, the study limits itself to models consisting of small neural networks that process time-domain representations of audio. These models are resource-friendly enough to operate in real time. We hope that the results obtained so far highlight the attractiveness of DeepDream for musical approaches that combine algorithmic investigation with curiosity-driven and open-ended exploration.
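At its core, DeepDream performs gradient ascent on the input signal to amplify the activations of a chosen network layer. A minimal sketch of this idea applied to a time-domain audio buffer, using a single random dense ReLU layer as a stand-in for a trained model (all names, sizes, and the step size are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny "network": one dense ReLU layer acting on a short
# audio buffer. The weights are random stand-ins for a trained model.
W = rng.standard_normal((16, 256)) / 16.0

def layer(x):
    return np.maximum(0.0, W @ x)            # ReLU activations

def dream_step(x, lr=0.1):
    """One DeepDream step: gradient ascent on L = 0.5 * ||layer(x)||^2."""
    a = layer(x)
    grad = W.T @ a                           # gradient of L w.r.t. the input
    grad /= (np.abs(grad).mean() + 1e-8)     # normalise step, as in DeepDream
    return x + lr * grad

x = rng.standard_normal(256) * 0.01          # quiet noise as the "waveform"
before = np.linalg.norm(layer(x))
for _ in range(50):
    x = dream_step(x)
after = np.linalg.norm(layer(x))
print(before < after)                        # the layer's response is amplified
```

In the setting the abstract describes, the same ascent would run on the activations of a small trained network over time-domain audio, with the network kept small enough for real-time use.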
This paper describes the authors' first experiments in creating an artificial dancer whose movements are generated through a combination of algorithmic and interactive techniques with machine learning. This approach is inspired by the time-honoured practice of puppeteering, in which an articulated but inanimate object seemingly comes to life through the combined effects of a human controlling select limbs of a puppet while the rest of the puppet's body moves according to gravity and mechanics. In the approach described here, the puppet is a machine-learning-based artificial character that has been trained on motion capture recordings of a human dancer. A single limb of this character is controlled either manually or algorithmically, while the machine-learning system takes over the role of physics in controlling the remainder of the character's body. But rather than imitating physics, the machine-learning system generates body movements that are reminiscent of the particular style and technique of the dancer originally recorded for acquiring training data. More specifically, the machine-learning system operates by searching for body movements that are not only similar to the training material but that it also considers compatible with the externally controlled limb. As a result, the character playing the role of the puppet no longer responds passively to the puppeteer but makes movement decisions on its own. This form of puppeteering establishes a dialogue between puppeteer and puppet in which both improvise together, and in which the puppet exhibits some of the creative idiosyncrasies of the original human dancer.
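The search described above, finding body poses that are similar to the training material yet compatible with the externally controlled limb, can be sketched as a constrained nearest-neighbour lookup over recorded poses. The pose dimensions, candidate count, and data below are illustrative assumptions; the authors' actual system is machine-learning-based rather than a plain database search:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical motion-capture "training material": 500 recorded poses of
# 15 joint values each; joint 0 stands in for the puppeteered limb.
poses = rng.standard_normal((500, 15))

def puppet_step(current, limb_value, limb=0, k=25):
    """Choose the recorded pose closest to the current one among the k poses
    whose puppeteered limb best matches the external control value."""
    limb_fit = np.abs(poses[:, limb] - limb_value)
    candidates = np.argsort(limb_fit)[:k]            # compatible with the limb
    body_fit = np.linalg.norm(poses[candidates] - current, axis=1)
    return poses[candidates[np.argmin(body_fit)]]    # similar to training data

current = poses[0]
nxt = puppet_step(current, limb_value=0.8)
```

Calling `puppet_step` once per frame yields a character that follows the controlled limb while staying within the vocabulary of the recorded dancer.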
Generative machine learning models for creative purposes play an increasingly prominent role in the field of dance and technology. A particularly popular approach is the use of such models for generating synthetic motions. Such motions can either serve as a source of ideation for choreographers or control an artificial dancer that acts as an improvisation partner for human dancers. Several examples employ autoencoder-based deep-learning architectures that have been trained on motion capture recordings of human dancers. Synthetic motions are then generated by navigating the autoencoder's latent space. This paper proposes an alternative approach to using an autoencoder for creating synthetic motions, one that controls the generation of synthetic motions on the level of the motion itself rather than its encoding. Two different methods are presented that follow this principle. Both are based on the interactive control of a single joint of an artificial dancer while the other joints remain under the control of the autoencoder. The first method combines the control of the orientation of a joint with iterative autoencoding. The second method combines the control of the target position of a joint with forward kinematics and the application of latent difference vectors. As an illustrative example of an artistic application, the latter method is used for an artificial dancer that plays a digital instrument. The paper presents the implementation of these two methods and provides some preliminary results.
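The first method's iterative autoencoding can be sketched with a toy linear autoencoder: the controlled joint is clamped after every encode-decode pass, and the remaining joints settle into values the model considers consistent with it. The matrices and pose size are illustrative assumptions, not a trained motion model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear autoencoder standing in for one trained on motion
# capture: encode projects a 12-value pose to 3 latents, decode maps back.
E = rng.standard_normal((3, 12)) / np.sqrt(12)
D = np.linalg.pinv(E)                       # decoder as pseudo-inverse

def autoencode(pose):
    return D @ (E @ pose)

def constrained_generation(pose, joint, target, steps=20):
    """Iterative autoencoding: clamp one joint, let the model settle the rest."""
    for _ in range(steps):
        pose = autoencode(pose)
        pose[joint] = target                # re-impose the controlled joint
    return pose

pose = rng.standard_normal(12)
result = constrained_generation(pose, joint=0, target=1.5)
```

Because the clamp is re-applied after every pass, the controlled joint holds its target value while the other eleven joints converge towards the model's reconstruction manifold.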
Strings P
(2021)
Strings is an audiovisual performance for an acoustic violin and two generative instruments, one for creating synthetic sounds and one for creating synthetic imagery. The three instruments are related to each other conceptually, technically, and aesthetically by sharing the same physical principle, that of a vibrating string. This submission continues the work the authors previously published at xCoAx 2020. It briefly summarizes the previous publication and then describes the changes that have been made to Strings. The P in the title emphasizes that most of these changes have been informed by experiences collected during rehearsals (in German: Proben). These changes have helped Strings progress from a predominantly technical framework to a work that is ready for performance.
Strings
(2020)
This article presents the ongoing development of an audiovisual performance work entitled Strings. The work provides an improvisation setting for a violinist, two laptop performers, and two generative systems. At the core of Strings lies an approach that establishes a strong correlation among all participants by means of a shared physical principle: that of a vibrating string. The article discusses how this principle is used, in both natural and simulated forms, as the main interaction layer among all performers and as a natural or generative principle for creating audio and video.
We aim to debate, and eventually be able to carefully judge, how realistic the following statement of a young computer scientist is: “I would like to become an ethically correctly acting offensive cybersecurity expert”. The objective of this article is neither to judge what is good or wrong behavior nor to present an overall solution to ethical dilemmas. Instead, the goal is to become aware of the various personal moral dilemmas a security expert may face during their working life. To this end, a total of 14 cybersecurity students from HS Offenburg were asked to evaluate several case studies according to different ethical frameworks. The results and particularities are discussed with respect to these frameworks. We emphasize that different ethical frameworks can lead to different preferred actions and that the moral understanding of the frameworks may differ even from student to student.
Artificial intelligence (AI), and in particular machine learning algorithms, are of increasing importance in many application areas, but interpretability and understandability as well as responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing humans' trust in the systems, are still largely missing. Big industrial players, including Google, Microsoft, and Apple, have become aware of this gap and have recently published their own guidelines for the use of AI in order to promote fairness, trust, interpretability, and other goals. Interactive visualization is one of the technologies that may help to increase trust in AI systems. During the seminar, we discussed the requirements for trustworthy AI systems as well as the technological possibilities provided by interactive visualizations to increase human trust in AI.
The use of artificial intelligence continues to impact a broad variety of domains, application areas, and people. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing humans' trust in the systems, are still largely missing. The purpose of this seminar is to understand how these components factor into a holistic view of trust. Further, the seminar seeks to identify design guidelines and best practices for building interactive visualization systems that calibrate trust.
Variable refrigerant flow (VRF) and variable air volume (VAV) systems are considered among the best heating, ventilation, and air conditioning (HVAC) systems thanks to their ability to provide cooling and heating in different thermal zones of the same building, as well as their ability to recover the heat rejected from spaces requiring cooling and reuse it to heat other spaces. Nevertheless, these systems are at the same time among the most energy-consuming systems in a building, so it is crucial to size the system properly according to the building's cooling and heating needs and the indoor temperature fluctuations. This study compares these two energy systems by conducting an energy model simulation of a real building under a semi-arid climate for cooling and heating periods. The developed building energy model (BEM) was validated and calibrated using measured and simulated indoor air temperature and energy consumption data. The study evaluates the effect of these HVAC systems on the energy consumption and indoor thermal comfort of the building. The numerical model was based on the EnergyPlus simulation engine. The approach used in this paper has allowed us to reach significant quantitative energy savings along with a high level of indoor thermal comfort by using the VRF system compared to the VAV system. The findings show that the VRF system provides 46.18% annual total heating energy savings and 6.14% annual cooling and ventilation energy savings compared to the VAV system.
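The reported percentages follow the usual relative-savings formula; a small sketch with hypothetical annual consumption figures (the kWh numbers are invented for illustration and are not from the study):

```python
# Hypothetical annual consumption figures (kWh); only the formula mirrors
# the savings comparison, the numbers themselves are illustrative.
def savings_percent(e_baseline, e_alternative):
    """Relative saving of the alternative system versus the baseline."""
    return (e_baseline - e_alternative) / e_baseline * 100.0

heating_vav, heating_vrf = 52000.0, 27986.4
print(round(savings_percent(heating_vav, heating_vrf), 2))  # → 46.18
```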
3D Bin Picking with an innovative powder filled gripper and a torque controlled collaborative robot
(2023)
A new and innovative powder-filled gripper concept is introduced for a process that picks parts out of a box without the use of a camera system to guide the robot to the part. The gripper is a combination of an inflatable skin and a powder filling. In the unjammed condition, the powder is soft and can adjust to the geometry of the part to be handled. By applying a vacuum to the inflatable skin, the powder gets jammed and solidifies into the shape into which the gripper was pressed before the vacuum was applied. This physical principle is used to pick parts. The flexible skin of the gripper adjusts to all kinds of shapes and can therefore be used to realize 3D bin picking. With the help of a force-controlled robot, the gripper can be pushed with a consistent force onto varying positions depending on the filling level of the box. A KUKA LBR iiwa with joint torque sensors in all seven of its axes was used to achieve a constant contact pressure, which is the basic criterion for a robust picking process.
While most ultrafast time-resolved optical pump-probe experiments in magnetic materials reveal the spatially homogeneous magnetization dynamics of ferromagnetic resonance (FMR), here we explore the magneto-elastic generation of GHz-to-THz frequency spin waves (exchange magnons). Using analytical magnon oscillator equations, we apply time-domain and frequency-domain approaches to quantify the results of ultrafast time-resolved optical pump-probe experiments in free-standing ferromagnetic thin films. Simulations show excellent agreement with the experiment, provide acoustic and magnetic (Gilbert) damping constants and highlight the role of symmetry-based selection rules in phonon-magnon interactions. The analysis is extended to hybrid multilayer structures to explore the limits of resonant phonon-magnon interactions up to THz frequencies.
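The analytical magnon oscillator equations are not reproduced in the abstract; as a hedged sketch, mode-by-mode fits of this kind are often written as a driven damped oscillator (the notation below is an assumption, not the paper's):

```latex
\ddot{m}_k(t) + 2\Gamma_k\,\dot{m}_k(t) + \omega_k^2\, m_k(t) = \sigma_k\,\epsilon(t)
```

where $m_k$ is the amplitude of the $k$-th magnon mode, $\Gamma_k$ its damping rate (related to the Gilbert damping), $\omega_k$ its angular frequency, and $\sigma_k\,\epsilon(t)$ a magneto-elastic driving term proportional to the phonon-induced strain $\epsilon(t)$.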
Sensors and actuators enable the creation of context-aware applications, in which applications can discover and take advantage of contextual information such as user location and nearby people and objects. In this work, we use a general context definition that can be applied to various devices, e.g., robots and mobile devices. Developing context-based software applications is considered one of the most challenging application domains because of the sensors and actuators that are part of a device. We introduce a new development approach for context-based applications that uses use-case descriptions and Visual Programming Languages (VPL). The introduction of web-based VPLs such as Scratch and Snap has reinvigorated the usefulness of VPLs. We provide an in-depth discussion of our new VPL-based method, a step-by-step development process that enables the development of context-based applications. Two case studies illustrate how to apply our approach to different problem domains: context-based mobile apps and context-based humanoid robot applications.
The main advantage of mobile context-aware applications is that they provide effective and tailored services by considering the environmental context, such as location, time, nearby objects, and other data, and by adapting their functionality to changing situations in the context information without explicit user interaction. The idea behind Location-Based Services (LBS) and Object-Based Services (OBS) is to offer fully customizable services for user needs according to the location or the objects in a mobile user's vicinity. However, developing mobile context-aware software applications is considered one of the most challenging application domains because of the built-in sensors that are part of a mobile device. Visual Programming Languages (VPL) and hybrid visual programming languages are considered innovative approaches to addressing the inherent complexity of developing such programs. The key contribution of our new development approach for location- and object-based mobile applications is a use-case-driven development approach based on use case templates and visual code templates that enables even programming beginners to create context-aware mobile applications. An example of the use of the development approach is presented, and open research challenges and perspectives for the further development of our approach are formulated.
Due to globalization and the resulting increase in competition on the market, products must be produced ever more cheaply, especially in series production, because buyers expect new variants or even completely new products in ever shorter cycles. Injection molding is the most important production process for manufacturing plastic components in large quantities. However, the conventional production of a mold is extremely time-consuming and costly, which contradicts the fast pace of the market. Additive tooling is an area of application of additive manufacturing that, in the field of injection molding, is preferably used for the prototype production of mold inserts. This allows injection molding tools to be produced faster and more cheaply than through the subtractive manufacturing of metal tools. Material jetting processes using polymers (MJT-UV/P), also called PolyJet Modeling (PJM), have great potential for use in additive tooling. Because of the poorer mechanical and thermal properties compared to conventional mold insert materials, e.g. steel or aluminum, the previously used design principles cannot be applied. Accordingly, new design guidelines are necessary, and these are developed in this paper. The necessary information is obtained with the help of a systematic literature review. The design guidelines are compiled into a uniform design guide, which is structured according to the design process of injection molds. The guidelines refer not only to the constructive design of the injection mold or the polymer mold insert but to the entire design process, and they describe the four phases of planning, conception, development, and realization. Particular attention is paid to the special geometric designs of a polymer mold insert and to the thermomechanical properties of the mold insert materials. As a result, design guidelines are available that are adapted to the special requirements of additive tooling of mold inserts made of plastics for injection molding.