The growing complexity of RF front-ends, which support carrier aggregation and an increasing number of frequency bands, leads to tightened nonlinearity requirements for all sub-components. The generation of third-order intermodulation products (IMD3) is a typical problem caused by the non-linearity of SAW devices. In the present work, we investigate temperature-compensating (TC) SAW devices on Lithium Niobate-rot128YX. An accurate FEM simulation model [1] is employed, which allows a better understanding of the origin of nonlinearities in such acoustic devices.
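Why IMD3 products are so troublesome in multi-band front-ends can be made concrete with a short sketch: for a two-tone excitation, the cubic term of a weak nonlinearity generates products that fall right next to the carriers, inside the receive band. The tone frequencies below are illustrative, not taken from the paper.

```python
# Third-order intermodulation (IMD3) products for a two-tone test.
# For carriers f1 and f2, the cubic term of a weak nonlinearity
# y = a1*x + a3*x^3 generates in-band products at 2*f1 - f2 and 2*f2 - f1.

def imd3_frequencies(f1_hz: float, f2_hz: float) -> tuple[float, float]:
    """Return the two third-order intermodulation frequencies."""
    return (2 * f1_hz - f2_hz, 2 * f2_hz - f1_hz)

# Example: two closely spaced tones (illustrative values, not from the paper)
low, high = imd3_frequencies(1_950e6, 1_960e6)
print(low / 1e6, high / 1e6)  # 1940.0 1970.0 -> only 10 MHz from the carriers
```

Because these products sit so close to the wanted signals, they cannot simply be filtered out, which is why the nonlinearity of each sub-component matters.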
Wedge acoustic waves in a solid are the third fundamental wave type, after bulk and surface waves, whose pulses propagate without change of shape (there is no dispersion). The system of an elastic wedge can be obtained from the system of an elastic half-space by "cutting" it along some plane, and the elastic half-space can in turn be obtained from an elastic medium distributed throughout space by the same method; therefore, the relations between surface and bulk waves should largely repeat themselves when wedge and surface waves are considered. For example, the existence of fast pseudo-surface waves in the elastic half-space, which radiate energy into bulk waves as they propagate, has its analogue for the elastic wedge: pseudo-wedge waves, which radiate both bulk and surface waves as they propagate, were discovered quite recently. On the other hand, in this same sequence of bulk, surface and wedge waves, distinctive features should also stand out. While surface waves differ from bulk waves in that they are localized on a two-dimensional surface (bulk waves are non-localized), wedge waves are localized along a one-dimensional surface (a line), the edge of the wedge. Wedge waves are guided acoustic waves that propagate without diffraction losses, and they are also dispersion-free, since the system of an infinite elastic wedge contains no parameter with the dimension of length.
The conclusion presents the main results of this work, which are as follows:
1. Using the method of Laguerre functions, the dynamic response function to a pulsed line source (Green's function) was constructed for Lamb's problem in a half-space, and the convergence and stability of this construction were studied. It was shown that in the limiting case the constructed dynamic response function coincides with the classical Green's function for this problem.
2. Based on the results of the previous item, the Green's function for an elastic wedge was constructed (together with the density-of-states function on the edge, which coincides with the diagonal components of the Green's function); with its help, it was possible to identify pulses of pseudo-wedge waves on experimental curves.
3. For certain wedge configurations in anisotropic elastic media (tetragonal crystals), a criterion for the existence of wedge waves was obtained on the basis of the characteristics of the surface waves propagating on the faces of the configurations under study; in some cases the wedge waves could also be classified by symmetry type.
4. A theory was developed describing the pulse shapes of wedge waves under different generation regimes: ablative and thermoelastic.
5. A second-order nonlinear theory for wedge waves was presented. Numerical calculations of the kernel function of the evolution equation of wedge waves were carried out for silicon wedges with one face coinciding with the (111) surface (the cleavage plane) and an arbitrary orientation of the second face.
6. The fundamental differences between nonlinear wedge waves and nonlinear bulk and surface waves were described, and a numerical simulation of the evolution of a wedge-wave pulse was carried out, which showed agreement between theory and experiment.
7. Soliton-type solutions for wedge waves were obtained. Soliton interactions and the properties of soliton decay were considered.
Electrolyte-Gated Field-Effect Transistors Based on Oxide Semiconductors: Fabrication and Modeling
(2017)
A novel approach of a testbed for embedded networking nodes has been conceptualized and implemented. It is based on the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes are running in parallel and are connected via so-called virtual interfaces. The presented approach is very efficient and allows a simple description of test cases without the need of a network simulator. Furthermore, it speeds up the process of developing new features.
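The core idea of the testbed — virtual nodes executing their node logic in parallel and exchanging frames over "virtual interfaces" instead of real radio links — can be sketched in a few lines. The class and method names below are our illustration, not the authors' API; a real implementation would wrap the original embedded code rather than Python functions.

```python
# Minimal sketch of the virtual-node testbed idea: each node runs in its
# own thread, and nodes exchange frames over a shared "virtual interface"
# (here an in-process broadcast queue). Names are illustrative only.
import queue
import threading

class VirtualInterface:
    """A broadcast medium connecting several virtual nodes."""
    def __init__(self):
        self.inboxes = {}

    def attach(self, node_id):
        self.inboxes[node_id] = queue.Queue()

    def send(self, sender_id, frame):
        # Deliver the frame to every other attached node.
        for node_id, inbox in self.inboxes.items():
            if node_id != sender_id:
                inbox.put((sender_id, frame))

    def receive(self, node_id, timeout=2.0):
        return self.inboxes[node_id].get(timeout=timeout)

# Two nodes on one virtual interface: node "a" pings, node "b" answers.
iface = VirtualInterface()
iface.attach("a")
iface.attach("b")

def node_b():
    sender, frame = iface.receive("b")
    if frame == "ping":
        iface.send("b", "pong")

t = threading.Thread(target=node_b)
t.start()
iface.send("a", "ping")
sender, reply = iface.receive("a")
t.join()
print(reply)  # pong
```

A test case is then just a script driving such nodes, which is why no network simulator is needed.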
In recent years, additive manufacturing processes have developed rapidly and now present a high-performance alternative to conventional manufacturing methods. In particular, they offer previously hardly imaginable design freedom, i.e. the implementation of complex forms and geometries. This capability can, for example, be applied in the development of especially light but still load-bearing components in automotive engineering. In addition, additive manufacturing seldom produces waste material, which benefits the sustainable production of components. Until now, this design freedom was barely used in the construction of technical components and products, because both specific design guidelines for additive manufacturing and complex strength calculations must be observed simultaneously. To fully exploit the potential of additive manufacturing, the method of topology optimization, based on FEM simulation, suggests itself. With this method, components that are precisely matched to their loads and especially light, and thereby resource-saving, can be produced. Current literature indicates that this method is used in automotive manufacturing to reduce weight and improve the stability of both individual parts and assemblies. This contribution studies how this development method can be applied, using the example of a brake mount from an experimental vehicle. The conventional design is improved in several steps by means of a simulation tool for topology optimization. In an additional processing step, the resulting component is smoothed. Finally, the component is additively manufactured by means of selective laser melting; models for demonstrating the process are manufactured using binder jetting. It is also determined how this weight reduction affects the CO2 emissions of a vehicle in use.
Additive manufacturing processes have evolved rapidly in recent years and now offer a wide range of manufacturing technologies and workable materials, ranging from plastics and metals to paper and even polymer-plaster composites. Due to the layer-by-layer build-up of the components, the additive processes have the advantage of design freedom compared with conventional manufacturing processes, i.e. the simple implementation of complex geometries. Moreover, the additive processes reduce the consumption of resources, since essentially only the material required for the actual component is consumed and no waste in the form of chips is produced. In order to use these advantages, the potentials of additive manufacturing and the requirements of sustainable design must already be observed in the product development process. The components and products must therefore be designed so that as little construction and support material as possible is required for generative production, and hence few resources are consumed. All steps of the additive manufacturing process must also be considered properly, including post-processing. This allows components to be designed so that, for instance, the effort for removing the support structure is considerably reduced, which leads to a significant reduction in manufacturing time and thus energy consumption. The implementation of these potentials in product development can be demonstrated by means of a multi-stage model. A case study shows how this model is applied in the training of Master's students in the field of product development. In a workshop, the students work as a group on the task of developing a miniature racing car under the rules of sustainable design, in compliance with the boundary conditions of additive manufacturing. In this case, Fused Deposition Modelling (FDM) with plastics as the building material is applied.
The results show how the students have dealt with the different requirements and how they have implemented them in product development and in the subsequent additive manufacturing.
Defining Recrutainment: A Model and a Survey on the Gamification of Recruiting and Human Resources
(2017)
Recrutainment is a hybrid word combining recruiting and entertainment. It describes the combination of activities in human resources and gamification. Concepts and methods from game design are now used to assess and select future employees. Beyond this area, recrutainment is also applied to internal processes like professional development or even marketing campaigns. This paper’s contribution has four components: (1) we provide a conceptual background, leading to a more precise definition of recrutainment; (2) we develop a new model for analyzing solutions in recrutainment; (3) we present a corpus of 42 applications and use the new model to assess their strengths and potentials; (4) we provide a bird’s-eye view of the state of the art in recrutainment and show the current weighting of gamification and recruiting aspects.
Applications helping us to maintain the focus on work are called “Zenware” (from concentration and Zen). While form factors, use cases and functionality vary, all these applications have a common goal: creating uninterrupted, focused attention on the task at hand. The rise of such tools exemplifies the users’ desire to control their attention within the context of omnipresent distraction. In expert interviews we investigate approaches in the context of attention-management at the workplace of knowledge workers. To gain a broad understanding, we use judgement sampling in interviews with experts from several disciplines. We especially explore how focus and flow can be stimulated. Our contribution has four components: a brief overview on the state of the art (1), a presentation of the results (2), strategies for coping with digital distractions and design guidelines for future Zenware (3) and an outlook on the overall potential in digital work environments (4).
This chapter portrays the historical and mathematical background of dynamic and procedural content generation (PCG). We portray and compare various PCG methods and analyze which mathematical approach is suited for typical applications in game design. In the next step, a structural overview of games applying PCG as well as types of PCG is presented. As abundant PCG content can be overwhelming, we discuss context-aware adaptation as a way to adapt the challenge to individual players’ requirements. Finally, we take a brief look at the future of PCG.
Battery degradation is a complex physicochemical process that strongly depends on operating conditions. We present a model-based analysis of lithium-ion battery degradation in a stationary photovoltaic battery system. We use a multi-scale multi-physics model of a graphite/lithium iron phosphate (LiFePO4, LFP) cell including solid electrolyte interphase (SEI) formation. The cell-level model is dynamically coupled to a system-level model consisting of photovoltaics (PV), inverter, load, grid interaction, and energy management system, fed with historic weather data. Simulations are carried out for two load scenarios, a single-family house and an office tract, over annual operation cycles with one-minute time resolution. As key result, we show that the charging process causes a peak in degradation rate due to electrochemical charge overpotentials. The main drivers for cell ageing are therefore not only a high state of charge (SOC), but the charging process leading towards high SOC. We also show that the load situation not only influences system parameters like self-sufficiency and self-consumption, but also has a significant impact on battery ageing. We assess reduced charge cut-off voltage as ageing mitigation strategy.
The DMFC is a promising option for backup power systems and for the power supply of portable devices. However, from the modeling point of view, liquid-feed DMFCs are challenging systems due to the complex electrochemistry, the inherent two-phase transport and the effect of methanol crossover. In this paper we present a physical 1D cell model describing the processes relevant for DMFC performance, ranging from electrochemistry on the catalyst surface up to transport on the cell level. A two-phase flow model describes the transport in the gas diffusion layer and catalyst layer on the anode side. Electrochemistry is described by elementary steps for the reactions occurring at anode and cathode, including adsorbed intermediate species on the platinum and ruthenium surfaces. Furthermore, a detailed membrane model including methanol crossover is employed. The model is validated using polarization curves, methanol crossover measurements and impedance spectra. It permits the analysis of both steady-state and transient behavior with a high level of predictive capability. Steady-state simulations are used to investigate the open-circuit voltage as well as the overpotentials of anode, cathode and electrolyte. Finally, the transient behavior after current interruption is studied in detail.
This book offers a compendium of best practices in game dynamics. It covers a wide range of dynamic game elements ranging from player behavior over artificial intelligence to procedural content generation. Such dynamics make virtual worlds more lively and realistic and they also create the potential for moments of amazement and surprise. In many cases, game dynamics are driven by a combination of random seeds, player records and procedural algorithms. Games can even incorporate the player’s real-world behavior to create dynamic responses. The best practices illustrate how dynamic elements improve the user experience and increase the replay value.
The book draws upon interdisciplinary approaches; researchers and practitioners from Game Studies, Computer Science, Human-Computer Interaction, Psychology and other disciplines will find this book to be an exceptional resource of both creative inspiration and hands-on process knowledge.
Gamifying rehabilitation is an efficient way to improve motivation and exercise frequency. However, between flow theory, self-determination theory and Bartle's player types there is much room for speculation regarding the mechanics required for successful gamification, which in turn leads to increased motivation. For our study, we selected a gamified solution for motion training (an exergame) whose playful design elements are extremely simple. The contribution is three-fold: we show best practices from the state of the art, present a study analyzing the effects of simple gamification mechanics on a quantitative and on a qualitative level, and discuss strategies for playful design in therapeutic movement games.
Designing Authentic Emotions for Non-Human Characters. A Study Evaluating Virtual Affective Behavior
(2017)
While human emotions have been researched for decades, designing authentic emotional behavior for non-human characters has received less attention. However, virtual behavior not only affects game design, but also allows creating authentic avatars or robotic companions. After a discussion of methods to model and recognize emotions, we present three characters with a decreasing level of human features and describe how established design techniques can be adapted for such characters. In a study, 220 participants assess these characters' emotional behavior, focusing on the emotion "anger". We want to determine how reliably users can recognize emotional behavior when characters increasingly do not look and behave like humans. A secondary aim is determining whether gender has an impact on the competence in emotion recognition. The findings indicate that there is an area of insecure attribution of virtual affective behavior, lying not distant from but close to human behavior. We also found that, at least for anger, men and women assess emotional behavior equally well.
This work demonstrates the potentials of procedural content generation (PCG) for games, focusing on the generation of specific graphic props (reefs) in an explorer game. We briefly portray the state of the art of PCG and compare various methods to create random patterns at runtime. Taking a step towards the game industry, we describe an actual game production and provide a detailed pseudocode implementation showing how Perlin or Simplex noise can be used efficiently. In a comparative study, we investigate two alternative implementations of a key game prop: one created traditionally by artists and one generated by procedural algorithms. 41 test subjects played both implementations. The analysis shows that PCG can create a user experience that is significantly more realistic and at the same time perceived as more aesthetically pleasing. In addition, the ever-changing nature of the procedurally generated environments is preferred with high significance, especially by players aged 45 and above.
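The kind of noise-driven generation described above can be illustrated with a compact, didactic 1D gradient-noise function in the spirit of Perlin noise. This is our simplified sketch, not the pseudocode from the paper; the point is that the noise is deterministic for a given seed, smooth between lattice points, and therefore suitable for reproducible procedural props.

```python
# A compact 1D gradient-noise sketch in the spirit of Perlin noise.
# Didactic simplification: real Perlin/Simplex noise works in 2D/3D with
# permutation tables, but the blending principle is the same.
import math
import random

def make_noise1d(seed: int, table_size: int = 256):
    rng = random.Random(seed)
    # Pseudo-random gradients (slopes) at integer lattice points.
    gradients = [rng.uniform(-1.0, 1.0) for _ in range(table_size)]

    def fade(t):
        # Perlin's quintic fade curve: zero first and second derivative
        # at t=0 and t=1, which makes the blend seamless.
        return t * t * t * (t * (t * 6 - 15) + 10)

    def noise(x):
        i0 = math.floor(x)
        t = x - i0
        g0 = gradients[i0 % table_size]
        g1 = gradients[(i0 + 1) % table_size]
        # Blend the ramp contributions of both lattice neighbours.
        return (1 - fade(t)) * (g0 * t) + fade(t) * (g1 * (t - 1))

    return noise

noise = make_noise1d(seed=42)
# Same seed, same input -> same output: reproducible "random" content.
assert noise(3.7) == make_noise1d(seed=42)(3.7)
# The noise vanishes at lattice points (both ramp terms are zero there).
print(round(noise(5.0), 6))  # 0.0
```

In a game, such a function (octave-summed and mapped to height, density, or color) decides where and how each reef instance grows, without storing any art assets.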
Gamification, the playful enrichment of activities, is enjoying growing popularity. Particularly in the areas of health (exergames) and learning (serious games, edutainment), there is a multitude of successful applications. Gamification is, however, less widespread so far in work processes. Although there are successful approaches in the service sector (e.g. in call centers), the field of industrial production was not addressed until a few years ago.
This chapter gives an overview of the development of gamification and presents the state of the art. We derive general requirements for gamification in production environments and present two new approaches from current research. These are examined for acceptance in a study with trainers from the automotive industry. The results show an overall positive attitude towards the gamification of production and a very high acceptance, in particular, of the pyramid design.
We present the design outline of a context-aware interactive system for smart learning in the STEM curriculum (science, technology, engineering, and mathematics). It is based on a gameful design approach and enables "playful coached learning" (PCL): a learning process enriched by gamification but also close to the learner's activities and emotional setting. After a brief introduction on related work, we describe the technological setup, the integration of projected visual feedback and the use of object and motion recognition to interpret the learner's actions. We explain how this combination enables rapid feedback and why this is particularly important for correct habit formation in practical skills training. In a second step, we discuss gamification methods and analyze which are best suited for the PCL system. Finally, emotion recognition, a major element of the final PCL design not yet implemented, is briefly outlined.
EuGH "comtech"
(2017)
Since their early days, space communications have been among the strongest driving applications for the development of error correcting codes. Indeed, space-to-Earth telemetry (TM) links have extensively exploited advanced coding schemes, from convolutional codes to Reed-Solomon codes (also in concatenated form) and, more recently, from turbo codes to low-density parity-check (LDPC) codes. The efficiency of these schemes has been extensively proved in several papers and reports. The situation is a bit different for Earth-to-space telecommand (TC) links. Space TCs must reliably convey control information as well as software patches from Earth control centers to scientific payload instruments and engineering equipment onboard (O/B) spacecraft. The success of a mission may be compromised by an error corrupting a TC message: a detected error causing no execution or, even worse, an undetected error causing a wrong execution. This imposes strict constraints on the maximum acceptable detected and undetected error rates.
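The distinction between detected and undetected errors can be demonstrated with a deliberately weak toy code. A single parity bit detects any single-bit error but accepts any double-bit error; real TC links use far stronger codes (e.g. BCH or LDPC, as discussed above), but the failure modes are the same in kind.

```python
# Toy illustration of detected vs. undetected errors on a command link.
# A single parity bit catches odd numbers of bit flips but misses even
# numbers of flips, so a corrupted telecommand could still be executed.

def parity(bits):
    return sum(bits) % 2

def encode(bits):
    # Append one even-parity bit to the message.
    return bits + [parity(bits)]

def check(codeword):
    # True = accepted as error-free by the receiver.
    return parity(codeword) == 0

msg = [1, 0, 1, 1]
cw = encode(msg)

one_bit = cw.copy(); one_bit[0] ^= 1            # single bit flip
two_bit = cw.copy(); two_bit[0] ^= 1; two_bit[1] ^= 1  # double bit flip

print(check(cw))       # True  (correct codeword accepted)
print(check(one_bit))  # False (detected error: no execution)
print(check(two_bit))  # True  (undetected error: wrong execution!)
```

The undetected-error rate of the real codes bounds how often the third case may occur, which is exactly the constraint the abstract refers to.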
Our university carries out various research projects. Among others, the Schluckspecht project is an interdisciplinary effort on different ultra-efficient car concepts for international contests. Besides the engineering work, one part of the project deals with real-time data visualization. In order to increase the efficiency of the vehicle, online monitoring of the runtime parameters is necessary. The driving parameters of the vehicle are transmitted to a processing station via a wireless network connection. We plan to use an augmented reality (AR) application to visualize different data on top of the view of the real car. Using a mobile Android or iOS device, a user can interactively view various real-time and statistical data. The car and its components are meant to be augmented with various additional information, which should appear at the correct position of the respective component. An engine, for example, could show the current rpm and consumption values; a battery could show the current charge level. The goal of this paper is to evaluate different possible approaches and their suitability, and to expand our application to other projects at our university.
We present a two-dimensional (2D) planar chromatographic separation of estrogenic active compounds on RP-18 W (Merck, 1.14296) phase. A mixture of 8 substances was separated using a solvent mix consisting of hexane, ethyl acetate, acetone (55:15:10, v/v) in the first direction and of acetone and water (15:10, v/v) in the second direction. Separation was performed on an RP-18 W plate over a distance of 70 mm. This 2D-separation method can be used to quantify 17α-ethinylestradiol (EE2) in an effect-directed analysis, using the yeast strain Saccharomyces cerevisiae BJ3505. The test strain (according to McDonnell) contains the estrogen receptor. Its activation by estrogen active compounds is measured by inducing the reporter gene lacZ which encodes the enzyme β-galactosidase. This enzyme activity is determined on plate by using the fluorescent substrate MUG (4-methylumbelliferyl-β-d-galactopyranoside).
Finding clusters in high-dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of the dimensions of the data. But the exponential increase in the number of subspaces with the dimensionality of the data renders most algorithms inefficient as well as ineffective. Moreover, these algorithms have ingrained data dependencies in the clustering process, so parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which is scalable with the dimensions and contains independent processing steps that can be exploited through parallelism. In this paper, we aim to leverage, firstly, the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation has shown linear speedup. Secondly, we are developing an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
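The property being exploited — that SUBSCALE's per-dimension steps are independent of each other — can be sketched as follows. The density rule and all names below are our illustration (the actual algorithm uses signature-based collision counting, and the paper uses multi-core processes and GPUs rather than threads), but the fan-out structure is the point.

```python
# Sketch of the parallelisation opportunity: candidate dense 1-D regions
# can be computed for every dimension independently, so the dimensions can
# be fanned out over a worker pool. Illustrative only, not SUBSCALE itself.
from concurrent.futures import ThreadPoolExecutor

def dense_points_1d(values, eps=0.5, min_pts=2):
    """Indices of points with >= min_pts neighbours within eps (1-D)."""
    dense = []
    for i, v in enumerate(values):
        neighbours = sum(1 for w in values if abs(w - v) <= eps) - 1
        if neighbours >= min_pts:
            dense.append(i)
    return dense

def per_dimension_clusters(data, eps=0.5, min_pts=2):
    """Process every dimension independently and in parallel."""
    dims = list(zip(*data))  # column view: one tuple per dimension
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda col: dense_points_1d(col, eps, min_pts),
                             dims))

# Toy data: 4 points in 2 dimensions; points 0-2 are close in dimension 0.
data = [(1.0, 10.0), (1.2, 50.0), (1.1, 90.0), (9.0, 130.0)]
print(per_dimension_clusters(data))  # [[0, 1, 2], []]
```

Because no dimension's result depends on another's, the same map step translates directly to processes or GPU thread blocks for real speedup.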
Modelling and Simulation of Microscale Trigeneration Systems Based on Real- Life Experimental Data
(2017)
For the shift of the energy grid towards a smarter, decentralised system, flexible microscale trigeneration systems will play an important role due to their ability to support demand-side management in buildings. However, to harness their potential, modern control methods like model predictive control must be implemented for their optimal scheduling and control. To implement such supervisory control methods, simple analytical models representing the behaviour of the components first need to be developed. At the Institute of Energy System Technologies in Offenburg we have built a real-life microscale trigeneration plant, and in this paper we present models based on experimental data. These models are qualitatively validated, and their future application to the optimal scheduling problem is briefly motivated.
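A "simple analytical model" in this sense can be as plain as a static input/output map fitted to measurements, for example the electrical output of a micro-CHP unit as an affine function of its fuel input. The closed-form least-squares fit below shows the principle; the coefficients and measurement values are fabricated for illustration and are not the plant data from the paper.

```python
# Fitting a static affine component model y ≈ a*x + b to measurements,
# as used for scheduling-oriented models (sketch with made-up numbers).

def fit_affine(x, y):
    """Closed-form least-squares fit y ≈ a*x + b (no libraries needed)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

# Fabricated "measurements": electrical power vs. fuel power of a CHP unit.
fuel = [4.0, 6.0, 8.0, 10.0]   # kW fuel input
p_el = [1.0, 1.6, 2.2, 2.8]    # kW electrical output
a, b = fit_affine(fuel, p_el)
print(round(a, 3), round(b, 3))  # 0.3 -0.2
```

Such affine maps are attractive for optimal scheduling because they keep the resulting optimization problem linear.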
This bachelor thesis covers the theoretical foundations of IPv6, various learning-theory approaches, in particular with regard to electronic learning, as well as the design of individual e-learning sections and the documentation of their practical implementation. The goal of the e-learning lesson created as part of this work was to provide students at Offenburg University with a practically relevant, learning-friendly e-learning application that invites them to consolidate and deepen the IPv6 content taught in parallel in the computer networks lecture.
This book has emerged from lectures and courses given in recent years by the authors at their universities and shows how theoretical concepts of Business Intelligence are applied in SAP BW on HANA.
The authors developed a set of case studies guiding the student through the complete process of building an end-to-end BI system, based on a simple but realistic business scenario. The cases are designed in such a way that the application of many concepts such as staging, core data warehouse, data mart, reporting, etc., in SAP BW on HANA is introduced and demonstrated step by step.
Target Audience:
The cases are primarily designed for SAP BW beginners who want a first introduction to and hands-on experience with the latest version of BW on HANA. We briefly touch on the general concepts of Business Intelligence and Data Warehousing. These concepts are discussed in many excellent books on the market, which we do not want to replace. The reader should either already be familiar with these concepts or be willing to use the references we provide. Also, this book can NOT replace a complete consultant training for BW, but it can serve as a starting point for a journey into the world of SAP BW on HANA.
Design and Execution of the Evaluation of a Virtual Learning Environment: Methodenlehre-Baukasten
The MLBK is freely available to learners and teachers at http://www.methodenlehre-baukasten.de. Development work is still ongoing to add missing exercises, correct faulty ones, and replace parts of modules with which we are not yet satisfied. User feedback is very welcome to drive this development process forward. For the future, a business model is being developed that is intended to support the maintenance, care and further development of the system and to ensure its sustainable availability.
Time-of-Flight Cameras Enabling Collaborative Robots for Improved Safety in Medical Applications
(2017)
Human-robot collaboration is being used more and more in industrial applications and is finding its way into medical applications. Industrial robots that are used for human-robot collaboration cannot detect obstacles from a distance. This paper introduces the idea of using wireless technology to connect a Time-of-Flight camera to off-the-shelf industrial robots. This way, the robot can detect obstacles up to a distance of five meters. Connecting Time-of-Flight cameras to robots increases safety in human-robot collaboration by detecting obstacles before a collision. After looking at the state of the art, the authors elaborate the different requirements for such a system. The Time-of-Flight camera from Heptagon is able to work in a range of up to five meters and can connect to the control unit of the robot via a wireless connection.
In safety-critical applications, wireless technologies are not widespread, mainly due to reliability and latency requirements. In this paper, a new wireless architecture is presented which allows the latency and reliability to be customized for every single participant within the network. The architecture allows a network of inhomogeneous participants with different reliability and latency requirements to be built up. The TDMA scheme used, with TDD as the duplex method, is economical with resources, so participants with different processing and energy resources are able to take part.
In medical applications, wireless technologies are not widespread. Today they are mainly used in non-latency-critical applications where reliability can be guaranteed through retransmission protocols and error correction mechanisms. Retransmission protocols on the disturbed, shared wireless channel, however, increase latency, and are therefore not sufficient for replacing latency-critical wired connections within operating rooms, such as foot switches. Today's research aims to improve reliability through the physical characteristics of the wireless channel, using diversity methods and more robust modulation. In this paper, an architecture for building up a reliable network is presented. The architecture allows devices with different reliability, latency and energy consumption requirements to participate. Furthermore, reliability, latency and energy consumption are scalable for every single participant.
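The per-participant trade-off that both architectures above describe can be sketched as a simple TDMA superframe builder: a participant needing higher reliability gets more (re)transmission slots, while a latency-critical one gets the earliest slots. The scheduling rule and all field names are our illustration, not the papers' actual protocol.

```python
# Sketch of per-participant latency/reliability scaling in a TDMA frame.
# More slots -> more repetition opportunities -> higher reliability;
# earlier slots -> lower worst-case latency. Illustrative only.

def build_superframe(participants, slots_per_frame=10):
    """participants: list of (name, n_slots, latency_critical)."""
    # Latency-critical participants are placed first in the frame.
    ordered = sorted(participants, key=lambda p: not p[2])
    frame = []
    for name, n_slots, _critical in ordered:
        frame.extend([name] * n_slots)
    if len(frame) > slots_per_frame:
        raise ValueError("superframe over-subscribed")
    # Unused slots stay idle (they could carry best-effort traffic).
    frame.extend(["idle"] * (slots_per_frame - len(frame)))
    return frame

# A foot switch needs low latency; a monitor wants reliability through
# repeated slots; a sensor is best-effort with a single slot.
frame = build_superframe([
    ("monitor", 4, False),
    ("foot_switch", 2, True),
    ("sensor", 1, False),
])
print(frame)
# ['foot_switch', 'foot_switch', 'monitor', 'monitor', 'monitor',
#  'monitor', 'sensor', 'idle', 'idle', 'idle']
```

Because the slot budget is fixed per frame, giving one device more reliability visibly costs capacity elsewhere, which is exactly why the architecture makes these parameters scalable per participant rather than network-wide.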