New employees are expected to quickly understand their tasks and internal processes and to become familiar with their colleagues. This process, called “onboarding”, is still mainly realized through organizational methods from human resource management, such as introductory events or special employee sessions. Software tools, and especially mobile applications, are an innovative means to support onboarding processes in a modern, even remote, way. In this paper we analyze how the use of gamification can enhance onboarding processes. Firstly, we describe a mobile onboarding application specifically developed for the young, technically literate generations Y and Z, who are just about to start their careers. Secondly, we report on a study with 98 students and young employees. We found that participants enjoyed the gamified application. They especially appreciated the feature “Team Bingo”, which facilitates social integration and team building. Based on the OCEAN personality model (“Big Five”), the personality traits agreeableness and openness revealed significant correlations with a preference for the gamified onboarding application.
The invention relates to an oesophageal electrode probe (10) for bioimpedance measurement and/or for neurostimulation; a device (100) for transoesophageal cardiological treatment and/or cardiological diagnosis; and a method for the open-loop or closed-loop control of a cardiac catheter ablation device and/or a cardiac, circulatory and/or respiratory support device. The oesophageal electrode probe comprises a bioimpedance measuring device for measuring the bioimpedance of at least one part of the tissue surrounding the oesophageal electrode probe. The bioimpedance device comprises at least one first and one second electrode, wherein the at least one first electrode (12A) is arranged on a side (14) of the oesophageal electrode probe facing towards the heart and the at least one second electrode (12B) is arranged on a side (16) of the oesophageal electrode probe facing away from the heart. The device (100) comprises the oesophageal electrode probe (10) and a control and/or evaluation device (30), which is configured for receiving a first bioimpedance measurement signal from the at least one first electrode (12A) and a second bioimpedance measurement signal from the at least one second electrode (12B), and comparing same, and generating a control signal on the basis of the comparison. The control signal can be a signal for the open-loop or closed-loop control of a cardiac catheter ablation device and/or a cardiac, circulatory and/or respiratory support device.
The paper describes a systematic approach for precise short-time cloud coverage prediction based on an optical system. We present a distinct pre-processing stage that uses a model-based clear-sky simulation to enhance cloud segmentation in the images. The images come from a sky imager system with fish-eye lens optics to cover a maximum area. After a calibration step, the image is rectified to enable linear prediction of cloud movement. In a subsequent step, the clear-sky model is estimated on actual high-dynamic-range images and combined with a threshold-based approach to segment clouds from sky. In the final stage, a multi-hypothesis linear tracking framework estimates cloud movement, velocity and possible coverage of a given photovoltaic power station. We employ a Kalman filter framework that operates efficiently on the rectified images. The evaluation on real-world data suggests a high coverage prediction accuracy above 75%.
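The linear tracking stage described above can be illustrated with a constant-velocity Kalman filter on rectified image coordinates. The following sketch is illustrative only: the state layout, noise covariances and frame interval are assumptions, not the parameters used in the paper.

```python
import numpy as np

# Constant-velocity Kalman filter tracking a cloud centroid in
# rectified image coordinates. State: [x, y, vx, vy] (pixels, px/frame).
# All parameter values below are assumptions for illustration.
dt = 1.0  # assumed time between frames
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is measured
Q = np.eye(4) * 0.01   # process noise (assumed)
R = np.eye(2) * 1.0    # measurement noise (assumed)

x = np.zeros(4)        # initial state estimate
P = np.eye(4) * 100.0  # large initial uncertainty

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured centroid z = [px, py]
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Feed a cloud moving 2 px/frame in x; the velocity estimate converges.
for t in range(20):
    x, P = kalman_step(x, P, np.array([2.0 * t, 0.0]))
# x[2] (estimated x-velocity) approaches 2.0
```

Once the velocity estimate has settled, extrapolating the state with `F` yields the predicted future cloud position used for coverage prediction.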
This paper presents an approach for implementing an automated hit detection and score calculation system for a steel dartboard using a standard webcam. First, the rectilinear field separations of the dartboard are described mathematically by means of line slopes and are then stored. These slopes serve as a basis for the later score calculation. In addition, thrown darts have to be detected, and the pixel at which the dart intersects the dartboard has to be determined. Once this information is known, a comparison is made using the line slopes, allowing the field number of the hit to be determined. The decision between a single, double or triple hit is made by evaluating the defined colors on the dartboard. All these functions are then packaged in a MATLAB GUI.
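The slope-based field separation can equivalently be expressed in polar form: the sector of a hit follows from its angle around the board centre, and the ring (single/double/triple) from its radius. The sketch below uses this angle formulation with the standard sector order; it is an illustration of the idea, not the paper's MATLAB implementation, and the ring radii are hypothetical values.

```python
import math

# Standard dartboard sector numbers, clockwise starting at the top.
SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def sector_at(cx, cy, px, py):
    # Angle measured clockwise from the upward vertical through the
    # board centre (image y grows downward, hence cy - py).
    angle = math.degrees(math.atan2(px - cx, cy - py))
    # Sector 20 spans -9°..+9°; each sector is 18° wide.
    index = int(((angle + 9.0) % 360.0) // 18.0)
    return SECTORS[index]

def multiplier(r):
    # Hypothetical ring radii as fractions of the board radius; a real
    # system would use the calibrated colour regions instead.
    if 0.58 < r <= 0.63:
        return 3   # triple ring
    if 0.95 < r <= 1.00:
        return 2   # double ring
    return 1
```

For example, a hit directly above the centre falls in sector 20, and a hit directly to the right falls in sector 6.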
Avoiding collisions between a robot arm and any obstacle in its path is essential to human-robot collaboration. Multiple systems are available that can detect obstacles in the robot's way prior and subsequent to a collision. These systems work well in different areas surrounding the robot. One area that is difficult to handle is the area hidden by the robot arm itself. This paper focuses on pick-and-place maneuvers, especially on obstacle detection between the robot arm and the table on which the robot is located. It introduces the use of single-pixel time-of-flight sensors to detect obstacles directly from the robot arm. The proposed approach reduces the complexity of the problem by locking axes of the robot that are not needed for the pick-and-place movement. The comparison of simulated results and laboratory measurements shows concordance.
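A minimal sketch of the underlying detection idea: with the unused axes locked, the expected distance from an arm-mounted sensor down to the table is known, so any reading clearly shorter than that must be an obstacle. The tolerance value below is a made-up example, not a figure from the paper.

```python
# Tolerance for sensor noise and table-distance calibration (assumed).
TABLE_TOLERANCE_M = 0.05

def obstacle_detected(measured_m: float, expected_table_m: float) -> bool:
    # A single-pixel time-of-flight reading significantly closer than
    # the expected table surface indicates an obstacle under the arm.
    return measured_m < expected_table_m - TABLE_TOLERANCE_M
```

Locking the axes that are irrelevant to the pick-and-place motion is what keeps `expected_table_m` a simple precomputed value rather than a function of the full joint configuration.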
This paper describes the concept and some results of the project "Menschen Lernen Maschinelles Lernen" (Humans Learn Machine Learning, ML2) of the University of Applied Sciences Offenburg. It brings together students of different courses of study and practitioners from companies on the subject of Machine Learning. A mixture of blended learning and practical projects ensures a tight coupling of machine learning theory and application. The paper details the phases of ML2 and mentions two successful example projects.
Virtual reality in the hotel industry: assessing the acceptance of immersive hotel presentation
(2019)
In the hotel industry, it is crucial to reduce the inherent information asymmetry with regard to the goods offered. This asymmetry can be minimised through the use of smartphone-based virtual reality applications (SBVRs), which allow virtual simulation of real experiences and thus enable more efficient information retrieval. The aim of the study is to determine for the first time the user acceptance of these immersive hotel presentations for assessing the performance of a travel accommodation. For this purpose, the Technology Acceptance Model (TAM) was used to explain the acceptance behaviour for this new technology. A virtual reality application was specially developed, in which the participants could explore a hotel virtually. A total of 569 participants took part in the study. The structural equation model and the hypotheses were tested using a Partial Least Squares (PLS) analysis. The results illustrate that the immersive product experience leads to more efficient information gathering. The perceived usefulness significantly affects the attitude towards using the technology as well as the intention to use it. In contrast to the traditional TAM, the perceived ease of use of SBVRs has no effect on the perceived usefulness or attitude towards using the technology.
Purpose
The purpose of this study is to investigate the effects of telepresence while using a smartphone-based virtual reality system (SBVR) to explore a hotel virtually and to determine the influence of this immersive experience on the booking intention of the potential customer.
Design/methodology/approach
Within the scope of this study, a conceptual research model was developed which covered utilitarian and hedonic aspects of the user experience of SBVRs and showed their relevance for the booking intention. A virtual reality application was programmed especially for the study, in which the test persons were able to virtually explore a hotel complex. A total of 569 people participated in the study. A questionnaire was used for the data collection. The structural equation modelling and hypothesis verification were carried out using the partial least squares method.
Findings
The immersive feeling of telepresence increases the perceived enjoyment and usefulness of the potential customer. In addition, the user's curiosity is aroused by the telepresence, which also significantly increases the perceived enjoyment as well as the perceived usefulness. The hedonic and utilitarian value of the virtual hotel experience increases the probability that the customer will book the travel accommodation.
Research limitations/implications
The virtual reality application developed for the study is based on static panoramic images and does not contain audio-visual elements (e.g. sound, video, animation). Audio-visual elements might increase the degree of immersion and could therefore be investigated in future research.
Practical implications
The results of the study show that the SBVR is a suitable marketing tool to present hotels in an informative and entertaining way, and can thereby increase sales and profits.
Originality/value
For the first time, this study investigates the potential of SBVRs for the virtual product presentation of hotels and provides empirical evidence that the availability of this innovative form of presentation leads to a higher booking intention.
For e-commerce retailers it is crucial to present their products both informatively and attractively. Virtual reality (VR) systems represent a new marketing tool that supports customers in their decision-making process and offers an extraordinary product experience. Despite these advantages, the use of this technology for e-commerce retailers is also associated with risks, namely cybersickness. The aim of the study is to investigate the occurrence of cybersickness in the context of the customer’s perceived enjoyment and the perceived challenge of a VR product presentation. Based on a conceptual research framework, a laboratory study with 533 participants was conducted to determine the influence of these factors on the occurrence of cybersickness. The results demonstrate that the perceived challenge has a substantially stronger impact on the occurrence of cybersickness, which can only be partially reduced by perceived enjoyment. When realizing VR applications in general and VR product presentations in particular, e-commerce retailers should therefore first minimize possible challenges instead of focusing primarily on entertainment aspects of such applications.
Hot working tools are subjected to complex thermal and mechanical loads during service. Locally, the stresses can exceed the material’s yield strength in highly loaded areas. During production, this causes cyclic plastic deformation and thus thermomechanical fatigue, which can significantly shorten the lifetime of hot working tools. To sustain these high loads, hot working tools are typically made of tempered martensitic hot work tool steels. While the annealing temperatures of the tool steels usually lie in the range of 400 to 600 °C, the steels may experience even higher temperatures during hot working, resulting in softening of the material due to changes in the microstructure. Therefore, the temperature-dependent cyclic mechanical properties of the frequently used hot work tool steel 1.2367 (X38CrMoV5-3) after tempering are investigated in this work. To this end, hardness measurements are performed. Furthermore, the Institute of Forming Technology and Machines (IFUM) provides test results from cyclic tests at temperatures ranging from 20 °C (room temperature) to 650 °C. To describe the observed time- and temperature-dependent softening during tempering, a kinetic model for the evolution of the mean size of secondary carbides based on Ostwald ripening is developed. In addition, both mechanism-based and phenomenological relationships for the cyclic mechanical properties of the Ramberg-Osgood model depending on carbide size and temperature are proposed. The stress-strain hysteresis loops measured at different temperatures and after different heat treatments can be described well with the proposed kinetic and mechanical model. Furthermore, the model is suitable for integration into advanced mechanism-based lifetime models. However, since the Ramberg-Osgood model is not suitable for finite element implementation, a temperature-dependent incremental cyclic plasticity model is presented as well.
In this way, softening due to particle coarsening can be applied in the finite element method (FEM). To this end, the kinetic model is coupled with a cyclic plasticity model including kinematic hardening. The plasticity model is implemented via subroutines in the finite element program ABAQUS for implicit integration (subroutine UMAT) and explicit integration (subroutine VUMAT). The implemented model is used for the simulation of an exemplary hot working process to assess the effects of softening due to particle coarsening. It shows that thermal softening at high temperatures, which occurs over a long time in a mechanically highly loaded area, has a great influence. If this influence is not considered in tool design, an unexpected tool failure might occur, bringing production to a standstill.
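For reference, the two model ingredients named above have standard textbook forms; the sketch below shows generic versions, where the cyclic strength coefficient $K'$, cyclic hardening exponent $n'$, rate constant $k_0$ and activation energy $Q$ are placeholders, not the calibrated values of this work:

```latex
% Cyclic Ramberg-Osgood relation: strain amplitude = elastic + plastic part
\frac{\Delta\varepsilon}{2} = \frac{\Delta\sigma}{2E}
  + \left(\frac{\Delta\sigma}{2K'}\right)^{1/n'}

% Ostwald-ripening (LSW-type) kinetics for the mean carbide radius at
% temperature T, with gas constant R:
\bar{r}^{\,3}(t) - \bar{r}^{\,3}_{0} = k(T)\,t,
\qquad k(T) = k_{0}\exp\!\left(-\frac{Q}{RT}\right)
```

In a model of the kind described, the temperature and carbide-size dependence of the softening would enter through $K'$ and $n'$ varying with $\bar{r}$ and $T$.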
Fusion 360 – kurz und bündig
(2019)
This textbook enables beginners in 3D modelling to get started quickly with the cloud-based CAD system Autodesk® Fusion 360™. The focus is on the basic functions for modelling individual parts and products, as well as on creating simple technical drawings. At every step, the particular requirements of a design suitable for 3D printing are explained and put into practice. The result of this step-by-step guide is thus the complete model of a miniature automobile, which can be turned into a physical model on a 3D printer. The didactic concept is designed so that all steps are suitable for self-study.
Besides conventional CAD systems, new, cloud-based CAD systems have also been available for some years. These CAD systems, designed according to the principle of software as a service (SaaS), differ in some important features from conventional CAD systems. They are operated via a browser, so it is not necessary to install the software on a computer. The CAD data is stored in the cloud and not on a local computer or central server. This new approach should also facilitate the sharing and management of data. Finally, many of these new CAD systems are available as freeware for educational purposes, so universities can save license costs. This contribution examines newly developed, cloud-based CAD systems. In the context of a case study, the application of these new CAD systems is investigated in the training of engineers in design education. The students compare a conventional and a cloud-based CAD system as part of an exercise in designing and 3D modelling a pinion shaft. Subsequently, the students produce a drawing with different views of the pinion shaft. The assessment evaluates different criteria such as user-friendliness, tutorial support and installation effort.
The ability to change aerodynamic parameters of airfoils during flight can potentially save energy as well as reduce the noise that unmanned aerial vehicles (UAVs) generate at the sharp edges of the airfoil and its rudders. In this paper, an approach for the design of an adaptive wing using a multi-material 3D printer is shown. In multi-material 3D printing, up to six different materials can be combined in one component. Thus, the user can determine the mixture and the spatial arrangement of this “digital material” in advance in the pre-processing software. First, the theoretical benefits of adaptive wings are shown, and existing adaptive wings and concepts are explicated within a literature review. Then the additive manufacturing process using photopolymer jetting and its capability to print multiple materials in one part is demonstrated. Within the scope of a case study, an adaptive wing is developed, and the necessary steps for the product development and their implementation in CAD are presented. This contribution covers the requirements for the different components and sections of an adaptive wing designed for additive manufacturing using multiple materials, as well as the individual development steps and their different approaches up to the final design of the adaptive wing. The developed wing section is simulated, and qualitative tests in a wind tunnel are carried out with the wing segment. Finally, the additively manufactured wing segment is evaluated under technical and economic aspects.
Additive manufacturing processes have developed significantly in recent years. Currently, new generative processes are coming onto the market. Likewise, the number of available materials that can be processed using additive processes is steadily increasing. Therefore, an important task is to integrate these new processes and materials into the university education of engineers. Due to the rapid change and constant development in the field of additive manufacturing, a pure transfer of knowledge is not expedient, because it becomes obsolete very quickly. Rather, the students should be enabled to apply their skills in such a way that they can always handle new technologies and materials independently and meaningfully.
In this paper, therefore, a new course is developed in which the students work largely independently with additive manufacturing processes. For this purpose, teams of four to five students from different technical programs are formed. The teams have the task of developing and manufacturing a product using additive processes. The goal is to create a high-performing product while optimizing costs and resource use.
As an example, the development and additive manufacturing of an ornithopter (an aircraft that flies by flapping its wings) is presented in this contribution. The students have to analyze and optimize the mechanics and aerodynamics of the aircraft. In addition, the rules for production-oriented design must be determined and applied. Furthermore, they should assess the costs and material consumption during development and production.
This contribution shows how the students have achieved the different learning outcomes. In addition, it becomes clear how the students independently acquired and applied their knowledge in development, design and additive manufacturing. Also, it will be demonstrated how much time the students spent on learning the different technologies.
The development of new processes and materials for additive manufacturing is currently progressing rapidly. In order to use the advantages of additive manufacturing, however, product development and design must also be adapted to these new processes. Structural optimization is therefore a suitable approach. To achieve the best results in lightweight design, it is important to have an approach that reduces the volume in the unloaded regions and considers the restrictions and characteristics of the additive manufacturing process. In this contribution, a case study using a humanoid robot is presented: the pelvis module of a humanoid robot is optimized regarding its weight and stiffness. Furthermore, an integrated design is implemented in order to reduce the number of parts and screw connections. Manufacturing uses a new aluminum-based material that has been specially developed for use in additive manufacturing and lightweight construction. For additive manufacturing by means of the Selective Laser Melting (SLM) process, different restrictions and the assembly concepts of the humanoid robot have to be taken into account. These restrictions have to be considered when setting the individual parameters and target functions of the structural optimization. As a result, a framework is presented that shows the steps of the redesign and optimization of the pelvis module. In order to achieve high accuracy with the product, the redesign of the pelvis module is demonstrated with regard to mechanical and thermal post-processing. Finally, the redesigned part and the different assembly concepts are compared to analyze the economic and technical effects of the optimization.
Direct Digital Manufacturing of Architectural Models using Binder Jetting and Polyjet Modeling
(2019)
Today, architectural models are an important tool for illustrating drawn plans or computer-generated virtual models and making them understandable. In addition to the conventional methods for the manufacturing of physical models, a wide range of processes for Direct Digital Manufacturing (DDM) has spread rapidly in recent years. In order to facilitate the application of these new methods for architects, this contribution examines which technical and economic results are possible using 3D-printed architectural models. Within a case study, it will be shown, on the basis of a multi-storey detached house, which kind of data preparation is necessary. The DDM of architectural models will be demonstrated using two widespread techniques, and the resulting costs will be compared.
The fast and cost-effective manufacturing of tools for thermoforming is an essential requirement to shorten the development time of products. Thus, additive processes are used increasingly in tooling for the thermoforming of plastic sheets. However, a disadvantage of many additive methods is that they are highly cost-intensive, since complex systems based on laser technology and expensive metal powders are needed. Therefore, this paper examines how to work with more affordable additive methods, e.g. Binder Jetting, to manufacture tools that provide sufficient strength for thermoforming. The use of comparatively low-priced inkjet technology for the layer construction and a polymer plaster as material can be expected to result in significant cost reductions. Based on a case study using a cowling (engine bonnet) for an Unmanned Aerial Vehicle (UAV), the development of a complex tool for thermoforming is demonstrated. The objective of this study is to produce a tool for a complex-shaped component in small numbers and high quality, in a short time and at reasonable cost. Within the tooling process, integrated vacuum channels are implemented in additive tooling without the need for additional post-processing (for example, drilling). In addition, special technical challenges, such as the demolding of undercuts or the parting of the tool, are explained. All process steps from tool design to the use of the additively manufactured tool are analyzed. Based on the manufacturing of a small series of cowlings for a UAV made of plastic sheets (ABS), it is shown that Binder Jetting offers sufficient mechanical and thermal strength for additive tooling. In addition, an economic evaluation of the tool manufacturing and a detailed consideration of the required manufacturing times for the different process steps are carried out. Finally, a comparison is made with conventional and alternative additive methods of tooling.
The monitoring of industrial environments ensures that highly automated processes run without interruption. However, even if the industrial machines themselves are monitored, the communication lines are currently not continuously monitored in today's installations. They are usually checked only during maintenance intervals or in case of error. In addition, the cables or connected machines usually have to be removed from the system for the duration of the test. To overcome these drawbacks, we have developed and implemented a cost-efficient and continuous signal monitoring of Ethernet-based industrial bus systems. Several methods have been developed to assess the quality of the cable. These methods can be classified as either passive or active. Active methods are not suitable if interruption of the communication is undesired. Passive methods, on the other hand, require oversampling, which calls for expensive hardware. In this paper, a novel passive method combined with undersampling, targeting cost-efficient hardware, is proposed.
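As a toy illustration of why undersampling can still support passive quality assessment (this is not the paper's proposed method): amplitude statistics of a periodic signal can be estimated from samples taken far below the Nyquist rate, as long as the sampling is incoherent with the signal period. The frequencies and amplitude below are arbitrary illustration values.

```python
import math

# Estimate the RMS of a 1 MHz sine while sampling at only 3.7 kHz,
# i.e. far below the Nyquist rate. Because the sample phases spread
# over the whole signal period, the RMS estimate still converges.
f_sig = 1.0e6   # signal frequency (Hz), illustration value
f_s = 3.7e3     # sampling rate (Hz), illustration value
A = 1.0         # amplitude
N = 5000        # number of samples

samples = [A * math.sin(2 * math.pi * f_sig * n / f_s) for n in range(N)]
rms = math.sqrt(sum(s * s for s in samples) / N)
# rms approaches A / sqrt(2) ≈ 0.707 for a sine
```

A drop in such a passively estimated amplitude figure over time could then flag cable degradation without ever interrupting the communication.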
Provides a state-of-the-art overview of international trade policy research
The Handbook of Global Trade Policy offers readers a comprehensive resource for the study of international trade policy, governance, and financing. This timely and authoritative work presents contributions from a team of prominent experts that assess the policy implications of recent academic research on the subject. Discussions of contemporary research in fields such as economics, international business, international relations, law, and global politics help readers develop an expansive, interdisciplinary knowledge of 21st century foreign trade.
Accessible for students, yet relevant for practitioners and researchers, this book expertly guides readers through essential literature in the field while highlighting new connections between social science research and global policy-making. Authoritative chapters address new realities of the global trade environment, global governance and international institutions, multilateral trade agreements, regional trade in developing countries, value chains in the Pacific Rim, and more. Designed to provide a well-rounded survey of the subject, this book covers financing trade such as export credit arrangements in developing economies, export insurance markets, climate finance, and recent initiatives of the World Trade Organization (WTO). This state-of-the-art overview:
• Integrates new data and up-to-date research in the field
• Offers an interdisciplinary approach to examining global trade policy
• Introduces fundamental concepts of global trade in an understandable style
• Combines contemporary economic, legal, financial, and policy topics
• Presents a wide range of perspectives on current issues surrounding trade practices and policies
The Handbook of Global Trade Policy is a valuable resource for students, professionals, academics, researchers, and policy-makers in all areas of international trade, economics, business, and finance.
Quo Vadis, Global Trade?
(2019)
This introduction presents an overview of the key concepts discussed in the subsequent chapters of this book. The book provides a comprehensive resource for the study of global policy and governance, as well as the economics and financing of international trade. It first deals with a general overview and in-depth discussion of new realities, trends and further challenges for trade in the 21st century. The book then focuses on global governance and international institutions, covering the future of multilateral trade agreements and the activities of international financial institutions, as well as banking regulation and illicit flows. It also shows how global trade and regional development are linked, for example by looking at the next wave of regional integration and at what to expect from a protectionist US trade policy. The book further explores how to finance international trade.
Open markets, international trade and foreign direct investments are a source of prosperity in challenging times. This Special Section looks at developed economies and emerging markets, also taking into account the role of trade for impactful capacity-building in least developed countries (LDCs). Specific emphasis is placed on financing economic development and trade, analysing what roles trade and development finance should play in the quest for an efficient mobilisation of private capital for growth, trade and development.
Excellent organisations require targeted strategies to implement their vision and mission, deploying a stakeholder-focused approach. As part of evidence-based policy making, it is common to measure the results of government financing vehicles. A state-of-the-art method in quantitative benchmarking that overcomes the challenge of considering multiple inputs and outputs is Data Envelopment Analysis (DEA). Descriptive statistics and explorative-qualitative approaches are also applied in a modern ECA benchmarking model to substantiate the DEA results and put them into perspective. This enabler-result model provides a holistic view and allows the identification of top-performing ECAs and Exim-Banks, giving inefficient institutions the opportunity to learn from their most productive peers. This best-practice approach to strategic benchmarking enables senior management to develop and implement a cutting-edge strategy and increase value for key stakeholders.
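To make the DEA idea concrete: in the degenerate single-input, single-output case, the input-oriented CCR efficiency score reduces to each unit's output/input ratio divided by the best ratio in the sample; the general multi-input, multi-output case requires solving a linear program per unit. The institution names and figures below are entirely hypothetical.

```python
# Single-input, single-output special case of input-oriented CCR DEA:
# score = (output / input) / best (output / input) in the sample.
# Names and figures are hypothetical illustration data.
ecas = {
    "ECA-A": (10.0, 8.0),   # (input: admin cost, output: covered volume)
    "ECA-B": (12.0, 6.0),
    "ECA-C": (20.0, 16.0),
}

best_ratio = max(y / x for x, y in ecas.values())
scores = {name: (y / x) / best_ratio for name, (x, y) in ecas.items()}
# ECA-A and ECA-C lie on the efficient frontier (score 1.0);
# ECA-B (score 0.625) can learn from these more productive peers.
```

A score of 1.0 marks a unit on the efficient frontier; scores below 1.0 quantify how much the unit could proportionally reduce its input while keeping its output, which is exactly the peer-learning signal the benchmarking model exploits.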
The joint project GEO.Cool, carried out by partners in the Landesforschungszentrum Geothermie (LFZG), aims to assess, through interdisciplinary work, the possibilities and limits of cooling with near-surface geothermal energy and to derive impulses for innovation in this field.
The project is structured into the following six work packages (WP):
WP 1: Demands and system aspects
WP 2: System technology and planning of installations for cooling with near-surface geothermal energy
WP 3: Analysis of best-practice examples
WP 4: Thermal and hydrogeological behaviour of the subsurface
WP 5: Approval practice and limit values
WP 6: Synopsis, innovation potential and transfer.
The project runs from 23 January 2017 to 30 September 2019 (funding period for all work packages and project partners).
In deep geothermal energy, thermal, hydraulic, chemical and mechanical (THCM) processes play, under the given geological conditions, a fundamental role for the economic viability, safety and sustainability of geothermal use. Moreover, a new technology will not be able to establish itself in the medium and long term without public acceptance. Transparency (e.g., monitoring of induced seismicity) and the transfer of geothermal research results to the public are therefore essential foundations for achieving sufficient acceptance of deep geothermal energy.
Social robots differ from service robots in that they also master more complex interaction and communication. Some can simulate or even recognize emotions. There are many fields of application, from the household and care to the medical sector. Where are the limits of current systems? How must social robots look and interact in order to be perceived as useful helpers rather than competitors? This article gives a brief overview of existing social robots. It examines their acceptance in the important field of health and care based on the results of an expert study and offers a timeline for further development.
The overall project focused on the user-centered development of a practice-oriented learning and instruction environment in which context-related information is projected directly into the working area, so that learning can take place both at the workplace and in a situated manner. Through projection combined with interactivity, learning content becomes "graspable" in the truest sense of the word. The result is a context-aware system that accompanies and motivates learners interactively, like a coach.
What emotional effects does gamification have on users who work or learn with repetitive tasks? In this work, we use biosignals to analyze these affective effects of gamification. After a brief discussion of related work, we describe the implementation of an assistive system augmenting work by projecting elements for guidance and gamification. We also show how this system can be extended to analyse users' emotions. In a user study, we analyse both biosignals (facial expressions and electrodermal activity), and regular performance measures (error rate and task completion time).
For the performance measures, the results confirm known effects like increased speed and a slightly increased error rate. In addition, the analysis of the biosignals provides strong evidence for two major affective effects: the gamification of work and learning tasks elicits highly significantly more positive emotions and increases overall emotionality. The results inform the design of assistive systems that are aware of the physical as well as the affective context.
In this article the high-temperature behavior of a cylindrical lithium iron phosphate/graphite lithium-ion cell is investigated numerically and experimentally by means of differential scanning calorimetry (DSC), accelerating rate calorimetry (ARC), and external short circuit test (ESC). For the simulations a multi-physics multi-scale (1D+1D+1D) model is used. Assuming a two-step electro-/thermochemical SEI formation mechanism, the model is able to qualitatively reproduce experimental data at temperatures up to approx. 200 °C. Model assumptions and parameters could be evaluated via comparison to experimental results, where the three types of experiments (DSC, ARC, ESC) show complementary sensitivities towards model parameters. The results underline that elevated-temperature experiments can be used to identify parameters of the multi-physics model, which then can be used to understand and interpret high-temperature behavior. The resulting model is able to describe nominal charge/discharge operation behavior, long-term calendaric aging behavior, and short-term high-temperature behavior during extreme events, demonstrating the descriptive and predictive capabilities of physicochemical models.
First-year students in the technical degree programmes of the universities of applied sciences have very heterogeneous prior knowledge not only in mathematics but also in physics. Although these subjects are of great importance for a fundamental understanding of technical processes, education in these areas cannot start from zero, given the limited time slots available during the course of study. For mathematics, the cosh working group therefore compiled a catalogue of minimum requirements, published in 2014. It describes the knowledge and skills that first-year students need to successfully begin a WiMINT degree programme (economics, mathematics, computer science, natural sciences, technology) at a university of applied sciences. A working group of physicists at universities in Baden-Württemberg has since formed with the goal of producing an analogous catalogue of minimum requirements for physics. The current state of this work is presented here.
The public debate about the use of digital media in schools and teaching misjudges the underlying interests. For more than 30 years, every new generation of digital technology has been pushed into schools: personal computers (PCs) in 1984, laptops in the 1990s, currently WLAN, tablets and smartphones. The arguments are identical: the devices allegedly ensure more modern, more innovative teaching, higher student motivation and better learning outcomes. Scientifically valid studies prove the opposite. The pedagogical benefit was and still is negative. PISA coordinator Andreas Schleicher: "We must face the reality that technology does more harm than good in our schools." (Schleicher, 2016) In a study for the Vereinigung der Bayerischen Wirtschaft (vbw), the Aktionsrat Bildung confirms "statistically significantly lower competences in the domains of mathematics and natural sciences" when primary school pupils use computers in class at least once a week, compared with pupils who use them less than once a week, and nevertheless demands that schools be digitalized faster.
Obviously, something else is at stake: the economic interests of the IT industry and the Global Education Industries (GEI), which want to privatize and commercialize education markets along Anglo-Saxon lines. At the same time, it is the business models of the data economy, which want to datafy all areas of life and steer people via algorithms and cybernetic models, as in the 1950s (behaviorism, programmed learning). Digitalization is "only" the technical infrastructure for data collection; empirical educational research is the instrument for quantifying even the social (Mau, 2018). After the labor market and communication, education and health are currently on the agenda of the digitalists. The problem: if social systems are rebuilt according to the binary logic of IT, they lose everything social. The most urgent task of pedagogy is therefore to expose the currently dominant thought patterns of business administration and IT, of empiricism, fixation on metrics and behaviorist learning theories, as a dysfunctional and a-social wrong track, and instead to think school and teaching again from the perspective of the human being and his learning processes.
The digitalization of all areas of life is not a change of technology but a change of system. Everything we do on the net is datafied, ideally from prenatal to postmortem. This data pool is analyzed with ever more sophisticated big-data-mining algorithms and evaluated with methods of empiricism, statistics and pattern recognition. The human being becomes a data record. The earlier people can be measured psychometrically, the more exact the resulting personality, learning and performance profiles, and the easier it is to exert influence. That is the reason for the demand for digital technology in day-care centers and primary schools. People are being accustomed to doing what machines tell them. This is counter-enlightenment from Silicon Valley via app and web. This contribution shows what alternatives can look like.
Controlling is a term from business studies and does not mean control but process steering. Defined goals are logged and continuously optimized through fine-grained measurements and permanent monitoring of all work steps and actions of the persons involved. In "educational controlling", this concept of planning, coordination and control tasks is transferred to schools and universities. In line with Gary Becker's human capital theory, the goal is the production of human capital with validated competences. There are two problems: learning, and above all understanding, can neither be automated nor tested automatically. And: social systems under the regime of the metrics of quality management (QM) or total quality management (TQM) lose their character as social systems.
Any educator or scientist who deals with the topic of "digitalization and school" finds that only a few people realize the scope of the intended transformation of educational institutions into automated, algorithmically controlled learning factories. It is overlooked that theories and empirical models such as "data-driven school development" and "learning analytics" entail fundamental paradigm shifts that shake both the humanist and the Christian conception of man. With cybernetics and behaviorism on the one hand, and so-called "artificial intelligence" (AI) and the data-economy business models built on it on the other, these schooling models undermine human autonomy, the right to self-determination and freedom of action. Representatives of these disciplines claim that individuals and social communities can be programmed and controlled like machines. They ignore the fact that maturity and personal responsibility are the goals of school and teaching, not machine-computed behavior control and manipulation. These aberrations are not due to the technology itself, which could be used differently, but to the business models of the IT providers.
In discussing and voting on the three motions mentioned, the parliament of Lower Saxony decides on more than just the distribution of investment funds from the "Digitalpakt Schule". Fundamental questions are at stake: Who decides on teaching content at state schools and on the (media) technology used? Does the state's education policy remain committed to the pupils' entitlement and right to individual education and personal development, as laid down in the state constitution (§1(4)) and in the Lower Saxony School Act (§2 Bildungsauftrag, NSchG)? Will public schools continue to provide a sound general education as the basis for social participation in democratic communities? Or will business associations and IT lobbyists prevail, who advocate more and ever earlier use of digital devices in educational institutions? Who demand "programming as early as day care" and want to cover schools with "high-performance WLAN" (CDU/SPD motion), without even thinking about radiation? Will schools be turned by parliamentary resolution into training facilities for vocational preparation (Münch, 2018, 177), or not?
Yet it is scientifically established that the quality of schools and teaching is precisely not tied to media technology. What is decisive is always qualified teachers, well-structured, age-appropriate teaching and social interaction (studies by Hattie, Telekom, the OECD and others). Teaching and learning are individual and social processes, not technically controllable procedures. The motions take into account neither the historical evidence of the failure of media technology (Pias) nor the counter-developments already under way in the USA. Children in (expensive) private schools are again taught by real teachers and enjoy the "luxury of human interaction". Screens have been banned from those schools, while children at public schools have to learn on tablets without teachers (Bowles, 2018).
With these motions, the parliament of Lower Saxony thus decides whether IT concepts that have already failed in the USA will be repeated, or whether a discussion will be opened about sensible, pedagogically sound media concepts for schools, concepts that must not be reduced to digital technology. So who decides on teaching content and media technology at schools? The IT industry and representatives of the data economy, who want to digitalize and privatize educational offerings? Or elected representatives, guided by pedagogical expertise, who are committed to the pupils?
Scheuklappen statt Weitblick
(2019)
Der bildungsferne Campus
(2019)
Finding clusters in high dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of dimensions of the data. But the exponential increase in the number of subspaces with the dimensionality of data renders most of the algorithms inefficient as well as ineffective. Moreover, these algorithms have ingrained data dependency in the clustering process, which means that parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which is scalable with the dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation shows linear speedup. Moreover, we develop an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
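The property that makes this kind of algorithm parallelizable, namely that each dimension can be processed independently, can be illustrated with a small sketch. The density criterion below (gap-based runs of sorted values) is a simplified stand-in, not the actual SUBSCALE procedure:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def dense_units_1d(values, eps=0.5, tau=3):
    """1-D dense point groups in a single dimension: maximal runs of
    sorted values with consecutive gaps <= eps and at least tau points."""
    order = np.argsort(values)
    v = values[order]
    units, start = [], 0
    for i in range(1, len(v) + 1):
        if i == len(v) or v[i] - v[i - 1] > eps:
            if i - start >= tau:
                units.append(sorted(order[start:i].tolist()))
            start = i
    return units

def dense_units_all_dims(data, eps=0.5, tau=3, workers=4):
    """Process every dimension independently; this step has no data
    dependencies between dimensions, so the workers can run concurrently
    (swap in a ProcessPoolExecutor for CPU-bound true parallelism)."""
    data = np.asarray(data, float)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(lambda d: dense_units_1d(data[:, d], eps, tau),
                           range(data.shape[1])))
```

The per-dimension results would then be combined into subspace clusters in a subsequent step; only that combination step reintroduces data dependencies.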
Radio frequency identification (RFID) antennas are popular for high frequency (HF) RFID, energy transfer and near field communication (NFC) applications. Particularly for wireless measurement systems, RFID/NFC technology is a good option for implementing a wireless communication interface. In this context, the design of the corresponding reader and transmitter antennas plays a major role in achieving suitable transmission quality. This work proves the feasibility of rapidly prototyping an RFID/NFC antenna used for wireless communication and energy harvesting at the required frequency of 13.56 MHz. A novel and low-cost direct ink writing (DIW) technology utilizing highly viscous silver nanoparticle ink is used for this process. This paper describes the development and analysis of low-cost printed flexible RFID/NFC antennas on cost-effective substrates for a microelectronic vital parameter measurement system. Furthermore, we compare the measured technical parameters with those of existing copper-based counterparts on an FR4 substrate.
Many sectors, like finance, medicine, manufacturing, and education, use blockchain applications to profit from the unique bundle of characteristics of this technology. Blockchain technology (BT) promises benefits in trustability, collaboration, organization, identification, credibility, and transparency. In this paper, we conduct an analysis in which we show how open science can benefit from this technology and its properties. For this, we determined the requirements of an open science ecosystem and compared them with the characteristics of BT to show that the technology is suitable as an infrastructure. We also review the literature and promising blockchain-based projects for open science to describe the current research situation. To this end, we examine the projects in particular for their relevance and contribution to open science and categorize them according to their primary purpose. Several of them already provide functionalities that can have a positive impact on current research workflows. So BT offers promising possibilities for use in science, but why is it then not used on a large scale in this area? To answer this question, we point out various shortcomings, challenges, unanswered questions, and research potentials that we found in the literature and identified during our analysis. These topics shall serve as starting points for future research to foster BT for open science and beyond, especially in the long term.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes a new teaching method and appropriate learning materials in the field of eco-innovation and evaluates the learning experience and outcomes. This programme is aimed at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in cases where additional environmental problems appear as negative side effects of eco-friendly solutions.
Based on a literature analysis and their own investigations, the authors propose to introduce a manageable number of eco-innovation tools into a standard one-semester design course in process engineering, with particular focus on the identification of eco-problems in existing technologies, the selection of appropriate new process intensification technologies (knowledge-based engineering), and systematic ideation and problem solving (knowledge-based innovation and invention).
The proposed educational approach equips students with advanced knowledge, skills and competences in the field of eco-innovation. Analysis of the students' work allows one to recommend simple-to-use tools for fast application in process engineering, such as process mapping, a database of eco-friendly process intensification technologies, and up to 20 of the strongest inventive operators for solving environmental problems. For the majority of students in the survey, even the small workload strengthened their self-confidence and skills in eco-innovation.
Economic growth and ecological problems motivate industries to apply eco-friendly technologies and equipment. However, environmental impact, together with energy and material consumption, remains the main negative implication of technological progress in process engineering. Based on an extensive patent analysis, this paper assigns more than 250 identified eco-innovation problems and requirements to 14 general eco-categories, with energy consumption and losses, air pollution, and acidification as the top issues. It defines primary eco-engineering contradictions, in which eco-problems appear as negative side effects of new technologies, and secondary eco-engineering contradictions, in which eco-friendly solutions have new environmental drawbacks. The study conceptualizes a correlation matrix between the eco-requirements for predicting typical eco-contradictions, using the example of processes involving solids handling. Finally, it summarizes major eco-innovation approaches, including Process Intensification in process engineering, and chronologically reviews 66 papers on eco-innovation adapting the TRIZ methodology. Based on an analysis of 100 eco-patents, 58 process intensification technologies, and the literature, the study identifies 20 universal TRIZ inventive principles and sub-principles that have a higher value for environmental innovation.
The 40 Altshuller Inventive Principles with their numerous sub-principles have remained for decades the most frequently applied tool of the Theory of Inventive Problem Solving (TRIZ) for systematic idea generation. However, their application often requires a concentrated, creative and abstract way of thinking that can be fairly challenging for newcomers to TRIZ. This paper describes an approach to reduce the abstraction level of the inventive sub-principles and presents the results of an idea generation experiment conducted with three groups of undergraduate and graduate students from different years of study in mechanical and process engineering. The students were asked to generate and record their individual ideas for three design problems using a pre-defined set of classical and modified sub-principles within 10 minutes. The overall outcomes of the experiment support the assumption that the less abstract wording of the modified sub-principles leads to a higher number of ideas. The distribution of ideas between the fields of MATCHEM-IBD (Mechanical, Acoustic, Thermal, Chemical, Electrical, Magnetic, Intermolecular, Biological and Data processing) differs significantly between the groups using modified and abstract sub-principles.
Classification of TRIZ Inventive Principles and Sub-Principles for Process Engineering Problems
(2019)
The paper proposes a classification approach of 40 Inventive Principles with an extended set of 160 sub-principles for process engineering, based on a thorough analysis of 155 process intensification technologies, 200 patent documents, 6 industrial case studies applying TRIZ, and other sources. The authors define problem-specific sub-principles groups as a more precise and productive ideation technique, adaptable for a large diversity of problem situations, and finally, examine the anticipated variety of ideation using 160 sub-principles with the help of MATCEM-IBD fields.
Growing demands for cleaner production and higher eco-efficiency in process engineering require a comprehensive analysis of the technical and environmental outcomes for customers and society. Moreover, unexpected additional technical or ecological drawbacks may appear as negative side effects of new environmentally friendly technologies. The paper conceptualizes a comprehensive approach for the analysis and ranking of engineering and ecological requirements in process engineering in order to anticipate secondary problems in eco-design and to avoid compromising the environmental or technological goals. For this purpose, the paper presents a method based on the integration of the Quality Function Deployment approach with Importance-Satisfaction Analysis for requirements ranking. The proposed method comprehensively identifies and classifies the potential engineering and eco-engineering contradictions through the analysis of correlations within requirement groups such as stakeholder requirements (SRs) and technical requirements (TRs), and additionally through the cross-relationships between SRs and TRs.
Process engineering industries are now facing growing economic pressure and societies' demands to improve their production technologies and equipment, making them more efficient and environmentally friendly. However, unexpected additional technical and ecological drawbacks may appear as negative side effects of new environmentally friendly technologies. Thus, in their efforts to intensify upstream and downstream processes, industrial companies require systematic aid to avoid compromising the ecological impact. The paper conceptualises a comprehensive approach for eco-innovation and eco-design in process engineering. The approach combines the advantages of Process Intensification as Knowledge-Based Engineering (KBE), the inventive tools of Knowledge-Based Innovation (KBI), and the main principles and best practices of Eco-Design and Sustainable Manufacturing. It includes a correlation matrix for the identification of eco-engineering contradictions, a process mapping technique for problem definition, a database of Process Intensification methods and equipment, as well as a set of the strongest inventive operators for eco-ideation.
In recent years, large battery storage systems have increasingly been installed at the medium- and high-voltage levels in Germany. In addition to their use for local applications such as maximizing self-consumption or peak shaving, since 2016 about 250 MW of battery storage capacity has been prequalified for participation in the market for primary control reserve (PRL). This can already cover 40 % of the current demand of the German transmission system operators (TSOs). Reliable operation of battery storage systems requires intelligent operating strategies, which are presented in this analysis.
Smart home or smart building applications are a growing market. An increasing challenge is to design energy-efficient smart home applications to achieve sustainable and green homes. Using the example of the development of an indoor smart gardening system with wireless monitoring and automated watering, this paper discusses in particular the design of energy-autonomous sensors and actuators for home automation. The most important part of the presented smart gardening system is a 3D-printed smart flower pot for single plants. The smart flower pot integrates a water reservoir for automated plant irrigation and electronics for monitoring important plant parameters and the water level of the reservoir. Energy harvesting with solar cells enables energy-autonomous operation of the flower pot. A low-power wireless interface, also integrated in the flower pot, and an external gateway based on a Raspberry Pi 3 enable the wireless networking of multiple such flower pots. The gateway is used for evaluating the plant parameters and as a user interface. The architecture of the energy-autonomous wireless flower pot is considered in particular, because fully energy-autonomous sensors and actuators for home automation cannot be implemented without special concepts for the energy supply and the overall electronics.
Smart home and smart building applications are a steadily growing market. Smart gardening is one example of offering users more comfort and a better quality of life at home or in office buildings. This contribution presents the development of an indoor smart gardening system with a focus on energy-autonomous operation. The heart of the system is a 3D-printed flower pot for single plants with integrated electronics for monitoring the most important plant parameters and an integrated water reservoir with a submersible pump for automated watering of the plant. Energy harvesting via solar cells enables the flower pot to operate energy-autonomously. A custom low-power radio interface in the flower pot and an external gateway enable the wireless networking of multiple plants. The gateway is used for evaluating the plant parameters, controlling the flower pots in the network and as a user interface.
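A first feasibility check for an energy-autonomous node of this kind is a simple duty-cycle budget: the average current draw must stay below the average harvested current. A minimal sketch with hypothetical numbers, not the measured values of the presented system:

```python
def avg_current_ma(active_ma, active_s, sleep_ua, period_s):
    """Average current of a duty-cycled sensor node in mA:
    weighted mean of active and sleep current over one wake-up period."""
    duty = active_s / period_s
    return active_ma * duty + (sleep_ua / 1000.0) * (1.0 - duty)

# Hypothetical node: 20 mA for 1 s every 10 min, 5 uA sleep current
i_avg = avg_current_ma(active_ma=20.0, active_s=1.0, sleep_ua=5.0, period_s=600.0)

# Hypothetical harvest: small indoor solar cell delivering 0.1 mA on average
harvest_ma = 0.1
sustainable = i_avg < harvest_ma
```

In this toy budget the node draws well under the assumed harvest, which is the kind of margin an energy-autonomous design aims for.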
One of the major hazards for the health of people in large urban agglomerations is the increase in particulate matter (PM) concentration. Traditional systems for PM monitoring have a great number of drawbacks, but the main issues are economic, relating to installation costs and never-ending periodic maintenance expenses. Such systems are installed, but their number is limited, and given the growth of population, cities and industrial areas, there is an even greater need for information on air quality, because PM concentration changes non-linearly, has a wide range and stems from different sources. In this paper, we propose an approach based on low-cost sensor nodes for measuring and obtaining information about the PM concentration in real time. Adopting this approach allows a detailed study of the intensity of pollution and its sources. The system is powered by a PV module. The power supply unit is designed using model-based design, a new approach to prototyping power electronics with guaranteed performance.
In this article we outline the model development planned within the joint project Model-based city planning and application in climate change (MOSAIK). The MOSAIK project has been funded by the German Federal Ministry of Education and Research (BMBF) within the framework Urban Climate Under Change ([UC]2) since 2016. The aim of MOSAIK is to develop a highly efficient, modern, and high-resolution urban climate model that can be applied for building-resolving simulations of large cities such as Berlin (Germany). The new urban climate model will be based on the well-established large-eddy simulation code PALM, which already has numerous features related to this goal, such as an option for prescribing Cartesian obstacles. In this article we outline those components that will be added or modified in the framework of MOSAIK. Moreover, we discuss the everlasting issue of acquiring suitable geographical information as input data and the underlying requirements from the model's perspective.
Modeling and simulation play a key role in analyzing the complex electrochemical behavior of lithium-ion batteries. We present the development of a thermodynamic and kinetic modeling framework for intercalation electrochemistry within the open-source software Cantera. Instead of using equilibrium potentials and single-step Butler-Volmer kinetics, Cantera is based on molar thermodynamic data and mass-action kinetics, providing a physically-based and flexible means for complex reaction pathways. Herein, we introduce a new thermodynamic class for intercalation materials into the open-source software. We discuss the derivation of molar thermodynamic data from experimental half-cell potentials, and provide practical guidelines. We then demonstrate the new class using a single-particle model of a lithium cobalt oxide/graphite lithium-ion cell, implemented in MATLAB. With the present extensions, Cantera provides a platform for the lithium-ion battery modeling community both for consistent thermodynamic and kinetic models and for exchanging the required thermodynamic and kinetic parameters. We provide the full MATLAB code and parameter files as supplementary material to this article.
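The basic link between molar thermodynamic data and the equilibrium potential that such a framework builds on can be sketched in a few lines. This is plain Python, not the Cantera API, and the Gibbs energy value is a hypothetical illustration:

```python
F = 96485.0  # Faraday constant, C/mol

def equilibrium_potential(delta_g_rxn, n_electrons=1):
    """Equilibrium potential from the molar reaction Gibbs energy (J/mol):
    E_eq = -delta_G / (n * F)."""
    return -delta_g_rxn / (n_electrons * F)

# Hypothetical intercalation reaction with delta_G = -350 kJ/mol per electron
E = equilibrium_potential(-350e3)
```

Working from molar Gibbs energies rather than tabulated equilibrium potentials is what lets a framework compose multi-step reaction pathways consistently.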
The measurement of the active material volume fraction in composite electrodes of lithium-ion battery cells is difficult due to the small (sub-micrometer) and irregular structure and multi-component composition of the electrodes, particularly in the case of blend electrodes. State-of-the-art experimental methods such as focused ion beam/scanning electron microscopy (FIB/SEM) and subsequent image analysis require expensive equipment and significant expertise. We present here a simple method for identifying active material volume fractions in single-material and blend electrodes, based on the comparison of experimental equilibrium cell voltage curve (open-circuit voltage as function of charge throughput) with active material half-cell potential curves (half-cell potential as function of lithium stoichiometry). The method requires only (i) low-current cycling data of full cells, (ii) cell opening for measurement of electrode thickness and active electrode area, and (iii) literature half-cell potentials of the active materials. Mathematical optimization is used to identify volume fractions and lithium stoichiometry ranges in which the active materials are cycled. The method is particularly useful for model parameterization of either physicochemical (e.g., pseudo-two-dimensional) models or equivalent circuit models, as it yields a self-consistent set of stoichiometric and structural parameters. The method is demonstrated using a commercial LCO–NCA/graphite pouch cell with blend cathode, but can also be applied to other blends (e.g., graphite–silicon anode).
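The optimization step can be sketched as a least-squares fit of the stoichiometry windows to the measured equilibrium cell voltage curve. Everything below is illustrative: the half-cell potential curves are simple placeholder functions rather than literature data, and the parameter names and values are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

F = 96485.0  # C/mol

# Placeholder half-cell potentials vs. Li/Li+ (illustrative shapes only)
U_pos = lambda y: 4.2 - 0.7 * y - 0.05 * np.log(y / (1.0 - y))
U_neg = lambda x: 0.2 - 0.05 * np.log(x / (1.0 - x))

def cell_ocv(q, x0, x100, y0, y100):
    """Full-cell OCV at normalized charge throughput q in [0, 1];
    electrode stoichiometries vary linearly with q."""
    x = x0 + (x100 - x0) * q          # anode lithiation rises on charge
    y = y0 + (y100 - y0) * q          # cathode lithiation falls on charge
    return U_pos(y) - U_neg(x)

def fit_stoichiometry_windows(q_data, ocv_data, guess=(0.1, 0.8, 0.9, 0.35)):
    """Identify (x0, x100, y0, y100) by least squares against measured OCV."""
    resid = lambda p: cell_ocv(q_data, *p) - ocv_data
    return least_squares(resid, guess, bounds=(0.01, 0.99)).x

def volume_fraction(Q_Ah, d_stoich, c_max, area_m2, thickness_m):
    """Active-material volume fraction from the capacity balance:
    Q = eps * A * L * c_max * |d_stoich| * F / 3600 (c_max in mol/m^3)."""
    return 3600.0 * Q_Ah / (F * c_max * abs(d_stoich) * area_m2 * thickness_m)

# Demo: recover a synthetic OCV curve generated from known windows
q = np.linspace(0.0, 1.0, 40)
ocv_meas = cell_ocv(q, 0.05, 0.85, 0.95, 0.30)   # synthetic "measurement"
params = fit_stoichiometry_windows(q, ocv_meas)
```

The fitted stoichiometry window then enters the capacity balance, which yields the volume fraction once electrode area and thickness are known from the cell opening.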
Medical devices are part of everyday life and are encountered in critical health situations, in significant moments concerning one's health, and during routine checkups. To ensure flawless operation and error-free results, it is essential to test applications and devices. Operating errors pose high risks to patients' health [33], so the presented research project, called Professional UX, identifies signals and irritations caused by the interaction with a given device by analyzing facial expression, voice and eye-tracking data during user experience tests. In addition, this paper provides information on typical errors of interactive applications, based on an empirical lab-based survey and the evaluated results. The described procedure of user experience testing and subsequent analysis can also be applied to other fields and supports the optimization of products and systems.
In thermomechanically highly loaded components, the growth of fatigue cracks limits component life. Lifetime models and finite-element simulations are used to prevent premature component failure. In general, deterministic material properties are assumed, so that information about the scatter occurring in the real material is lost, which introduces uncertainty into the design process. In the present work, methods are developed for the adequate determination of material parameters and for the description of their scatter by statistical distributions. An essential aspect of the work is the determination of objective material parameters, for which purpose a robustness criterion is introduced. Based on numerous experimental data sets for the nickel-base alloy MARM247 and the niobium-stabilized austenitic steel X6 CrNiNb 18-10, this methodology is elaborated and leads to a probabilistic lifetime model that allows the influence of statistically distributed material parameters on fatigue life to be estimated. A Monte Carlo simulation shows that, compared with a deterministic lifetime assessment, a probabilistic assessment yields a scatter band in lifetime about a factor of two larger for both materials investigated. In a component concept, the methodology developed from the experimental data is extended so that the influence of scattering material properties at the component level can be estimated by finite-element simulations. The two-layer viscoplasticity model is used. The available data base is not sufficient to determine the scatter of its material parameters, so assumptions about these parameters have to be made.
A measurement system for very small animals is presented that records vital parameters and stores them in a FRAM memory until they are read out. Via a wireless RFID/NFC read-out interface, the recorded body temperature and pulse of the last weeks can be retrieved. All settings of the measurement system can be changed with a suitable RFID reader for laptops or with smartphones via the NFC interface. The measurement system, weighing only 3 g and measuring 15 mm x 25 mm, is fully charged by a self-printed RFID reader antenna in combination with an RFID reader and requires less than 21 hours for this. With a fully charged energy storage, an operating time of 47 days is possible. This is achieved by a charging and power management designed specifically for the measurement system. In addition to the selection of energy-saving hardware components and their best possible use, the software was optimized so that the program executes quickly and with low power consumption. The modular concept also ensures extensibility and adaptability for other application areas.
Top-level staff prefer to live in urban areas with a well-developed social infrastructure. This is a common problem for excellent companies (“hidden champions”) in rural areas: even if their region provides the services qualified applicants value in daily life, they fail to attract them because the relevant facts are not presented sufficiently in social media or on the corporate website. This is especially true for applicants with families. The contribution of this paper is four-fold: we provide an overview of the current state of online recruiting activities of hidden champions (1). Based on this corpus, we describe the applicant service gap for company information in rural communes (2). A study on user experience (UX) identifies the applicants’ wishes and needs, focusing on a family-oriented information system on living conditions in rural areas (3). Finally, we present the results of an online survey on the value of such information systems with more than 200 participants (4).
Apache Hadoop is a well-known open-source framework for storing and processing huge amounts of data. This paper shows the usage of the framework within a university project in cooperation with a semiconductor company. The goal of this project was to supplement the existing data landscape with facilities for storing and analyzing the data on a new Apache Hadoop based platform.
Background: Pulmonary vein isolation (PVI) using cryoballoon catheters is an established method for treating atrial fibrillation (AF). It offers a shorter treatment duration than classical therapy by radiofrequency (RF) ablation. The aim of this study was to integrate different cryoballoon catheters, RF catheters and esophageal catheters into a heart rhythm model and to investigate electrical and thermal fields during PVI under atrial fibrillation by means of static and dynamic simulation.
Methods: Modeling and simulation were performed with the electromagnetic and thermal simulation software CST (CST, Darmstadt). Two cryoballoons, an RF ablation catheter and an esophageal catheter were modeled on the basis of the technical manuals of the manufacturers Medtronic and Osypka. The 23 mm cryoballoon and a circular mapping catheter were integrated into the Offenburg heart rhythm model, in particular into the left inferior pulmonary vein (LIPV), to simulate the thermal field propagation during a PVI. The simulation of a PVI with RF energy was carried out with the integrated RF ablation catheter near the LIPV. The TO8 esophageal catheter placed in the heart rhythm model allowed the recording of left atrial electrical fields during AF and the analysis of thermal fields during PVI.
Results: Electrical fields could be simulated statically and dynamically in the heart and esophagus during sinus rhythm and during AF with an AF focus in the LIPV. During a simulated 20-second application of a cryoballoon catheter at -50°C, a temperature of -24°C was measured at a depth of 0.5 mm in the myocardium. At a depth of 1 mm the temperature was -3°C, at 2 mm 18°C and at 3 mm 29°C. During the 15-second application of an RF catheter with an 8 mm electrode and a power of 5 W at 420 kHz, the temperature at the tip of the electrode was 110°C. At a depth of 0.5 mm in the myocardium the temperature was 75°C, at 1 mm 58°C, at 2 mm 45°C and at 3 mm 38°C. In most simulations a constant temperature of 37°C was measured in the esophagus, ruling out the risk of an esophageal fistula. During cryoablation of the LIPV, a cooling of the esophagus to 30°C was measured.
Conclusions: The heart rhythm simulation of electrical and thermal fields with different cardiac catheters enables the static and dynamic simulation of PVI by cryoablation and RF ablation as well as temperature analysis in the esophagus. By incorporating MRI or CT data, electrical and thermal simulations may be used to optimize PVIs.
Background: Pulmonary vein isolation (PVI) using cryoballoon catheters is a recognized method for the treatment of atrial fibrillation (AF). This method offers a shorter treatment duration than the classical therapy with radiofrequency (RF) ablation.
Purpose: The aim of this study was to integrate different cryoballoon catheters and an RF catheter into a heart rhythm model and to compare them by means of static and dynamic electromagnetic and thermal simulation under AF.
Methods: The cryoballoon catheters from Medtronic and the RF ablation catheter from Osypka were modeled virtually with the aid of manufacturer specifications and the CST (Computer Simulation Technology, Darmstadt) simulation program. The cryoballoon catheter was located in the left inferior pulmonary vein of the virtual heart rhythm model to realize pulmonary vein isolation (PVI) by cryoenergy. The temperature at the balloon surface was set to -50°C during the simulation.
Results: During a simulated 20-second application of a cryoballoon catheter at -50°C, a temperature of -24°C was measured at a depth of 0.5 mm in the myocardium. At a depth of 1 mm the temperature was -3°C, at 2 mm 18°C and at 3 mm 29°C. During the 15-second application of an RF catheter with an 8 mm electrode and a power of 5 W at 420 kHz, the temperature at the tip of the electrode was 110°C. At a depth of 0.5 mm in the myocardium the temperature was 75°C, at 1 mm 58°C, at 2 mm 45°C and at 3 mm 38°C.
Conclusions: The simulation of temperature profiles during the virtual application of several catheter models in the heart rhythm model allows the static and dynamic simulation of PVI by cryoballoon ablation and RF ablation. The three-dimensional simulation can be used to improve ablation applications by creating a personalized cardiac rhythm model from MRI or CT data of a heart and finding a favourable position for ablation of AF.
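The depth-dependent temperature profiles reported in the results can be qualitatively reproduced with a toy one-dimensional heat-conduction model. This is a deliberately simplified sketch (explicit finite differences, an assumed tissue diffusivity, fixed boundary temperatures), not the CST field simulation used in the study; all numbers are illustrative.

```python
# 1D transient heat conduction into tissue, explicit finite differences.
alpha = 1.4e-7          # thermal diffusivity of tissue, m^2/s (assumed)
dx, dt = 1e-4, 0.02     # 0.1 mm grid, 20 ms step (r = alpha*dt/dx^2 < 0.5, stable)
n, steps = 61, 1000     # 6 mm domain, 20 s of simulated time

T = [37.0] * n          # body temperature initially
for _ in range(steps):
    T[0] = -50.0        # balloon surface held at -50 C (Dirichlet boundary)
    r = alpha * dt / dx**2
    T = [T[0]] + [T[i] + r * (T[i+1] - 2*T[i] + T[i-1])
                  for i in range(1, n-1)] + [37.0]

for depth_mm in (0.5, 1.0, 2.0, 3.0):
    print(f"{depth_mm} mm: {T[int(depth_mm * 10)]:.0f} C")  # colder near the balloon
```

The qualitative picture matches the abstract: temperature rises monotonically with depth away from the cryoballoon surface.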
Mathematik 2 Beweisaufgaben
(2019)
The second volume of this collection of proof exercises is aimed at prospective engineers and scientists who want not only to apply the formulas covered in a Mathematics 2 course but also to derive them themselves. Great importance was attached to completeness when compiling the content, which is why the proofs sometimes differ considerably in scope and level of difficulty. To facilitate the derivation of the mathematical equations and rules, which are split between a study formulary and an exam formulary, the proofs are divided into three parts: problem, hint, and solution. More extensive derivations are broken down into subtasks, and more demanding proofs are marked with asterisks.
According to Eurocode 3, when designing bolted end-plate joints with an elastomeric separating layer, only the flanges may be taken into account for transferring the internal forces. Our investigations show that the webs should also be considered in the design. They contribute to a more uniform stress distribution in the elastomeric bearing and thus permit higher loads at unchanged dimensions.
The FE analyses are based on uniaxial and biaxial tension and compression tests that capture the complex material behavior of the elastomeric separating layer. The agreement between measurement and simulation is very good, which is attributable in particular to the material law used: a non-linear viscoelastic approach combined with the hyperelastic Marlow model.
It turned out that the friction coefficient and the Poisson's ratio of the elastomeric bearing decisively influence the load-bearing behavior of the bolted end-plate joints.
Oxidation of the nickel electrode is a severe aging mechanism of solid oxide fuel cells (SOFC) and solid oxide electrolyzer cells (SOEC). This work presents a modeling study of safe operating conditions with respect to nickel oxide formation. Microkinetic reaction mechanisms for thermochemical and electrochemical nickel oxidation are integrated into a 2D multiphase model of an anode‐supported solid oxide cell. Local oxidation propensity can be separated into four regimes. Simulations show that the thermochemical pathway generally dominates the electrochemical pathway. As a consequence, as long as fuel utilization is low, cell operation considerably below the electrochemical oxidation limit of 0.704 V is possible without the risk of reoxidation.
Printed systems spark immense interest in industry, and for several parts such as solar cells or radio frequency identification antennas, printed products are already available on the market. This has led to intense research; however, printed field-effect transistors (FETs) and logics derived thereof still have not been sufficiently developed to be adopted by industry. Among others, one of the reasons for this is the lack of control of the threshold voltage during production. In this work, we show an approach to adjust the threshold voltage (Vth) in printed electrolyte-gated FETs (EGFETs) with high accuracy by doping indium-oxide semiconducting channels with chromium. Despite high doping concentrations achieved by a wet chemical process during precursor ink preparation, good on/off-ratios of more than five orders of magnitude could be demonstrated. The synthesis process is simple, inexpensive, and easily scalable; it leads to depletion-mode EGFETs, which are fully functional at operating potentials below 2 V, and allows Vth to be increased by approximately 0.5 V.
Low latency communication is essential to enable mission-critical machine-type communication (MTC) use cases in cellular networks. Factory and process automation are major areas that require such low latency communication. In this paper, we investigate the potential of adopting the semi-persistent scheduling (SPS) latency reduction technique in narrowband LTE (NB-LTE) networks and provide a comprehensive performance evaluation. First, we investigate and implement SPS in an open-source network simulator (NS3). We perform simulations with a focus on LTE-M and Narrowband IoT (NB-IoT) systems and evaluate the impact of the SPS technique on the uplink latency of these narrowband systems in real industrial automation scenarios. The performance gain of adopting SPS is analyzed and the results are compared with legacy dynamic scheduling. Our results show that SPS has the potential to reduce the latency of cellular Internet of Things (cIoT) networks. We believe that SPS can be integrated into LTE-M and NB-IoT systems to support low-latency industrial applications.
Enabling ultra-low latency is one of the major drivers for the development of future cellular networks to support delay sensitive applications including factory automation, autonomous vehicles and tactile internet. Narrowband Internet of Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) Release 13 standardized cellular network currently optimized for massive Machine Type Communication (mMTC). To reduce the latency in cellular networks, 3GPP has proposed latency reduction techniques that include Semi-Persistent Scheduling (SPS) and short Transmission Time Interval (sTTI). In this paper, we investigate the potential of adopting both techniques in NB-IoT networks and provide a comprehensive performance evaluation. We first analyze these techniques and then implement them in an open-source network simulator (NS3). Simulations are performed with a focus on the Cat-NB1 User Equipment (UE) category to evaluate the uplink user-plane latency. Our results show that SPS and sTTI have the potential to greatly reduce the latency in NB-IoT systems. We believe that both techniques can be integrated into NB-IoT systems to position NB-IoT as a preferred technology for low data rate Ultra-Reliable Low-Latency Communication (URLLC) applications before 5G has been fully rolled out.
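Why SPS reduces uplink latency can be illustrated with a toy model: under dynamic scheduling a packet first waits for a scheduling-request (SR) opportunity and then for the grant, whereas under SPS it only waits for the next pre-allocated transmission occasion. The timing constants below are assumptions for illustration, not 3GPP values or NS3 results.

```python
import random

random.seed(0)
SR_PERIOD = 10.0      # ms between SR opportunities (assumed)
GRANT_DELAY = 8.0     # ms from SR to usable uplink grant (assumed)
SPS_PERIOD = 10.0     # ms between pre-allocated SPS occasions (assumed)

def dynamic_latency(t):
    wait_sr = SR_PERIOD - t % SR_PERIOD      # wait for the next SR opportunity
    return wait_sr + GRANT_DELAY             # then wait for the grant

def sps_latency(t):
    return SPS_PERIOD - t % SPS_PERIOD       # wait for the next SPS occasion only

arrivals = [random.uniform(0, 100) for _ in range(10000)]
avg_dyn = sum(dynamic_latency(t) for t in arrivals) / len(arrivals)
avg_sps = sum(sps_latency(t) for t in arrivals) / len(arrivals)
print(f"dynamic ~{avg_dyn:.1f} ms, SPS ~{avg_sps:.1f} ms")  # SPS is lower
```

The grant round-trip disappears entirely under SPS, which is the dominant term in the dynamic-scheduling latency here.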
For linear, time-invariant, discrete-time, stable plants, this paper describes how two well-known state-space methods for windup avoidance can be combined in such a way that plant windup and controller windup are prevented for all PI state controllers, provided these controllers are stable in the unconstrained case. The approach draws on Hippe's "additional dynamic element" (ADE) for avoiding plant windup [Hippe, P.: Windup in control – Its effects and their prevention, 2006; at – Automatisierungstechnik, 2007], whose transfer to discrete-time systems is briefly outlined in the paper, and on the reference-correction method [Nuß, U.: at – Automatisierungstechnik, 2017] for avoiding controller windup. The presented combination method merely requires, for the respective plant, the inclusion of an already existing P state controller that avoids plant windup. Providing a criterion that is as simple as possible and yet not overly restrictive for checking whether a P state controller has this property is also a concern of the paper. In this regard, a sufficient criterion based on a suitable Lyapunov function is given that is more comprehensive than the one used in [Nuß, U.: at – Automatisierungstechnik, 2017]. An example from electrical drive technology demonstrates the performance of the presented method.
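As a minimal illustration of the windup phenomenon itself (not the ADE/reference-correction combination described in the paper), the following sketch compares a discrete PI loop with and without simple integrator clamping at an actuator limit. Plant, gains and limits are arbitrary toy values.

```python
def simulate(anti_windup, steps=400):
    # first-order plant x[k+1] = 0.9*x[k] + 0.1*u[k], setpoint 1.0
    x, integ, peak = 0.0, 0.0, 0.0
    kp, ki, u_max = 2.0, 0.5, 1.2
    for _ in range(steps):
        e = 1.0 - x
        u = kp * e + integ
        u_sat = max(-u_max, min(u_max, u))     # actuator saturation
        if not (anti_windup and u != u_sat):
            integ += ki * e                    # clamping: freeze integrator in saturation
        x = 0.9 * x + 0.1 * u_sat
        peak = max(peak, x)
    return peak

peak_aw = simulate(True)
peak_no = simulate(False)
print(peak_aw, peak_no)  # windup causes the larger overshoot
```

While the actuator is saturated, the unprotected integrator keeps accumulating the still-large control error and must be "unwound" after the setpoint is crossed, producing the larger overshoot; the state-space methods of the paper address the same effect in a more general setting.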
Electric drives are an innovation-driving core component of many industrial plants and facilities. For them to fulfill this outstanding role and the expectations attached to it, a highly dynamic control is required in addition to optimized electric motors and the fast-switching power converters feeding them. The offered course takes this into account in the form of a detailed introduction to the control of electric drives.
This book, now in its second, completely revised and updated edition, offers a critical approach to the challenging interpretation of the latest research data obtained using functional neuroimaging in whiplash injury. Such a comprehensive guide to recent and current international research in the field is more necessary than ever, given that the confusion regarding the condition and the medicolegal discussions surrounding it have increased further despite the publication of much literature on the subject. In recent decades especially the functional imaging methods – such as single-photon emission tomography, positron emission tomography, functional MRI, and hybrid techniques – have demonstrated a variety of significant brain alterations. Functional Neuroimaging in Whiplash Injury - New Approaches covers all aspects, including the imaging tools themselves, the various methods of image analysis, different atlas systems, and diagnostic and clinical aspects. The book will help physicians, patients and their relatives and friends, and others to understand this condition as a disease.
In this paper, pathophysiologically interrelated deactivation/activation phenomena are set out using the example of whiplash injury. These phenomena could have been underestimated in previous positron emission tomography studies, as their focus was on hypoperfusion rather than hyperperfusion. In addition, statistical parametric mapping analysis of cerebral studies is normally tuned to obvious clusters of difference rather than to specific areas of interest.
Hatte Maria einen Jodmangel?
(2019)
Even though they are becoming commonplace in the internet age, remote diagnoses remain controversial among physicians. One should be all the more cautious when the patient is the biological mother of God. Yet if this painting is regarded as an authentic documentation, the finding is unambiguous: at the time of the birth of her famous son, Mary had strikingly long, slender fingers as well as a grade II to III goiter.
The Baroque composer Johann Sebastian Bach (1685–1750) has left us with many puzzles. The well-known oil painting by Elias Gottlob Haußmann is the only painting for which Bach actually posed in person. According to this portrait, Bach must have been quite obese. The cheeks and nose are flushed – possibly as signs of hypertension – and the eye lids are narrow – a sign of myopia. Furthermore, there is a thinning of the lateral third of the right eyebrow, which is known as Hertoghe’s sign, as well as periorbital edema. Both signs are compatible with hypothyroidism. Bach might have been suffering from type-2 diabetes as the origin of his final illness, and the obituary reports two cataract surgeries by oculist John Taylor in March/April 1750, and, four months later, “apoplexy” followed by a high fever, of which Bach died. It may be speculated, however, that Bach’s entire illness was the result of his presumed obesity, possibly in combination with hypothyroidism.
Commentary on the article "Arthur Willis Goodspeed" by Otto Glasser, published in Science Vol. 98, Issue 2540, page 219 (doi.org/10.1126/science.98.2536.125).
Thermally driven (adsorption) chillers can provide cooling with a comparatively low electrical energy input, i.e. with a high electrical coefficient of performance. If the heat required to drive them is supplied from industrial waste heat, this cooling supply is more energy-efficient than cooling via a compression chiller. If, however, the heat is supplied by combined heat and power, the primary-energy assessment depends on several partial efficiencies as well as on the primary energy factors of the fuel used and of the electrical energy generated or purchased. An extensive measurement campaign in the summer of 2018 provides detailed energy indicators under realistic boundary conditions in a laboratory environment for a typical daily cooling demand profile. This makes it possible to derive partial energy indicators for planning practice and to compare the overall system energetically with a conventional compression chiller.
The high peak power in comparison to the average transmit power is one of the major long-standing problems in multicarrier modulation and is known as the PAPR (peak to average power ratio) problem. Many PAPR reduction methods have been devised and their comparison is usually based on the complementary cumulative distribution function (CCDF) of the PAPR. While this comparison is straightforward and easy to compute, its relationship with system performance metrics like the (uncoded) BER or the word error rate (WER) for coded systems is considerably more involved. We evaluate the impact of the PAPR on performance metrics like uncoded BER, EVM (error vector magnitude), mutual information and the WER for soft decoding. In this context, we find that system performance is not necessarily degraded by an increasing PAPR. We show that a high number of subcarriers, despite the corresponding high PAPR, is actually not a problem for the system performance and provide a simple explanation for this seemingly counter-intuitive fact.
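The PAPR itself is straightforward to compute: modulate random QPSK data onto N subcarriers, form the time-domain signal with an inverse DFT, and relate peak to average power. A pure-Python sketch with illustrative parameters (no oversampling, which would raise the measured peak slightly):

```python
import cmath, math, random

random.seed(1)
N = 64  # number of subcarriers (illustrative)
qpsk = [random.choice([1+1j, 1-1j, -1+1j, -1-1j]) / math.sqrt(2) for _ in range(N)]

# time-domain OFDM symbol via an inverse DFT
x = [sum(qpsk[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]

power = [abs(s) ** 2 for s in x]
papr_db = 10 * math.log10(max(power) / (sum(power) / N))
print(f"PAPR = {papr_db:.1f} dB")  # well above the 0 dB of a constant envelope
```

By Parseval's theorem the total time-domain power equals the (unit) subcarrier power divided by N per sample, so only the peak varies from symbol to symbol; the CCDF mentioned above is the distribution of exactly this per-symbol peak.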
In numerical calculations, guided acoustic waves, localized in two spatial dimensions, have been shown to exist and their properties have been investigated in three different geometries, (i) a half-space consisting of two elastic media with a planar interface inclined to the common surface, (ii) a wedge made of two elastic media with a planar interface, and (iii) the free edge of an elastic layer between two quarter-spaces or two wedge-shaped pieces of a material with elastic properties and density differing from those of the intermediate layer.
For the special case of Poisson media forming systems (i) and (ii), the existence ranges of these 1D guided waves in parameter space have been determined and found to strongly depend on the inclination angle between surface and interface in case (i) and the wedge angle in case (ii). In a system of type (ii) made of two materials with strong acoustic mismatch and in systems of type (iii), leaky waves have been found with a high degree of spatial localization of the associated displacements, although the two materials constituting these structures are isotropic.
Both the fully guided and the leaky waves analyzed in this work could find applications in non-destructive evaluation of composite structures and should be accounted for in geophysical prospecting, for example.
A critical comparison is presented of the two computational approaches employed, namely a semi-analytical finite element scheme and a method based on an expansion of the displacement field in a double series of special functions.
Most machine learning methods require careful selection of hyper-parameters in order to train a high performing model with good generalization abilities. Hence, several automatic selection algorithms have been introduced to overcome tedious manual (trial and error) tuning of these parameters. Due to its very high sample efficiency, Bayesian Optimization over a Gaussian Process modeling of the parameter space has become the method of choice. Unfortunately, this approach suffers from a cubic compute complexity due to the underlying Cholesky factorization, which makes it very hard to scale beyond a small number of sampling steps. In this paper, we present a novel, highly accurate approximation of the underlying Gaussian Process. Reducing its computational complexity from cubic to quadratic allows an efficient strong scaling of Bayesian Optimization while outperforming the previous approach regarding optimization accuracy. First experiments show a speedup by a factor of 162 on a single node and a further speedup by a factor of 5 in a parallel environment.
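The cubic bottleneck mentioned above is the Cholesky factorization of the kernel matrix. A toy pure-Python GP regression makes the role of this step explicit; note that this sketches the exact method the paper approximates, not the approximation itself, and the kernel, data and jitter values are illustrative.

```python
import math

def rbf(a, b, ls=1.0):
    # squared-exponential kernel
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

def cholesky(K):
    # O(n^3): the scaling bottleneck of exact GP inference
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(K[i][i] - s) if i == j else (K[i][j] - s) / L[j][j]
    return L

def solve(L, b):
    # forward substitution with L, then backward substitution with L^T
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

xs = [0.0, 1.0, 2.0, 3.0]
ys = [math.sin(v) for v in xs]
K = [[rbf(a, b) + (1e-6 if a == b else 0.0) for b in xs] for a in xs]
alpha = solve(cholesky(K), ys)               # the cubic-cost step
mean = lambda x: sum(rbf(x, xi) * a for xi, a in zip(xs, alpha))
print(round(mean(1.5), 2))                   # posterior mean at an unseen input
```

Every new sample in Bayesian Optimization grows K by one row and column and, naively, triggers this factorization again, which is why reducing its cost to quadratic changes the scaling behavior of the whole optimization loop.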
Printed Electronics is perceived to have a major impact in the fields of smart sensors, Internet of Things and wearables. Especially low power printed technologies such as electrolyte gated field effect transistors (EGFETs) using solution-processed inorganic materials and inkjet printing are very promising in such application domains. In this paper, we discuss a modeling approach to describe the variations of printed devices. Incorporating these models and design flows into our previously developed printed design system allows for robust circuit design. Additionally, we propose a reliability-aware routing solution for printed electronics technology based on the technology constraints in printing crossovers. The proposed methodology was validated on multiple benchmark circuits and can be easily integrated with the design automation tool-set.
A car is only useful when it runs properly, but keeping a car running is getting more and more complex. Car service providers need deep knowledge of the technical details of the different car models. On the other hand, car producers try to keep this information in their ownership. Digital data collection takes place every second over the car's product life cycle, and the data are stored on the car producers' servers. The contribution of this paper is three-fold: we provide an overview of the current concepts of intelligent order assistant technologies (I). This corpus is used to arrive at a more precise description of the specific service performance aspects (II). Finally, a representative empirical study with German motor mechanics helps to evaluate the wishes and needs regarding an intelligent order assistant in the garage (III).
The introduction of project management standards demonstrably costs time and money, brings temporary unrest into the organization, and is often initiated by an annoying customer requirement. When the view is restricted to these aspects, the topic is frequently perceived as unpleasant. We would instead like to present the implementation of PM standards as a worthwhile investment, point out potentials, opportunities and synergies, and create a solid basis for numerous organizational and improvement projects for the introduction of PM standards.
Internationale Projektarbeit
(2019)
This article deals with the problem of wireless synchronization between the onboard computing devices of small-sized unmanned aerial vehicles (SUAV) equipped with integrated wireless chips (IWC). Accurate synchronization between several devices requires precise timestamping of packet transmission and reception on each of them. The best precision is achieved by solutions in which timestamping is performed at the PHY level, right after modulation/demodulation of the packet. Nowadays, most currently produced IWCs are systems-on-a-chip (SoC) that include both PHY and MAC, implemented with one or several processor cores. SoCs allow the creation of more cost- and energy-efficient wireless devices. At the same time, they limit the developers' direct access to internal signals and significantly complicate the precise timestamping of sent and received packets required for mutual synchronization of industrial devices. Some modern IEEE 802.11 IWCs have built-in functions that use the internal chip clock to register timestamps. However, the high jitter of the interfaces between the external device and the IWC degrades the comparison of timestamps from the internal clock with those registered by external devices. To solve this problem, the article proposes a novel approach to synchronization based on the analysis of the IWC receiver input potential. The benefit of this approach is that there is no need to demodulate and decode the received packets, which allows implementation with low-cost IWCs. In this article, the Cypress CYW43438 was taken as an example for designing hardware and software solutions for synchronization between two SUAV onboard computing devices equipped with IWCs. The results of the experimental studies reveal that the mutual synchronization error of the proposed method does not exceed 10 μs.
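For context, the role of precise timestamps can be seen in a generic two-way exchange (NTP-style offset estimation). This is not the receiver-input-potential method proposed in the article; all timing values are invented, and any jitter in the four timestamps propagates directly into the offset estimate, which is exactly why sub-microsecond timestamping matters.

```python
# Two-way timestamp exchange between clocks A and B (all times in microseconds).
TRUE_OFFSET = 123.0   # clock B ahead of clock A (assumed)
DELAY = 40.0          # symmetric propagation/processing delay (assumed)

t1 = 1000.0                        # A sends request   (read on clock A)
t2 = t1 + DELAY + TRUE_OFFSET      # B receives        (read on clock B)
t3 = t2 + 5.0                      # B replies         (read on clock B)
t4 = t3 - TRUE_OFFSET + DELAY      # A receives        (read on clock A)

# classic offset/round-trip estimators
offset = ((t2 - t1) + (t3 - t4)) / 2
rtt = (t4 - t1) - (t3 - t2)
print(offset, rtt)  # 123.0 80.0
```

With ideal timestamps the estimator recovers the offset exactly; replace any of t1..t4 with a jittered reading and the same jitter appears, halved, in the offset.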
With the growing share of renewable energies in the electricity supply, transmission and distribution grids have to be adapted. A profound understanding of the structural characteristics of distribution grids is essential to define suitable strategies for grid expansion. Many countries have a large number of distribution system operators (DSOs) whose standards vary widely, which contributes to coordination problems during peak load hours. This study contributes to targeted distribution grid development by classifying DSOs according to their remuneration requirement. To examine the amendment potential, structural and grid development data from 109 distribution grids in south-western Germany are collected from publications of the respective DSOs. The resulting database is assessed statistically to identify clusters of DSOs according to the fit between demographic requirements and grid-construction status, and thus to identify development needs that would enable a broader use of renewable energy resources. Three alternative algorithms are explored for this task. The study finds a Gauss-Newton algorithm optimal for analysing the fit of grid conditions to regional requirements and successfully identifies grids with remuneration needs; it is superior to the previously used K-Means algorithm. The method developed here is transferable to other areas for grid analysis and targeted, cost-efficient development.
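For comparison, the K-Means baseline mentioned in the study can be sketched in a few lines for a single structural indicator. The scores, cluster count and initialization below are illustrative assumptions, not the study's data.

```python
import random

random.seed(3)
# synthetic one-dimensional "remuneration requirement" scores for three DSO groups
scores = [random.gauss(mu, 0.5) for mu in (2, 5, 9) for _ in range(30)]

def kmeans_1d(data, centers, iters=50):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in data:  # assign each point to the nearest centre
            clusters[min(range(len(centers)),
                         key=lambda i: abs(v - centers[i]))].append(v)
        # move each centre to the mean of its cluster (keep it if the cluster is empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# deterministic spread-out initialization: min, mean, max of the data
centers = kmeans_1d(scores, [min(scores), sum(scores) / len(scores), max(scores)])
print([round(c, 1) for c in centers])  # roughly the three group means 2, 5, 9
```

On well-separated data like this, K-Means recovers the groups easily; the study's point is that a Gauss-Newton-based fit handles the actual, less cleanly separated grid data better.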
Protecting software from illegal access, intentional modification or reverse engineering is an inherently difficult practical problem involving code obfuscation techniques and real-time cryptographic protection of code. In traditional systems a secure element (the "dongle") is used to protect software. However, this approach suffers from several technical and economical drawbacks such as the dongle being lost or broken.
We present a system that provides such dongles as a cloud service, and more importantly, provides the required cryptographic material to control access to software functionality in real-time.
This system is developed as part of an ongoing nationally funded research project and is now entering a first trial stage with stakeholders from different industrial sectors.
The development of secure software systems is of ever-increasing importance. While software companies often invest large amounts of resources into the upkeep and general security properties of large-scale applications in production, they appear to neglect threat modeling in the earlier stages of the software development lifecycle. When applied during the design phase of development, and continuously throughout development iterations, threat modeling can help to establish a "Secure by Design" approach. This approach allows issues relating to IT security to be found early during development, reducing the need for later improvement and thus saving resources in the long term. In this paper, the current state of threat modeling is investigated. This investigation drove the derivation of requirements for the development of a new threat modeling framework and tool, called OVVL. OVVL utilizes concepts of established threat modeling methodologies, as well as functionality not available in existing solutions.