The monitoring of industrial environments ensures that highly automated processes run without interruption. However, even if the industrial machines themselves are monitored, the communication lines are currently not continuously monitored in today's installations. They are usually checked only during maintenance intervals or in case of error. In addition, the cables or connected machines usually have to be removed from the system for the duration of the test. To overcome these drawbacks, we have developed and implemented cost-efficient and continuous signal monitoring for Ethernet-based industrial bus systems. Several methods have been developed to assess cable quality. These methods can be classified as either passive or active. Active methods are not suitable if interruption of the communication is undesired. Passive methods, on the other hand, require oversampling, which calls for expensive hardware. In this paper, a novel passive method combined with undersampling, targeting cost-efficient hardware, is proposed.
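The core idea behind combining a passive method with undersampling can be sketched in a few lines: a high-frequency bus signal sampled below its Nyquist rate aliases to a predictable lower frequency, so spectral features relevant to cable quality remain observable with a slow, inexpensive ADC. The carrier and sampling frequencies below are illustrative assumptions, not values from the paper:

```python
import numpy as np

F_SIG = 62.5e6   # hypothetical signal component on the bus (Hz)
F_S = 10e6       # deliberately sub-Nyquist sampling rate (Hz)
N = 4096         # number of samples

t = np.arange(N) / F_S
x = np.sin(2 * np.pi * F_SIG * t)

# Undersampling folds the component to |F_SIG - k*F_S| for the nearest
# integer k, so its position and amplitude stay measurable cheaply.
k = round(F_SIG / F_S)
f_alias = abs(F_SIG - k * F_S)          # 2.5 MHz for these numbers

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, d=1 / F_S)
f_peak = freqs[np.argmax(spec)]         # peak appears at the alias frequency
```

In a real monitoring setup the observed alias spectrum would be compared against a reference to detect cable degradation; the sketch only shows why sub-Nyquist acquisition does not discard the spectral information.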
Provides a state-of-the-art overview of international trade policy research
The Handbook of Global Trade Policy offers readers a comprehensive resource for the study of international trade policy, governance, and financing. This timely and authoritative work presents contributions from a team of prominent experts that assess the policy implications of recent academic research on the subject. Discussions of contemporary research in fields such as economics, international business, international relations, law, and global politics help readers develop an expansive, interdisciplinary knowledge of 21st century foreign trade.
Accessible for students, yet relevant for practitioners and researchers, this book expertly guides readers through essential literature in the field while highlighting new connections between social science research and global policy-making. Authoritative chapters address new realities of the global trade environment, global governance and international institutions, multilateral trade agreements, regional trade in developing countries, value chains in the Pacific Rim, and more. Designed to provide a well-rounded survey of the subject, this book covers financing trade such as export credit arrangements in developing economies, export insurance markets, climate finance, and recent initiatives of the World Trade Organization (WTO). This state-of-the-art overview:
• Integrates new data and up-to-date research in the field
• Offers an interdisciplinary approach to examining global trade policy
• Introduces fundamental concepts of global trade in an understandable style
• Combines contemporary economic, legal, financial, and policy topics
• Presents a wide range of perspectives on current issues surrounding trade practices and policies
The Handbook of Global Trade Policy is a valuable resource for students, professionals, academics, researchers, and policy-makers in all areas of international trade, economics, business, and finance.
Open markets, international trade and foreign direct investments are a source of prosperity in challenging times. This Special Section looks at developed economies and emerging markets, also taking into account the role of trade for impactful capacity-building in least developed countries (LDCs). Specific emphasis is placed on financing economic development and trade, analysing what roles trade and development finance should play in the quest for an efficient mobilisation of private capital for growth, trade and development.
Excellent organisations require targeted strategies to implement their vision and mission, deploying a stakeholder-focused approach. As part of evidence-based policy making, it is a common approach to measure the results of government financing vehicles. A state-of-the-art method in quantitative benchmarking that overcomes the challenge of considering multiple inputs and outputs is Data Envelopment Analysis (DEA). Descriptive statistics and explorative-qualitative approaches are also applied in a modern ECA benchmarking model to substantiate the DEA results and put them into perspective. This enabler-result model provides a holistic view and makes it possible to identify top-performing ECAs and Exim banks, giving inefficient institutions the opportunity to learn from their most productive peers. This best-practice approach to strategic benchmarking enables senior management to develop and implement a cutting-edge strategy and to increase value for key stakeholders.
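As a rough illustration of how DEA turns multiple inputs and outputs into a single efficiency score, the following sketch solves the standard input-oriented CCR envelopment problem as one linear programme per institution. The three-institution dataset is entirely hypothetical, and this is a textbook CCR formulation, not the benchmarking model described above:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores.

    X: (n_dmu, n_inputs) inputs, Y: (n_dmu, n_outputs) outputs.
    For each decision-making unit o, solve:
      min theta  s.t.  X.T @ lam <= theta * X[o],  Y.T @ lam >= Y[o],  lam >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                # minimise theta
        A_in = np.c_[-X[o].reshape(m, 1), X.T]     # inputs scaled by theta
        A_out = np.c_[np.zeros((s, 1)), -Y.T]      # outputs at least Y[o]
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)

# Hypothetical data: one input (admin cost), one output (trade volume covered)
X = np.array([[2.0], [4.0], [8.0]])
Y = np.array([[2.0], [4.0], [4.0]])
eff = dea_ccr_input(X, Y)   # the third institution is only 50% efficient
```

A score of 1.0 places an institution on the efficient frontier; lower scores indicate the proportional input reduction its most productive peers demonstrate to be possible.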
What emotional effects does gamification have on users who work or learn with repetitive tasks? In this work, we use biosignals to analyze these affective effects of gamification. After a brief discussion of related work, we describe the implementation of an assistive system that augments work by projecting elements for guidance and gamification. We also show how this system can be extended to analyze users' emotions. In a user study, we analyze both biosignals (facial expressions and electrodermal activity) and regular performance measures (error rate and task completion time).
For the performance measures, the results confirm known effects such as increased speed and a slightly increased error rate. In addition, the analysis of the biosignals provides strong evidence for two major affective effects: the gamification of work and learning tasks produces a highly significant increase in positive emotions and raises emotionality altogether. The results inform the design of assistive systems that are aware of the physical as well as the affective context.
In this article the high-temperature behavior of a cylindrical lithium iron phosphate/graphite lithium-ion cell is investigated numerically and experimentally by means of differential scanning calorimetry (DSC), accelerating rate calorimetry (ARC), and an external short circuit test (ESC). For the simulations a multi-physics, multi-scale (1D+1D+1D) model is used. Assuming a two-step electro-/thermochemical SEI formation mechanism, the model is able to qualitatively reproduce experimental data at temperatures up to approx. 200 °C. Model assumptions and parameters could be evaluated via comparison to experimental results, where the three types of experiments (DSC, ARC, ESC) show complementary sensitivities towards model parameters. The results underline that elevated-temperature experiments can be used to identify parameters of the multi-physics model, which can then be used to understand and interpret high-temperature behavior. The resulting model is able to describe nominal charge/discharge operation, long-term calendar aging, and short-term high-temperature behavior during extreme events, demonstrating the descriptive and predictive capabilities of physicochemical models.
Finding clusters in high dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of dimensions of the data. But the exponential increase in the number of subspaces with the dimensionality of data renders most of the algorithms inefficient as well as ineffective. Moreover, these algorithms have ingrained data dependency in the clustering process, which means that parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which is scalable with the dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation shows linear speedup. Moreover, we develop an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
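The independence of the per-dimension processing that makes SUBSCALE amenable to parallelism can be illustrated with a simple pool-based sketch. The density criterion below is a toy stand-in, not the actual SUBSCALE computation, and EPS and MIN_PTS are assumed parameters:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

EPS = 0.5        # assumed 1D neighbourhood radius
MIN_PTS = 3      # assumed density threshold

def dense_points_1d(column):
    """Indices of points with >= MIN_PTS neighbours within EPS in one
    dimension -- a toy density criterion, not the real SUBSCALE step."""
    col = np.asarray(column)
    dist = np.abs(col[:, None] - col[None, :])
    counts = (dist <= EPS).sum(axis=1) - 1   # exclude the point itself
    return np.flatnonzero(counts >= MIN_PTS)

def parallel_density_step(data, workers=4):
    """Process every dimension independently; because the per-dimension
    steps share no state, they map directly onto a worker pool (swap in
    ProcessPoolExecutor under a __main__ guard for multi-core speedup)."""
    columns = [data[:, d] for d in range(data.shape[1])]
    with ThreadPoolExecutor(workers) as pool:
        return list(pool.map(dense_points_1d, columns))

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 8))        # 200 points, 8 dimensions
per_dim = parallel_density_step(data)   # one index array per dimension
```

The absence of cross-dimension data dependency in this step is exactly what the paper exploits: each worker can run to completion without synchronisation, which is why near-linear speedup is achievable.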
Radio frequency identification (RFID) antennas are popular for high-frequency (HF) RFID, energy transfer and near field communication (NFC) applications. Particularly for wireless measurement systems, RFID/NFC technology is a good option for implementing a wireless communication interface. In this context, the design of the corresponding reader and transmitter antennas plays a major role in achieving suitable transmission quality. This work proves the feasibility of rapidly prototyping an RFID/NFC antenna used for wireless communication and energy harvesting at the required frequency of 13.56 MHz. A novel and low-cost direct ink writing (DIW) technology utilizing highly viscous silver nanoparticle ink is used for this process. This paper describes the development and analysis of low-cost printed flexible RFID/NFC antennas on cost-effective substrates for a microelectronic vital parameter measurement system. Furthermore, we compare the measured technical parameters with those of existing copper-based counterparts on an FR4 substrate.
Many sectors, like finance, medicine, manufacturing, and education, use blockchain applications to profit from the unique bundle of characteristics of this technology. Blockchain technology (BT) promises benefits in trustability, collaboration, organization, identification, credibility, and transparency. In this paper, we conduct an analysis showing how open science can benefit from this technology and its properties. For this, we determined the requirements of an open science ecosystem and compared them with the characteristics of BT to demonstrate that the technology is suitable as an infrastructure. We also review the literature and promising blockchain-based projects for open science to describe the current research situation. To this end, we examine the projects in particular for their relevance and contribution to open science and then categorize them according to their primary purpose. Several of them already provide functionalities that can have a positive impact on current research workflows. BT thus offers promising possibilities for use in science, but why is it not yet used on a large scale in that area? To answer this question, we point out various shortcomings, challenges, unanswered questions, and research potentials that we found in the literature and identified during our analysis. These topics shall serve as starting points for future research to foster BT for open science and beyond, especially in the long term.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes a new teaching method and appropriate learning materials in the field of eco-innovation and evaluates the learning experience and outcomes. This programme is aimed at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in cases where additional environmental problems appear as negative side effects of eco-friendly solutions.
Based on a literature analysis and their own investigations, the authors propose to introduce a manageable number of eco-innovation tools into a standard one-semester design course in process engineering, with particular focus on the identification of eco-problems in existing technologies, the selection of appropriate new process intensification technologies (knowledge-based engineering), and systematic ideation and problem solving (knowledge-based innovation and invention).
The proposed educational approach equips students with advanced knowledge, skills and competences in the field of eco-innovation. Analysis of the students' work allows one to recommend simple-to-use tools for fast application in process engineering, such as process mapping, a database of eco-friendly process intensification technologies, and up to 20 of the strongest inventive operators for solving environmental problems. For the majority of students in the survey, even the small workload strengthened their self-confidence and skills in eco-innovation.
Economic growth and ecological problems motivate industries to apply eco-friendly technologies and equipment. However, environmental impact, followed by energy and material consumption, still remains the main negative implication of technological progress in process engineering. Based on extensive patent analysis, this paper assigns more than 250 identified eco-innovation problems and requirements to 14 general eco-categories, with energy consumption and losses, air pollution, and acidification as the top issues. It defines primary eco-engineering contradictions, in which eco-problems appear as negative side effects of new technologies, and secondary eco-engineering contradictions, in which eco-friendly solutions have new environmental drawbacks. The study conceptualizes a correlation matrix between the eco-requirements for prediction of typical eco-contradictions, using the example of processes involving solids handling. Finally, it summarizes major eco-innovation approaches, including Process Intensification in process engineering, and chronologically reviews 66 papers on eco-innovation adapting the TRIZ methodology. Based on analysis of 100 eco-patents, 58 process intensification technologies, and the literature, the study identifies 20 universal TRIZ inventive principles and sub-principles that have a higher value for environmental innovation.
The 40 Altshuller Inventive Principles with their numerous sub-principles have remained for decades the most frequently applied tool of the Theory of Inventive Problem Solving (TRIZ) for systematic idea generation. However, their application often requires a concentrated, creative and abstract way of thinking that can be fairly challenging for newcomers to TRIZ. This paper describes an approach to reduce the abstraction level of inventive sub-principles and presents the results of an idea generation experiment conducted with three groups of undergraduate and graduate students from different years of study in mechanical and process engineering. The students were asked to generate and record their individual ideas for three design problems using a pre-defined set of classical and modified sub-principles within 10 minutes. The overall outcomes of the experiment support the assumption that the less abstract wording of the modified sub-principles leads to a higher number of ideas. The distribution of ideas across the fields of MATCHEM-IBD (Mechanical, Acoustic, Thermal, Chemical, Electrical, Magnetic, Intermolecular, Biological and Data processing) differs significantly between the groups using modified and abstract sub-principles.
Classification of TRIZ Inventive Principles and Sub-Principles for Process Engineering Problems
(2019)
The paper proposes a classification approach for the 40 Inventive Principles with an extended set of 160 sub-principles for process engineering, based on a thorough analysis of 155 process intensification technologies, 200 patent documents, 6 industrial case studies applying TRIZ, and other sources. The authors define problem-specific sub-principle groups as a more precise and productive ideation technique, adaptable to a large diversity of problem situations, and finally examine the anticipated variety of ideation using the 160 sub-principles with the help of MATCHEM-IBD fields.
Growing demands for cleaner production and higher eco-efficiency in process engineering require a comprehensive analysis of technical and environmental outcomes for customers and society. Moreover, unexpected additional technical or ecological drawbacks may appear as negative side effects of new environmentally friendly technologies. The paper conceptualizes a comprehensive approach for analysis and ranking of engineering and ecological requirements in process engineering in order to anticipate secondary problems in eco-design and to avoid compromising the environmental or technological goals. For this purpose, the paper presents a method based on integration of the Quality Function Deployment approach with the Importance-Satisfaction Analysis for requirements ranking. The proposed method comprehensively identifies and classifies the potential engineering and eco-engineering contradictions through analysis of correlations within requirements groups such as stakeholder requirements (SRs) and technical requirements (TRs), and additionally through cross-relationships between SRs and TRs.
Process engineering industries are now facing growing economic pressure and societal demands to improve their production technologies and equipment, making them more efficient and environmentally friendly. However, unexpected additional technical and ecological drawbacks may appear as negative side effects of the new environmentally friendly technologies. Thus, in their efforts to intensify upstream and downstream processes, industrial companies require systematic aid to avoid compromising their ecological impact. The paper conceptualises a comprehensive approach for eco-innovation and eco-design in process engineering. The approach combines the advantages of Process Intensification as Knowledge-Based Engineering (KBE), inventive tools of Knowledge-Based Innovation (KBI), and the main principles and best practices of Eco-Design and Sustainable Manufacturing. It includes a correlation matrix for the identification of eco-engineering contradictions, a process mapping technique for problem definition, a database of Process Intensification methods and equipment, as well as a set of the strongest inventive operators for eco-ideation.
Smart Home and Smart Building applications are a growing market. An increasing challenge is to design energy-efficient Smart Home applications in order to achieve sustainable and green homes. Using the example of the development of an Indoor Smart Gardening system with wireless monitoring and automated watering, this paper discusses in particular the design of energy-autonomous sensors and actuators for home automation. The most important part of the presented Smart Gardening system is a 3D-printed smart flower pot for single plants. The smart flower pot integrates a water reservoir for automated plant irrigation and electronics for monitoring important plant parameters and the water level of the reservoir. Energy harvesting with solar cells enables energy-autonomous operation of the flower pot. A low-power wireless interface, also integrated in the flower pot, and an external gateway based on a Raspberry Pi 3 enable wireless networking of multiple such flower pots. The gateway is used for evaluating the plant parameters and as a user interface. Particular attention is paid to the architecture of the energy-autonomous wireless flower pot, because fully energy-autonomous sensors and actuators for home automation cannot be implemented without special concepts for the energy supply and the overall electronics.
Among the major hazards for the health of people in large urban agglomerations is the increase in particulate matter concentration. Traditional systems for particulate matter (PM) monitoring have a great number of drawbacks, but the main issues are economic, related to installation costs and never-ending periodic maintenance expenses. Although such systems are installed, their number is limited, and given the growth of population, cities and industrial areas, there is an even greater need for information on air quality, because PM changes non-linearly, has a wide range and has different sources. In this paper, we propose an approach based on low-cost sensor nodes for real-time measurement and acquisition of information about the PM concentration. The adoption of this approach allows for a detailed study of the intensities of pollution and its sources. The system is powered by a PV module. The power supply unit is designed using model-based design, a new approach to prototyping power-operated electronic devices with guaranteed performance.
In this article we outline the model development planned within the joint project Model-based city planning and application in climate change (MOSAIK). The MOSAIK project has been funded by the German Federal Ministry of Education and Research (BMBF) within the framework Urban Climate Under Change ([UC]2) since 2016. The aim of MOSAIK is to develop a highly efficient, modern, and high-resolution urban climate model that can be applied for building-resolving simulations of large cities such as Berlin (Germany). The new urban climate model will be based on the well-established large-eddy simulation code PALM, which already has numerous features related to this goal, such as an option for prescribing Cartesian obstacles. In this article we will outline those components that will be added or modified in the framework of MOSAIK. Moreover, we will discuss the everlasting issue of acquiring suitable geographical information as input data and the underlying requirements from the model's perspective.
Modeling and simulation play a key role in analyzing the complex electrochemical behavior of lithium-ion batteries. We present the development of a thermodynamic and kinetic modeling framework for intercalation electrochemistry within the open-source software Cantera. Instead of using equilibrium potentials and single-step Butler-Volmer kinetics, Cantera is based on molar thermodynamic data and mass-action kinetics, providing a physically-based and flexible means for complex reaction pathways. Herein, we introduce a new thermodynamic class for intercalation materials into the open-source software. We discuss the derivation of molar thermodynamic data from experimental half-cell potentials, and provide practical guidelines. We then demonstrate the new class using a single-particle model of a lithium cobalt oxide/graphite lithium-ion cell, implemented in MATLAB. With the present extensions, Cantera provides a platform for the lithium-ion battery modeling community both for consistent thermodynamic and kinetic models and for exchanging the required thermodynamic and kinetic parameters. We provide the full MATLAB code and parameter files as supplementary material to this article.
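The conversion at the heart of deriving molar thermodynamic data from half-cell potentials is the identity Δ_r G = −zFE: a measured potential curve maps directly to the molar Gibbs energy of the intercalation reaction. The half-cell curve below is a hypothetical smooth function, not data from the article:

```python
import numpy as np

F = 96485.33212  # Faraday constant (C/mol)

def half_cell_potential(x):
    """Hypothetical half-cell potential vs. Li/Li+ (V) as a function of
    lithium stoichiometry x -- a smooth stand-in, not the article's data."""
    return 0.1 + 0.15 * np.exp(-20.0 * x) - 0.005 * np.log(x / (1.0 - x))

def delta_g_intercalation(x, z=1):
    """Molar Gibbs energy (J/mol) of Li+ + e- + host -> Li[host].
    Delta_r G(x) = -z F E(x) turns a measured potential curve
    directly into the molar thermodynamic data the framework needs."""
    return -z * F * half_cell_potential(x)

x = np.linspace(0.05, 0.95, 5)
dG = delta_g_intercalation(x)   # negative wherever the potential is positive
```

Tabulating Δ_r G over the stoichiometry range in this way is what allows a mass-action kinetics framework to dispense with single-step Butler-Volmer fits and equilibrium potentials as primary inputs.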
The measurement of the active material volume fraction in composite electrodes of lithium-ion battery cells is difficult due to the small (sub-micrometer) and irregular structure and multi-component composition of the electrodes, particularly in the case of blend electrodes. State-of-the-art experimental methods such as focused ion beam/scanning electron microscopy (FIB/SEM) and subsequent image analysis require expensive equipment and significant expertise. We present here a simple method for identifying active material volume fractions in single-material and blend electrodes, based on the comparison of the experimental equilibrium cell voltage curve (open-circuit voltage as a function of charge throughput) with active material half-cell potential curves (half-cell potential as a function of lithium stoichiometry). The method requires only (i) low-current cycling data of full cells, (ii) cell opening for measurement of electrode thickness and active electrode area, and (iii) literature half-cell potentials of the active materials. Mathematical optimization is used to identify the volume fractions and the lithium stoichiometry ranges in which the active materials are cycled. The method is particularly useful for model parameterization of either physicochemical (e.g., pseudo-two-dimensional) models or equivalent circuit models, as it yields a self-consistent set of stoichiometric and structural parameters. The method is demonstrated using a commercial LCO–NCA/graphite pouch cell with a blend cathode, but can also be applied to other blends (e.g., graphite–silicon anodes).
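The optimization step can be sketched as fitting capacity parameters so that a combination of half-cell potential curves reproduces the measured full-cell open-circuit voltage. Here both half-cell curves and the "measured" data are synthetic stand-ins (not the paper's LCO–NCA/graphite data), so the fit can be checked against a known ground truth:

```python
import numpy as np
from scipy.optimize import least_squares

def u_pos(y):
    """Hypothetical cathode half-cell potential (V) vs. stoichiometry y."""
    return 4.3 - 0.5 * y - 0.2 * y**2

def u_neg(x):
    """Hypothetical anode half-cell potential (V) vs. stoichiometry x."""
    return 0.3 * np.exp(-5.0 * x) + 0.1

def cell_ocv(q, qp, qn, y0=0.9, x0=0.05):
    """Full-cell OCV at charge throughput q (Ah): the cathode delithiates
    (y decreases) while the anode lithiates (x increases); the electrode
    capacities qp, qn map charge throughput onto stoichiometry windows."""
    return u_pos(y0 - q / qp) - u_neg(x0 + q / qn)

# Synthetic "measurement" generated with known ground-truth capacities
q = np.linspace(0.0, 1.5, 50)
v_meas = cell_ocv(q, 2.0, 2.2)

fit = least_squares(lambda p: cell_ocv(q, p[0], p[1]) - v_meas,
                    x0=[1.5, 1.5], bounds=([0.5, 0.5], [5.0, 5.0]))
qp_fit, qn_fit = fit.x
```

In the actual method the fitted electrode capacities, together with measured thickness and area, yield the active material volume fractions; for a blend electrode the positive curve would itself be a capacity-weighted mixture of two half-cell curves.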
Medical devices accompany our everyday life and are encountered in critical health situations, in significant moments concerning health, and during routine checkups. To ensure flawless operation and error-free results, it is essential to test applications and devices. Operating errors entail high risks for patients' health [33], so the presented research project, called Professional UX, identifies signals and irritations caused by the interaction with a given device by analyzing facial expression, voice and eye-tracking data during user experience tests. In addition, this paper provides information on typical errors of interactive applications, based on an empirical lab-based survey and the evaluated results. The described procedure of user experience tests and the subsequent analysis can also be applied to other fields and serves as support for the optimization of products and systems.
Top-level staff prefers to live in urban areas with perfect social infrastructure. This is a common problem for excellent companies (“hidden champions”) in rural areas: even if they can provide the services qualified applicants appreciate for daily living, they fail to attract them because important facts are not presented sufficiently in social media or on the corporate website. This is especially true for applicants with families. The contribution of this paper is four-fold: we provide an overview of the current state of online recruiting activities of hidden champions (1). Based on this corpus, we describe the applicant service gap for company information in rural communes (2). A study on user experience (UX) identifies the applicants’ wishes and needs, focusing on a family-oriented information system on living conditions in rural areas (3). Finally, we present the results of an online survey on the value of such information systems with more than 200 participants (4).
Apache Hadoop is a well-known open-source framework for storing and processing huge amounts of data. This paper shows the usage of the framework within a project of the university in cooperation with a semiconductor company. The goal of this project was to supplement the existing data landscape by the facilities of storing and analyzing the data on a new Apache Hadoop based platform.
Background: Pulmonary vein isolation (PVI) using cryoballoon catheters is a recognized method for the treatment of atrial fibrillation (AF). This method offers a shorter treatment duration in contrast to classical therapy with high-frequency (HF) ablation.
Purpose: The aim of this study was to integrate different cryoballoon catheters and an HF catheter into a heart rhythm model and to compare them by means of static and dynamic electromagnetic and thermal simulations of their use in AF.
Methods: The cryoballoon catheters from Medtronic and the HF ablation catheter from Osypka were modelled virtually with the aid of manufacturer specifications and the CST (Computer Simulation Technology, Darmstadt) simulation program. The cryoballoon catheter was placed in the lower left pulmonary vein of the virtual heart rhythm model for the realization of pulmonary vein isolation by cryoenergy. The temperature at the balloon surface was set to -50°C during the simulation.
Results: During a simulated 20-second application of a cryoballoon catheter at -50°C, a temperature of -24°C was measured at a depth of 0.5 mm in the myocardium. At a depth of 1 mm the temperature was -3°C, at 2 mm depth 18°C, and at 3 mm depth 29°C. During the 15-second application of an RF catheter with an 8 mm electrode and a power of 5 W at 420 kHz, the temperature at the tip of the electrode was 110°C. At a depth of 0.5 mm in the myocardium the temperature was 75°C, at a depth of 1 mm 58°C, at 2 mm depth 45°C, and at 3 mm depth 38°C.
Conclusions: The simulation of temperature profiles during the virtual application of several catheter models in the heart rhythm model allows the static and dynamic simulation of PVI by cryoballoon ablation and RF ablation. The three-dimensional simulation can be used to improve ablation applications by creating a model in personalized cardiac rhythm therapy from MRI or CT data of a heart and finding a favourable position for ablation of AF.
Oxidation of the nickel electrode is a severe aging mechanism of solid oxide fuel cells (SOFC) and solid oxide electrolyzer cells (SOEC). This work presents a modeling study of safe operating conditions with respect to nickel oxide formation. Microkinetic reaction mechanisms for thermochemical and electrochemical nickel oxidation are integrated into a 2D multiphase model of an anode-supported solid oxide cell. The local oxidation propensity can be separated into four regimes. Simulations show that the thermochemical pathway generally dominates the electrochemical pathway. As a consequence, as long as fuel utilization is low, cell operation considerably below the electrochemical oxidation limit of 0.704 V is possible without the risk of reoxidation.
Printed systems spark immense interest in industry, and for several products, such as solar cells or radio frequency identification antennas, printed versions are already available on the market. This has led to intense research; however, printed field-effect transistors (FETs) and the logic circuits derived from them have still not been developed sufficiently to be adopted by industry. One of the reasons for this is the lack of control over the threshold voltage during production. In this work, we show an approach to adjust the threshold voltage (Vth) in printed electrolyte-gated FETs (EGFETs) with high accuracy by doping indium-oxide semiconducting channels with chromium. Despite the high doping concentrations achieved by a wet chemical process during precursor ink preparation, good on/off ratios of more than five orders of magnitude could be demonstrated. The synthesis process is simple, inexpensive, and easily scalable; it leads to depletion-mode EGFETs that are fully functional at operation potentials below 2 V and allows Vth to be increased by approximately 0.5 V.
Low-latency communication is essential to enable mission-critical machine-type communication use cases in cellular networks. Factory and process automation are major areas that require such low-latency communication. In this paper, we investigate the potential of adopting the semi-persistent scheduling (SPS) latency reduction technique in narrowband LTE (NB-LTE) networks and provide a comprehensive performance evaluation. First, we investigate and implement SPS in an open-source network simulator (NS3). We perform simulations with a focus on LTE-M and Narrowband IoT (NB-IoT) systems and evaluate the impact of the SPS technique on the uplink latency of these narrowband systems in realistic industrial automation scenarios. The performance gain of adopting SPS is analyzed and the results are compared with legacy dynamic scheduling. Our results show that SPS has the potential to reduce the latency of cellular Internet of Things (cIoT) networks. We believe that SPS can be integrated into LTE-M and NB-IoT systems to support low-latency industrial applications.
Enabling ultra-low latency is one of the major drivers for the development of future cellular networks to support delay-sensitive applications including factory automation, autonomous vehicles and the tactile internet. Narrowband Internet of Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) Release 13 standardized cellular network currently optimized for massive Machine Type Communication (mMTC). To reduce the latency in cellular networks, 3GPP has proposed latency reduction techniques that include Semi-Persistent Scheduling (SPS) and short Transmission Time Interval (sTTI). In this paper, we investigate the potential of adopting both techniques in NB-IoT networks and provide a comprehensive performance evaluation. We first analyze these techniques and then implement them in an open-source network simulator (NS3). Simulations are performed with a focus on the Cat-NB1 User Equipment (UE) category to evaluate the uplink user-plane latency. Our results show that SPS and sTTI have the potential to greatly reduce the latency in NB-IoT systems. We believe that both techniques can be integrated into NB-IoT systems to position NB-IoT as a preferred technology for low-data-rate Ultra-Reliable Low-Latency Communication (URLLC) applications before 5G has been fully rolled out.
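The latency mechanism that SPS exploits can be sketched with a simplified uplink latency budget: under dynamic scheduling the UE must first send a scheduling request and wait for a grant before transmitting, while SPS pre-allocates periodic resources and skips that exchange. All component durations below are illustrative assumptions, not values from the evaluation.

```python
# Simplified uplink latency budget (all values in ms, illustrative assumptions).

def dynamic_latency(sr_wait, sr_tx, grant_delay, data_tx, processing):
    # Dynamic scheduling: scheduling request + grant round-trip precede the data.
    return sr_wait + sr_tx + grant_delay + data_tx + processing

def sps_latency(alignment_wait, data_tx, processing):
    # SPS: the UE only waits for the next pre-allocated transmission occasion.
    return alignment_wait + data_tx + processing

dyn = dynamic_latency(sr_wait=5, sr_tx=1, grant_delay=8, data_tx=4, processing=3)
sps = sps_latency(alignment_wait=5, data_tx=4, processing=3)
# dynamic: 21 ms, SPS: 12 ms
```

The gap grows with the grant round-trip time, which is why SPS pays off most in coverage-limited narrowband deployments with long control-plane delays.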
This book, now in its second, completely revised and updated edition, offers a critical approach to the challenging interpretation of the latest research data obtained using functional neuroimaging in whiplash injury. Such a comprehensive guide to recent and current international research in the field is more necessary than ever, given that the confusion regarding the condition and the medicolegal discussions surrounding it have increased further despite the publication of much literature on the subject. In recent decades, functional imaging methods in particular – such as single-photon emission tomography, positron emission tomography, functional MRI, and hybrid techniques – have demonstrated a variety of significant brain alterations. Functional Neuroimaging in Whiplash Injury - New Approaches covers all aspects, including the imaging tools themselves, the various methods of image analysis, different atlas systems, and diagnostic and clinical aspects. The book will help physicians, patients and their relatives and friends, and others to understand this condition as a disease.
In this paper, pathophysiologically interrelated deactivation/activation phenomena are set out using the example of whiplash injury. These phenomena could have been underestimated in previous positron emission tomography studies, as their focus was on hypoperfusion rather than hyperperfusion. In addition, statistical parametric mapping analysis of cerebral studies is normally tuned to obvious clusters of difference rather than fine-tuned to specific areas of interest.
The Baroque composer Johann Sebastian Bach (1685–1750) has left us with many puzzles. The well-known oil painting by Elias Gottlob Haußmann is the only painting for which Bach actually posed in person. According to this portrait, Bach must have been quite obese. The cheeks and nose are flushed – possibly as signs of hypertension – and the eyelids are narrow – a sign of myopia. Furthermore, there is a thinning of the lateral third of the right eyebrow, which is known as Hertoghe's sign and indicates periorbital edema. Both signs are compatible with hypothyroidism. Bach might have been suffering from type-2 diabetes as the origin of his final illness; the obituary reports two cataract surgeries by the oculist John Taylor in March/April 1750, and, four months later, "apoplexy" followed by a high fever, of which Bach died. It may be speculated, however, that Bach's entire illness was the result of his presumed obesity, possibly in combination with hypothyroidism.
Commentary on the article "Arthur Willis Goodspeed" by Otto Glasser, published in Science, Vol. 98, Issue 2540, p. 219 (doi.org/10.1126/science.98.2536.125).
The high peak power in comparison to the average transmit power is one of the major long-standing problems in multicarrier modulation and is known as the PAPR (peak-to-average power ratio) problem. Many PAPR reduction methods have been devised, and their comparison is usually based on the complementary cumulative distribution function (CCDF) of the PAPR. While this comparison is straightforward and easy to compute, its relationship with system performance metrics such as the (uncoded) bit error rate (BER) or the word error rate (WER) for coded systems is considerably more involved. We evaluate the impact of the PAPR on performance metrics such as the uncoded BER, the error vector magnitude (EVM), mutual information, and the WER for soft decoding. In this context, we find that system performance is not necessarily degraded by an increasing PAPR. We show that a high number of subcarriers, despite the corresponding high PAPR, is actually not a problem for system performance and provide a simple explanation for this seemingly counter-intuitive fact.
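To make the CCDF comparison concrete, here is a minimal sketch that estimates the PAPR distribution of random-QPSK OFDM symbols; the subcarrier and symbol counts are arbitrary assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256          # number of subcarriers (illustrative assumption)
M = 2000         # number of OFDM symbols

# random QPSK frequency-domain symbols
phases = np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, size=(M, N))
X = np.exp(1j * phases)

# time-domain signal and per-symbol PAPR in dB
x = np.fft.ifft(X, axis=1)
p = np.abs(x) ** 2
papr_db = 10 * np.log10(p.max(axis=1) / p.mean(axis=1))

# CCDF: probability that the PAPR exceeds a threshold gamma (in dB)
gammas = np.arange(4.0, 12.0, 0.5)
ccdf = np.array([(papr_db > g).mean() for g in gammas])
```

By construction the CCDF is non-increasing in the threshold; the paper's point is that this curve alone does not determine BER, EVM, or WER.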
In numerical calculations, guided acoustic waves localized in two spatial dimensions have been shown to exist, and their properties have been investigated, in three different geometries: (i) a half-space consisting of two elastic media with a planar interface inclined to the common surface, (ii) a wedge made of two elastic media with a planar interface, and (iii) the free edge of an elastic layer between two quarter-spaces or two wedge-shaped pieces of a material with elastic properties and density differing from those of the intermediate layer.
For the special case of Poisson media forming systems (i) and (ii), the existence ranges of these 1D guided waves in parameter space have been determined and found to strongly depend on the inclination angle between surface and interface in case (i) and the wedge angle in case (ii). In a system of type (ii) made of two materials with strong acoustic mismatch and in systems of type (iii), leaky waves have been found with a high degree of spatial localization of the associated displacements, although the two materials constituting these structures are isotropic.
Both the fully guided and the leaky waves analyzed in this work could find applications in non-destructive evaluation of composite structures and should be accounted for in geophysical prospecting, for example.
A critical comparison is presented of the two computational approaches employed, namely a semi-analytical finite element scheme and a method based on an expansion of the displacement field in a double series of special functions.
Most machine learning methods require careful selection of hyper-parameters in order to train a high-performing model with good generalization abilities. Hence, several automatic selection algorithms have been introduced to overcome tedious manual (trial-and-error) tuning of these parameters. Due to its very high sample efficiency, Bayesian Optimization over a Gaussian Process model of the parameter space has become the method of choice. Unfortunately, this approach suffers from cubic computational complexity due to the underlying Cholesky factorization, which makes it very hard to scale beyond a small number of sampling steps. In this paper, we present a novel, highly accurate approximation of the underlying Gaussian Process. Reducing its computational complexity from cubic to quadratic allows efficient strong scaling of Bayesian Optimization while outperforming the previous approach in terms of optimization accuracy. First experiments show a speedup by a factor of 162 on a single node and a further speedup by a factor of 5 in a parallel environment.
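The cubic bottleneck referred to here is the Cholesky factorization of the kernel matrix inside the Gaussian Process posterior. A minimal NumPy sketch of that exact step (standard GP regression, not the paper's quadratic approximation):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # squared-exponential kernel matrix between row-vector sets A and B
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)            # the O(n^3) step the paper attacks
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha                    # posterior mean at the query points
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs) - v.T @ v) # posterior variance at the query points
    return mu, var

# tiny demo: the posterior mean interpolates the observations
X = np.linspace(0.0, 5.0, 8)[:, None]
y = np.sin(X[:, 0])
mu, var = gp_posterior(X, y, X)
```

Each new sample grows the kernel matrix, so a naive implementation refactorizes an ever-larger K at every optimization step, which is what limits the number of Bayesian Optimization iterations in practice.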
Printed Electronics is perceived to have a major impact in the fields of smart sensors, the Internet of Things and wearables. Especially low-power printed technologies such as electrolyte-gated field-effect transistors (EGFETs) using solution-processed inorganic materials and inkjet printing are very promising in such application domains. In this paper, we discuss a modeling approach to describe the variations of printed devices. Incorporating these models and design flows into our previously developed printed design system allows for robust circuit design. Additionally, we propose a reliability-aware routing solution for printed electronics technology based on the technology constraints in printing crossovers. The proposed methodology was validated on multiple benchmark circuits and can easily be integrated with the design automation tool-set.
A car is only useful when it runs properly – but keeping a car running is becoming more and more complex. Car service providers need deep knowledge of the technical details of the different car models. On the other hand, car producers try to keep this information in their ownership. Digital data collection takes place every second over the car's product life cycle, and the data are stored on the car producers' servers. The contribution of this paper is three-fold: we provide an overview of the current concepts of intelligent order assistant technologies (I). This corpus is used to arrive at a more precise description of the specific service performance aspects (II). Finally, a representative empirical study with German motor mechanics helps to evaluate the wishes and needs regarding an intelligent order assistant in the garage (III).
With the growing share of renewable energies in the electricity supply, transmission and distribution grids have to be adapted. A profound understanding of the structural characteristics of distribution grids is essential to define suitable strategies for grid expansion. Many countries have a large number of distribution system operators (DSOs) whose standards vary widely, which contributes to coordination problems during peak load hours. This study contributes to targeted distribution grid development by classifying DSOs according to their remuneration requirement. To examine the amendment potential, structural and grid development data from 109 distribution grids in South-Western Germany are collected, based on publications of the respective DSOs. The resulting database is assessed statistically to identify clusters of DSOs according to the fit of demographic requirements and grid-construction status, and thus to identify development needs that enable a broader use of regenerative energy resources. Three alternative algorithms are explored to manage this task. The study finds the novel Gauss-Newton algorithm optimal for analysing the fit of grid conditions to regional requirements; it successfully identifies grids with remuneration needs and is superior to the previously used K-Means algorithm. The method developed here is transferable to other areas for targeted, cost-efficient grid analysis and development.
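As a point of reference for the clustering step, a minimal k-means implementation on synthetic two-feature data; the feature names are hypothetical, and neither the paper's DSO data set nor its Gauss-Newton variant is reproduced here.

```python
import numpy as np

def kmeans(X, k, iters=100):
    # farthest-point initialization: deterministic and robust for this sketch
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # assign every point to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its cluster
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# synthetic stand-in for two DSO feature dimensions,
# e.g. load density vs. grid age (hypothetical names, not the paper's data)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], 0.3, (40, 2)),
               rng.normal([3.0, 3.0], 0.3, (40, 2))])
labels, centers = kmeans(X, k=2)
```

K-means of this kind minimizes within-cluster variance; the study's point is that a Gauss-Newton-based fit separated remuneration-relevant grids better than this baseline.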
Protecting software from illegal access, intentional modification or reverse engineering is an inherently difficult practical problem involving code obfuscation techniques and real-time cryptographic protection of code. In traditional systems a secure element (the "dongle") is used to protect software. However, this approach suffers from several technical and economical drawbacks such as the dongle being lost or broken.
We present a system that provides such dongles as a cloud service, and more importantly, provides the required cryptographic material to control access to software functionality in real-time.
This system is developed as part of an ongoing nationally funded research project and is now entering a first trial stage with stakeholders from different industrial sectors.
The development of secure software systems is of ever-increasing importance. While software companies often invest large amounts of resources into the upkeep and general security properties of large-scale applications in production, they appear to neglect threat modeling in the earlier stages of the software development lifecycle. When applied during the design phase of development, and continuously throughout development iterations, threat modeling can help to establish a "Secure by Design" approach. This approach allows issues relating to IT security to be found early during development, reducing the need for later improvement – and thus saving resources in the long term. In this paper, the current state of threat modeling is investigated. This investigation drove the derivation of requirements for the development of a new threat modeling framework and tool, called OVVL. OVVL utilizes concepts of established threat modeling methodologies, as well as functionality not available in existing solutions.
Model-based analysis of Electrochemical Pressure Impedance Spectroscopy (EPIS) for PEM Fuel Cells
(2019)
Electrochemical impedance spectroscopy (EIS) is a widely used diagnostic technique to characterize electrochemical processes. It is based on the dynamic analysis of two electrical observables, that is, current and voltage. Electrochemical cells with gaseous reactants or products, in particular fuel cells, offer an additional observable, that is, the gas pressure. The dynamic coupling of current or voltage with gas pressure gives rise to a number of additional impedance definitions, for which we have previously introduced the term electrochemical pressure impedance spectroscopy (EPIS) [1,2]. EPIS shows a particular sensitivity towards transport processes of gas-phase or dissolved species, in particular diffusion coefficients and transport pathway lengths. It is as such complementary to standard EIS, which is mainly sensitive towards electrochemical processes. First EPIS experiments on PEM fuel cells have recently been shown [3].
We present a detailed modeling and simulation analysis of EPIS of a PEM fuel cell. We use a 1D+1D continuum model of a fuel/air channel pair with gas diffusion layers (GDL) and membrane electrode assembly (MEA). The backpressure is dynamically varied, and the resulting simulated oscillation in cell voltage is evaluated to yield the EPIS signal Z_{V/p_ca}. Results are obtained for different transport situations of the fuel cell, giving rise to very complex EPIS shapes in the Nyquist plot. This complexity shows the necessity of model-based interpretation of the EPIS spectra. Based on the simulation results, specific features in the spectra can be assigned to different transport domains (gas channel, GDL, membrane water transport).
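The core EPIS evaluation step, extracting a complex transfer function from a sinusoidal pressure perturbation and the resulting voltage oscillation, can be sketched with a toy phase-shifted response; the amplitudes and phase below are made up for illustration and do not come from the 1D+1D fuel-cell model.

```python
import numpy as np

f = 1.0                                            # perturbation frequency (Hz)
t = np.linspace(0.0, 10.0, 10000, endpoint=False)  # exactly 10 full periods
dp = 100.0 * np.sin(2 * np.pi * f * t)             # backpressure oscillation (Pa)
# toy voltage response: attenuated, phase-shifted copy of the stimulus
dv = 2e-4 * np.sin(2 * np.pi * f * t - np.pi / 6)  # cell-voltage oscillation (V)

# complex amplitudes at the stimulus frequency (single-bin Fourier transform)
ref = np.exp(-2j * np.pi * f * t)
P = 2.0 * np.mean(dp * ref)
V = 2.0 * np.mean(dv * ref)

Z = V / P                        # pressure impedance Z_{V/p} in V/Pa
mag, phase = np.abs(Z), np.angle(Z)
```

Sweeping the perturbation frequency and plotting Z in the complex plane produces the Nyquist-plot shapes whose interpretation the paper argues requires a physical model.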
Spinal cord stimulation (SCS) is the most commonly used technique of neurostimulation. It involves electrical stimulation of the spinal cord and is used, for example, to treat chronic pain. Existing esophageal catheters are used for temperature monitoring during electrophysiology studies with ablation and for transesophageal echocardiography. The aim of this study was to model the spine and new esophageal electrodes for transesophageal electrical pacing of the spinal cord, and to integrate them into the Offenburg heart rhythm model for the static and dynamic simulation of transesophageal neurostimulation. Modeling and simulation were performed with the electromagnetic and thermal simulation software CST (Computer Simulation Technology, Darmstadt). Two new esophageal catheters were modelled, as well as a thoracic spine based on the dimensions of a human skeleton. The simulation of directed transesophageal neurostimulation was performed using the esophageal balloon catheter with an electric pacing potential of 5 V and a trapezoidal signal. A potential of 4.33 V can be measured directly at the electrode, 3.71 V in the myocardium at a depth of 2 mm, 2.68 V in the thoracic vertebra at a depth of 10 mm, 2.1 V in the thoracic vertebra at a depth of 50 mm, and 2.09 V in the spinal cord at a depth of 70 mm. The relation between the voltage delivered to the electrodes and the voltage reaching the spinal cord is linear. Virtual heart rhythm and catheter models, together with the simulation of electrical pacing and sensing fields, allow the static and dynamic simulation of directed transesophageal electrical pacing of the spinal cord. The 3D simulation of the electrical sensing and pacing fields may be used to optimize transesophageal neurostimulation.
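Because the reported relation between the pacing voltage and the potential reaching the spinal cord is linear, the simulated values can be condensed into a single attenuation factor. The helper below is an illustrative extrapolation from the reported numbers, not part of the study itself.

```python
# Reported potentials along the pacing path for a 5 V trapezoidal pulse
pacing_v = 5.0
potentials = {          # depth in mm -> simulated potential in V
    0: 4.33,            # directly at the electrode
    2: 3.71,            # myocardium
    10: 2.68,           # thoracic vertebra
    50: 2.10,           # thoracic vertebra
    70: 2.09,           # spinal cord
}

# Linear relation: spinal-cord potential scales with the pacing voltage
k = potentials[70] / pacing_v        # attenuation factor, about 0.418

def spinal_potential(v_pacing):
    """Estimated spinal-cord potential for a given pacing voltage (linear model)."""
    return k * v_pacing
```

Under this linear model, reaching a desired stimulation threshold at the cord reduces to dividing the target potential by k to obtain the required electrode voltage.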
Cast aluminum alloys are frequently used as materials for cylinder head applications in internal combustion gasoline engines. These components must withstand severe cyclic mechanical and thermal loads throughout their lifetime. Reliable computational methods allow for accurate estimation of stresses, strains, and temperature fields and lead to more realistic thermomechanical fatigue (TMF) lifetime predictions. With accurate numerical methods, the components can be optimized via computer simulations and the number of required bench tests can be reduced significantly. These types of alloys are normally optimized for peak hardness from a quenched state, which maximizes the strength of the material. However, due to high-temperature exposure in service or under test conditions, the material experiences an over-ageing effect that leads to a significant reduction in strength. To numerically account for ageing effects, the Shercliff & Ashby ageing model is combined with a Chaboche-type viscoplasticity model available in the finite-element program ABAQUS by defining field variables. The constitutive model with ageing effects is correlated with uniaxial cyclic isothermal tests in the T6 state and the over-aged state, as well as with thermomechanical tests. In addition, the mechanism-based TMF damage model (DTMF) is calibrated for both the T6 and the over-aged state. Both the constitutive and the damage model are applied to a cylinder head component, simulating several cycles of an engine dynamometer test. The effects of including ageing in both models are shown.
Wireless sensor networks have found their way into a wide range of applications, among which environmental monitoring systems have attracted increasing interest from researchers. The main challenges for these applications are scalability of the network size and energy efficiency of the spatially distributed nodes. Nodes are mostly battery-powered and spend most of their energy budget on the radio transceiver module. In normal operation modes, most energy is spent waiting for incoming frames. The so-called Wake-On-Radio (WOR) technology helps to optimize trade-offs between energy consumption, communication range, implementation complexity, and response time. We previously proposed a new protocol called SmartMAC that makes use of this WOR technology. Furthermore, it offers the possibility to balance the energy consumption between sender and receiver nodes depending on the use case. Based on several calculations and simulations, it was predicted that the SmartMAC protocol would be significantly more efficient than other schemes proposed in recent publications, while preserving a certain backward compatibility with standard IEEE 802.15.4 transceivers. To verify this prediction, we implemented the SmartMAC protocol on a given hardware platform. This paper compares the real-time performance of the SmartMAC protocol against simulation results and shows that the measured values are very close to the estimated ones. We therefore believe that the proposed MAC algorithm outperforms all other Wake-on-Radio MACs.
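The energy trade-off that Wake-On-Radio addresses can be illustrated with a simplified receiver energy budget; the currents and duty cycle below are generic textbook-style assumptions, not SmartMAC measurements.

```python
# Simplified receiver-side energy budget over one hour (illustrative values).
V = 3.0        # supply voltage (V)
I_RX = 15e-3   # current while the main receiver listens (A)
I_WOR = 3e-6   # current of a low-power wake-up listener (A)
HOUR = 3600.0  # seconds per hour

def idle_listening_energy(duty_cycle):
    """Energy in joules/hour with the main radio on for duty_cycle of the time."""
    return V * I_RX * HOUR * duty_cycle

always_on = idle_listening_energy(1.0)     # main radio listens continuously
duty_cycled = idle_listening_energy(0.01)  # classic 1 % duty cycling
wor = V * I_WOR * HOUR                     # wake-up receiver always on instead
```

Duty cycling trades energy for latency (a sender must wait for the next listen window), whereas a wake-up receiver keeps latency low at a small standing cost, which is the balance a WOR MAC such as SmartMAC tunes per use case.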