In this study, the relevance of the circular economy (CE) in Germany is discussed on the basis of readily available LinkedIn data. LinkedIn company profiles located in Germany with ‘circular economy’ in their description or any other field were selected and used as a data source to analyze their CE relation. Overall, 514 German companies were analyzed with reference to the 15 German regions to which they belong. Most companies are located in the federal state of Berlin (126), followed by North Rhine-Westphalia (96) and Bavaria (77). In terms of industry sector, they self-classify as environmental services (64), management consulting (50), renewables & environment (33), research (31), and computer software (18), among others. Regarding their employees with LinkedIn profiles, 22,621 people are affiliated with these companies, ranging from one to 7,877 per company. All examined companies have a total of 819,632 followers on LinkedIn, ranging from none to 88,167. An increase in CE-related companies was recorded in 13 of the 16 federal states of Germany over a one-year period. This work provides essential insights into the increasing relevance and trends of the circular economy in German enterprises and will help conduct further national studies with readily available data from LinkedIn.
To date, it has been difficult to find high-level statistics on YouTube that paint a fair picture of the platform in its entirety. This study attempts to provide an overall characterization of YouTube, based on a random sample of channel and video data, by showing how video provision and consumption evolved over the course of the past 10 years. It demonstrates stark contrasts between video genres in terms of channels, uploads and views, and that, on average, 85% of all views go to a small minority of 3% of all channels. The analytical results give evidence that older channels have a significantly higher probability of garnering a large viewership, but also show that there has always been a small chance for young channels to become successful quickly, depending on whether they choose their genre wisely.
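The headline statistic — a small share of channels capturing most views — can be reproduced on any list of per-channel view counts with a few lines of NumPy. This is a generic sketch, not the study's actual pipeline, and the Pareto-distributed sample below is purely illustrative:

```python
import numpy as np

def top_share(views, top_frac=0.03):
    """Fraction of total views captured by the top `top_frac` of channels."""
    v = np.sort(np.asarray(views, dtype=float))[::-1]   # sort descending
    k = max(1, int(round(top_frac * len(v))))           # size of the top group
    return v[:k].sum() / v.sum()

# Synthetic heavy-tailed channel views (illustrative only)
rng = np.random.default_rng(0)
views = rng.pareto(1.1, size=10_000) * 1_000
print(f"Top 3% of channels capture {top_share(views):.0%} of all views")
```

On real channel data, the same one-liner yields the concentration figures reported in the abstract.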
Wow, You Are Terrible at This!: An Intercultural Study on Virtual Agents Giving Mixed Feedback
(2020)
While the effects of virtual agents in terms of likeability, uncanniness, etc. are well explored, it is unclear how their appearance and the feedback they give affect people's reactions. Is critical feedback from an agent embodied as a mouse or a robot taken less seriously than from a human agent? In an intercultural study with 120 participants from Germany and the US, participants had to find hidden objects in a game and received feedback on their performance from virtual agents with different appearances. As some levels were designed to be unsolvable, critical feedback was unavoidable. We hypothesized that feedback would be taken more seriously the more human the agent looked. Also, we expected the subjects from the US to react more sensitively to criticism. Surprisingly, our results showed that the agents' appearance did not significantly change the participants' perception. Also, while we found highly significant differences in inspirational and motivational effects as well as in perceived task load between the two cultures, the reactions to criticism were contrary to expectations based on established cultural models. This work improves our understanding of how affective virtual agents are to be designed, both with respect to culture and to dialogue strategies.
This paper presents a new approach for teaching competence in additive manufacturing to engineering students in product development. Particularly new to this approach is the combination of the students' autonomous assembly and commissioning of a 3D printer with the independent development of design guidelines for components made with this new technology. In this way, the students gain first practical experience with data preparation, the additive manufacturing process itself, and the required post-treatment of the 3D-printed parts. To give the students a significantly deeper insight into the functioning of 3D printing, the workshop Rapid Prototyping developed a new approach in which the students first assemble a construction kit for a 3D printer themselves and then commission the printer. This gives the students a better understanding of the functionality and configuration of additive manufacturing. In a next step, the students use the 3D printers they constructed themselves to produce components taken from a database. Finally, the students' experiences in the course of the workshop are evaluated to review the effectiveness of the new approach.
With economic weight shifting toward net zero, now is the time for ECAs, Exim-Banks, and PRIs to lead. Despite previous successes, aligning global economic governance with climate goals requires additional activities across export finance and investment insurance institutions. The new research project initiated by Oxford University, ClimateWorks Foundation, and Mission 2020, including other practitioners and academics from institutions such as Atradius DSB, Columbia University, EDC, FMO and Offenburg University, focuses on reshaping future trade and investment governance in light of climate action. The idea of a ‘Berne Union Net Zero Club’ is an important item in a potential package of reforms. This can include realigning mandates and corporate strategies, principles of intervention, as well as ECA, Exim-Bank and PRI operating models in order to accelerate the net zero transformation. Full transparency regarding Berne Union members’ activities would be an excellent starting point. We invite all interested parties in the sector to come together to chart our own path to net zero.
Additive manufacturing (AM) or 3D printing (3DP) has become a widespread new technology in recent years and is now used in many areas of industry. At the same time, there is an increasing need for training courses that impart the knowledge required for product development in 3D printing. In this article, a workshop on “Rapid Prototyping” is presented, which is intended to provide students with the technical and creative knowledge for product development in the field of AM. Today, additive manufacturing is an important part of teaching for the training of future engineers. In a detailed literature review, the advantages and disadvantages of previous approaches to training students are examined and analyzed. On this basis, a new approach is developed in which the students analyze and optimize a given product in terms of additive manufacturing. The students use two different 3D printers to complete this task. In this way, the students acquire the skills to work independently with different processes and materials. With this new approach, the students learn to adapt the design to different manufacturing processes and to observe the restrictions of different materials. The results of these courses are evaluated through feedback in a presentation and a questionnaire.
Wood juice, a liquid produced during wood processing, is a harmful waste that requires utilization. To achieve a circular economy, biowastes should be recycled, reducing fossil carbon usage. Therefore, the objective of this work was to examine the potential of wood juice as a feedstock for bioplastic synthesis by Bacillus sp. G8_19. Polyhydroxyalkanoate (PHA) syntheses using wood juice from Douglas fir trees and that from a mixture of spruce/fir trees were compared. It was found that the PHA content was higher after using wood juice from spruce/fir trees than that from Douglas fir trees (18.0% vs 6.1% of cell dry mass). Gas chromatography analysis showed that, with both wood juices, Bacillus sp. G8_19 accumulated poly(3-hydroxybutyrate-co-3-hydroxyvalerate). The content of 3-hydroxyvalerate (3HV) monomers was higher when spruce/fir wood juice was used (10.7% vs 1.9%). The C/N ratio did not have a statistically significant effect on the copolymer content in biomass, but it did significantly influence the 3HV content. The proposed concept may serve as an approach to wood waste valorization via production of biodegradable materials.
The IEEE 1588 precision time protocol (PTP) is a time synchronization protocol with sub-microsecond precision primarily designed for wired networks. In this letter, we propose wireless precision time protocol (WPTP) as an extension to PTP for multi-hop wireless networks. WPTP significantly reduces the convergence time and the number of packets required for synchronization without compromising on the synchronization accuracy.
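For context, the offset and mean path delay that standard PTP derives from the four timestamps of a Sync/Delay_Req exchange can be sketched as follows. This illustrates the baseline IEEE 1588 calculation, not the WPTP extension itself:

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """Standard IEEE 1588 offset/delay computation.

    t1: master sends Sync, t2: slave receives Sync,
    t3: slave sends Delay_Req, t4: master receives Delay_Req.
    Assumes a symmetric path delay.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way mean path delay
    return offset, delay
```

For example, with a 5 s path delay and the slave clock 2 s ahead, timestamps (0, 7, 10, 13) recover offset 2 and delay 5; the multi-hop WPTP case chains such exchanges across hops.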
In this paper we integrate the ideas of network coding and relays into an existing practical network architecture used in a wireless network scenario. Specifically, we use the COPE architecture to test our ideas. Since previous works have focused on the communication aspect at the physical layer level, we attempt to take it one step further by including the MAC layer. Our idea is based on information theoretic concepts developed by Shannon in order to reliably apply network coding to increase the net throughput.
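The basic network-coding operation behind COPE — a relay XOR-ing two packets together so each receiver can cancel out the packet it already knows — can be illustrated in a few lines. This is a minimal sketch; the packet contents are hypothetical and real implementations must pad packets to equal length:

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings (the network-coding primitive)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Relay combines two packets headed in opposite directions into one transmission
pkt_a = b"hello from A"   # known to node A (it sent it)
pkt_b = b"reply from B"   # known to node B (it sent it); equal length assumed
coded = xor_bytes(pkt_a, pkt_b)

# Node A XORs the coded packet with what it already knows to recover pkt_b
assert xor_bytes(coded, pkt_a) == pkt_b
```

One broadcast thus replaces two unicasts, which is the throughput gain the paper extends to the MAC layer.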
Home care applications and ambient assisted living are becoming increasingly attractive. This is driven both by market pull, as the number of elderly people grows steadily, and by technology push, as technological advances and attractive products pave the way to economically advantageous offerings. However, in real-life applications, a significant number of challenges remain. These include seamless communication between products from different suppliers, due to the lack of sufficiently standardized solutions, as well as energy budgets and the scalability of solutions. This paper presents the experience from the InCASA project (Integrated Network for Completely Assisted Senior Citizen's Autonomy), in which architectures for heterogeneous physical and logical communication flows are examined.
Critical theory and philosophy across many fields in the humanities have become awash with what has been characterised as ‘the material turn’. This material turn, which seems to involve varying combinations of what is known as Object-Oriented Ontology (Harman), Actor-Network Theory (Latour), process philosophy (Whitehead), speculative realism (Bryant), or agential realism (Barad), emphasises some move toward a posthuman understanding of what the world is, and our relation to it.
In the past two decades much has been published on whiplash injury, yet both the confusion regarding the condition, and the medicolegal discussion about it have increased. In this paper, functional imaging research results are summarized using MRIcroGL3D visualization software and assembled in an image comprising regions of cerebral activation and deactivation.
Whiplash injury
(2012)
In cardiac resynchronization therapy (CRT) for heart failure, individualization of the AV delay is essential to improve hemodynamics and to minimize the non-responder rate. In patients in sinus rhythm with an additional disposition to bradycardia, optimization is necessary for both situations, atrial sensing and atrial pacing. Echo-optimization is the gold standard for this, but it is time-consuming. Unfortunately, it depends on the particular CRT system's parameter set whether the resulting individually optimal AV delays can be programmed or not. Some CRT systems provide a set of AV delays for DDD operation combined with a set of pace-sense-compensation values to optimize the AV delay in DDD and VDD operation. The pace-sense-compensation (PSC) can be defined as the difference of implant-related interatrial conduction intervals in DDD and VDD operation measured in the esophageal left atrial electrogram. In a cohort of 96 CRT patients we found a mean PSC of 59 ± 35 ms, ranging between 0 and 143 ms. As a consequence, allowing 10 ms tolerance, AV delay optimization is completely impossible in one of the two modes, VDD or DDD operation, in 34 (35%) or 5 (5%) patients with implants restricting the PSC range to 60 ms or 100 ms, respectively. Thus, we propose that manufacturers provide CRT systems with programmable pace-sense-compensation between 0 ms and 150 ms.
WebAssembly is a new technology for creating applications in a new way. WebAssembly has been under development since 2017 by the World Wide Web Consortium (W3C). Its primary task is to improve web applications.
Today, more and more applications are being created as web applications. Web applications have some advantages: they are platform-independent, even mobile platforms can run them, and no installation is needed apart from a modern web browser.
Currently, web applications are developed in JavaScript (JS), Hypertext Markup Language 5 (HTML5), and Cascading Style Sheets (CSS).
These technologies were not made for huge web applications, but they are not meant to be replaced by WebAssembly; rather, WebAssembly is an extension to the currently existing technology.
The purpose of WebAssembly is to fix or mitigate the problems in web application development.
This master’s thesis reviews all of these aspects and checks whether the promises of WebAssembly are kept and where problems still exist.
A recognizable division appears between students with a comprehensive knowledge of the Web and those who are less certain about its resources. This is where the teaching innovation Web Mentoring: Peer-to-Peer comes in, developed to help students cope better with the demands of media education. Furthermore, it presents the opportunity for master’s degree students to begin mentoring undergraduate students. Mentoring sessions have already been carried out successfully in the previous two semesters and are presented, evaluated and discussed here.
This paper describes a comparative study of two tactile systems supporting navigation for persons with little or no visual and auditory perception. The efficacy of a tactile head-mounted device (HMD) was compared to that of a wearable device, a tactile belt. A study with twenty participants showed that the participants took significantly less time to complete a course when navigating with the HMD, as compared to the belt.
Generative convolutional deep neural networks, e.g. popular GAN architectures, rely on convolution-based up-sampling methods to produce non-scalar outputs like images or video sequences. In this paper, we show that common up-sampling methods, known as up-convolution or transposed convolution, cause the inability of such models to reproduce the spectral distributions of natural training data correctly. This effect is independent of the underlying architecture, and we show that it can be used to easily detect generated data like deepfakes with up to 100% accuracy on public benchmarks. To overcome this drawback of current generative models, we propose to add a novel spectral regularization term to the training optimization objective. We show that this approach not only allows training spectrally consistent GANs that avoid high-frequency errors, but also that a correct approximation of the frequency spectrum has positive effects on the training stability and output quality of generative networks.
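The kind of frequency-domain feature such detectors typically rely on — an azimuthally averaged power spectrum of an image, in which up-convolution artifacts show up at high frequencies — can be sketched as follows. This is a generic illustration, not the paper's exact detection pipeline:

```python
import numpy as np

def radial_spectrum(img):
    """1-D azimuthally averaged power spectrum of a 2-D grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img))        # center the DC component
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)  # integer radius per pixel
    # mean power within each integer-radius bin
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts
```

Comparing this 1-D profile between real and generated images (e.g. with a simple classifier on the high-frequency tail) is the common detection recipe the abstract alludes to.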
A novel approach to a test environment for embedded networking nodes has been conceptualized and implemented. Its basis is the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel, connected via so-called virtual channels. The environment allows modifying the behavior of the virtual channels as well as the overall topology during runtime to virtualize real-life networking scenarios. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the process of developing new features and supports the identification of bugs in wireless communication stacks. In combination with powerful test execution systems, it is possible to create a continuous development and integration flow.
The paper describes the implementation of practical laboratory settings in a virtual environment. With the entry of VR glasses into the mass market, there is a chance to establish educational and training applications for displaying teaching materials and practical works. Our project therefore focuses on the realization of virtual experiments and environments that give users a deep insight into selected subfields of optics and photonics. Our goal is not to substitute the hands-on experiments but rather to extend them. By means of VR glasses, the user is offered the possibility to view the experiment from several angles and to make changes through interactive control functions. During the VR application, additional context-related information is displayed. By using object recognition, the specific graphics and texts for the respective object are loaded and supplemented at the appropriate place. Thus, complex facts are conveyed in an informative way. The prototype is developed using the Unity Engine and can thus be exported to different platforms and end devices. Another major advantage of virtual simulations over the real situation is the high degree of controllability as well as the easy repeatability. With slight modifications, entire experiments can be reused. Our research aims to acquire new knowledge in the field of e-learning in association with VR technology. Here we try to answer a core question about the compatibility of the individual media components.
VR-based implementation of interactive laboratory experiments in optics and photonics education
(2022)
Within the framework of a developed blended learning concept, a lot of experience has already been gained with a mixture of theoretical lectures and hands-on activities, combined with the advantages of modern digital media. Here, visualizations using videos, animations and augmented reality have proven to be effective tools to convey learning content in a sustainable way. In the next step, ideas and concepts were developed to implement hands-on laboratory experiments in a virtual environment. The main focus is on the realization of virtual experiments and environments that give the students a deep insight into selected subfields of optics and photonics.
Vortex breakdown phenomena in rotating fluids are investigated both theoretically and experimentally. The fluid is contained in a cone between two spherical surfaces. The primary swirling motion is induced by the rotating lower boundary. The upper surface can be fixed with a no-slip condition or can be a stress-free surface. Depending on these boundary conditions and on the Reynolds number, novel structures of recirculation zones are realized. The axisymmetric flow patterns are simulated numerically by a finite difference method. Experiments are done to visualize the topological structure of the flow pattern and to observe the existence ranges of the different recirculating flows. The comparison between theory and experiment shows good agreement with respect to the topological structure of the flow.
Background: Transesophageal left atrial (LA) pacing and transesophageal LA ECG recording are semi-invasive techniques for diagnostic and therapy of supraventricular rhythm disturbance. Cardiac resynchronization therapy (CRT) with right atrial (RA) sensed biventricular pacing is an established therapy for heart failure patients with reduced left ventricular (LV) ejection fraction, sinus rhythm and interventricular electrical desynchronization.
Purpose: The aim of the study was to evaluate electromagnetic and voltage pacing fields of the combination of RA pacing, LA pacing and biventricular pacing in patients with long interatrial and interventricular electrical desynchronization.
Methods: The modelling and electromagnetic simulations of transesophageal LA pacing in combination with RA pacing and biventricular pacing were staged and analyzed with the CST (Computer Simulation Technology) software. Different electrodes were modelled in order to simulate different types of bipolar pacing in the 3D-CAD Offenburg heart rhythm model: the bipolar Solid S (Biotronik) electrode was modelled for RA pacing and right ventricular (RV) pacing, the Attain 4194 (Medtronic) for LV pacing, and the TO8 (Osypka) multipolar esophageal electrode with hemispheric electrodes for LA pacing.
Results: The electromagnetic pacing simulations were performed with pacemaker amplitudes of 3 V for RA pacing, 1.5 V for RV pacing, 50 V for LA pacing and 3 V for LV pacing, with pacing impulse durations of 0.5 ms for RA, RV and LV pacing and 10 ms for LA pacing. The atrioventricular pacing delay after RA pacing was 140 ms. The different pacing modes AAI, VVI, DDD, DDD0V and DDD0D were evaluated for the analysis of the electric pacing field propagation of pacemaker, CRT and LA pacing. The pacing results were compared at minimum (LOW) and maximum (HIGH) parameter settings. While the LOW setting produced fewer tetrahedra and less accurate results, the HIGH setting produced many tetrahedra and therefore more accurate results.
Conclusions: The simulation of the combination of transesophageal LA pacing with RA-sensed biventricular pacing is possible with the Offenburg heart rhythm model. The new temporary 4-chamber pacing method may be an additional useful method in CRT non-responders with a long interatrial electrical delay.
Shapes and structures of vortex breakdown phenomena in rotating fluids are visualized. We investigate the flow in a cylindrical container and in a cone between two spherical surfaces. The primary swirling flow is induced by the rotating upper disk in the cylindrical case and by the lower boundary in the spherical case. The upper surface can be fixed with a no-slip condition or can be a stress-free surface. Depending on these boundary conditions and on the Reynolds number, novel structures of recirculation zones are realized. Experiments are done to visualize the topological structure of the flow and to determine their existence range as a function of the geometry and rotation rate. A comparison between the experimental and theoretical approach shows good agreement with respect to the topological structures of the flows.
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods on varying abstraction levels. XAI techniques on a global level explain model behavior and on a local level explain sample predictions. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals on time series classification. In particular, we adapt local XAI techniques (attributions) that were developed for traditional datasets (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels. We constantly collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases, verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, and (3) identify the reasons for misclassifications.
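The global-overview step — projecting per-sample attribution vectors down to two dimensions — can be sketched with a plain PCA via SVD. This is an illustrative stand-in; the paper's actual projection method and attribution techniques may differ:

```python
import numpy as np

def project_attributions(attributions):
    """Project per-sample attribution vectors onto their top-2 principal components.

    attributions: (n_samples, n_features) array, e.g. one attribution
    vector per time series produced by a local XAI method.
    """
    A = np.asarray(attributions, dtype=float)
    A = A - A.mean(axis=0)                        # center the attribution matrix
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    return A @ vt[:2].T                           # (n_samples, 2) scatter coordinates
```

The resulting 2-D scatter is what a visual analytics view would render, with clusters hinting at shared model strategies and outliers at potential data errors.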
After approximately 200 years, comprehensive access to the texts of Humboldt’s extraordinary exploration of the Americas is within sight. To open the legacy to the public for free access, the Humboldt Digital Library (HDL) project has been developing a growing body of data related to studies of Alexander von Humboldt. The library includes a range of texts, tables and images, as well as many tools that assist in mining the data and navigating the system.
Virtual reality (VR) offers the opportunity to create virtual worlds that could replace real experiences. This research investigates the influence of user motivation, temporal distance and experience type on the satisfaction with the VR experience, and the degree of acceptance of a VR experience as a substitute for a real experience. The results suggest that the degree of acceptance of a VR experience as a substitute for a real experience is higher for passive VR experiences compared to active VR experiences. Furthermore, the results support the assumption that users are more satisfied with passive VR experiences.
Virtual reality in the hotel industry: assessing the acceptance of immersive hotel presentation
(2019)
In the hotel industry, it is crucial to reduce the inherent information asymmetry with regard to the goods offered. This asymmetry can be minimised through the use of smartphone-based virtual reality applications (SBVRs), which allow virtual simulation of real experiences and thus enable more efficient information retrieval. The aim of the study is to determine for the first time the user acceptance of these immersive hotel presentations for assessing the performance of a travel accommodation. For this purpose, the Technology Acceptance Model (TAM) was used to explain the acceptance behaviour for this new technology. A virtual reality application was specially developed, in which the participants could explore a hotel virtually. A total of 569 participants took part in the study. The structural equation model and the hypotheses were tested using a Partial Least Squares (PLS) analysis. The results illustrate that the immersive product experience leads to more efficient information gathering. The perceived usefulness significantly affects the attitude towards using the technology as well as the intention to use it. In contrast to the traditional TAM, the perceived ease of use of SBVRs has no effect on the perceived usefulness or attitude towards using the technology.
Nowadays the processing power of mobile phones, smartphones and PDAs is increasing, as is the transmission bandwidth. Nevertheless, there is still a need to reduce the content and the processing of the data. Proposals and solutions for dynamic reduction of the transmitted content are discussed. For this, device-specific properties are taken into account, aiming at reducing the processing power needed at the client side to display the 3D virtual reality data. To this end, well-known technologies like data compression are combined with new approaches to achieve the goal of adaptive content transmission. For device-dependent reduction of processing power, the data has to be pre-processed at the server side, or the server itself has to take over functionality from weak mobile devices.
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to healthcare support, almost every area of daily life and industry is coming into contact with machine learning. Besides all the benefits ML brings, the lack of transparency and difficulty in creating traceability pose major risks. While solutions exist to make the training of machine learning models more transparent, traceability is still a major challenge. Ensuring the identity of a model is another challenge, as unnoticed modification of a model is also a danger when using ML. This paper proposes to create an ML Birth Certificate and ML Family Tree secured by blockchain technology. Important information about training and changes to the model through retraining can be stored in a blockchain and accessed by any user to create more security and traceability about an ML model.
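A minimal illustration of the underlying idea — chaining training and retraining records by hash so that later tampering is detectable — might look as follows. This is a toy sketch, not the proposed ML Birth Certificate format; the record field names are hypothetical:

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record (e.g. a model hash plus training metadata) to a toy hash chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev": prev}
    # canonical serialization so the hash is reproducible
    block["hash"] = hashlib.sha256(
        json.dumps({"record": record, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)
    return block

def verify(chain):
    """Recompute every hash; any tampered training record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        expected = hashlib.sha256(
            json.dumps({"record": block["record"], "prev": prev}, sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected or block["prev"] != prev:
            return False
        prev = block["hash"]
    return True
```

In the paper's scenario the chain would live on a public blockchain rather than in memory, but the linkage and verification logic is the same.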
A method for 3D printing of a robot element, more particularly a finger for use in robotics. At least one sensor is concomitantly printed by means of multi-material printing during the printing of the robot element. A gripping element produced by a method of this kind includes a number of printed layers of robot element material and a concomitantly printed sensor.
Cardiac resynchronization therapy is an established therapy for heart failure patients with sinus rhythm, reduced left ventricular ejection fraction and prolonged QRS duration. The aim of the study was to evaluate ventricular desynchronization with the electrical interventricular delay (IVD) to left ventricular delay (LVD) ratio in atrial fibrillation heart failure patients. IVD and LVD were measured by transesophageal posterior left ventricular ECG recording. In atrial fibrillation heart failure patients with prolonged QRS duration, the mean IVD-to-LVD ratio was 0.84 +/- 0.42, ranging from 0.17 to 2.2. The IVD-to-LVD ratio correlated with QRS duration and may be a useful parameter to evaluate electrical ventricular desynchronization in atrial fibrillation heart failure patients.
VDI Standard 4521: Status
(2016)
VDI Guideline 4521 Part 1: “Inventive problem solving with TRIZ: Part 1 – Fundamentals and definitions” was published on 2015-04-01. The standard will sharpen the image of TRIZ, facilitate cooperation, and support studying and teaching. It is not a textbook but concisely summarizes the basic assumptions of TRIZ and its terminology. It gives an overview of specific methods and tools, which will be described in the following parts.
Auxiliary power units (APUs) are used in mobile applications to supply electrical power in the range of 3 to 10 kW. The state of the art generators are driven by a diesel engine at constant speed. They have a low efficiency (high fuel consumption) as they operate mostly in partial load conditions. A higher efficiency for partial loads is feasible by adjusting the speed of the diesel engine to its optimum efficiency. A frequency converter provides a constant electric frequency at variable speed of the generator. The resulting higher investments for such a variable speed generator (VSG) need a proof of economics, which is demonstrated by this investigation.
Marketing and sales have high expectations of new methods such as Big Data, artificial intelligence, machine learning, and predictive analytics. But following the “garbage in—garbage out” principle, the results leave much to be desired. The reason is often insufficient quality in the underlying customer data. This article sheds light on this problem using the data quality and value pyramid as an example. The higher up the value-added pyramid the data is located, the higher its quality and the more value it generates for a company. In addition, we show how the use of monitoring systems, such as a data quality scorecard, makes data quality visible and improvements measurable. In this way, the actual value of data for companies becomes obvious and manageable.
Additive manufacturing offers completely new production possibilities thanks to the layered structure and the simultaneous processing of several materials. To exploit the potential of this new technology, components must already be considered in product development not as monolithic blocks, but as a structure of many layers and individual elements (voxels). Therefore, this paper examines the current state of voxel-based CAD systems and the subsequent 3D multi-material printing of the designed components. Different voxel-based CAD systems are used and analyzed for component design, and a sample component is additively manufactured. The results show that simple components can be designed using voxel-based CAD systems. With the application of 3D multi-material printing, different materials and thus functions can be assigned to the designed voxel-based CAD model.
This textbook helps readers use regenerative systems for heating and cooling effectively. Integration and automation schemes provide a quick overview. Practical examples clearly show standard solutions for the integration of regenerative energy sources. For the 2nd edition, the text and illustrations have been improved and references to standards have been updated. Control questions at the end of the main chapters help consolidate understanding of the content.
The application of leaky feeder (radiating) cables is a common solution for implementing reliable radio communication in large industrial buildings, tunnels and mining environments. This paper explores the possibilities of leaky feeders for 1D and 2D localization in wireless systems based on time-of-flight chirp spread spectrum technologies. The main focus of this paper is to present and analyse the results of time-of-flight and received signal strength measurements with leaky feeders in indoor and outdoor conditions. The authors carried out experiments to compare ranging accuracy and radio coverage area for a point-like monopole antenna and for a leaky feeder acting as a distributed antenna. In all experiments, RealTrac equipment based on the nanoLOC radio standard was used. The most probable path of a chirp signal traveling through a leaky feeder was estimated using a ray tracing approach. Typical non-line-of-sight error profiles are presented. The results show that radiating cables can be used in real-time location technologies based on the time-of-flight method.
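The core of time-of-flight ranging is the conversion of a measured propagation time into a distance; with a leaky feeder the in-cable travel time must also be accounted for. A minimal sketch, not the authors' implementation, with a hypothetical cable velocity factor:

```python
# Illustrative sketch of time-of-flight ranging; not the paper's actual
# implementation. The velocity factor of the cable is a hypothetical value.
C = 299_792_458.0  # speed of light in m/s (free-space propagation assumed)

def tof_to_range(tof_s: float) -> float:
    """One-way time of flight -> distance in metres."""
    return C * tof_s

def leaky_feeder_range(tof_s: float, cable_len_m: float,
                       velocity_factor: float = 0.88) -> float:
    """Subtract the in-cable travel time (assumed velocity factor) first,
    since the signal propagates along the cable before radiating."""
    t_cable = cable_len_m / (C * velocity_factor)
    return C * max(tof_s - t_cable, 0.0)

r = tof_to_range(100e-9)  # 100 ns of flight ≈ 30 m
print(round(r, 1))
```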
Using patent information for identification of new product features with high market potential
(2014)
As part of the design education at Offenburg University, the teaching of technical documentation is continuously optimised. In this study, numerous mechanical engineering students, aged 19 to 29, are observed using eye tracking technology and a video camera while performing various design exercises. The aim of the study is to enhance the students’ ability to read, understand and analyse complex engineering drawings. In one experiment, the students are asked to perform the “cube perspective test” after Stumpf and Fay to assess their capacity for mental rotation as part of spatial visualization ability. Furthermore, the students are asked to prepare and give micro presentations on a topic related to their studies. Students have a maximum of 100 s for these presentations. Thus, they can practise presenting important information in a short amount of time, show their rhetorical skills and demonstrate their acquisition of basic knowledge. During the presentation, the eye movement of a few selected students is recorded to analyse their information acquisition. In a further test, the students’ eye movements are analysed while they read an engineering drawing that consists of multiple views. All the spatial connections have to be inferred from the different component views. Drawing on these views and their acquired knowledge, the students are asked to identify the correct representation of a component view. Furthermore, the subjects describe the function of an assembly, a parallel gripper, and are then asked to mentally disassemble the assembly to replace a damaged cylindrical pin. Simultaneously, they are filmed with a video camera to record which expressions the students use for the individual technical terms. The evaluation of the eye movements shows that the increasing digitalisation of society and the use of electronic devices in everyday life lead to fast and only selective perceptual behaviour, and that students feel insecure when dealing with technical drawings.
The analysis of the videos shows a mostly non-technical and inaccurate manner of expression and a poor use of technical terms. The transferability of the achieved results to other technical tasks is part of further investigations.
A method for evaluating skin cancer detection based on millimeter-wave technologies is presented. For this purpose, the relative permittivities of benign and cancerous lesions are calculated using effective medium theory, considering the difference in water content between them. These calculated relative permittivities are then used for the simulation and evaluation of skin cancer detection using a substrate-integrated waveguide probe. A difference in the simulated scattering parameter S11 of up to 13 dB between healthy and cancerous skin can be determined in the best case.
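An effective-medium permittivity calculation of the kind described above can be sketched as follows. The specific mixing rule and tissue parameters used by the authors are not given here, so the classical Maxwell Garnett rule with purely illustrative permittivity values is assumed:

```python
# Hedged sketch of an effective-medium mixing calculation. The mixing rule
# (Maxwell Garnett) and all permittivity values are assumptions for
# illustration, not the paper's actual model parameters.

def maxwell_garnett(eps_host: complex, eps_incl: complex, f: float) -> complex:
    """Effective permittivity of spherical inclusions (volume fraction f)
    embedded in a host medium, per the Maxwell Garnett rule."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

eps_dry = 3.0 + 0.0j      # hypothetical dry-tissue permittivity at mm-waves
eps_water = 12.0 - 18.0j  # hypothetical water permittivity at mm-waves

benign = maxwell_garnett(eps_dry, eps_water, 0.30)  # lower water content
cancer = maxwell_garnett(eps_dry, eps_water, 0.50)  # higher water content
print(benign.real < cancer.real)  # more water -> higher real permittivity
```

The resulting permittivity contrast is what produces the difference in the simulated reflection coefficient.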
We present a novel approach that utilizes BLE packets sent from generic BLE-capable radios to synthesize an FSK-like addressable wake-up packet. A wake-up receiver system was developed from off-the-shelf components to detect these packets. It makes use of two differential signal paths separated by passive band-pass filters. After rectification of each channel, a differential amplifier compares the signals, and the resulting wake-up signal is evaluated by an AS3933 wake-up receiver IC. Overall, the combination of these techniques yields a BLE-compatible wake-up system that is more robust than traditional OOK wake-up systems, increasing wake-up range while still maintaining a low energy budget. The proof-of-concept setup achieved a sensitivity of -47.8 dBm at a power consumption of 18.5 uW during passive listening. The system has a latency of 31.8 ms at a symbol rate of 1437 Baud.
We present the design of a system combining augmented reality (AR) and gamification to support elderly persons’ rehabilitation activities. The system is attached to the waist; it collects detailed movement data and at the same time augments the user’s path by projections. The projected AR-elements can provide location-based information or incite movement games. The collected data can be observed by therapists. Based on this data, the challenge level can be more frequently adapted, keeping up the patient’s motivation. The exercises can involve cognitive elements (for mild cognitive impairments), physiological elements (rehabilitation), or both. The overall vision is an individualized and gamified therapy. Thus, the system also offers application scenarios beyond rehabilitation in sports. In accordance with the methodology of design thinking, we present a first specification and a design vision based on inputs from business experts, gerontologists, physiologists, psychologists, game designers, cognitive scientists and computer scientists.
An information security management system is an organized strategy to ensure the security of an organization. During various security crises, hazards, and breaches, this strategy helps an organization maintain the confidentiality, integrity, and availability of information. Organizations are preparing to comply with information security management system criteria. Despite this, security concerns persist, commonly caused by ineffective controls, poor integration, or a silo effect. One of the causes is a low maturity model that is not synchronized with the organization’s business processes. To reach a higher level of maturity, it is best to evaluate the practices.
Different maturity models on information security and cyber security capacity, management processes, security controls, implementation level, and more have already been developed by numerous international organizations, experts, and scholars. The present models, however, do not assess a specific organization's actual practices. The evaluation of the business process is frequently neglected because measurement requirements for models are typically concentrated on examining specific elements. As a result, the maturity assessment is often not executed explicitly and broadly.
We developed an organizational information security maturity model, a combination of several existing maturity models. The model was designed so that organizations of any size or type can use it. The model considers the success elements of the information security management system when assessing the implementation's effectiveness. We employed a mixed-method strategy that included both qualitative and quantitative research. With the help of a questionnaire survey, we evaluated the previous research using a qualitative methodology. In the quantitative method, we determine the current maturity level of the information security management system. The proposed model could be used to reduce security incidents by closing implementation gaps.
Reaching customers with dialog marketing campaigns is becoming more and more difficult. This is a common problem of companies and marketing agencies worldwide: information overload, multi-channel communication and a confusing variety of offers make it hard to gain the attention of the target group. The contribution of this paper is four-fold: we provide an overview of the current state of print dialog marketing activities and trends (I). Based on this corpus, we identify the main key performance indicators of dialog marketing customer interaction (II). A qualitative user experience study identifies customer wishes and needs, focusing on lottery offers for senior citizens (III). Finally, we evaluate the success of two different dialog marketing campaigns with 20,000 clients and compare the key performance indicators of the original hands-on, experience-based print mailings with user-experience-tested and optimized mailings (IV).
Three real-lab trigeneration microgrids are investigated in non-residential environments (educational, office/administrational, companies/production) with a special focus on domain-specific load characteristics. For accurate load forecasting on such a local level, a priori information on scheduled events has been combined with statistical insight from historical load data (capturing information on consumer behavior that is not explicitly known). The load forecasts are then used as data input for (predictive) energy management systems that are implemented in the trigeneration microgrids. In real-world applications, these energy management systems must especially be able to carry out a number of safety and maintenance operations on components such as the battery (e.g. gassing) or the CHP unit (e.g. regular test runs). Therefore, energy management systems should combine heuristics with advanced predictive optimization methods. To reduce the effort in IT infrastructure, the main and safety-relevant management process steps are carried out on site using a Smart & Local Energy Controller (SLEC), assisted by locally measured signals or operator-given information as default and external inputs for any advanced optimization. Heuristic aspects for local fine adjustment of energy flows are presented.
Besides conventional CAD systems, new cloud-based CAD systems have also been available for some years. These CAD systems, designed according to the principle of software as a service (SaaS), differ in some important features from conventional CAD systems. They are operated via a browser, and it is not necessary to install the software on a computer. The CAD data is stored in the cloud rather than on a local computer or central server. This new approach should also facilitate the sharing and management of data. Finally, many of these new CAD systems are available free of charge for education purposes, so universities can save license costs. The chances and risks of cloud-based systems are first analyzed in this paper. Then two leading cloud-based CAD systems are examined. In the process, the technical performance range these new systems offer for product development is checked and reviewed. For this purpose, various criteria are worked out and the CAD software is evaluated against these criteria. In addition, the criteria are weighted by their importance for design education. This allows conclusions about which capabilities the different CAD systems offer for use in education.
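A weighted criteria evaluation of the kind described above can be sketched in a few lines. Criteria names, weights, and scores below are hypothetical placeholders, not the paper's actual evaluation data:

```python
# Minimal sketch of a weighted criteria evaluation. All criteria, weights,
# and scores are made-up illustrations, not the paper's data.

def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted sum of criterion scores, normalized by the total weight."""
    total_w = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_w

weights = {"modeling": 3, "data management": 2, "collaboration": 2, "cost": 1}
cad_a = {"modeling": 8, "data management": 7, "collaboration": 9, "cost": 10}
cad_b = {"modeling": 9, "data management": 6, "collaboration": 7, "cost": 8}

print(weighted_score(cad_a, weights), weighted_score(cad_b, weights))
```

Adjusting the weights (e.g. emphasizing collaboration for design education) changes the ranking, which is exactly the lever the evaluation uses.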
Introduction: Cardiac resynchronization therapy (CRT) with biventricular pacing (BV) is an established therapy for heart failure (HF) patients (P) with ventricular desynchronisation, but not all patients improve clinically. The aim of this study was to evaluate electrical intra-left ventricular conduction delay (LVCD) and interventricular conduction delay (IVCD) to better select patients for CRT.
Methods: 65 HF patients (age 63.4 ± 10.6 years; 7 females, 58 males) with New York Heart Association (NYHA) class 3 ± 0.2, 24.4 ± 6.7 % left ventricular (LV) ejection fraction and 167.4 ± 35.6 ms QRSD were included. An esophageal TO Osypka catheter with focused hemispherical electrodes was applied perorally in the position of maximum LV deflection to measure LVCD between onset and offset of LV deflection, and IVCD between the earliest onset of QRS in the 12-channel surface ECG and the onset of LV deflection in the focused bipolar transesophageal LV electrogram.
Results: There were 50 responders with LVCD of 76.5 ± 20.4 ms, IVCD of 80.5 ± 26.1 ms (P = 0.34) and QRSD of 171 ± 37.7 ms. 15 non-responders had longer LVCD of 90 ± 28.5 ms (P = 0.045), shorter IVCD of 50.1 ± 29.1 ms (P < 0.001) and QRSD of 155.3 ± 25 ms (P = 0.14). During 21.3 ± 20.3 months of BV pacing follow-up, the responders' NYHA classes improved from 3 ± 0.2 to 2. ± 0.3 (P < 0.001), whereas the non-responders' NYHA classes did not improve (3 ± 0.2 to 2.9 ± 0.3, P = 0.43) during 15.7 ± 13.9 months of BV pacing follow-up (53 Boston, 10 Medtronic and 2 St. Jude CRT devices).
Conclusion: Determination of electrical LVCD and IVCD by focused bipolar transesophageal LV electrogram recording may be an additional useful technique to improve patient selection for CRT.
Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm either require some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present an unsupervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method is based on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the required appearance invariances for the tracking task at hand and to train an autoencoder to generate suitable latent representations. The resulting latent representations can thus serve as robust appearance cues for tracking, even over large temporal distances where no reliable spatio-temporal features can be extracted. We show that, despite being trained without using the provided annotations, our model provides competitive results on the challenging MOT Benchmark for pedestrian tracking.
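One simple example of the kind of spatio-temporal cue mentioned above: two detections in neighboring frames whose bounding boxes overlap strongly are likely the same object and can be grouped without any labels. A minimal sketch (boxes and threshold are illustrative, not the paper's exact cue definition):

```python
# Illustrative sketch of a spatio-temporal association cue via bounding-box
# intersection-over-union (IoU). Boxes are (x1, y1, x2, y2); the example
# detections and the 0.5 threshold are assumptions for illustration.

def iou(a, b) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

det_t = (10, 10, 50, 90)     # detection in frame t
det_t1 = (14, 12, 54, 92)    # detection in frame t+1, slightly shifted
print(iou(det_t, det_t1) > 0.5)  # strong overlap -> likely the same pedestrian
```

Pairs grouped this way can then serve as free training signal for the appearance model.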
Deep generative models have recently achieved impressive results for many real-world applications, successfully generating high-resolution and diverse samples from complex datasets. Owing to this progress, fake digital content has proliferated, raising concern and spreading distrust in image content and leading to an urgent need for automated ways to detect these AI-generated fake images.
Despite the fact that many face editing algorithms seem to produce realistic human faces, upon closer examination they do exhibit artifacts in certain domains which are often hidden to the naked eye. In this work, we present a simple way to detect such fake face images - so-called DeepFakes. Our method is based on a classical frequency domain analysis followed by a basic classifier. Compared to previous systems, which need to be fed with large amounts of labeled data, our approach showed very good results using only a few annotated training samples and even achieved good accuracies in fully unsupervised scenarios. For the evaluation on high-resolution face images, we combined several public datasets of real and fake faces into a new benchmark: Faces-HQ. Given such high-resolution images, our approach reaches a perfect classification accuracy of 100% when it is trained on as few as 20 annotated samples. In a second experiment, on the medium-resolution images of the CelebA dataset, our method achieves 100% accuracy in a supervised and 96% in an unsupervised setting. Finally, evaluating low-resolution video sequences of the FaceForensics++ dataset, our method achieves 91% accuracy detecting manipulated videos.
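A common way to realize the frequency domain analysis described above is to take the 2D power spectrum of an image and reduce it to a 1D profile by azimuthal averaging, which then feeds a basic classifier. A hedged sketch with NumPy; the synthetic noise "image" below stands in for real data such as the Faces-HQ benchmark:

```python
# Hedged sketch of a frequency-domain feature: azimuthally averaged log
# power spectrum of an image. The toy input is synthetic noise, not data
# from the paper's benchmarks.
import numpy as np

def azimuthal_average(image: np.ndarray) -> np.ndarray:
    """1D radial profile of the log power spectrum of a square gray image."""
    spec = np.fft.fftshift(np.fft.fft2(image))
    power = np.log(np.abs(spec) ** 2 + 1e-12)
    h, w = power.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    # mean power at each integer radius from the spectrum center
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
profile = azimuthal_average(img)
print(profile.shape)  # one value per integer radius
```

Generated images tend to show characteristic deviations in the high-frequency tail of such profiles, which is what a simple classifier can pick up.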
The COVID-19 pandemic, a unique and devastating respiratory disease outbreak, has affected global populations as the disease spreads rapidly. Recent deep learning breakthroughs may improve COVID-19 prediction and forecasting as tools for precise and fast detection; however, current methods are still being examined to achieve higher accuracy and precision. This study analyzed a collection of 8,055 CT image samples, 5,427 of which were COVID cases and 2,628 non-COVID. The 9,544 X-ray samples included 4,044 COVID patients and 5,500 non-COVID cases. The most accurate models are MobileNetV3 (97.872 percent), DenseNet201 (97.567 percent), and GoogleNet Inception V1 (97.643 percent). High accuracy indicates that these models make many correct predictions; precision and recall are also high for MobileNetV3 and DenseNet201. An extensive evaluation using accuracy, precision, and recall allows a comprehensive comparison, and the predictive models are improved by combining loss optimization with scalable batch normalization in this study. Our analysis shows that these tactics improve model performance and resilience for COVID-19 prediction and detection, and demonstrates how deep learning can improve disease handling. The methods we suggest would help healthcare systems, policymakers, and researchers make informed decisions to reduce COVID-19 and other contagious diseases.
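The evaluation metrics named above are standard confusion-matrix quantities. A small sketch with made-up counts, purely to show how the comparison is computed (not the study's results):

```python
# Sketch of the accuracy/precision/recall evaluation; the confusion-matrix
# counts are invented for illustration, not the study's data.

def metrics(tp: int, fp: int, fn: int, tn: int):
    """Accuracy, precision, and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

acc, prec, rec = metrics(tp=530, fp=12, fn=11, tn=520)
print(round(acc, 3), round(prec, 3), round(rec, 3))
```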
The University for Children is a very successful event aiming to spark children’s interest in science, in this particular lecture in Optics and Photonics. It is from brain research that we know about the significant dependence of successful learning on the fun factor. Researchers in this field have shown that knowledge acquired with fun is stored for a longer time in the long-term memory and can be used both more efficiently and more creatively [1], [2]. Such an opportunity to inspire the young generation for science must not be wasted. The world of Photonics and Optics provides us with a nearly inexhaustible source of opportunities of this kind.
Convolutional neural networks (CNN) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. Thereby, a small “detector” is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and light-weight detector, which leverages recent findings on the relation between networks’ local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state-of-the-art on adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources available at: https://github.com/adverML/multiLID
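Detectors of this kind build on the maximum-likelihood estimate of local intrinsic dimensionality from nearest-neighbor distances (Levina-Bickel / Amsaleg et al.). A small sketch on synthetic data; this is not the multiLID implementation itself:

```python
# Hedged sketch of the maximum-likelihood LID estimator from k nearest-
# neighbor distances; the data cloud below is synthetic, not network
# activations, and this is not the paper's multiLID code.
import numpy as np

def lid_mle(point: np.ndarray, data: np.ndarray, k: int = 20) -> float:
    """MLE of local intrinsic dimensionality at `point`:
    LID = -( (1/k) * sum_i log(r_i / r_k) )^-1,
    where r_1 <= ... <= r_k are the k smallest distances to `point`."""
    d = np.sort(np.linalg.norm(data - point, axis=1))[:k]
    return -1.0 / np.mean(np.log(d / d[-1]))

rng = np.random.default_rng(1)
data = rng.standard_normal((2000, 5))   # cloud with true dimensionality 5
est = lid_mle(np.zeros(5), data, k=50)
print(est)  # should land near the true dimensionality of 5
```

Adversarial examples tend to sit in regions of anomalously high LID relative to clean data, which is the signal such detectors exploit.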
Purpose: Participation and accessibility issues faced by gamers with multi-sensory disabilities are themes yet to be fully understood by accessible technology researchers. In this work, we examine the personal experiences and perceptions of individuals with deafblindness who play games despite their disability, as well as the reasons that lead some of them to stop playing games.
Materials and methods: We conducted 60 semi-structured interviews with individuals living with deafblindness in five European countries: United Kingdom, Germany, Netherlands, Greece and Sweden.
Results: Participants stated that reasons for playing games included them being a fun and entertaining hobby, socialization and meeting others, and occupying the mind. Reasons for stopping play essentially included accessibility issues, followed by high cognitive demand, changes in gaming experience due to their disability, financial reasons, or because the accessible version of a specific game was not considered as fun as the original one.
Conclusions: We identified that a considerable number of individuals with deafblindness enjoy playing casual mobile games such as Wordfeud and Sudoku as a pastime activity. Despite challenging accessibility issues, games provide meaningful social interactions to players with deafblindness. Finally, we introduce a set of user-driven recommendations for making digital games more accessible to players with a diverse combination of sensory abilities.
IMPLICATIONS FOR REHABILITATION
- Digital games were considered a fun and entertaining hobby by participants with deafblindness. Furthermore, participants play games for socialization and meeting others, or for occupying the mind.
- Digital games provide meaningful social interactions and a pastime to persons with deafblindness.
- On top of accessibility implications, our findings draw attention to the importance of the social element of gaming for persons with deafblindness.
- Based on interviews, we introduce a set of user-driven recommendations for making digital games more accessible to players with a diverse combination of sensory abilities.
This article presents a study of cultural differences affecting the acceptance and design preferences of social robots. Based on a survey with 794 participants from Germany and the three Arab countries of Egypt, Jordan, and Saudi Arabia, we discuss how culture influences the preferences for certain attributes. We look at social roles, abilities and appearance, emotional awareness and interactivity of social robots, as well as the attitude toward automation. Preferences were found to differ not only across cultures, but also within countries with similar cultural backgrounds. Our findings also show a nuanced picture of the impact of previously identified culturally variable factors, such as attitudes toward traditions and innovations. While the participants’ perspectives toward traditions and innovations varied, these factors did not fully account for the cultural variations in their perceptions of social robots. In conclusion, we believe that more real-life practices emerging from the situated use of robots should be investigated. Besides focusing on the impact of broader cultural values such as those associated with religion and traditions, future studies should examine how users interact, or avoid interaction, with robots within specific contexts of use.
Non-contact anterior cruciate ligament injuries typically occur during cutting maneuvers and are associated with high peak knee abduction moments (KAM) within early stance. To screen athletes for injury risk or quantify the efficacy of prevention programs, it may be necessary to design tasks that mimic game situations. Thus, this study compared KAMs and ranking consistency of female handball players in three sport-specific fake-and-cut tasks of increasing complexity. The biomechanics of female handball players (n = 51, mean ± SD: 66.9 ± 7.8 kg, 1.74 ± 0.06 m, 19.2 ± 3.4 years) were recorded with a 3D motion capture system and force plates during three standardized fake-and-cut tasks. Task 1 was designed as a simple pre-planned cut, task 2 included catching a ball before a pre-planned cut in front of a static defender, and task 3 was designed as an unanticipated cut with three dynamic defenders involved. Inverse dynamics were used to calculate peak KAM within the first 100 ms of stance. KAM was decomposed into the frontal plane knee joint moment arm and resultant ground reaction force. RANOVAs (α ≤ 0.05) were used to reveal differences in the KAM magnitudes, moment arm, and resultant ground reaction force for the three tasks. Spearman's rank correlations were calculated to test the ranking consistency of the athletes' KAMs. There was a significant task main effect on KAM (p = 0.02; ηp2 = 0.13). The KAM in the two complex tasks was significantly higher (task 2: 1.73 Nm/kg; task 3: 1.64 Nm/kg) than the KAM in the simplest task (task 1: 1.52 Nm/kg). The ranking of the peak KAM was consistent regardless of the task complexity. Comparing tasks 1 and 2, an increase in KAM resulted from an increased frontal plane moment arm. Comparing tasks 1 and 3, higher KAM in task 3 resulted from an interplay between both moment arm and the resultant ground reaction force. In contrast to previous studies, unanticipated cutting maneuvers did not produce the highest KAMs. 
These findings indicate that the players have developed an automated sport-specific cutting technique that is utilized in both pre-planned and unanticipated fake-and-cut tasks.
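The KAM decomposition used in the study is the product of the resultant ground reaction force and its frontal-plane moment arm about the knee, normalized to body mass. A minimal sketch with illustrative numbers (not the study's raw data):

```python
# Sketch of the mass-normalized knee abduction moment (KAM) decomposition;
# the force, moment arm, and body mass below are illustrative values only.

def knee_abduction_moment(grf_n: float, moment_arm_m: float,
                          mass_kg: float) -> float:
    """Mass-normalized KAM in Nm/kg: resultant GRF x frontal-plane moment arm."""
    return grf_n * moment_arm_m / mass_kg

# e.g. a 2000 N resultant GRF acting 5.1 cm from the knee joint center in
# the frontal plane, for a 66.9 kg player
kam = knee_abduction_moment(2000.0, 0.051, 66.9)
print(round(kam, 2))  # Nm/kg, in the range reported for the cutting tasks
```

The decomposition makes clear that a task can raise KAM via a larger moment arm, a larger resultant force, or both, which is exactly the comparison drawn between the tasks.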
In this study, an approach to a microwave-based radar system for the localization of objects is proposed. This could be particularly useful in microwave imaging applications such as cardiac catheter detection. An experimental system is defined and realized with the selection of an appropriate antenna design. Hardware control functions and different imaging algorithms are implemented as well. The functionality of this measurement setup has been analyzed in multiple test scenarios, and it proved capable of locating multiple objects as well as extended objects.
A 2D separation of 16 polyaromatic hydrocarbons (PAHs) according to the Environmental Protection Agency (EPA) standard was introduced. Separation took place on a TLC RP-18 plate (Merck, 1.05559). In the first direction, the plate was developed twice using n-pentane at −20°C as the mobile phase. The mixture acetonitrile-methanol-acetone-water (12:8:3:3, v/v) was used for developing the plate in the second direction. Both developments were carried out over a distance of 43 mm. Furthermore, this publication presents a specific and very sensitive indication method for benzo[a]pyrene and perylene. The method can detect these hazardous compounds even in complicated PAH mixtures. These compounds can be quantified by a simple chemiluminescent reaction with a limit of detection (LOD) of 48 pg per band for perylene and 95 pg per band for benzo[a]pyrene. Although these compounds were separated from all other PAHs in the standard, they could not be separated from one another. The method is suitable for tracing benzo[a]pyrene and/or perylene. The proposed chemiluminescence screening test for PAHs is extremely sensitive but may indicate a false positive result for benzo[a]pyrene.
We present a two-dimensional (2D) planar chromatographic separation method for phytoestrogenic active compounds on an RP-18 W (Merck, 1.14296) phase. It could be shown that an ethanolic extract of liquorice (Glycyrrhiza glabra) roots contains four phytoestrogenic active compounds. In the first direction, a mixture of hexane, ethyl acetate, and acetone (45:15:10, v/v) was used as solvent; in the second direction, a mixture of acetone and water (15:10, v/v). After separation, a modified yeast estrogen screen (YES) test was applied, using the yeast strain Saccharomyces cerevisiae BJ3505. The test strain (according to McDonnell) contains the estrogen receptor. Its activation by estrogen-active compounds is measured by induction of the reporter gene lacZ, which encodes the enzyme β-galactosidase. This enzyme activity is determined on the plate using the fluorescent substrate MUG (4-methylumbelliferyl-β-d-galactopyranoside). The enzyme can also hydrolyse X-β-Gal (5-bromo-4-chloro-3-indoxyl-β-d-galactopyranoside) into β-galactose and 5-bromo-4-chloro-3-indoxyl. The indoxyl compound is oxidized by oxygen, forming the deep-blue dye 5,5′-dibromo-4,4′-dichloro-indigo, which allows phytoestrogenic activity to be detected more specifically in the presence of natively fluorescing compounds.
We present a two-dimensional (2D) planar chromatographic separation of estrogenic active compounds on an RP-18 W (Merck, 1.14296) phase. A mixture of 8 substances was separated using a solvent mix consisting of hexane, ethyl acetate, and acetone (55:15:10, v/v) in the first direction and of acetone and water (15:10, v/v) in the second direction. Separation was performed on an RP-18 W plate over a distance of 70 mm. This 2D separation method can be used to quantify 17α-ethinylestradiol (EE2) in an effect-directed analysis, using the yeast strain Saccharomyces cerevisiae BJ3505. The test strain (according to McDonnell) contains the estrogen receptor. Its activation by estrogen-active compounds is measured by induction of the reporter gene lacZ, which encodes the enzyme β-galactosidase. This enzyme activity is determined on the plate using the fluorescent substrate MUG (4-methylumbelliferyl-β-d-galactopyranoside).
The state-of-the-art electrochemical impedance spectroscopy (EIS) calculations have not yet started from fully multi-dimensional modeling. For a polymer electrolyte membrane fuel cell (PEMFC) with a long flow channel, the impedance plot shows a multi-arc characteristic, and some impedance arcs can merge. Using a step excitation/Fourier transform algorithm, an EIS simulation is implemented for the first time based on the full 2D PEMFC model presented in the first part of this work. All the dominant transient behaviors can be captured. A novel methodology called ‘configuration of system dynamics’, which is suitable for any electrochemical system, is then developed to resolve the physical meaning of the impedance spectra. In addition to the high-frequency arc due to charge transfer, the Nyquist plots contain additional medium/low-frequency arcs due to mass transfer in the diffusion layers and along the channel, as well as a low-frequency arc resulting from water transport in the membrane. In some cases, the impedance spectra appear partly inductive due to water transport, which demonstrates the complexity of the water management of PEMFCs and the necessity of physics-based calculations.
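The step-excitation/Fourier-transform idea can be illustrated on a simple circuit: apply a current step, record the voltage response, and obtain the impedance as the ratio of the two spectra. A hedged sketch on a synthetic RC equivalent circuit rather than the PEMFC model; all component values are illustrative:

```python
# Hedged sketch of EIS from a step response via Fourier transform, shown on
# a synthetic R_ohm + (R_ct || C_dl) circuit with made-up parameter values;
# this is not the paper's 2D PEMFC model.
import numpy as np

R_ohm, R_ct, C_dl = 0.2, 0.5, 0.1   # hypothetical ohmic R, charge-transfer R, double-layer C
dt, n = 1e-3, 1 << 16
t = np.arange(n) * dt
di = 1.0                             # current step amplitude (A)

# analytic voltage response of the circuit to the current step
dv = di * (R_ohm + R_ct * (1.0 - np.exp(-t / (R_ct * C_dl))))

# differencing turns the step into an impulse, making both spectra well defined
i_imp = np.zeros(n)
i_imp[0] = di
v_imp = np.diff(dv, prepend=0.0)

Z = np.fft.rfft(v_imp) / np.fft.rfft(i_imp)   # impedance spectrum
f = np.fft.rfftfreq(n, dt)                    # corresponding frequency axis

# low-frequency limit ≈ R_ohm + R_ct; high-frequency limit ≈ R_ohm
print(round(Z[1].real, 2))  # 0.7
```

The same ratio-of-spectra step, applied to a transient simulation of the full cell model, yields the multi-arc Nyquist plots discussed above.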
A two-dimensional single-phase model is developed for the steady-state and transient analysis of polymer electrolyte membrane fuel cells (PEMFCs). Based on diluted and concentrated solution theories, viscous flow is introduced into a phenomenological multi-component modeling framework in the membrane. Characteristic variables related to the water uptake are discussed. A Butler–Volmer formulation of the current-overpotential relationship is developed based on an elementary mechanism of electrochemical oxygen reduction. Validated against published V–I experiments, the model is then used to analyze the effects of operating conditions on current output and water management, especially the net water transport coefficient along the channel. For a high-power PEMFC operated in counterflow mode with appropriate gas flow rate and humidity, the long-channel configuration is helpful for internal humidification and anode water removal. In the time domain, a typical transient process with a closed anode is also investigated.
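The current-overpotential relationship mentioned above is, in its generic textbook form, the Butler–Volmer equation; the paper derives its own variant from an elementary oxygen-reduction mechanism. A minimal sketch of the generic form, with illustrative parameter values:

```python
# Generic Butler-Volmer current-overpotential relation (textbook form).
# The paper derives a variant from an elementary oxygen-reduction mechanism;
# this sketch only shows the generic structure, with illustrative parameters.
import math

F = 96485.0    # C/mol, Faraday constant
R = 8.314      # J/(mol K), gas constant

def butler_volmer(eta, i0=1e-3, alpha_a=0.5, alpha_c=0.5, T=353.15,
                  c_ratio=1.0):
    """Net current density (A/m^2) for overpotential eta (V).

    c_ratio scales the cathodic branch by a local-to-reference oxygen
    concentration ratio (an illustrative choice, not from the paper).
    """
    f = F / (R * T)
    return i0 * (math.exp(alpha_a * f * eta)
                 - c_ratio * math.exp(-alpha_c * f * eta))

# At zero overpotential with c_ratio = 1 the net current vanishes.
print(butler_volmer(0.0), butler_volmer(0.1))
```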
Practical bottlenecks associated with the commercialization of lithium-air cells include capacity limitation and low cycling efficiency. The origin of such losses can be traced to complex electrochemical side reactions and reactant mass transport losses [1]. Efforts to minimize such losses include the exploration of various electrolytes with additives [2] as well as cell component geometry and material design. Given the wide range of options for such materials, it is almost impractical to experimentally set up and characterize all those cells. Consequently, modeling and simulation studies are efficient alternatives for analyzing spatially and temporally resolved cell behavior for various combinations of materials [3]. In this study, with the help of a two-dimensional multiphysics model, we have focused on the effect of electrode and electrolyte interaction (electrochemistry), choice of electrolyte (species transport), and electrode geometry (electrode design) on the performance of a lithium-air button cell. Figure 1a shows the schematics of the 2D axisymmetric computational domain. A comparative analysis of five different electrolytes was performed while focusing on the 2D distribution of local current density and the concentration of electrochemically active species in the cell, that is, O2 and Li+. Using two different cathode configurations, namely a flooded electrode and a gas diffusion electrode (GDE) [4], at different cathode thicknesses, the effect of cell geometry and electrolyte saturation on cell performance was explored. Further, a detailed discussion on electrode volume utilization (cf. Figure 1b) is presented by tracking how the active cathode volume that produces 90% of the total current changes with cell current density for different combinations of electrolyte saturation and cathode thickness.
One of the practical bottlenecks associated with the commercialization of lithium-air cells is the choice of an appropriate electrolyte that provides the required combination of cell performance, cyclability, and safety. With the help of a two-dimensional multiphysics model, we attempt to narrow down the electrolyte choice by providing insights into the effect of the transport properties of the electrolyte, electrode saturation (flooded versus gas diffusion), and electrode thickness on the single-discharge performance of a lithium-air button cell cathode for five different electrolytes: water, ionic liquid, carbonate, ether, and sulfoxide. The 2D distribution of local current density and of the concentrations of electrochemically active species (O2 and Li+) in the cathode is also discussed with respect to electrode saturation. Furthermore, the efficacy of species transport in the cathode is quantified by introducing two parameters: first, a transport efficiency that gives local insight into the distribution of mass transfer losses, and second, an active electrode volume that gives global insight into cathode volume utilization at different current densities. A detailed discussion is presented toward understanding the design-induced performance limitations in a Li-air button cell prototype.
In this work, we consider a duty-cycled wireless sensor network in which the on/off schedules are uncoordinated. In such networks, not all nodes may be awake during the transmission of time-synchronization messages, so nodes must retransmit them. Ideally, a node should retransmit for the maximum sleep duration to ensure that all nodes are synchronized; however, this would immensely increase the nodes' energy consumption. This calls for an upper bound on the number of retransmissions. We refer to the time a node spends retransmitting the control message as the broadcast duration, and we ask: what broadcast duration ensures that a certain percentage of the available nodes is synchronized? The problem of estimating the broadcast duration is formulated so as to capture the probability threshold of the nodes being synchronized. Results show that the proposed analytical model predicts the broadcast duration within a small error margin under real-world conditions, demonstrating the efficiency of our solution.
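A minimal sketch of how such a broadcast-duration bound could be computed, under a deliberately simplified wake-up model of our own (each node wakes at a uniformly random offset within its sleep period T, so a broadcast repeated for duration d ≤ T reaches a given node with probability d/T); this is an illustration, not the paper's exact formulation:

```python
# Illustrative model (our own simplification, not the paper's formulation):
# a broadcast repeated for duration d <= T reaches a node with probability
# p = d / T.  We search for the smallest d such that at least a fraction q
# of n nodes is synchronized with probability >= p_min (binomial tail).
import math

def prob_coverage(n, p, q):
    """P(at least ceil(q*n) of n nodes hear the broadcast), X ~ Bin(n, p)."""
    k_min = math.ceil(q * n)
    return sum(math.comb(n, k) * p**k * (1.0 - p)**(n - k)
               for k in range(k_min, n + 1))

def broadcast_duration(n, T, q, p_min, step=0.001):
    """Smallest duration d (same unit as T) meeting the coverage target."""
    d = 0.0
    while d < T:
        if prob_coverage(n, d / T, q) >= p_min:
            return d
        d += step * T
    return T   # d = T guarantees every node wakes during the broadcast

# e.g. 50 nodes, 10 s sleep period, 90 % coverage with 95 % confidence
print(broadcast_duration(50, 10.0, 0.90, 0.95))
```

Raising the coverage fraction q or the confidence p_min pushes the required broadcast duration toward the full sleep period, which is exactly the energy trade-off the abstract describes.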
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and into how trust management systems (TMS) can mitigate malicious access to these networks. The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and to make access decisions accordingly. While presented in a blockchain setting, the approach can also be extended to other areas. It can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation that demonstrates the behavior of the TMS and provides insights into its effectiveness.
In this paper we propose a motion framework for bipedal robots that decouples motion definitions from stabilizing the robot. This simplifies motion definitions yet allows dynamic motion adaptations. Two applications, walking and stopping on one leg, demonstrate the power of the framework. We show that our framework is able to perform walking and stopping on one leg even under extreme conditions and improves walking benchmarks significantly in the RoboCup 3D soccer simulation domain.
Process engineering (PE) focuses on the design, operation, control, and optimization of chemical, physical, and biological processes and has applications in many industries. Process intensification (PI) is the key development approach in modern process engineering. The theory of inventive problem solving (TRIZ) is today considered the most comprehensive and systematically organized invention knowledge and creative thinking methodology. This paper analyses the opportunities for TRIZ application in PE, especially in combination with PI. In this context, the paper outlines the major challenges for TRIZ application in PE, conceptualizes a possible TRIZ-based approach for process intensification and problem solving in PE, and defines the corresponding research agenda. It also presents the results of original empirical innovation research in the field of solids handling in the ceramic industry, demonstrates a method for the identification and prediction of contradictions, and introduces the concept of the probability of contradiction occurrence. Additionally, it describes a technique of process mapping based on function and multi-screen analysis of the processes. This technique is illustrated by a case study dealing with a granulation process. The research work presented in this paper is part of the European project “Intensified by Design® platform for the intensification of processes involving solids handling”.
TRIZ Inventive Principles
(2022)
The analysis of several thousand patents led to the conclusion that inventive engineering problems and technical contradictions in all kinds of industrial sectors can be solved by a limited number of basic Inventive Principles (Altshuller, 1984). The modern Theory of Inventive Problem Solving TRIZ (VDI 4521) contains 40 basic Inventive Principles (IP). These principles are simple to use or modify and can be easily integrated into brainstorming or an engineer's daily work. One established part of industrial practice is the composition of specific groups of principles for solving different kinds of problems (Livotov, Petrov, 2011). Based on interdisciplinary experience of TRIZ application in industrial companies over the last 25 years, a general order of application of the 40 Inventive Principles can be recommended for idea generation and problem solving (Livotov, Chandra, Mas'udah et al., 2019). This brochure presents an update of the 40 Inventive Principles, extending the original version (Altshuller, 1984) with 70 additional sub-principles, resulting in an advanced set of 160 sub-principles regarded as elementary inventive operators. This extended version of the inventive principles finds its application in the AIDA Automatic IDEA & IP Generator https://www.tris-europe.com/eng/software/innovationssoftware.htm
With recent developments in the Ukrainian-Russian conflict, many are discussing Germany's dependency on fossil fuel imports in its energy system and how the country can reduce that dependency. Among its wide-ranging consumption sectors, the electricity sector is the natural place to start. Recent reports show that the German federal government already intends to achieve fully renewable electricity by 2035 while exploiting all possible clean power options. This was published in the federal government's climate emergency program (Easter Package) in early 2022, whose aim is to initiate a rapid transition and decarbonization of the electricity sector. The Easter Package foresees enormous growth of renewable energies to a completely new level, targeting at least an 80% renewable share of gross electricity consumption, with extensive and broad deployment of different generation technologies on various scales. This paper discusses this ambitious plan, outlines insights into this huge and rapidly accelerating step, and shows how much Germany will need in order to achieve this milestone towards a fully green supply of the electricity sector. Different scenarios and shares of renewables are investigated in order to elaborate on the brought-forward climate-neutrality goal for the electricity sector by 2035. The results point out promising aspects of achieving 100% renewable power, with massive investments in both generation and storage technologies.
Transthoracic impedance cardiography (ICG) is a non-invasive method for the determination of hemodynamic parameters. The basic principle of transthoracic ICG is the measurement of the electrical conductivity of the thorax over time. The aim of the study was the analysis of hemodynamic parameters from healthy individuals and the evaluation of various hemodynamic monitoring devices. Fourteen men (mean age 25 ± 4.59 years) and twelve women (mean age 24 ± 3.5 years) were measured in the cardiovascular engineering laboratory at Offenburg University of Applied Sciences, Offenburg, Germany. The ICG recordings were acquired with the devices CardioScreen 1000, CardioScreen 2000, and TensoScreen, using the corresponding software Cardiovascular Lab 2.5 (Medis Medizinische Messtechnik GmbH, Ilmenau, Germany). In order to create identical test conditions, all measurements were recorded in the same position and for the same duration. Various positions were examined, from horizontal lying to vertical standing. Altogether, more than 30 hemodynamic parameters were measured.
Artificial Intelligence (AI) can potentially transform many aspects of modern society in various ways, including automation of tasks, personalization of products and services, diagnosis of diseases and their treatment, transportation, and safety and security in public spaces. Recently, AI technology has been transforming the financial industry, offering new ways to analyse data and automate processes, reduce costs, increase efficiency, and provide more personalized services to customers. However, it has also raised important ethical and regulatory questions that need to be addressed by the industry and society as a whole. The aim of the Erasmus+ project Transversal Skills in Applied Artificial Intelligence - TSAAI (KA220-HED - Cooperation Partnerships in higher education) has been to establish a training platform that incorporates teaching guidelines based on a curriculum covering different areas of application of AI technology. In this work, we focus on applying AI models in the financial and insurance sectors.
Deep learning approaches are becoming increasingly important for estimating the Remaining Useful Life (RUL) of mechanical elements such as bearings. This paper proposes and evaluates a novel transfer learning-based approach for RUL estimation of different bearing types with small datasets and low sampling rates. The approach is based on an intermediate domain that abstracts features of the bearings based on their fault frequencies. The features are processed by convolutional layers. Finally, the RUL estimation is performed using a Long Short-Term Memory (LSTM) network. The transfer learning relies on fixed-feature extraction. This novel deep learning approach successfully uses data from a low-frequency range, which is a precondition for using low-cost sensors. It is validated against the IEEE PHM 2012 Data Challenge, where it outperforms the winning approach. The results show its suitability for low-frequency sensor data and for efficient and effective transfer learning between different bearing types.
The last decades have seen the evolution of industrial production into more sophisticated processes. The development of specialized, high-end machines has increased the importance of predictive maintenance of mechanical systems to produce high-quality goods and avoid machine breakdowns. Predictive maintenance has two main objectives: to classify the current status of a machine component and to predict the maintenance interval by estimating its remaining useful life (RUL). Nowadays, both objectives are covered by machine learning and deep learning approaches, which require large training datasets that are often not available. One possible solution is transfer learning, where the knowledge of a larger dataset is transferred to a smaller one. This thesis is primarily concerned with transfer learning for predictive maintenance, covering fault classification and RUL estimation. The first part presents state-of-the-art machine learning techniques with a focus on those applicable to predictive maintenance tasks (Chapter 2). This is followed by a presentation of the machine tool background and of current research that applies the previously explained machine learning techniques to predictive maintenance tasks (Chapter 3). One novelty of this thesis is that it introduces a new intermediate domain that represents the data by focusing on the relevant information, allowing the data to be used across different domains without losing that information (Chapter 4). The proposed solution is optimized for rotating elements. Therefore, the presented intermediate domain creates different layers by focusing on the fault frequencies of the rotating elements. Another novelty of this thesis is its semi- and unsupervised transfer learning-based fault classification approach for different component types under different process conditions (Chapter 5). It is based on the intermediate domain utilized by a convolutional neural network (CNN).
In addition, a novel unsupervised transfer learning loss function is presented based on the maximum mean discrepancy (MMD), one of the state-of-the-art algorithms. It extends the MMD by considering the intermediate domain layers; therefore, it is called layered maximum mean discrepancy (LMMD). Another novelty is an RUL estimation transfer learning approach for different component types based on the data of accelerometers with low sampling rates (Chapter 6). It applies the feature extraction concepts of the classification approach: the presented intermediate domain and the convolutional layers. The features are then used as input for a long short-term memory (LSTM) network. The transfer learning is based on fixed feature extraction, where the trained convolutional layers are taken over. Only the LSTM network has to be trained again. The intermediate domain supports this transfer learning type, as it should be similar for different component types. In addition, it enables the practical usage of accelerometers with low sampling rates during transfer learning, which is an absolute novelty. All presented novelties are validated in detailed case studies using the example of bearings (Chapter 7). In doing so, their superiority over state-of-the-art approaches is demonstrated.
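The fault frequencies around which such an intermediate domain centers its layers follow from standard bearing kinematics. A sketch using the textbook formulas, with illustrative parameter values (this is not the thesis' exact intermediate-domain construction):

```python
# Characteristic bearing fault frequencies (standard kinematic formulas,
# not the thesis' exact intermediate-domain construction): the frequencies
# around which fault-frequency layers could be formed.
import math

def fault_frequencies(fr, n_balls, d_ball, d_pitch, contact_deg=0.0):
    """Return (BPFO, BPFI, BSF, FTF) in Hz for shaft frequency fr (Hz)."""
    r = (d_ball / d_pitch) * math.cos(math.radians(contact_deg))
    bpfo = 0.5 * n_balls * fr * (1.0 - r)                # outer-race pass
    bpfi = 0.5 * n_balls * fr * (1.0 + r)                # inner-race pass
    bsf = 0.5 * (d_pitch / d_ball) * fr * (1.0 - r * r)  # ball spin
    ftf = 0.5 * fr * (1.0 - r)                           # cage (train)
    return bpfo, bpfi, bsf, ftf

# Illustrative geometry: 8 balls, 7.9 mm ball diameter, 34.5 mm pitch
# diameter, 30 Hz shaft speed.
print(fault_frequencies(30.0, 8, 7.9, 34.5))
```

Because these frequencies scale with the bearing geometry and shaft speed, features centered on them look similar across component types, which is what makes the intermediate domain transferable.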
Cardiac resynchronization therapy (CRT) with biventricular pacing (BV) is an established therapy for heart failure (HF) patients with inter- and intraventricular conduction delay. The aim of this pilot study was to test the feasibility of both transesophageal measurement of left ventricular (LV) electrical delay and transesophageal LV pacing prior to implantation, to better select patients for CRT.
Introduction: Cardiac resynchronization therapy (CRT) with biventricular pacing is an established therapy for heart failure (HF) patients with sinus rhythm and ventricular desynchronisation. The aim of this study was to evaluate interventricular conduction delay (IVCD) and interatrial conduction delay (IACD) before and after premature ventricular contractions (PVC) in HF patients.
Methods: 13 HF patients (age 68 ± 10 years; 2 female, 11 male) with New York Heart Association functional class 2.8 ± 0.5, left ventricular (LV) ejection fraction 28.6 ± 12.6%, QRS duration 154 ± 25 ms, and PVC were analysed with bipolar transesophageal LV and left atrial electrogram recording and National Instruments LabView 2009 software. The level of significance of the t-test is 0.005.
Results: QRS duration increases during PVC (188 ± 32 ms) in comparison to the beat before (154 ± 25 ms, P = …) and after PVC (152 ± 25 ms, P = …). IVCD increases during PVC up to 65 ± 33 ms (51 ± 19 ms in the beat before PVC, P = 0.18; 49 ± 19 ms after PVC, P = 0.12). Intra-LV delay is not different: 90 ± 16 ms in the beat before PVC, 90 ± 14 ms during PVC (P = 0.99), and 94 ± 16 ms in the beat after PVC (P = 0.38). IACD is not significantly influenced by PVC (67 ± 12 ms before PVC and 65 ± 13 ms after PVC, P = 0.71). Intra-left atrial conduction delay is not significantly longer during PVC (57 ± 28 ms) than in the beat before PVC (54 ± 13 ms, P = 0.51) or after PVC (54 ± 8 ms, P = 0.45). PQ duration increases significantly after PVC (224 ± 95 ms) in comparison to the beat before PVC (176 ± 29 ms, P = …).
Conclusion: Transesophageal left cardiac electrocardiography with LabView 2009 software can improve the evaluation of IVCD and IACD before, during, and after PVC when selecting HF patients for CRT.
Comparing anomalies and exceptions to multilateral dysfunction across a number of spheres of world politics, the book chapter explores pathways through and beyond gridlock in trade. It provides a vital new perspective on world politics as well as a practical guide for positive change in global policy.
Lithium-ion batteries exhibit a well-known trade-off between energy and power, which is problematic for electric vehicles, as they require both high energy during discharge (high driving range) and high power during charge (fast-charge capability). We use two commercial lithium-ion cells (high-energy [HE] and high-power) to parameterize and validate physicochemical pseudo-two-dimensional models. In a systematic virtual design study, we vary electrode thicknesses, cell temperature, and the type of charging protocol. We show that low anode potentials during charge, which induce lithium plating and cell aging, can be effectively avoided either by using high temperatures or by using a constant-current/constant-potential/constant-voltage charge protocol that includes a constant anode potential phase. We introduce and quantify a specific charging power as the ratio of discharged energy (at slow discharge) to required charging time (at fast charge). This value is shown to exhibit a distinct optimum with respect to electrode thickness. At 35°C, the optimum was achieved using an HE electrode design, yielding 23.8 Wh/(min L) volumetric charging power at 15.2 min charging time (10% to 80% state of charge) and 517 Wh/L discharge energy density. By analyzing the various overpotential contributions, we were able to show that electrolyte transport losses are dominantly responsible for the insufficient charge and discharge performance of cells with very thick electrodes.
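A back-of-the-envelope check of the quoted optimum, under our reading that the discharged energy in the ratio refers to the charged 10-80% SOC window (i.e. 70% of the 517 Wh/L full-discharge energy density):

```python
# Back-of-the-envelope check of the quoted optimum.  Our assumption: the
# discharged energy in the ratio refers to the 10-80 % SOC window, i.e.
# 70 % of the 517 Wh/L full-discharge energy density.
energy_density = 517.0          # Wh/L, slow full discharge
soc_window = 0.80 - 0.10        # fast-charge window, 10 % -> 80 % SOC
charge_time_min = 15.2          # minutes for the fast charge

specific_charging_power = energy_density * soc_window / charge_time_min
print(round(specific_charging_power, 1))   # Wh/(min L) -> 23.8
```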
We consider large-scale peer-to-peer sensor networks that try to calculate and distribute the mean value of all sensor inputs. To this end, we design, simulate, and evaluate distributed approximation algorithms that reduce the number of messages. The main difference between these algorithms is the underlying communication protocol; all use the random call model, where, in a discrete-round model, each node can call a random sensor node with uniform probability. The amount of data exchanged between sensor nodes and used in the calculation process affects the accuracy of the aggregation results, leading to a trade-off. The key idea of our algorithms is to limit the sample size using the Finite Population Correction (FPC) method and to collect the data via distributed aggregation using Push-Pull Sampling, Pull Sampling, and Push Sampling communication protocols. It turns out that all methods show an exponential improvement of the Mean Squared Error (MSE) with the number of messages and rounds.
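A minimal sketch of the push-pull variant under the random call model (our own simplification of the protocol, without the FPC sample-size limiting): in each discrete round every node calls one uniformly random node and both ends adopt the pair average, which preserves the global mean while the spread decays roughly exponentially with the number of rounds:

```python
# Minimal random-call push-pull averaging sketch (our simplification,
# without the FPC sample-size limiting described in the abstract).
import random

def push_pull_round(values, rng):
    """One discrete round: every node calls one uniformly random node
    and both ends replace their values by the pair average."""
    n = len(values)
    for i in range(n):
        j = rng.randrange(n)            # uniformly random callee
        avg = 0.5 * (values[i] + values[j])
        values[i] = values[j] = avg     # push-pull: both sides update

def mse(values):
    """Mean squared deviation from the current global mean."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

rng = random.Random(42)
values = [rng.uniform(0.0, 100.0) for _ in range(200)]
true_mean = sum(values) / len(values)
mse_start = mse(values)

for _ in range(10):
    push_pull_round(values, rng)
print(mse_start, mse(values))   # spread shrinks by orders of magnitude
```

Pair averaging leaves the sum (and hence the mean) invariant, so every node's value converges toward the true network mean; the pull-only and push-only variants differ in which side of the call updates.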
We propose a new streaming media service development environment comprising a streaming media service model, an XML-based service specification language, and several implementation and configuration management tools. In our project, the described approach is used to integrate streaming-based eLearning services into the manufacturing processes of a subcontractor to the automotive industry. The key components of our approach are (1) an XML-based streaming service specification language, (2) a set of web services for searching, registration, and creation of streaming services, and (3) caching and replication policies based on timing information derived from the service specifications.