Refine
Year of publication
- 2022 (106)
Document Type
- Conference Proceeding (106)
Conference Type
- Conference paper (87)
- Conference abstract (13)
- Conference poster (3)
- Other (3)
Is part of the Bibliography
- yes (106)
Keywords
- injury (10)
- Machine Learning (5)
- biomechanics (5)
- running (5)
- ACL (4)
- Robustness (4)
- Radar (3)
- RoboCup (3)
- sport (3)
- 3D printing (2)
Institute
- Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (ab 04/2019) (42)
- Fakultät Maschinenbau und Verfahrenstechnik (M+V) (31)
- Fakultät Wirtschaft (W) (22)
- Fakultät Medien (M) (ab 22.04.2021) (13)
- INES - Institut für nachhaltige Energiesysteme (13)
- ivESK - Institut für verlässliche Embedded Systems und Kommunikationselektronik (11)
- IMLA - Institute for Machine Learning and Analytics (10)
- ACI - Affective and Cognitive Institute (3)
- IUAS - Institute for Unmanned Aerial Systems (3)
- POIM - Peter Osypka Institute of Medical Engineering (2)
- CRT - Campus Research & Transfer (1)
Open Access
- Open Access (52)
- Closed (45)
- Bronze (29)
- Diamond (13)
- Closed Access (9)
- Green (6)
- Hybrid (2)
- Gold (1)
The isolation measures adopted during the COVID-19 pandemic brought to light discussions about the importance of meaningful social relationships as a basic need for human well-being. But even before the pandemic outbreak in the years 2020 and 2021, organizations and scholars were already drawing attention to the growing number of lonely people in the world (World Economic Forum, 2019). Loneliness is an emotional distress caused by the lack of meaningful social connections, which affects people worldwide across all age groups, especially young adults (Rook, 1984). The use of digital technologies has gained prominence as a means of alleviating this distress. As an example, studies have shown the benefits of using digital games both to stimulate social interactions (Steinfield, Ellison & Lampe, 2008) and to enhance, through gamification, the effects of digital interventions for mental health treatments (Fleming et al., 2017). It is with these aspects in mind that the gamified app Noneliness was designed, with the intention of reducing loneliness rates among young students at a German university. In addition to sharing the related work that supported the application's development, this chapter also presents the aspects considered in the resource's design, its main functionalities, and preliminary results on the reduction of loneliness in the target audience.
We aim to debate, and eventually be able to carefully judge, how realistic the following statement of a young computer scientist is: “I would like to become an ethically correctly acting offensive cybersecurity expert”. The objective of this article is neither to judge what constitutes good or wrong behavior nor to present an overall solution to ethical dilemmas. Instead, the goal is to become aware of the various personal moral dilemmas a security expert may face during their working life. For this, a total of 14 cybersecurity students from HS Offenburg were asked to evaluate several case studies according to different ethical frameworks. The results and their particularities are discussed in light of these frameworks. We emphasize that different ethical frameworks can lead to different preferred actions and that the moral understanding of the frameworks may differ even from student to student.
Sweaty has already participated several times in RoboCup soccer competitions (Adult Size). Current work focuses on stabilizing the gait. Moreover, we would like to overcome the constraints of a ZMP algorithm that requires a horizontal footplate as a precondition for simplifying the equations. In addition, we would like to switch between impedance and position control with a fuzzy-like algorithm that might help to minimize jerks when Sweaty’s feet touch the ground.
Generative machine learning models for creative purposes play an increasingly prominent role in the field of dance and technology. A particularly popular approach is the use of such models for generating synthetic motions. Such motions can either serve as a source of ideation for choreographers or control an artificial dancer that acts as an improvisation partner for human dancers. Several examples employ autoencoder-based deep-learning architectures that have been trained on motion capture recordings of human dancers. Synthetic motions are then generated by navigating the autoencoder's latent space. This paper proposes an alternative approach to using an autoencoder for creating synthetic motions. This approach controls the generation of synthetic motions on the level of the motion itself rather than its encoding. Two different methods are presented that follow this principle. Both methods are based on the interactive control of a single joint of an artificial dancer while the other joints remain under the control of the autoencoder. The first method combines the control of the orientation of a joint with iterative autoencoding. The second method combines the control of the target position of a joint with forward kinematics and the application of latent difference vectors. As an illustrative example of an artistic application, this latter method is used for an artificial dancer that plays a digital instrument. The paper presents the implementation of these two methods and provides some preliminary results.
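The flavor of the first method (single-joint control plus iterative autoencoding) can be sketched in a few lines. The "autoencoder" below is a stand-in, a random orthonormal linear projection, not the paper's trained deep model; joint count, latent size, and the controlled joint are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "autoencoder": a linear projection onto a random 3D subspace of an
# 8-channel pose vector (the real system uses a deep model trained on mocap data).
n_joints, n_latent = 8, 3
basis, _ = np.linalg.qr(rng.normal(size=(n_joints, n_latent)))

def encode(pose):
    return basis.T @ pose

def decode(z):
    return basis @ z

def constrained_pose(pose, joint, target, iters=50):
    """Hold one joint at an externally controlled value and let repeated
    autoencoding pull the remaining joints toward the learned manifold."""
    p = pose.copy()
    for _ in range(iters):
        p[joint] = target          # single-joint control by the user/dancer
        p = decode(encode(p))      # autoencoder re-projects the full pose
    p[joint] = target
    return p

out = constrained_pose(rng.normal(size=n_joints), joint=2, target=1.5)
```

After convergence the controlled joint holds its target while the remaining channels have settled onto the model's learned pose manifold.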
Subspace clustering aims to find all clusters in all subspaces of a high-dimensional data space. We present a massively data-parallel approach that can be run on graphics processing units. It extends a previous density-based method that scales well with the number of dimensions. Its main computational bottleneck consists of (sequentially) generating a large number of minimal cluster candidates in each dimension and using hash collisions to find matches of such candidates across multiple dimensions. Our approach parallelizes this process by removing the interdependencies between consecutive steps of the sequential generation process and by applying a very efficient parallel hashing scheme optimized for GPUs. This massive parallelization gives up to a 70x speedup for the bottleneck computation when it is replaced by our approach and run on current GPU hardware. We note that, depending on data size and choice of parameters, the parallelized part of the algorithm can account for different percentages of the overall runtime of the clustering process, and thus the overall clustering speedup may vary significantly between cases. However, even in our "worst-case" test, a small dataset where the computation makes up only a small fraction of the overall clustering time, our parallel approach still yields a speedup of more than 3x for the complete run of the clustering process. Our method could also be combined with parallelization of other parts of the clustering algorithm, with an even higher potential gain in processing speed.
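The generate-and-hash bottleneck described above can be illustrated with a small sequential sketch; the paper's contribution is the GPU-parallel version of exactly this step. Bin width, minimum point count, and the planted cluster are illustrative assumptions:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)

# Toy data: 200 points in 4 dimensions, with a planted cluster (points 0-29)
# that is dense in dimensions 0 and 2 only.
data = rng.uniform(0.0, 5.0, size=(200, 4))
data[:30, 0] = 7.5
data[:30, 2] = 7.4

def candidates_1d(column, width=1.0, min_pts=20):
    """Minimal 1D cluster candidates: sets of point IDs falling into a dense bin."""
    groups = defaultdict(set)
    for idx, b in enumerate(np.floor(column / width).astype(int)):
        groups[b].add(idx)
    return [pts for pts in groups.values() if len(pts) >= min_pts]

# Match candidates across dimensions by hashing their point-ID signatures:
# the same signature colliding in several dimensions marks a subspace cluster.
table = defaultdict(list)
for dim in range(data.shape[1]):
    for cand in candidates_1d(data[:, dim]):
        table[frozenset(cand)].append(dim)

subspace_clusters = {sig: dims for sig, dims in table.items() if len(dims) > 1}
```

The planted point set is recovered together with the subspace (dimensions 0 and 2) in which it clusters; the GPU method parallelizes the candidate generation and replaces the dictionary with a GPU-optimized hash table.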
Brand-related user-generated content allows companies to achieve several important objectives, such as increasing sales and creating higher user engagement. In this paper, a research framework is developed that provides an overview of the processes necessary to successfully use brand-related user-generated content. The framework also helps managers understand users' main motives for posting such content. Expert interviews were carried out to validate the research framework, and their results support it. Brand-related user-generated content can increase purchase intention and community engagement. From a user's perspective, the opportunity to interact with a brand and be featured on official brand channels can be seen as the main motivation for creating brand-related user-generated content.
Significant improvements in module performance are possible via the implementation of multi-wire electrodes. This is economically sound as long as the mechanical yield of the production is maintained. While flat ribbons have a relatively large contact area to exert forces onto the solar cell, wires with a round cross section reduce this contact area considerably – in theory to an infinitely thin line. Therefore, the local stresses induced by the electrodes may increase to the point where mechanical production yields suffer unacceptably.
In this paper, we assess this issue by an analytical mechanical model as well as experiments with an encapsulant-free N.I.C.E. test setup. From these, we can derive estimations for the relationship between lay-up accuracy and expected breakage losses. This paves the way for cost-optimized choices of handling equipment in industrial N.I.C.E.-wire production lines.
Micronization of biochar (BC) may ease its application in agriculture. For example, fine biochar powders can be applied as suspensions via drip-irrigation systems or can be used to produce granulated fertilizers. However, micronization may affect important physical biochar properties such as the water holding capacity (WHC) or the porosity.
The aim of this study is to identify indicators at the country level that could prove useful in improving the effectiveness of fraud detection in European Structural and Investment Funds. The chapter analyses EU funds from the 2014–2020 period. The study suggests the convenience of tracking funds, especially in countries with higher GDP and higher transparency levels, and the lesser relevance of the number of irregularities for countries with higher GDP and those receiving larger funds. Fraud and fraud-detection rates in individual funds vary significantly across states. Federal states, such as the Federal Republic of Germany, are comparatively successful in detecting fraud in EU funds.
Currently, many theoretical as well as practically relevant questions towards the transferability and robustness of Convolutional Neural Networks (CNNs) remain unsolved. While ongoing research efforts are engaging these problems from various angles, in most computer vision related cases these approaches can be generalized to investigations of the effects of distribution shifts in image data. In this context, we propose to study the shifts in the learned weights of trained CNN models. Here we focus on the properties of the distributions of dominantly used 3×3 convolution filter kernels. We collected and publicly provide a dataset with over 1.4 billion filters from hundreds of trained CNNs, using a wide range of datasets, architectures, and vision tasks. In a first use case of the proposed dataset, we can show highly relevant properties of many publicly available pre-trained models for practical applications: I) We analyze distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like visual category of the dataset, task, architecture, or layer depth. Based on these results, we conclude that model pre-training can succeed on arbitrary datasets if they meet size and variance conditions. II) We show that many pre-trained models contain degenerated filters which make them less robust and less suitable for fine-tuning on target applications. Data & Project website: https://github.com/paulgavrikov/cnn-filter-db.
Over recent years, Convolutional Neural Networks (CNNs) have been the dominant neural architecture in a wide range of computer vision tasks. From an image and signal processing point of view, this success may be somewhat surprising, as the inherent spatial pyramid design of most CNNs apparently violates basic signal processing laws, i.e., the sampling theorem, in their down-sampling operations. However, since poor sampling did not appear to affect model accuracy, this issue was broadly neglected until model robustness started to receive more attention. Recent work in the context of adversarial attacks and distribution shifts showed that there is a strong correlation between the vulnerability of CNNs and aliasing artifacts induced by poor down-sampling operations. This paper builds on these findings and introduces an aliasing-free down-sampling operation which can easily be plugged into any CNN architecture: FrequencyLowCut pooling. Our experiments show that, in combination with simple Fast Gradient Sign Method (FGSM) adversarial training, our hyper-parameter-free operator substantially improves model robustness and avoids catastrophic overfitting. Our code is available at https://github.com/GeJulia/flc_pooling
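The core idea of aliasing-free down-sampling can be sketched in a few lines of numpy: cut every frequency above the new Nyquist limit before reducing the resolution. This is a toy 2x image version (sizes assumed divisible by 4); the actual FrequencyLowCut operator works on CNN feature maps:

```python
import numpy as np

def flc_pool(x):
    """2x down-sampling without aliasing: remove all frequencies above the
    new Nyquist limit in the Fourier domain, then return the smaller image."""
    h, w = x.shape
    spec = np.fft.fftshift(np.fft.fft2(x))
    ch, cw = h // 2, w // 2
    # keep only the central (h//2 x w//2) low-frequency block
    low = spec[ch - h // 4: ch + h // 4, cw - w // 4: cw + w // 4]
    out = np.fft.ifft2(np.fft.ifftshift(low)).real
    return out * (out.size / x.size)  # compensate FFT normalization

# Two probes: a constant image, and a Nyquist-frequency checkerboard
# (the classic aliasing victim).
flat = np.full((8, 8), 3.0)
checker = np.indices((8, 8)).sum(axis=0) % 2 * 2.0 - 1.0
```

The constant image passes through unchanged, while the checkerboard is removed entirely; naive striding (`checker[::2, ::2]`) would instead return a constant -1 image, i.e. pure aliasing.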
Deep learning models are intrinsically sensitive to distribution shifts in the input data. In particular, small, barely perceivable perturbations to the input data can force models to make wrong predictions with high confidence. A common defense mechanism is regularization through adversarial training, which injects worst-case perturbations back into training to strengthen the decision boundaries and to reduce overfitting. In this context, we perform an investigation of the 3×3 convolution filters that form in adversarially-trained models. Filters are extracted from 71 public models of the ℓ∞-RobustBench CIFAR-10/100 and ImageNet1k leaderboards and compared to filters extracted from models built on the same architectures but trained without robust regularization. We observe that adversarially-robust models appear to form more diverse, less sparse, and more orthogonal convolution filters than their normal counterparts. The largest differences between robust and normal models are found in the deepest layers and in the very first convolution layer, which consistently and predominantly forms filters that can partially eliminate perturbations, irrespective of the architecture.
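Measures such as filter sparsity and pairwise orthogonality might be computed along the following lines. This is one plausible formalization for illustration; the paper's exact definitions may differ:

```python
import numpy as np

def filter_stats(filters):
    """Two illustrative measures over a stack of 3x3 convolution kernels,
    shape (n, 3, 3): the fraction of near-zero kernels, and an orthogonality
    score derived from pairwise cosine similarities."""
    flat = filters.reshape(len(filters), -1)
    norms = np.linalg.norm(flat, axis=1)
    # sparsity: share of kernels whose norm is negligible vs. the largest one
    sparsity = float(np.mean(norms < 1e-2 * norms.max()))
    # orthogonality: 1 minus the mean |cosine| between distinct kernels
    unit = flat / np.maximum(norms[:, None], 1e-12)
    gram = np.abs(unit @ unit.T)
    n = len(flat)
    orthogonality = 1.0 - (gram.sum() - np.trace(gram)) / (n * (n - 1))
    return sparsity, orthogonality

# Sanity check: nine one-hot 3x3 kernels are mutually orthogonal, none sparse
stats = filter_stats(np.eye(9).reshape(9, 3, 3))
```

A filter bank of mutually orthogonal kernels scores an orthogonality of 1.0; a bank dominated by duplicated or near-zero kernels scores low on orthogonality or high on sparsity, respectively.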
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real-world is still facing fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks and ideally a better model generalization ability by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks and we show that AT has an interesting side-effect: it leads to models that are significantly less overconfident with their decisions, even on clean data, than non-robust models. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences. Data & Project website: https://github.com/GeJulia/robustness_confidences_evaluation
Estimating the Robustness of Classification Models by the Structure of the Learned Feature-Space
(2022)
Over the last decade, the development of deep image classification networks has mostly been driven by the search for the best performance in terms of classification accuracy on standardized benchmarks like ImageNet. More recently, this focus has been expanded by the notion of model robustness, i.e., the generalization abilities of models towards previously unseen changes in the data distribution. While new benchmarks, like ImageNet-C, have been introduced to measure robustness properties, we argue that fixed test sets are only able to capture a small portion of possible data variations and are thus limited and prone to generate new overfitted solutions. To overcome these drawbacks, we suggest estimating the robustness of a model directly from the structure of its learned feature space. We introduce robustness indicators which are obtained via unsupervised clustering of latent representations from a trained classifier and show very high correlations to the model performance on corrupted test data.
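A minimal version of such an indicator could look as follows: cluster the latent representations without labels and score how class-pure the clusters turn out. This is a toy k-means sketch under assumed definitions, not the paper's exact indicators:

```python
import numpy as np

def kmeans(x, k, iters=20):
    # deterministic init: k points spread evenly across the array
    centers = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels

def cluster_purity(latents, y, k):
    """Toy robustness indicator: cluster latent features without using the
    labels y, then measure how class-pure the found clusters are."""
    labels = kmeans(latents, k)
    hits = sum(np.bincount(y[labels == j]).max()
               for j in range(k) if np.any(labels == j))
    return hits / len(y)

# Demo: two well-separated classes in latent space yield purity 1.0
latents = np.vstack([np.zeros((20, 2)), np.full((20, 2), 10.0)])
classes = np.array([0] * 20 + [1] * 20)
indicator = cluster_purity(latents, classes, k=2)
```

The intuition: a feature space whose unsupervised structure already separates the classes cleanly suggests a more robust model than one where clusters mix classes.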
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. Adversarial attacks are specifically optimized to reveal model weaknesses by generating small, barely perceivable image perturbations that flip the model prediction. Robustness against attacks can be gained, for example, by using adversarial examples during training, which effectively reduces the measurable model attackability. In contrast, research analyzing the source of a model's vulnerability is scarce. In this paper, we analyze adversarially trained, robust models in the context of a particularly suspicious network operation, the down-sampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from aliasing than baseline models.
In this paper, the authors focus on the description of polarization with the help of the Jones calculus and on the application of polarization in photography. Furthermore, the effect of the circular polarization filter is described using the Jones calculus. Finally, an enhancement of the artistic and creative possibilities in photography through quantization or parametrization by means of the Jones matrices is presented.
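The building blocks of the Jones calculus are compact 2x2 complex matrices. The following numpy sketch models a photographic circular polarizer as a linear polarizer followed by a quarter-wave plate at 45 degrees (standard textbook matrices, given up to a global phase; angles and the input state are illustrative):

```python
import numpy as np

def linear_polarizer(theta):
    """Jones matrix of an ideal linear polarizer with transmission axis at
    angle theta to the horizontal."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]], dtype=complex)

# Quarter-wave plate, fast axis horizontal (up to a global phase)
QWP = np.array([[1, 0], [0, 1j]], dtype=complex)

# Photographic "circular polarizer": linear polarizer, then QWP at 45 degrees
circular_filter = QWP @ linear_polarizer(np.pi / 4)

H = np.array([1, 0], dtype=complex)     # horizontally polarized input light
out = circular_filter @ H
intensity = np.vdot(out, out).real      # Malus' law: cos^2(45 deg) = 1/2
ratio = out[1] / out[0]                 # a ratio of 1j marks circular light
```

Half the intensity survives (Malus' law at 45 degrees), and the output components have equal magnitude with a 90-degree phase offset, i.e. circular polarization.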
Teaching and learning concepts that are adapted to the constantly evolving requirements of rapid technological progress are essential for education in media photonics technology. Following the development of a concept for research-oriented education in optics and photonics, the next step will be a conceptual restructuring and redesign of the entire media photonics curriculum. By including typical research activities as essential components of the learning process, a broad platform for practical projects and applied research can be created, offering a variety of new development opportunities.
Voice user interfaces (VUIs) offer an intuitive, fast, and convenient way for humans to interact with machines and computers. Yet, whether they will be truly successful and find widespread uptake in the near future depends on the user experience (UX) they offer. With this survey-based study (n = 108), we aim to identify the major annoyances German voice assistant users face in voice-driven human-computer interactions. The results of our questionnaire show that irritations appear in six categories: privacy issues, unwanted activation, comprehensibility, response quality, conversational design, and voice characteristics. Our findings can help identify key areas of work for optimizing the voice user experience in order to achieve greater adoption of the technology. In addition, they can provide valuable information for the further development and standardization of voice user experience (VUX) research.
The conversion of space heating for private households to climate-neutral energy sources is an essential component of the energy transition, as this sector was responsible for 9.4 % of Germany’s carbon dioxide emissions as of 2018. In addition to reducing demand through better insulation, the use of heat pumps fed with electricity from renewable energy sources, such as on-site photovoltaic (PV) systems, is an important solution approach.
Advanced energy management and control can help to make optimal use of such heating systems. Optimal here can e.g. refer to maximizing self-consumption of self-generated PV power, extended component lifetime or a grid-friendly behavior that avoids load peaks. A powerful method for this is model predictive control (MPC), which calculates optimal schedules for the controllable influence variables based on models of the system dynamics, current measurements of system states and predictions of future external influence parameters.
In this paper, we will discuss three different use cases that show how artificial intelligence can contribute to the realization of such an MPC-based energy management and control system. This will be done using the example of a real inhabited single family home that has provided the necessary data for this purpose and where the methods are implemented and tested. The heating system consists of an air-water heat pump with direct condensation, a thermal stratified storage tank, a pellet burner and a heating rod and provides both heating and hot water. The house generates a significant portion of its electricity needs through a rooftop PV system.
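To make the MPC idea concrete, here is a deliberately tiny sketch: an exhaustive search over on/off schedules of a heat pump against an assumed PV forecast and a one-state storage model. All numbers and the plant model are illustrative, not the real building's:

```python
import numpy as np
from itertools import product

# Toy horizon and plant model (illustrative values only)
pv_forecast = np.array([0.0, 1.5, 2.0, 0.5])  # kW of predicted PV surplus
hp_power = 2.0                                 # kW electrical when the pump runs
t_min, t_max, t0 = 40.0, 55.0, 45.0            # storage temperature band, deg C
gain, loss = 3.0, 1.5                          # deg C per step: heating, standing loss

def cost(schedule):
    """Grid electricity drawn by an on/off schedule; inf if the storage
    temperature leaves the comfort band."""
    t, grid = t0, 0.0
    for u, pv in zip(schedule, pv_forecast):
        t += gain * u - loss
        if not t_min <= t <= t_max:
            return np.inf
        grid += max(0.0, hp_power * u - pv)    # PV self-consumption is free
    return grid

# MPC optimization step: pick the best schedule over the 4-step horizon
best = min(product([0, 1], repeat=len(pv_forecast)), key=cost)
```

The optimizer shifts the heating into the hour of highest predicted PV surplus, drawing nothing from the grid. In a real MPC loop, only the first element of `best` would be applied before the horizon is shifted and the optimization repeated with fresh measurements and forecasts; exhaustive search would be replaced by a proper solver for realistic horizons.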
As a university, it is more and more difficult to reach all target groups equally. Common problems like information overload, numerous institutions with the same focus, and multi-channel communication make it hard to gain the target group's attention. This paper is four-fold: we present an overview of the state of the art and the importance of the study (I), on which basis we highlight our approach to user experience analysis. First, we identified irritations in the course of an expert evaluation (II) and verified them in tests including the target groups (III). Finally, based on the results, we were able to provide recommendations for action to improve the UX, to be used for the conception of an intranet (IV).
Recently, RobustBench (Croce et al. 2020) has become a widely recognized benchmark for the adversarial robustness of image classification networks. In its most commonly reported sub-task, RobustBench evaluates and ranks the adversarial robustness of trained neural networks on CIFAR10 under AutoAttack (Croce and Hein 2020b) with l∞ perturbations limited to ϵ = 8/255. With leading scores of the currently best-performing models at around 60% of the baseline, it is fair to characterize this benchmark as quite challenging. Despite its general acceptance in recent literature, we aim to foster discussion about the suitability of RobustBench as a key indicator for robustness that could be generalized to practical applications. Our line of argumentation against this is two-fold and supported by extensive experiments presented in this paper: We argue that I) the alteration of data by AutoAttack with l∞, ϵ = 8/255 is unrealistically strong, resulting in close-to-perfect detection rates of adversarial samples, even by simple detection algorithms and human observers. We also show that other attack methods are much harder to detect while achieving similar success rates. II) Results on low-resolution datasets like CIFAR10 do not generalize well to higher-resolution images, as gradient-based attacks appear to become even more detectable with increasing resolution.
Seismic data often has missing traces due to technical acquisition or economic constraints. A complete dataset is crucial for several processing and inversion techniques. Deep learning algorithms based on convolutional neural networks (CNNs) have provided alternative solutions that overcome limitations of traditional interpolation methods, e.g., data regularity, linearity assumptions, etc. There are two different paradigms of CNN methods for seismic interpolation. The first one, so-called deep prior interpolation (DPI), trains a CNN to map random noise to a complete seismic image using only the decimated image itself. The second one, referred to as the standard deep learning method, trains a CNN to map a decimated seismic image into a complete one using a dataset of complete and artificially decimated images. Within this research, we systematically compare the performance of both methods for different quantities of regular and irregular missing traces, using 4 datasets. We evaluate the results of both methods using 5 well-known metrics. We found that the DPI method performs better than the standard method if the percentage of missing traces is low (10%), and the reverse if the level of decimation is high (50%).
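Two ingredients of such a comparison are easy to make concrete: a reconstruction metric such as PSNR, and the masked loss that decimation-based training relies on. A sketch (the study's five metrics and exact loss are not spelled out here, so these serve as generic illustrations):

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB, a common metric for judging an
    interpolated seismic section against the complete one."""
    mse = np.mean((reference - reconstruction) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def masked_mse(prediction, data, mask):
    """Training loss for decimated data: compare prediction and data only at
    the traces that were actually acquired (mask == 1)."""
    return np.sum(mask * (prediction - data) ** 2) / np.sum(mask)

score = psnr(np.zeros((4, 4)), np.full((4, 4), 0.1))  # MSE 0.01 -> 20 dB
```

The mask is exactly the decimation pattern (regular or irregular); the CNN is free to hallucinate the missing traces, which are then judged against the withheld complete data with metrics like the PSNR above.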
In this work, we explore three deep learning algorithms applied to seismic interpolation: deep prior image (DPI), standard, and generative adversarial networks (GAN). The standard and GAN approaches rely on a dataset of complete and decimated seismic images for the training process, while the DPI method learns from the decimated image itself, without training images. We carry out two main experiments, considering 10%, 30%, and 50% regular and irregular decimation. The first tests the optimal situation for the GAN and standard approaches, where training and testing images come from the same dataset. The second tests the ability of the GAN and standard methods to learn simultaneously from three datasets and to generalize to a fourth dataset not used during training. The standard method provides the best results in the first experiment, when the training distribution is similar to the testing one; in this situation, the DPI approach reports the second-best results. In the second experiment, the standard method shows the ability to learn three data distributions simultaneously and effectively in the regular case, whereas in the irregular case the DPI approach is more effective. The GAN approach is the least effective of the three deep learning methods in both experiments.
Harnessing the overall benefits of the latest advancements in artificial intelligence (AI) requires the extensive collaboration of academia and industry. These collaborations promote innovation and growth while enforcing the practical usefulness of newer technologies in real life. The purpose of this article is to outline the challenges faced during cross-collaboration between academia and industry. These challenges are also inspected with the help of an ongoing project titled “Quality Assurance of Machine Learning Applications” (Q-AMeLiA), in which three universities cooperate with five industry partners to make the product risk of AI-based products visible. Further, we discuss the hurdles and the key challenges in machine learning (ML) technology transformation from academia to industry based on robustness, simplicity, and safety. These challenges are an outcome of the lack of common standards, metrics, and missing regulatory considerations when state-of-the-art (SOTA) technology is developed in academia. The use of biased datasets involves ethical concerns that might lead to unfair outcomes when the ML model is deployed in production. The advancement of AI in small and medium-sized enterprises (SMEs) requires common standardization of concepts more than algorithmic breakthroughs. In this paper, in addition to the general challenges, we also discuss domain-specific barriers for five different domains, namely object detection, hardware benchmarking, continual learning, action recognition, and industrial process automation, and highlight the steps necessary for successfully managing the cross-sectoral collaborations between academia and industry.
Recent work has investigated the distributions of learned convolution filters through a large-scale study containing hundreds of heterogeneous image models. Surprisingly, on average, the distributions only show minor drifts in comparisons of various studied dimensions including the learned task, image domain, or dataset. However, among the studied image domains, medical imaging models appeared to show significant outliers through "spikey" distributions, and, therefore, learn clusters of highly specific filters different from other domains. Following this observation, we study the collected medical imaging models in more detail. We show that instead of fundamental differences, the outliers are due to specific processing in some architectures. Quite the contrary, for standardized architectures, we find that models trained on medical data do not significantly differ in their filter distributions from similar architectures trained on data from other domains. Our conclusions reinforce previous hypotheses stating that pre-training of imaging models can be done with any kind of diverse image data.
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real-world is still facing fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Adversarial training (AT) is often considered as a remedy to train more robust networks. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks and we show that AT has an interesting side-effect: it leads to models that are significantly less overconfident with their decisions, even on clean data, than non-robust models. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences.
In this paper, we propose a unified approach for network pruning and one-shot neural architecture search (NAS) via group sparsity. We first show that group sparsity via the recent Proximal Stochastic Gradient Descent (ProxSGD) algorithm achieves new state-of-the-art results for filter pruning. Then, we extend this approach to operation pruning, directly yielding a gradient-based NAS method based on group sparsity. Compared to existing gradient-based algorithms such as DARTS, the advantages of this new group sparsity approach are threefold. Firstly, instead of a costly bilevel optimization problem, we formulate the NAS problem as a single-level optimization problem, which can be optimally and efficiently solved using ProxSGD with convergence guarantees. Secondly, due to the operation-level sparsity, discretizing the network architecture by pruning less important operations can be safely done without any performance degradation. Thirdly, the proposed approach finds architectures that are both stable and well-performing on a variety of search spaces and datasets.
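At the core of such a group-sparsity update is the proximal operator of the group-lasso penalty, which shrinks a whole parameter group and prunes it entirely once its norm falls below the threshold. A minimal numpy sketch of the proximal step only; the full ProxSGD algorithm is more elaborate:

```python
import numpy as np

def prox_group_l2(w, lam):
    """Proximal operator of the group-lasso penalty lam * ||w||_2 for one
    parameter group (e.g. all weights of one filter, or of one candidate
    operation in a NAS search space)."""
    norm = np.linalg.norm(w)
    if norm <= lam:
        return np.zeros_like(w)      # group is pruned
    return (1.0 - lam / norm) * w    # group is shrunk toward zero

def proxsgd_step(w, grad, lr, lam):
    """One gradient step followed by the proximal step."""
    return prox_group_l2(w - lr * grad, lr * lam)
```

Because the operator either zeroes a group exactly or keeps it with reduced norm, discretizing the architecture afterwards (dropping pruned filters or operations) does not perturb the remaining weights, which is what makes the pruning safe.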
In this study, various imaging algorithms for the localization of objects have been investigated. To this end, an Ultra-Wideband (UWB) radar-based experimental setup with a circular antenna array was designed as part of this work. This concept could be particularly useful in microwave medical imaging applications. In order to validate its applicability to microwave imaging, different imaging algorithms have been evaluated and compared by means of our experimental setup. Accurate imaging results have been achieved with our system under multiple test scenarios.
In this study, an approach to a microwave-based radar system for the localization of objects has been proposed. This could be particularly useful in microwave imaging applications such as cardiac catheter detection. An experimental system is defined and realized with the selection of an appropriate antenna design. Hardware control functions and different imaging algorithms are implemented as well. The functionality of this measurement setup has been analyzed considering multiple test scenarios, and it has proved capable of locating multiple objects as well as extended objects.
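The kind of imaging algorithm evaluated in such setups can be illustrated with a classic delay-and-sum sketch: a toy 2D numpy version with idealized impulse echoes, where array geometry, sampling, and the candidate grid are illustrative assumptions rather than the paper's system:

```python
import numpy as np

C = 3e8  # propagation speed in m/s (free space; medium-dependent in practice)

def delay_and_sum(echoes, t_axis, antennas, grid):
    """Delay-and-sum imaging: for each candidate pixel, sum each antenna's
    echo sample at the round-trip delay to that pixel; strong coherent sums
    mark object locations."""
    dt = t_axis[1] - t_axis[0]
    image = np.zeros(len(grid))
    for i, p in enumerate(grid):
        for ant, sig in zip(antennas, echoes):
            tau = 2.0 * np.linalg.norm(p - ant) / C   # round-trip delay
            k = int(round((tau - t_axis[0]) / dt))
            if 0 <= k < len(sig):
                image[i] += sig[k]
    return image

# Demo: four antennas on a circle, one point target, impulse echoes placed
# at the exact round-trip delays.
antennas = np.array([[0.5, 0.0], [0.0, 0.5], [-0.5, 0.0], [0.0, -0.5]])
target = np.array([0.1, 0.0])
t_axis = np.arange(0.0, 1.0e-8, 1.0e-11)
echoes = np.zeros((len(antennas), len(t_axis)))
for i, ant in enumerate(antennas):
    echoes[i, int(round(2.0 * np.linalg.norm(target - ant) / C / 1.0e-11))] = 1.0

grid = np.array([[0.1, 0.0], [0.2, 0.2], [-0.1, 0.1]])
image = delay_and_sum(echoes, t_axis, antennas, grid)
```

All four echoes add up coherently only at the true target position, which therefore dominates the image.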
In automotive parking scenarios, where the curb must be detected and classified as traversable or not, radars play an important role. Different approaches to estimating the target height have already been proposed in other works. This paper assesses and compares two methods. The first is based on Angle of Arrival (AoA) estimation of the input signals of multiple antennas using the Multiple-Input-Multiple-Output (MIMO) principle. The second method uses the geometry of the multipath propagation of the radar echo signal for a single antenna input. In this work, a modified method for calculating the curb height based on the second method is proposed. The theory of the approach is mathematically proven, and its effectiveness is demonstrated by evaluating measurements with a 77 GHz Frequency Modulated Continuous Wave (FMCW) radar. In order to evaluate the performance of the introduced method, the mean square error (MSE) is used in the proposed scenario. This method, using only one antenna input, produced up to 3.4 times better results for curb height detection in comparison with former methods.
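The two-ray mirror geometry underlying the multipath idea can be sketched as follows. This is the simplified flat-ground textbook model, not the paper's modified calculation:

```python
def curb_height(r_direct, r_indirect, h_radar):
    """Target height from the direct echo and the ground-bounce (mirror
    image) echo of a radar mounted at height h_radar above a flat road:
        r_direct^2   = d^2 + (h_radar - h_t)^2
        r_indirect^2 = d^2 + (h_radar + h_t)^2
    Subtracting eliminates the unknown ground distance d and yields
        h_t = (r_indirect^2 - r_direct^2) / (4 * h_radar).
    """
    return (r_indirect ** 2 - r_direct ** 2) / (4.0 * h_radar)
```

For example, a radar at 0.5 m height seeing a target at 5 m ground distance and 0.12 m height measures path lengths whose squared difference is exactly 4 * 0.5 * 0.12, so the formula recovers the height from the two range measurements alone.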
3D printing offers customisation capabilities for the suspensions of oscillators in vibration energy harvesters. Adjusting printing parameters or geometry makes it possible to influence dynamic properties such as the resonance frequency or bandwidth of the oscillator. This paper presents simulation results and measurements for a spiral-shaped suspension printed with polylactic acid (PLA) at different layer heights. Eigenfrequencies were simulated and measured, and damping ratios were determined experimentally.
This paper presents the development of a capacitive level sensor for robotics applications, designed to measure liquid levels during a pouring process. The proposed sensor design exploits the advantages of guard electrodes in combination with passive shielding to increase robustness against external influences. This is important for reliable operation in rapidly changing measurement environments, as they occur in the field of robotics. The non-contact liquid level sensor avoids contamination and complies with food-safety guidelines, so the designed sensor can be used in gastronomic applications. Two versions of the sensor were simulated, fabricated, and compared. The first version is based on copper electrodes; the other is fully 3D printed, with electrodes made of conductive polylactic acid (PLA).
The development of a 3D printed force sensor for a gripper was studied, applying an embedded constantan wire as the sensing element. The first section reviews the state of the art. The main section describes the modeling, simulation, and verification of a sensor element in a three-point bending test carried out in accordance with DIN EN ISO 178. The Fused Filament Fabrication (FFF) 3D printing process used to manufacture the sensor samples in combination with an industrial robot is shown, and theory and practice are compared in detail. Finally, an outlook is given on the integration of the sensor element into gripper jaws.
In the development of new vehicles, increasing customer comfort requirements and rising safety regulations often result in an increase in weight. Nevertheless, to meet the demand for reduced fuel consumption, complex and filigree lightweight structures must be implemented within the product development process. This contribution therefore addresses the potential of generatively developed components for fiber-reinforced additive manufacturing (FRAM). Several commercial systems for this application are currently available on the market, so a comparison of the systems is first made to determine a suitable one. Then, a highly stressed and safety-relevant chassis component of a race car is generatively designed and manufactured using FRAM. A matrix with short-fiber reinforcement and additional long-fiber reinforcement with carbon fibers is applied. Finally, tensile tests are carried out to check the mechanical properties. In addition, relevant properties such as weight and cost are determined in order to compare them with conventionally developed and manufactured components.
The integration of additive manufacturing (AM) processes into the teaching of students is an important prerequisite for the further dissemination of this new technology. In this context, Design for Additive Manufacturing (DfAM) is of particular importance. For this reason, this paper presents an approach that connects methodical product development with practical implementation by AM. Using a model racing car as an example, students independently develop significant improvements to particular assemblies. A final evaluation shows that the students significantly improved their skills and competencies.
This paper presents a method for supporting the application of Additive Tooling (AT)-based validation environments in integrated product development. Based on a case study, relevant process steps, activities and possible barriers in the realisation of an injection-moulded product are identified and analysed. The aim of the method is to support the target-oriented application of Additive Tooling to obtain physical prototypes at an early stage and to shorten validation cycles.
Separation Estimation with Thermal Cameras for Separation Monitoring in Human-Robot Collaboration
(2022)
Human-robot collaborative applications have the drawback of being less efficient than their non-collaborative counterparts. One of the main reasons is that the robot has to slow down when a human being is within its operating space. There are different approaches to dynamic speed and separation monitoring in human-robot collaborative applications. One approach additionally differentiates between human and non-human objects to increase efficiency in speed and separation monitoring. This paper proposes to estimate the separation distance by measuring the temperature of the approaching object. Measurements show that the measured temperature of a human being decreases by about 1 °C per meter of distance from the sensor. This allows an estimation of the separation between a robotic system and a human being.
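The reported temperature gradient suggests a simple linear estimator. The sketch below is illustrative only: the reference skin temperature and the exact slope are assumptions for this example, not values taken from the paper.

```python
# Minimal sketch of temperature-based separation estimation.
# ASSUMPTIONS: t_ref_c (apparent skin temperature at zero distance) and
# the 1 degC-per-meter slope are illustrative placeholders.

def estimate_separation(t_measured_c: float,
                        t_ref_c: float = 34.0,
                        slope_c_per_m: float = 1.0) -> float:
    """Distance in metres inferred from the measured temperature drop."""
    return max(0.0, (t_ref_c - t_measured_c) / slope_c_per_m)
```

Under these assumptions, a temperature reading 2 °C below the reference maps to a separation of about 2 m.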
A novel approach to controlling assistive technology, such as smart wheelchairs and robotic arms, is the use of eye-tracking devices [10, 4]. In this context, supporting methods such as artificial feedback are not well explored. Vibrotactile feedback has been shown to be helpful in decreasing the cognitive load on the visual and auditory channels and can provide a perception of touch [17]. People with severe limitations of motor function could benefit from eye-tracking controls supported by vibrotactile feedback. This study presents fundamental results on the design of an appropriate vibrotactile feedback system for eye-tracking applications. We show that a perceivable vibrotactile stimulus has no significant effect on the accuracy and precision of a head-worn eye-tracking device. It is anticipated that these results will lead to new insights into the design of vibrotactile feedback for eye-tracking applications and eye-tracking controls.
Lithium-ion batteries show strongly nonlinear behaviour with respect to battery current and state of charge, which makes their modelling complex. Combining physical and data-driven models in a grey-box model can simplify the modelling. Our focus is on using neural networks, especially neural ordinary differential equations, for grey-box modelling of lithium-ion batteries. A simple equivalent circuit model serves as the basis for the grey-box model; unknown parameters and dependencies are then replaced by learnable parameters and neural networks. We use experimental full-cycle data and data from pulse tests of a lithium iron phosphate cell to train the model. Finally, we test the model against two dynamic load profiles: one consisting of half cycles and one representing a home-storage system. The dynamic response of the battery is well captured by the model.
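As a rough illustration of the grey-box idea, the sketch below simulates a first-order equivalent circuit model in which the open-circuit-voltage curve stands in for the learnable component (a neural network in the paper). All parameter values and the linear OCV curve are invented for illustration, not fitted to any cell.

```python
import numpy as np

# Grey-box sketch: dU_rc/dt = -U_rc/(R1*C1) + I/C1,
# U_cell = OCV(soc) - U_rc - R0*I.  Euler integration, discharge positive.

def ocv(soc):
    # Stand-in for the learnable OCV component (a neural network in the paper).
    return 3.2 + 0.4 * soc

def simulate(current, dt=1.0, r0=0.01, r1=0.02, c1=2000.0, capacity_as=3600.0):
    soc, u_rc, voltages = 1.0, 0.0, []
    for i in current:
        u_rc += dt * (-u_rc / (r1 * c1) + i / c1)  # RC branch (Euler step)
        soc -= dt * i / capacity_as                 # coulomb counting
        voltages.append(ocv(soc) - u_rc - r0 * i)   # terminal voltage
    return np.array(voltages)
```

In a grey-box fit, `r0`, `r1`, `c1` would become learnable parameters and `ocv` a small neural network trained on measured cycles.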
The robust scheduling problem is a major decision problem addressed in the literature, especially for remanufacturing systems; it is complex because of the high uncertainty and complex constraints involved. Generally, existing approaches are dedicated to specific processes and do not enable the quick and efficient generation and evaluation of schedules. With the emergence of the Industry 4.0 paradigm, data availability is now considered an opportunity to facilitate the decision-making process. In this study, a data-driven decision-making process is proposed to treat the robust scheduling problem of remanufacturing systems in uncertain environments. In particular, this process generates simulation models based on a data-driven modeling approach. A robustness evaluation approach is proposed to answer several decision questions. An application of the decision process to an industrial case of a remanufacturing system is presented, illustrating the impact of the robustness evaluation results on real-life decisions.
During the periods of social isolation to contain the advance of COVID-19 in 2020 and 2021, educational institutions faced the challenge of adopting technological strategies not only to ensure continuity of students' classes, but also to support their mental health in a period of uncertainty and health risks. Loneliness is an emotional distress caused by the lack of meaningful social connections; it has increasingly affected young adults worldwide during the pandemic's social isolation and still bears psychological effects in the current post-pandemic period. In light of this challenge, the Noneliness app was developed as a way to bring university communities together to address issues related to loneliness and mental health disorders through a gamified and social online environment. In this paper, we present the app and its main functionalities (beta version) and discuss the preliminary results of a pilot clinical study conducted with university students in Germany (N = 12) to verify the app's efficacy and usability, alongside the challenges faced and the next steps to be taken regarding the platform's improvement.
This work documents the rising acceptance of social robots for healthcare as well as their growing economic potential from 2017 to 2021. The comparison is based on two studies in the active assisted living (AAL) community. We first provide a brief overview of social robotics and a discussion of the economic potential of social health robots. We found that, despite the huge potential for robotic support in healthcare and domestic routines, social robots still lack the functionality to realise that potential. At the same time, the study shows a rise in acceptance: all health-related activities were more accepted in 2021 than in 2017, most of them with high statistical significance. When investigating the economic perspective, we found that people are aware of the influence of cultural, spiritual, or religious beliefs. Most experts (57%), having a European background, expect the state or the government to be the key driver in establishing social robots in healthcare, and significantly prefer leasing or renting a social health robot to buying one. Nevertheless, we speculate that a global financial elite might be the first to adopt social robots.
Physik durch Informatik
(2022)
Self-assessment tests in learning management systems (LMS) allow students to gauge their own learning progress. The didactic concept Physik durch Informatik (PDI, "physics through computer science") is characterised by the use of a programming language to enter solutions to mathematics and physics exercises. In contrast to entering solutions as numerical values or by multiple choice, implementing a solution in a programming language requires a higher level of competence.
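A hypothetical PDI-style exercise, invented here for illustration: instead of entering a number, the student submits a function, which the self-test can then check against the analytic result.

```python
import math

# Illustrative exercise (made up, not from the paper): compute the range
# of a projectile launched at speed v0 and angle `angle_deg` on flat ground.
# The self-test compares the submitted function against R = v0^2 sin(2a)/g.

def projectile_range(v0: float, angle_deg: float, g: float = 9.81) -> float:
    a = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * a) / g
```

The LMS can then grade the submission by evaluating it on several inputs rather than comparing a single numeric answer.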
A ban on imports of Russian energy sources into Germany is currently being discussed more and more intensively. We want to support this discussion by showing how the electricity system in Germany can cope with low energy imports in the short term and which measures are necessary to still meet the climate protection targets. In this paper, we examine the impact of a complete stop of Russian fossil fuel imports on the electricity sector in Germany, and how this affects the climate goals of an earlier coal phase-out and climate neutrality by 2045.
Following a scenario-based analysis, the results indicate what would be needed for Germany to rely entirely on its scarce non-renewable energy resources. Huge investments would be needed to ensure a secure supply of electricity, both in renewable energy sources (RES) and in energy storage systems (ESS). The key finding is that a rapid expansion of renewables and storage technologies will significantly reduce the dependence of the German electricity system on energy imports. The large-scale integration of renewable energy does not entail any significant imports of natural gas, hard coal, or mineral oil, even in the long term. The results show that a ban on fossil fuel imports from Russia opens up huge opportunities to go beyond the German government's climate targets, with the 1.5-degree target being achieved in the electricity system.
Peer-to-peer energy trading and local electricity markets have been widely discussed as new options for the transformation of the energy system from the traditional centralized scheme to a novel decentralized one. They have also been proposed as a favourable alternative to expiring feed-in tariff policies that promote investment in renewable energy sources. Peer-to-peer energy trading is usually defined as the integration of several innovative technologies that enable both prosumers and consumers to trade electricity, without intermediaries, at an agreed price. Furthermore, the techno-economic aspects go hand in hand with the socio-economic aspects, which ultimately represent significant barriers that need to be tackled to reach a higher impact on current power systems. Applying a qualitative analysis, two scalable peer-to-peer concepts are presented in this study, together with the probability that possible participants will enter such concepts. Results show that consumers with a preference for environmental aspects generally have a higher willingness to participate in peer-to-peer energy trading. Moreover, battery storage systems are a key technology that could raise the entry probability of prosumers into a peer-to-peer market.
In railway technical centers, scheduling maintenance activities is a very complex task: it consists of ordering, over time, all maintenance operations on the workstations while respecting the number of resources, precedence constraints, and workstation availabilities. Currently, this process is not fully automatic. To improve this situation, this paper presents a mathematical model for scheduling maintenance activities in railway remanufacturing systems. The studied problem is modeled as a flexible job shop, with the possibility for a job to be executed several times at a stage. A MILP formulation is implemented with the makespan, representing the time for remanufacturing the train, as the objective. The aim is to create a generic model for optimizing the planning of maintenance activities and improving the performance of railway technical centers. Finally, numerical results are presented, discussing the impact of instance size on the computing time needed to solve the described problem.
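The paper's MILP model is not reproduced here; as a toy illustration of makespan-oriented scheduling, the sketch below exhaustively searches for the job order that minimises the makespan in a two-stage flow shop. The shop structure and processing times are invented for this example.

```python
from itertools import permutations

# Toy illustration (not the paper's MILP): minimise the makespan of jobs
# that each pass through stage 1 then stage 2 on two workstations.

def makespan(order, times):
    end1 = end2 = 0.0
    for job in order:
        t1, t2 = times[job]
        end1 += t1                   # stage 1 finishes this job
        end2 = max(end1, end2) + t2  # stage 2 waits for stage 1 and itself
    return end2

def best_schedule(times):
    # Exhaustive search; a MILP or heuristic replaces this at real sizes.
    return min(permutations(times), key=lambda o: makespan(o, times))
```

For `times = {'a': (2, 1), 'b': (1, 2)}`, running the short job's first stage earlier gives the smaller makespan.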
We consider local groups of agents that exchange time-series data values and compute an approximation of the mean value over all agents. An agent, represented by a node, knows all local neighbor nodes in the same group and has the contact information of nodes in other groups. The nodes interact with each other in synchronous rounds to exchange updated time-series data values using the random call communication model. The amount of data exchanged between agent-based sensors in the local group network affects the accuracy of the aggregation results. At each time step, an agent-based sensor can update its input data value and send the updated value to the group head node, which then sends it to all group members in the same group. Grouping nodes in peer-to-peer networks shows an improvement in Mean Squared Error (MSE).
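A minimal sketch of mean aggregation under the random call model (illustrative, not the paper's grouped algorithm): in each synchronous round, every node calls one uniformly random node and both adopt the average of their current estimates. Round count and input values are arbitrary.

```python
import random

# Gossip-style mean approximation under the random call model.
# Pairwise averaging conserves the sum, so all estimates converge
# to the true mean of the initial values.

def gossip_mean(values, rounds=50, seed=0):
    rng = random.Random(seed)
    est = list(values)
    n = len(est)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)          # uniformly random callee
            avg = (est[i] + est[j]) / 2
            est[i] = est[j] = avg         # push-pull: both take the average
    return est
```

After enough rounds the spread between estimates shrinks towards zero while their mean stays at the true mean.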
Due to the increasing aging of the population, the number of elderly people requiring care is growing in most European countries. However, the number of caregivers working in nursing homes and in daily care services is declining in countries like Germany and Italy, which limits the time for interpersonal communication. Furthermore, as a result of the Covid-19 pandemic, social distancing during contact restrictions became more important, causing an additional reduction of personal interaction. This social isolation can strongly increase emotional stress. Robotic assistance could contribute to addressing this challenge on three levels: (1) supporting caregivers in responding individually to the needs of patients and residents in nursing homes; (2) observing patients' health and emotional state; (3) complying with high hygiene standards and minimizing human contact if required. To further the research on emotional aspects and the acceptance of robotic assistance in care, we conducted two studies in which elderly participants interacted with the social robot Misa. Facial expression and voice analysis were used to identify and measure the emotional state of the participants during the interaction. While interpersonal contact plays a major role in elderly care, the findings reveal that robotic assistance generates added value for both caregivers and patients, and that patients show emotions while interacting with the robot.
To achieve Germany's climate targets, the industrial sector, among others, must be transformed. The decarbonization of industry through the electrification of heating processes is a promising option. To investigate this transformation in energy system models, high-resolution temporal demand profiles of the heat and electricity applications of different industries are required. This paper presents a method for generating synthetic electricity and heat load profiles for 14 industry types. Using this methodology, annual profiles with a 15-minute resolution can be generated for both energy demands. First, daily electricity demand profiles were generated for 4 different production-day types; these daily profiles are additionally subdivided into eight end-use application categories. Finally, white noise is applied to the profile of the mechanical drives. The heat profile is constructed similarly to the electrical one but is subdivided into four temperature ranges and the two applications hot water and space heating; the space heating application is additionally adjusted to the average monthly outdoor temperature. Both time series were generated for analysing the electrification of industrial heat applications in energy system modelling.
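A minimal sketch of the white-noise step under assumed numbers: a one-day profile at 15-minute resolution (96 values) for mechanical drives, with a hypothetical production shift and white noise superimposed. The base load, shift hours, and noise level are all invented for illustration.

```python
import numpy as np

# Synthetic daily drive profile, 96 steps of 15 min.
# ASSUMPTIONS: 100 kW base load, a 07:00-17:00 production shift at
# 1.5x load, and Gaussian white noise with 2 kW standard deviation.

def drive_profile(base_kw=100.0, noise_std=2.0, steps=96, seed=42):
    rng = np.random.default_rng(seed)
    base = np.full(steps, base_kw)
    base[28:68] *= 1.5  # hypothetical shift: steps 28-67 = 07:00-17:00
    return base + rng.normal(0.0, noise_std, steps)
```

An annual profile would concatenate such daily profiles per production-day type.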
The energy system has been changing for some years in order to achieve the climate goals of the Paris Agreement, which aims to prevent an increase of the global temperature above 2 °C [1]. Decarbonisation of the energy system has become a big challenge for governments, and different strategies are being established. Germany has set greenhouse gas reduction limits for different years and tracks the progress made annually. The expansion of renewable energy systems (RES) together with decarbonisation technologies is a key factor in accomplishing this objective.
This research analyses the effect of introducing biochar, a decarbonisation technology, and studies how it affects the energy system. Pyrolysis, the process from which biochar is obtained, is modelled in an open-source energy system model. A sensitivity analysis is performed to assess the effect of changing the biomass potential and the costs of pyrolysis.
The role of pyrolysis is analysed in different future scenarios for the year 2045 to evaluate its impact when the CO2 emission limit is zero. All scenarios are compared to a reference scenario in which pyrolysis is not considered.
Results show that biochar can be used to compensate for the emissions of conventional power plants and to achieve an energy transition at lower costs. Furthermore, pyrolysis can also reduce the need for flexibility. The study also shows that the biomass potential and the pyrolysis costs can strongly affect the behaviour of pyrolysis in the energy system.
Solar energy plays a central role in the energy transition. Clouds generate large local fluctuations in the output of photovoltaic systems, which is a major problem for energy systems such as microgrids. For an optimal design of a power system, this work analyzed this variability using a spatially distributed sensor network at Stuttgart Airport. It was shown that the spatial distribution partially reduces the variability of solar radiation. A tool was also developed to estimate the output power of photovoltaic systems using irradiation time series and assumptions about the photovoltaic sites. For days with high fluctuations of the estimated photovoltaic power, different energy system scenarios were investigated. It was found that the approach can be used to obtain a more realistic representation of aggregated PV power that takes spatial smoothing into account, and that the resulting PV power generation profiles provide a good basis for energy system design considerations such as battery sizing.
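The smoothing effect can be illustrated with synthetic data: local cloud-driven fluctuations at individual sensors partly cancel in the spatial aggregate, while the shared weather signal remains. All numbers below are invented and stand in for measured irradiance series.

```python
import numpy as np

# Synthetic illustration of spatial smoothing: 10 sensors share a common
# weather signal but see independent local cloud noise. Averaging over
# sensors reduces the independent part of the variability.

rng = np.random.default_rng(0)
common = rng.normal(500.0, 50.0, 1000)                 # shared weather signal
sensors = common + rng.normal(0.0, 100.0, (10, 1000))  # per-sensor cloud noise
aggregate = sensors.mean(axis=0)                       # spatial aggregate

single_std = sensors[0].std()       # variability at one site
aggregate_std = aggregate.std()     # variability of the aggregate
```

Only the independent noise is damped (roughly by the square root of the sensor count); the common component sets a floor on the aggregate variability.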
One of the major challenges impeding the energy transition is the intermittency of solar and wind electricity generation due to their dependency on the weather. Demand-side energy flexibility contributes considerably to mitigating the supply/demand imbalances resulting from external influences such as the weather. As some of the largest electricity consumers, industrial enterprises offer a high demand-side flexibility potential from their production processes and on-site energy assets. Methods are therefore needed to enable this energy flexibility and ensure the active participation of such enterprises in electricity markets, especially under variable electricity prices. This paper presents a generic model library for an industrial enterprise implemented with optimal control for energy flexibility purposes. The components in the model library represent the typical technical units of an industrial enterprise on the material, media, and energy flow levels, with their operative constraints. A case study of a plastic manufacturing plant using the generic model library is also presented, in which the results of two simulations with different electricity prices are compared and the behavior of the model is assessed. The results show that the model provides an optimal scheduling of the manufacturing system according to the variations in electricity prices and ensures optimal control of the utilities and energy systems needed for production.
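As a much-simplified illustration of price-driven flexibility (not the paper's model library, which also handles material flows and operative constraints), the sketch below assigns a flexible process to the cheapest time slots of a given price series. Prices and the slot count are made up.

```python
# Toy price-driven scheduling: a process needing `slots_needed` time slots
# is placed into the cheapest slots of an electricity price series.

def cheapest_slots(prices, slots_needed):
    order = sorted(range(len(prices)), key=lambda t: prices[t])
    return sorted(order[:slots_needed])  # chosen slot indices, in time order
```

A real model would add constraints such as contiguous runs, buffer levels, and due dates, which turn this into the optimal control problem the paper addresses.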
Featherweight Go (FG) is a minimal core calculus that includes essential Go features such as overloaded methods and interface types. The most straightforward semantic description of the dynamic behavior of FG programs is to resolve method calls based on run-time type information. A more efficient approach is to apply a type-directed translation scheme where interface values are replaced by dictionaries that contain concrete method definitions. Thus, method calls can be resolved by a simple lookup of the method definition in the dictionary. Establishing that the target program obtained via the type-directed translation scheme preserves the semantics of the original FG program is an important task.
To establish this property we employ logical relations that are indexed by types to relate source and target programs. We provide rigorous proofs and give a detailed discussion of the many subtle corners that we have encountered, including the need for a step index due to recursive interfaces and method definitions.
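To illustrate the dictionary-passing idea (in Python rather than Go, and with hypothetical names), an interface value becomes a pair of the underlying value and a dictionary of concrete method implementations, so a method call reduces to a plain lookup rather than run-time type dispatch:

```python
# Illustrative only: a hand-translated sketch of dictionary passing.
# Concrete method implementations for two "types":

def area_square(s):
    return s["side"] ** 2

def area_circle(c):
    return 3.14159 * c["r"] ** 2

# The translation pairs each value with its method dictionary,
# playing the role of an interface value in the target language.
square = ({"side": 3}, {"Area": area_square})
circle = ({"r": 1.0}, {"Area": area_circle})

def total_area(shapes):
    # A method call is now a dictionary lookup, no run-time type test.
    return sum(dic["Area"](val) for val, dic in shapes)
```

In the actual translation scheme the compiler constructs these dictionaries from the static types, which is what the logical-relations proof relates back to the source semantics.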
The contribution of the RoofKIT student team to the SDE 21/22 competition is the extension of an existing café in Wuppertal, Germany, creating new functions and living space for the building with simultaneous energetic upgrading. A demonstration unit was built, representing a small cut-out of this extension. The energy concept was thoroughly simulated by the student team in seminars using Modelica. The system mainly uses solar energy via PVT collectors as the heat source for a brine-water heat pump (space heating and hot water). Thermal and electrical energy storage is installed to decouple generation and consumption. Simulation results confirm that carbon neutrality is achieved for the building operation, which consumes and generates around 60 kWh/m²a.
In the "BioMeth" project, two novel plant concepts not previously described for biological methanation were developed. The newly developed inverse membrane reactor (IMR) makes it possible to spatially separate the input of the required reactant gases, hydrogen (H2) and carbon dioxide (CO2), through commercially available ultrafiltration membranes from the degassing zone for methane removal, and additionally to use hydraulic pressure to increase the hydrogen input. One advantage of the process is that, in the future, both CO2 from conventional biogas and CO2 from industrial exhaust streams, e.g. from the cement industry, can be used as the carbon source.
Beyond biological methanation, in the authors' assessment the inverse membrane reactor is also generally suitable for the biotechnological production of non-volatile products from gaseous substrates. In the IMR, for example, one membrane module can be used to feed the reactant gases, while a further hollow-fibre membrane module can be used for cyclic or continuous separation of the product-containing reaction solution while retaining the microbiology, in the sense of an in-situ product recovery (ISPR) concept.
An outstanding result of the IMR investigation was that, with the membrane gassing concept, CH4 concentrations of > 90 vol.% were achieved continuously over a one-year test series with flexible gas input. After start-up, apart from the addition of H2 and CO2 as energy and carbon sources, only two additions of supplements were required. The maximum membrane-area-specific methane formation rate achieved without gas circulation was 83 LN of methane per m2 of membrane area per day, with a product gas composition of 94 vol.% methane, 2 vol.% H2, and 4 vol.% CO2.
The second process, still in an early test phase, uses pressure differences in a 10 m high packed counter-current bubble column combined with a likewise 10 m high separate degassing reactor. This process concept is intended to achieve high hydrogen solubility owing to the hydrostatic pressure at the base of the column, while simultaneously minimising energy demand, reducing investment costs, and creating optimal temporal and spatial conditions for the microbial conversion of H2 and CO2. Initial investigations of mass transfer with air in the counter-current bubble column confirmed good enrichment of the circulating liquid even at comparatively low superficial gas velocities. In the second column of the reactor setup, the liquid, supersaturated with gas relative to atmospheric pressure, was expected to outgas at the top due to the pressure release. This outgassing of the liquid was likewise confirmed using air injection as an example.
We consider large-scale peer-to-peer sensor networks that calculate and distribute the mean value of all sensor inputs. For this, we design, simulate, and evaluate distributed approximation algorithms that reduce the number of messages. The main difference between these algorithms is the underlying communication protocol; all use the random call model, in which, in discrete rounds, each node can call a random sensor node with uniform probability. The amount of data exchanged between sensor nodes and used in the calculation affects the accuracy of the aggregation results, leading to a trade-off. The key idea of our algorithms is to limit the sample size using the Finite Population Correction (FPC) method and to collect the data by distributed aggregation using Push-Pull Sampling, Pull Sampling, and Push Sampling communication protocols. It turns out that all methods show an exponential improvement of the Mean Squared Error (MSE) with the number of messages and rounds.
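The Finite Population Correction used to limit the sample size can be sketched with the standard formula: for a finite population of N nodes, a sample size n0 derived under the infinite-population assumption shrinks to n = n0 / (1 + (n0 - 1) / N). The numbers in the usage note are illustrative only.

```python
import math

# Standard FPC adjustment of a required sample size for a finite
# population of `population` nodes.

def fpc_sample_size(n0: float, population: int) -> int:
    return math.ceil(n0 / (1 + (n0 - 1) / population))
```

For instance, a nominal sample of 385 shrinks to 279 in a 1000-node network, while for a very large network the correction becomes negligible.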
Biodegradable metals have entered the implant market in recent years, but still do not show fully satisfactory degradation behaviour and mechanical properties. In contrast, pure molybdenum has been shown to offer an excellent combination of the required properties in this respect. We report on PM-based screen printing of thin-walled molybdenum tubes as a processing step for medical stent manufacture. We also present data on the in vivo degradation and biocompatibility of molybdenum. The degradation of molybdenum wires implanted in the aorta of rats was evaluated by SEM and EDX. Biocompatibility was assessed by histological investigation of organs and analysis of molybdenum levels in tissue extracts and body fluids. Degradation rates of up to 13.5 μm/y were observed after 12 months. No histological changes or elevated molybdenum levels in organ tissues were observed. In summary, the results further underline that molybdenum is a highly promising biodegradable metallic material.
The purpose of this study was to describe the effects of running speed and slope on metatarsophalangeal (MTP) joint kinematics. 22 male and female runners underwent 3D motion analysis on an instrumented treadmill at three different speeds (2.5 m/s, 3.0 m/s, 3.5 m/s). At each speed, participants ran at seven slope conditions (downhill: -15%, -10%, -5%, level, and uphill: +5%, +10%, +15%). We found a significant main effect (p < 0.001) of running speed and slope on peak MTP dorsiflexion and a running speed by slope interaction effect (p < 0.001) for peak MTP dorsiflexion velocity. These findings highlight the need to consider running intensity and environmental factors like running surface inclination when considering MTP joint mechanics and technological aids to support runners.
Long jump with and without a lower-leg prosthesis – same sport, different disciplines
(2022)
The EREMI project is a 2-year project funded under the ERASMUS+ framework programme. Its team has developed, and will validate, an advanced higher-education programme, including life-long learning, on the interdisciplinary topic of resource efficiency in manufacturing industries and the overall system optimization of physical infrastructure with little or no digitization. This will be achieved by applying IoT technologies towards efficient industrial systems and by building up highly educated human capital on these economically, politically, and technically crucial topics, which are highly relevant for the rapidly developing industries and economies of countries undergoing intensive economic and industrial transformation: Bulgaria, North Macedonia, and Romania. Efficiency will be attained by utilizing the experience and expertise of the involved German partner organisation.
In recent years, the topic of embedded machine learning has become very popular in AI research. With the help of compression techniques such as pruning and quantization, it became possible to run neural networks on embedded devices. These techniques have opened up a whole new application area for machine learning, ranging from smart products such as voice assistants to smart sensors needed in robotics. Despite the achievements in embedded machine learning, efficient algorithms for training neural networks in constrained domains are still lacking. Training on embedded devices would open up further fields of application: efficient training algorithms would enable federated learning on embedded devices, in which the data remains where it was collected, or retraining of neural networks in different domains. In this paper, we summarize techniques that make training on embedded devices possible. We first describe the need and requirements for such algorithms. Then we examine existing techniques that address training in resource-constrained environments, as well as techniques that are also suitable for training on embedded devices, such as incremental learning. Finally, we discuss which problems and open questions still need to be solved in these areas.
Rising societal demands require more sustainable products and technologies. Although numerous methods and tools have been developed in recent decades to support environmentally friendly product and process development, an interdisciplinary knowledge base of eco-innovative examples linked to eco-innovative problems and solution principles is lacking. The paper proposes an ontology of examples of eco-friendly products and technologies assigned to the Inventive Principles (IPs) of the TRIZ methodology in accordance with the German TRIZ standard VDI 4521. The examples of sustainable technologies and products build a database for sharing and reusing eco-innovation knowledge. The ontology acts as a tool for the systematic solving of specific environmental problems in typical life cycle phases, for different environmental impact categories and engineering domains. Finally, the paper defines a future research agenda in the field of TRIZ-based systematic eco-innovation.
Eco-Feasibility Study and Application of Natural Inventive Principles in Chemical Engineering Design
(2022)
The early stages of front-end process development are critical for the future success of projects involving new technologies. The application of eco-inventive principles identified in natural systems to the design of chemical processes and equipment allows one to find ways to mitigate or avoid secondary ecological problems such as higher consumption of raw materials or energy, generation of hazardous waste, and pollution of the environment by toxic chemicals. However, before implementing a new technology in a real operational environment, it is necessary to thoroughly investigate its undesirable ecological impact and to evaluate the future viability of the technology. Therefore, this research paper presents a study of the ecological feasibility of an innovative process design utilising natural eco-inventive principles and analyses the correlations between the applied inventive principles. Such an eco-feasibility study can be considered an important decision gate to determine whether the technology implementation should move forward. Furthermore, the study evaluates the applicability of natural inventive principles to eco-friendly process design and is illustrated with the example of a sustainable technology for nickel extraction from pyrophyllite.
During the coronavirus crisis, labs in mechanical engineering had to be offered in digital form at short notice. For this purpose, digital twins of more complex test benches in the field of fluid energy machines were used in the mechanical engineering course, with which the students were able to interact remotely to obtain measurement data. The concept of each lab was revised with regard to its implementation as a remote laboratory. The real-world labs could thus be fully replaced by remote labs, and student perceptions of the remote labs were mostly positive. This paper explains the concept and design of the digital twins and the lab, as well as the layout, procedure, and results of the accompanying evaluation. However, the implementation of the digital twins to date does not yet include features that address the tactile experience of working in real-world labs.
The purpose of this study was to 1) compare knee joint kinematics and kinetics of fake-and-cut tasks of varying complexity in 51 female handball players and 2) present a case study of one athlete who ruptured her ACL three weeks after data collection. External knee joint moments and knee joint angles in all planes at the instant of the peak external knee abduction moment (KAM), as well as moment and angle time curves, were analyzed. Peak KAMs and knee internal rotation moments were substantially higher than published values obtained during simple change-of-direction tasks and, along with the flexion angles, differed significantly between the tasks. Introducing a ball reception and a static defender increased joint loads, while they partially decreased again when anticipation was lacking. Our results suggest using game-specific assessments of injury risk, while higher complexity levels do not directly increase knee loading. The extreme values of several risk factors for the athlete injured after the test highlight the need for and usefulness of appropriate screenings.
This study aimed to compare a simplified calculation of the knee abduction moment with the traditional inverse dynamics calculation when athletes perform fake-and-cut maneuvers of different complexities. In the simplified calculation, we multiply the ground reaction force vector with its lever arm to the knee and project the result onto the local coordinate system of the proximal thigh, hence neglecting the inertial contributions of the distal segments. Using Spearman's rank correlation coefficient, we found very strong ranking consistency between the simplified method and the traditional calculation. Independent of the task, the simplified method resulted in moments about 7% higher than inverse dynamics, because it ignores the counteracting moment generated by the linear accelerations of the segments. This alternative to the complex calculations of inverse dynamics can be used to investigate the contributions of the GRF magnitude and its lever arm to the knee moment.
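The simplified calculation lends itself to a few lines of code. The following sketch computes the moment of the ground reaction force about the knee as the cross product of the lever arm and the force, projected onto an assumed proximal-thigh coordinate frame; all coordinates and names are made up for illustration, not taken from the study's data set.

```python
# Simplified knee moment: GRF moment about the knee, ignoring the
# inertial contributions of the distal segments.

def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def simplified_knee_moment(knee_centre, cop, grf, thigh_axes):
    """Moment of the GRF about the knee, expressed in thigh coordinates.

    thigh_axes: three orthonormal row vectors (x, y, z) of the thigh frame.
    """
    lever = tuple(c - k for c, k in zip(cop, knee_centre))  # metres
    moment_global = cross(lever, grf)                       # N*m, lab frame
    return tuple(dot(axis, moment_global) for axis in thigh_axes)

# Illustrative values: knee 0.45 m above the force plate, GRF mostly vertical,
# thigh frame aligned with the lab frame for simplicity.
m = simplified_knee_moment(
    knee_centre=(0.05, 0.0, 0.45),
    cop=(0.10, 0.02, 0.0),
    grf=(-50.0, 30.0, 1200.0),
    thigh_axes=((1, 0, 0), (0, 1, 0), (0, 0, 1)),
)
print(m)  # → (37.5, -37.5, 2.5)
```

In a real gait-lab pipeline the thigh axes would come from marker data per frame; here they are the identity frame purely to keep the example self-contained.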
Effect of downhill running on biomechanical risk factors associated with iliotibial band syndrome
(2022)
The purpose of this study was to identify the influence of downhill running on biomechanical risk factors for iliotibial band syndrome. We conducted a 3D motion analysis of 22 females and males running on an instrumented treadmill at four different inclinations (0%, -5%, -10%, -15%) at a speed of 3.5 m/s. We found significant differences for biomechanical risk factors associated with iliotibial band syndrome. Peak knee flexion angle at initial ground contact (p < .001), peak knee adduction angle (p = .005), and iliotibial band strain (p < .001) systematically increased with increasing slope. Downhill running increases biomechanical risk factors for iliotibial band syndrome. Our results highlight the need to consider the individual running environment in assessing overuse injury risk in runners.
In this paper, we study the runtime performance of symmetric cryptographic algorithms on an embedded ARM Cortex-M4 platform. Symmetric cryptographic algorithms can serve to protect the integrity and optionally, if supported by the algorithm, the confidentiality of data. A broad range of well-established algorithms exists, where the different algorithms typically have different properties and come with different computational complexity. On deeply embedded systems, the overhead imposed by cryptographic operations may be significant. We execute the algorithms AES-GCM, ChaCha20-Poly1305, HMAC-SHA256, KMAC, and SipHash on an STM32 embedded microcontroller and benchmark the execution times of the algorithms as a function of the input lengths.
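As a rough illustration of the benchmarking idea, the following sketch times one of the listed algorithms, HMAC-SHA256, as a function of input length using Python's standard library. It runs on a desktop interpreter rather than the STM32 target, so the absolute numbers are not comparable to the paper's measurements; only the roughly linear growth with input length carries over.

```python
# Benchmark HMAC-SHA256 execution time as a function of input length.
import hmac
import hashlib
import os
import timeit

key = os.urandom(32)  # 256-bit key, as typical for HMAC-SHA256

def bench_hmac_sha256(length, repeats=200):
    """Average seconds per HMAC-SHA256 computation over a message
    of `length` bytes, averaged over `repeats` runs."""
    msg = os.urandom(length)
    total = timeit.timeit(
        lambda: hmac.new(key, msg, hashlib.sha256).digest(),
        number=repeats)
    return total / repeats

for n in (16, 256, 4096):
    print(f"{n:5d} bytes: {bench_hmac_sha256(n) * 1e6:8.2f} us")
```

On an embedded target one would replace `timeit` with a hardware cycle counter (e.g. the Cortex-M DWT cycle counter) and the stdlib HMAC with the firmware's crypto library.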
Additive manufacturing with plastics enables the production of lightweight and resilient components with a high degree of design freedom. In the low-cost sector, material extrusion in the form of Fused Layer Modeling (FLM) has so far been the leading method, as it offers simple 3D printers and a variety of inexpensive materials. However, printing times for FLM are very long, and dimensional accuracy and surface finish are rather poor. Recently, new processes from the field of vat polymerization have appeared on the market, such as masked stereolithography (mSLA), which offer a significant improvement in component quality and build speed at equally favorable machine costs.
This paper therefore analyzes the technical and economic capabilities of the two competing additive processes. For this purpose, the achievable dimensional and surface qualities are determined using a test specimen which represents various important geometry elements. In addition, the machine and material costs are determined and compared with each other. Finally, the resulting environmental impact is determined in the form of the CO2 footprint. In order to optimize the strength of the printed components, material properties of the tensile specimens produced additively with mSLA are determined. The use of ABS-like resins will also be investigated to determine optimal processing settings.
The visual-inertial mapping and localization system maplab is analyzed through its implementation and subsequent evaluation. Mapping and localization are based on the detection of environmental features. In addition to creating maps, there is also the option of fusing several maps, thus mapping extensive areas and using them for further data analysis. In this way, various software tools can be used to optimize the existing data sets.
Two sensor components are needed: an inertial measurement unit (IMU) and a monochrome camera, which are combined in a hardware rig and put into operation for the analysis of the visual-inertial system. System calibration is crucial for precision and system functioning and is based on nonlinear dynamic state estimation. This ensures the best possible estimate of the positions of the environmental features and of the map. Maplab is particularly suitable for mapping rooms or small building complexes, as the implementation and the evaluation of the results in different application scenarios show. Special emphasis is laid on the evaluation of larger scenarios, which shows that the system struggles to maintain geometric consistency and thus to provide an accurate map.
Spatially Distributed Wireless Networks (SDWN) are one of the basic technologies for Internet of Things (IoT) and Industrial Internet of Things (IIoT) applications. For many of these applications, SDWNs have strict requirements such as low cost, simple installation and operation, and high flexibility and mobility. Among the different Narrowband Wireless Wide Area Networking (NBWWAN) technologies introduced to address these categories of wireless networking requirements, Narrowband Internet of Things (NB-IoT) is gaining traction due to its attractive system parameters, an energy-saving mode of operation with low data rates and bandwidth, and its applicability in 5G use cases. Since several technologies are available and the underlying use cases come with various requirements, a systematic comparative analysis of competing technologies is essential to choose the right one. It is also important to perform testing during the different phases of the system development life cycle. This paper describes a systematic test environment for the automated testing of radio communication and systematic measurements of the performance of NB-IoT.
Objective: Dickkopf 3 (DKK3) has been identified as a urinary biomarker. Values above 4000 pg/mg creatinine (Cr) have been linked with a higher risk of short-term decline of kidney function (J Am Soc Nephrol 29: 2722–2733). However, as of today, there is little experience with DKK3 as a risk marker in everyday clinical practice. We used algorithm-based data analysis to evaluate the predictive potential of DKK3 in a cohort from a large single center in Germany.
Method: DKK3 was measured in all CKD patients in our center from October 1st, 2018 until December 31st, 2019, together with the calculated GFR (eGFR) and the urinary albumin/creatinine ratio (UACR). Kidney transplant patients were excluded. Until the end of follow-up on December 31st, 2021, repeated measurements were performed for all parameters. Data analysis was performed using MD-Explorer (BioArtProducts, Rostock, Germany) and Python with multiple libraries. Linear regression models were applied in patients for DKK3, eGFR, and UACR. Comparison of the models was performed with a two-sided Kolmogorov-Smirnov test.
Results: 1206 DKK3 measurements were performed in 1103 patients (621 male; age 70 yrs; eGFR 29.41 ml/min/1.73 qm; UACR 800 mg/g). 134 patients died during follow-up. Mean DKK3 was 2905 pg/mg Cr (max. 20000; 75th percentile 3800). 121 patients had DKK3 > 4000. At the end of follow-up, 7% of patients with DKK3 < 4000 (initial eGFR 17.6) versus 39.6% of patients with DKK3 > 4000 (initial eGFR 15.7) underwent dialysis. Compared to eGFR and UACR at baseline, DKK3 > 4000 performed best to predict eGFR loss over the next 12 months.
Conclusion: In this cohort of CKD patients, DKK3 > 4000 at baseline predicted the eGFR slope better than eGFR or UACR at baseline. DKK3 > 4000 reflected a higher risk of progression towards ESRD in patients with similar baseline eGFR levels.
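To illustrate the two-sided Kolmogorov-Smirnov comparison used in the method section, the following self-contained sketch computes the two-sample KS statistic D = sup |F1(x) − F2(x)| between two empirical distributions in pure Python. The eGFR-slope samples are invented for illustration only and are not the study's data.

```python
# Two-sample, two-sided Kolmogorov-Smirnov statistic.

def ks_statistic(sample1, sample2):
    """Largest absolute difference between the two empirical CDFs."""
    s1, s2 = sorted(sample1), sorted(sample2)
    n1, n2 = len(s1), len(s2)
    i = j = 0
    d = 0.0
    while i < n1 and j < n2:
        x = min(s1[i], s2[j])
        # Advance both empirical CDFs past x, then compare them.
        while i < n1 and s1[i] <= x:
            i += 1
        while j < n2 and s2[j] <= x:
            j += 1
        d = max(d, abs(i / n1 - j / n2))
    return d

# Hypothetical eGFR slopes (ml/min/1.73qm per year) for two groups:
low_dkk3  = [-1.0, -0.5, 0.0, 0.2, -0.8]
high_dkk3 = [-6.0, -4.5, -5.2, -3.9, -7.1]
print(ks_statistic(low_dkk3, high_dkk3))  # fully separated samples → 1.0
```

In practice one would use a library routine (e.g. SciPy's `ks_2samp`), which also returns a p-value; the hand-rolled version above only shows where the statistic comes from.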
This work focuses on the dependencies between typical design parameters of surface acoustic wave (SAW) resonators and the nonlinear emitted signals of second and third order. The parameters metallization ratio and pitch are used as examples, but the approach can be extended to other design parameters as well. It is shown that the interaction between the nonlinear current generation and the linear admittance defines the measured nonlinear power signals. It is also discussed that changes in linear properties become more pronounced in the nonlinear responses. Therefore, slight effects on linear parameters will have a significant influence on the observed nonlinearity.
Nonlinear acoustic waves are considered that have displacements localized at the tip of an elastic wedge. The evolution equation governing their propagation is discussed and compared with its analogues pertaining to nonlinear acoustic surface and bulk waves. Solitary wave solutions of the evolution equation have been determined numerically for the cases of two rectangular edges which may be viewed as generated by splitting a half-space, consisting of crystalline silicon, into two quarter-spaces. For these two geometries, the kernel in the nonlinear terms of the evolution equation has been calculated from the second-order and third-order elastic constants of silicon, and weak dispersion due to tip truncation has been considered. Solitary pulse shapes have been computed and collisions of solitary pulses have been simulated for various relative speeds of the two collision partners. Collision scenarios for the two wedge geometries were found to differ considerably. Special attention is paid to the peculiar interaction of two initially identical solitary pulses.
Project management evolves continuously, including in qualitative leaps and cycles. Planning iterations from agile practice and the pandemic-driven digitalization of communication are not the only current developments, and not even the most important ones. This contribution provides an overview that helps not only to understand these developments but also to shape them.
The majority of anterior cruciate ligament (ACL) injuries in team sports are non-contact injuries, with cutting maneuvers identified as high-risk tasks. Young female handball players have been shown to be at greater risk for ACL injuries than males. One risk factor for ACL injuries is the magnitude of the knee abduction moment (KAM). Cutting technique variables on foot placement, overall approach and knee kinematics have been shown to influence the KAM. Since injury risk is believed to increase with increasing task complexity, the purpose of the study was to test the effect of task complexity on technique variables that influence the KAM in female handball players during fake-and-cut tasks.
Towards a Formal Verification of Seamless Cryptographic Rekeying in Real-Time Communication Systems
(2022)
This paper makes two contributions to the verification of communication protocols by transition systems. Firstly, the paper presents a model of a cyclic communication protocol as a synchronized network of transition systems. This protocol enables seamless cryptographic rekeying embedded into cyclic messages. Secondly, we verify the protocol using the model checking verification technique.
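As a toy illustration of the second contribution, the following sketch (not the paper's model) verifies a miniature rekeying protocol by explicit-state model checking: it enumerates all reachable states of a small transition system and checks the safety property that the sender never transmits with a key the receiver does not hold. State encoding, transitions, and the key bound are all assumptions made for this example.

```python
# Explicit-state reachability check of a tiny seamless-rekeying model.
from collections import deque

def successors(state):
    """Transitions of the toy protocol."""
    sender_key, receiver_keys = state
    # Receiver installs the next key while still holding the old one.
    yield (sender_key, receiver_keys | {sender_key + 1})
    # Sender switches to the new key only after the receiver holds it;
    # the old key may then be dropped.
    if sender_key + 1 in receiver_keys:
        yield (sender_key + 1, frozenset({sender_key + 1}))

def check_safety(initial, max_key=5):
    """BFS over reachable states; False iff a state violates the
    property 'sender's key is always installed at the receiver'."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        sender_key, receiver_keys = state
        if sender_key not in receiver_keys:   # safety violation found
            return False
        if sender_key >= max_key:             # bound the infinite key space
            continue
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

print(check_safety((0, frozenset({0}))))  # → True
```

Real model checkers (e.g. UPPAAL or SPIN) replace this hand-written BFS with symbolic or optimized explicit-state search, but the underlying idea of exhausting the reachable state space is the same.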
Today, Additive Manufacturing (AM) is an important part of teaching for the education of future engineers. Therefore, a variety of approaches have been developed in recent years on how to bring the design for additive manufacturing (DfAM) into university teaching. In a detailed literature review, the advantages and disadvantages of the previous approaches are considered and analysed. Based on this, an extended approach is presented in which students analyse and optimize a given product with respect to additive manufacturing. In doing so, the students have to solve challenging tasks in optimization in product development with the help of methodical approaches and practically implement their developed solutions with state-of-the-art additive processes. To work on this task, the students have two different 3D printers at their disposal, which work with different processes and materials. Thus, the students learn to adapt the design to different manufacturing processes and to consider the restrictions of different materials. The assessment of the results from this course is done through feedback and a written survey.
For some years now, additive manufacturing (AM) has offered an alternative to conventional manufacturing processes. The strengths of AM are primarily the rapid implementation of ideas into a usable product and the ability to produce geometrically complex shapes. It has also significantly advanced the lightweight design of products made of plastic. So far, however, the strength of printed polymer components remains very limited.
Recently, new AM processes have become available that allow the embedding of short as well as long fibers in a polymer matrix. Thus, the manufacturing of components with significantly increased strength becomes possible. In this way, both complex geometries and sophisticated applications can be realized. This paper therefore investigates how this new technology can be implemented in product development, focusing on sports equipment. Extensive literature research shows that lightweight design plays a decisive role in sports equipment. In addition, the advantages of AM in terms of individualized products and low quantities can be fully exploited.
An example of this approach is the steering system for a seat sled used by paraplegic athletes in the Olympic discipline of Nordic paraskiing. A particular challenge here is the placement and alignment of the long carbon fibers within the polymer matrix and the verification of the strength by means of Finite-Element-Analysis (FEA). In addition, findings from bionics are used to optimize the lightweight design of the steering system. Using this example, it can be shown that the weight of the steering system can be drastically reduced compared to conventional manufacturing. At the same time, a number of parts can be saved through function integration and thus the manufacturing and assembly effort can be reduced significantly.
This paper presents an extended version of a previously published Bayesian algorithm for the automatic correction of equipment positions on the map with simultaneous localization of the mobile object's trajectory (SLAM) in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires much less preliminary data on possible equipment positions and uses an additional resample-move algorithm to significantly improve the overall performance.
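As background, the resampling step at the core of particle-filter-based SLAM variants such as the resample-move extension mentioned above can be sketched as follows. This is generic systematic resampling, not the paper's extended algorithm, and the particle labels are hypothetical.

```python
# Systematic resampling: draw n particles with probability
# proportional to their weights, using a single random offset.
import random

def systematic_resample(particles, weights, rng=None):
    """Return len(particles) particles drawn proportionally to weights."""
    rng = rng or random.Random()
    n = len(particles)
    total = sum(weights)
    step = total / n
    offset = rng.uniform(0.0, step)   # one random draw for all particles
    out, cum, i = [], weights[0], 0
    for k in range(n):
        u = offset + k * step
        while u > cum:                # walk the cumulative weights
            i += 1
            cum += weights[i]
        out.append(particles[i])
    return out

# Particles here stand in for hypothesized graph positions; the heavy
# weight on 'C' means most resampled particles collapse onto it.
print(systematic_resample(['A', 'B', 'C', 'D'],
                          [0.05, 0.05, 0.85, 0.05],
                          rng=random.Random(42)))
```

Compared with naive multinomial resampling, the single-offset scheme has lower variance, which matters when the filter runs with few particles, as is typical in constrained environments.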
Due to its potential for improving the efficiency of energy supply, smart energy metering (SEM) has become an area of interest with the surge of the Internet of Things (IoT). SEM entails remote monitoring and control of the sensors and actuators associated with the energy supply system. This provides a flexible platform to conceive and implement new data-driven Demand Side Management (DSM) mechanisms. IoT enablement allows the data to be gathered and analyzed at the requisite granularity. In addition to the efficient use of energy resources and the provisioning of power, developing countries face the additional challenge of a temporal mismatch between generation capacity and load. This leads to the widespread deployment of inefficient and expensive Uninterruptible Power Supply (UPS) solutions for limited power provisioning during the resulting blackouts. Our proposed "Soft-UPS" allows dynamic matching of load and generation through managed curtailment. This eliminates inefficiencies in the energy and power value chain and allows a data-driven approach to solving a widespread problem in developing countries, simultaneously reducing both the upfront and the running costs of conventional UPS and storage. A scalable and modular platform is proposed and implemented in this paper. The architecture employs the "WiMODino" using LoRaWAN with a "Lite Gateway" and an SQLite repository for data storage. Role-based access to the system through an Android application has also been demonstrated for monitoring and control.
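The SQLite repository mentioned in the architecture could, in its simplest form, look like the following sketch; the schema, table, and field names are assumptions made for illustration and are not taken from the implemented platform.

```python
# Minimal SQLite store for meter readings arriving at the gateway.
import sqlite3
import time

def init_db(path=":memory:"):
    """Open the database and ensure the readings table exists."""
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS readings (
                     device_id TEXT NOT NULL,
                     ts        REAL NOT NULL,
                     power_w   REAL NOT NULL)""")
    return con

def store_reading(con, device_id, power_w, ts=None):
    """Insert one reading; timestamp defaults to 'now'."""
    con.execute("INSERT INTO readings VALUES (?, ?, ?)",
                (device_id, ts if ts is not None else time.time(), power_w))
    con.commit()

con = init_db()
store_reading(con, "wimodino-01", 230.5)   # hypothetical device id
store_reading(con, "wimodino-01", 245.0)
rows = con.execute("SELECT COUNT(*), AVG(power_w) FROM readings").fetchone()
print(rows)  # → (2, 237.75)
```

A real gateway would add indexes on `device_id`/`ts` and batch commits, but even this minimal schema supports the kind of per-device aggregation a DSM mechanism needs.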
Electrode modelling and simulation of diagnostic and pulmonary vein isolation in atrial fibrillation
(2022)
DE\GLOBALIZE
(2022)
The artistic research cycle DE\GLOBALIZE is a media ecological search movement for the terrestrial. After examining matters of fact in India (2014-18), matters of concern in Egypt (2016-2019) and matters of care in the Upper Rhine (2018-22), the focus turns toward matters of violence in the Congo (2022). From matter to mater, mother-earth, the garden to exploitation. From science, water and climate to migration, oppression and extermination.
The long-term research is accessible through an interactive web documentation. The platform serves as a continuous media-archaeological archive for a speculative ethnography. The relational structure of the videographic essay enables the forensic processing of single documents in the sense of actor-network theory.
The subject of the presentation at IFM is a field trip to the Congo planned for March 2022, which will focus on the ambivalence of violence and care in collaboration with local artists. The field trip is based on the postcolonial reflection luderitzcargo by the author from 1996, in which a freight container was transformed into a translocal cinema in Namibia.
Through the journey to the Congo, a group of media artists, a psychotherapist, a theater dramaturg, a filmmaker, and a philosopher intend to explore political, technological, and psycho-geographic borders. Through artistic interventions with locals, we want to interfere with relational string figures as part of the new Earth Politics. These interventions focus on the displaced consumption of resources that are hard-fought over and guarantee prosperity in the global north. The so-called ghost acreages are repressed and justified as part of a civilizational mission. With this trip, we want to confront our self-lies with those of our hosts. We want to confront ourselves with the foreign, the dark, and the displaced ghosts within ourselves. In the presentation at the #IFM2022 conference, the platform DE\GLOBALIZE itself will be problematized as an example of epistemic violence for the ethnographic memory of (Western) knowledge.
We are not the missionaries but the perplexed travellers. In our search movement, we are dealing with psychoanalysis, video, performance, and trance. As disoriented white men, we attempt the reversal of Black Skin, White Masks by Frantz Fanon without blackfacing. We will care not only about the sensitivity of our own skin but also about that of our g/hosts and of mother earth.
VR-based implementation of interactive laboratory experiments in optics and photonics education
(2022)
Within the framework of a developed blended learning concept, a lot of experience has already been gained with a mixture of theoretical lectures and hands-on activities, combined with the advantages of modern digital media. Here, visualizations using videos, animations and augmented reality have proven to be effective tools to convey learning content in a sustainable way. In the next step, ideas and concepts were developed to implement hands-on laboratory experiments in a virtual environment. The main focus is on the realization of virtual experiments and environments that give the students a deep insight into selected subfields of optics and photonics.
To deal with frequent power outages in developing countries, people turn to solutions like the uninterruptible power supply (UPS), which stores electric energy during normal operating hours and uses it to meet energy needs during rolling blackouts. Locally produced UPSs of poor power quality are widely available in the marketplace, and they have a negative impact on grid power quality. The charging and discharging of the batteries in these UPSs generate a significant amount of power loss in weak grid environments. The Smart-UPS is our proposed smart energy metering (SEM) solution for low-voltage consumers, provided by the distribution company. It does not require batteries; therefore, there is no power loss or harmonic distortion due to charging and discharging. Through load flow and harmonic analysis of both a traditional UPS and a Smart-UPS system in ETAP, this paper examines their impact on the harmonics and stability of the distribution grid. The simulation results demonstrate that the Smart-UPS can help fix power quality issues in a developing country like Pakistan by providing cleaner energy than battery-operated traditional UPSs.
Design engineers in mechanical engineering frequently face the problem of combining highly preloaded bolted joints with continuous corrosion protection. The relevant standards and guidelines currently offer no sufficient assistance on this issue. This contribution presents embedding losses of organic coating systems typical for mechanical engineering, determined on test sheets under variation of load level and ambient temperature, and compares them with preload losses measured in component tests.
Additive manufacturing offers completely new production technologies thanks to the layered structure and the simultaneous processing of several materials. In order to exploit the potential of this new technology, it is already necessary in product development to consider the components no longer as monolithic blocks, but as a structure of many layers and individual elements (voxels). Therefore, this paper will examine the current state of voxel-based CAD systems and the subsequent 3D multi-material printing of the designed components. Different voxel-based CAD systems are used and analyzed for component design and a sample component is additively manufactured. The results show that simple components can be designed using voxel-based CAD systems. With the application of 3D multi-material printing, different materials and thus functions can be assigned to the designed voxel-based CAD-model.
Synthesizing voice with the help of machine learning techniques has made rapid progress over the last years. Given the current increase in the use of conferencing tools for online teaching, we question just how easy (in terms of required data, hardware, and skill set) it would be to create a convincing voice fake. We analyse how much training data a participant (e.g. a student) would actually need to fake another participant's voice (e.g. a professor's). We provide an analysis of the existing state of the art in creating voice deep fakes and evaluate the identified as well as our own optimization techniques on two different voice data sets. A user study with more than 100 participants shows how difficult it is to distinguish real from fake voices (on average, only 37% can recognize a professor's fake voice). From a longer-term societal perspective, such voice deep fakes may lead to disbelief by default.
Due to the Covid-19 pandemic, the RoboCup WorldCup 2021 was held completely remotely. For this competition, the Webots simulator (https://cyberbotics.com/) was used, so all teams had to transfer their robots to the simulation. This paper describes our experiences during this process, as well as a genetic learning approach to improving our walk engine, allowing more stable and faster movement in simulation. We used a Docker setup to scale the learning easily. The resulting movement was one of the outstanding features that finally led to the championship title.