Seismic data often has missing traces due to technical acquisition or economic constraints. A complete dataset is crucial for several processing and inversion techniques. Deep learning algorithms based on convolutional neural networks (CNNs) offer alternative solutions that overcome the limitations of traditional interpolation methods, e.g. data regularity and linearity assumptions. There are two different paradigms of CNN methods for seismic interpolation. The first, so-called deep prior interpolation (DPI), trains a CNN to map random noise to a complete seismic image using only the decimated image itself. The second, referred to as the standard deep learning method, trains a CNN to map a decimated seismic image into a complete one using a dataset of complete and artificially decimated images. Within this research, we systematically compare the performance of both methods for different quantities of regular and irregular missing traces using four datasets. We evaluate the results of both methods using five well-known metrics. We found that the DPI method performs better than the standard method if the percentage of missing traces is low (10%), whereas the standard method performs better if the level of decimation is high (50%).
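As a minimal illustrative sketch (not the authors' code), the regular/irregular trace decimation and one common evaluation metric (signal-to-noise ratio in dB) can be set up as follows; the function names and the exact decimation scheme are assumptions for illustration:

```python
import math
import random

def decimate(traces, ratio, regular=True, seed=0):
    """Return a copy of `traces` (a list of per-trace sample lists) with a
    fraction `ratio` of the traces zeroed out, plus the set of missing indices.
    Regular decimation removes every k-th trace; irregular removes random ones."""
    n = len(traces)
    if regular:
        k = max(1, round(1 / ratio))          # e.g. ratio=0.5 -> every 2nd trace
        missing = set(range(0, n, k))
    else:
        rng = random.Random(seed)
        missing = set(rng.sample(range(n), int(round(ratio * n))))
    decimated = [[0.0] * len(t) if i in missing else list(t)
                 for i, t in enumerate(traces)]
    return decimated, missing

def snr_db(reference, estimate):
    """SNR in dB between the complete image and a reconstruction."""
    sig = sum(x * x for t in reference for x in t)
    err = sum((x - y) ** 2 for t, e in zip(reference, estimate)
              for x, y in zip(t, e))
    return float("inf") if err == 0 else 10 * math.log10(sig / err)
```

A reconstruction method (DPI or the standard CNN) would then be scored by `snr_db(complete, reconstructed)` at each decimation ratio.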
In this paper, the influence of the material hardening behavior on plasticity-induced fatigue crack closure is investigated for strain-controlled loading and fully plastic, large-scale yielding conditions by means of the finite element method. The strain amplitude and the strain ratio are varied for given Ramberg–Osgood material properties representing materials with different hardening behavior. The results show a pronounced influence of the hardening behavior on crack closure, while no significant effect is found for the considered strain amplitudes and strain ratios. The effect of the hardening behavior on the crack opening stress cannot be described by existing crack opening stress equations.
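For reference, one common form of the Ramberg–Osgood law splits the total strain into an elastic and a power-law plastic part; the sketch below (illustrative only, with a hypothetical strength coefficient K and hardening exponent n) shows how a stress maps to a strain under such a parameterization:

```python
def ramberg_osgood_strain(stress, E, K, n):
    """Total strain under a Ramberg-Osgood law of the form
        eps = sigma/E + (|sigma|/K)**(1/n) * sign(sigma),
    where E is Young's modulus, K the strength coefficient and
    n the strain hardening exponent (illustrative parameterization)."""
    sign = 1.0 if stress >= 0 else -1.0
    elastic = stress / E
    plastic = (abs(stress) / K) ** (1.0 / n) * sign
    return elastic + plastic
```

A smaller hardening exponent concentrates the plastic strain at high stresses, which is how different hardening behaviors are represented when varying the material properties.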
The desire to connect more and more devices and to make them more intelligent and more reliable is driving the need for the Internet of Things more than ever. Such IoT edge systems require sound security measures against cyber-attacks, since they are interconnected, spatially distributed, and operational for extended periods of time. One of the most important security requirements in many industrial IoT applications is the authentication of the devices. In this paper, we present a mutual authentication protocol based on Physical Unclonable Functions (PUFs), where challenge-response pairs are used for both device and server authentication. Moreover, a session key can be derived by the protocol in order to secure the communication channel. We show that our protocol is secure against machine learning, replay, man-in-the-middle, cloning, and physical attacks. Moreover, it incurs a smaller computational, communication, storage, and hardware overhead compared to similar works.
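The general shape of challenge-response-based mutual authentication can be sketched as follows; this is not the paper's protocol, but a simplified stand-in where a keyed hash simulates the PUF, the server enrolls CRPs beforehand, and both sides derive the session key from the shared response:

```python
import hashlib
import secrets

class SimulatedPUF:
    """Software stand-in for a hardware PUF: a fixed secret mapping from
    challenges to responses (illustrative only; a real PUF is physical)."""
    def __init__(self, seed: bytes):
        self._seed = seed
    def response(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._seed + challenge).digest()

def enroll(puf, n=4):
    """Server-side enrollment phase: collect challenge-response pairs."""
    return {(c := secrets.token_bytes(8)): puf.response(c) for _ in range(n)}

def mutual_authenticate(server_crps, puf):
    """One sketched protocol run: the server verifies the device's PUF
    response; the device verifies that the server already knew that
    response; both derive the same session key from it."""
    challenge, expected = next(iter(server_crps.items()))
    device_resp = puf.response(challenge)            # device -> server proof
    server_ok = device_resp == expected
    proof = hashlib.sha256(b"server" + expected).digest()   # server -> device proof
    device_ok = proof == hashlib.sha256(b"server" + device_resp).digest()
    session_key = hashlib.sha256(b"key" + device_resp).digest()
    return server_ok and device_ok, session_key
```

A cloned device with a different underlying PUF produces a wrong response, so both checks fail and no shared key is established.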
This paper presents a method for supporting the application of Additive Tooling (AT)-based validation environments in integrated product development. Based on a case study, relevant process steps, activities and possible barriers in the realisation of an injection-moulded product are identified and analysed. The aim of the method is to support the target-oriented application of Additive Tooling to obtain physical prototypes at an early stage and to shorten validation cycles.
Linear acceleration is a key performance determinant and major training component of many sports. Although extensive research about lower limb kinetics and kinematics is available, consistent definitions of distinctive key body positions, the underlying mechanisms and their related movement strategies are lacking. The aim of this ‘Method and Theoretical Perspective’ article is to introduce a conceptual framework which classifies the sagittal plane ‘shin roll’ motion during accelerated sprinting. By emphasising the importance of the shin segment’s orientation in space, four distinctive key positions are presented (‘shin block’, ‘touchdown’, ‘heel lock’ and ‘propulsion pose’), which are linked by a progressive ‘shin roll’ motion during swing-stance transition. The shin’s downward tilt is driven by three different movement strategies (‘shin alignment’, ‘horizontal ankle rocker’ and ‘shin drop’). The tilt’s optimal amount and timing will contribute to a mechanically efficient acceleration via timely staggered proximal-to-distal power output. Empirical data obtained from athletes of different performance levels and sporting backgrounds are required to verify the feasibility of this concept. The framework presented here should facilitate future biomechanical analyses and may enable coaches and practitioners to develop specific training programs and feedback strategies to provide athletes with a more efficient acceleration technique.
Voice user interfaces (VUIs) offer an intuitive, fast and convenient way for humans to interact with machines and computers. Yet, whether they will be truly successful and find widespread uptake in the near future depends on the user experience (UX) they offer. With this survey-based study (n = 108), we aim to identify the major annoyances German voice assistant users face in voice-driven human-computer interactions. The results of our questionnaire show that irritations fall into six categories: privacy issues, unwanted activation, comprehensibility, response quality, conversational design and voice characteristics. Our findings can help identify key areas of work for optimizing voice user experience in order to achieve greater adoption of the technology. In addition, they can provide valuable information for the further development and standardization of voice user experience (VUX) research.
Featherweight Generic Go (FGG) is a minimal core calculus modeling the essential features of the programming language Go. It includes support for overloaded methods, interface types, structural subtyping and generics. The most straightforward semantic description of the dynamic behavior of FGG programs is to resolve method calls based on runtime type information of the receiver.
This article shows a different approach by defining a type-directed translation from FGG to an untyped lambda-calculus. The translation of an FGG program provides evidence for the availability of methods as additional dictionary parameters, similar to the dictionary-passing approach known from Haskell type classes. Then, method calls can be resolved by a simple lookup of the method definition in the dictionary.
Every program in the image of the translation has the same dynamic semantics as its source FGG program. The proof of this result is based on a syntactic, step-indexed logical relation. The step-index ensures a well-founded definition of the relation in the presence of recursive interface types and recursive methods.
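The dictionary-passing idea described above can be sketched in a few lines; this is an illustrative analogue (not the paper's formal translation), where dictionaries stand in for interface evidence of a hypothetical `Shape` interface with a single `Area` method:

```python
# Source-level view (FGG-like): a method call x.Area() is resolved from the
# runtime type of the receiver. Translated view: each value is paired with a
# dictionary of its method implementations, and the call site becomes a
# plain dictionary lookup -- no runtime type inspection needed.

def area_square(s):
    return s["side"] * s["side"]

def area_circle(c):
    return 3.14159 * c["r"] * c["r"]

# Dictionaries play the role of evidence for the (hypothetical) Shape interface.
square_dict = {"Area": area_square}
circle_dict = {"Area": area_circle}

def total_area(shapes):
    """Translated code: `shapes` is a list of (value, dictionary) pairs; the
    method is looked up in the dictionary, as in Haskell type classes."""
    return sum(d["Area"](v) for v, d in shapes)
```

The translation's correctness claim is that this lookup-based program behaves the same as the original dynamic-dispatch program.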
In the BioMeth project, membrane gassing was pursued as an approach to increase the availability of dissolved hydrogen for biological methanation, with the aim of establishing a power-to-gas concept for energy storage. The overarching goal was to develop a scalable process concept suitable for utilizing CO2-containing gas streams. The plan was to demonstrate the process at the biogas plant of the Biokäserei Monte-Ziego cheese dairy in Teningen and to extend the existing concept of parallel wastewater treatment and energy generation there. The original structure of the work package plan is shown in the following figure.
Subspace clustering aims to find all clusters in all subspaces of a high-dimensional data space. We present a massively data-parallel approach that can be run on graphics processing units. It extends a previous density-based method that scales well with the number of dimensions. Its main computational bottleneck consists of (sequentially) generating a large number of minimal cluster candidates in each dimension and using hash collisions in order to find matches of such candidates across multiple dimensions. Our approach parallelizes this process by removing previous interdependencies between consecutive steps in the sequential generation process and by applying a very efficient parallel hashing scheme optimized for GPUs. This massive parallelization gives up to 70x speedup for the bottleneck computation when it is replaced by our approach and run on current GPU hardware. We note that depending on data size and choice of parameters, the parallelized part of the algorithm can take different percentages of the overall runtime of the clustering process, and thus, the overall clustering speedup may vary significantly between different cases. However, even in our "worst-case" test, a small dataset where the computation makes up only a small fraction of the overall clustering time, our parallel approach still yields a speedup of more than 3x for the complete run of the clustering process. Our method could also be combined with parallelization of other parts of the clustering algorithm, with an even higher potential gain in processing speed.
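The hash-collision matching at the heart of the bottleneck can be sketched sequentially as follows (the GPU version parallelizes this bucketing; the data layout here is a simplifying assumption):

```python
from collections import defaultdict

def match_candidates(candidates_per_dim):
    """Group minimal cluster candidates by hash so that candidates containing
    the same point set in different dimensions collide in the same bucket.
    `candidates_per_dim` maps a dimension index to a list of frozensets of
    point ids. Returns the buckets supported in more than one dimension,
    which are the subspace-cluster seeds."""
    buckets = defaultdict(list)
    for dim, candidates in candidates_per_dim.items():
        for cand in candidates:
            buckets[hash(cand)].append((dim, cand))
    return [entries for entries in buckets.values() if len(entries) > 1]
```

Because every step of this loop is independent, the same bucketing maps naturally onto a parallel hashing scheme.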
Deep learning models are intrinsically sensitive to distribution shifts in the input data. In particular, small, barely perceivable perturbations to the input data can force models to make wrong predictions with high confidence. A common defense mechanism is regularization through adversarial training, which injects worst-case perturbations back into training to strengthen the decision boundaries and to reduce overfitting. In this context, we perform an investigation of 3×3 convolution filters that form in adversarially-trained models. Filters are extracted from 71 public models of the ℓ∞-RobustBench CIFAR-10/100 and ImageNet1k leaderboards and compared to filters extracted from models built on the same architectures but trained without robust regularization. We observe that adversarially-robust models appear to form more diverse, less sparse, and more orthogonal convolution filters than their normal counterparts. The largest differences between robust and normal models are found in the deepest layers and the very first convolution layer, which consistently and predominantly forms filters that can partially eliminate perturbations, irrespective of the architecture.
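Sparsity and orthogonality of a set of 3×3 filters can be quantified with simple statistics; the concrete definitions below (near-zero fraction, mean absolute pairwise cosine similarity) are illustrative assumptions, not necessarily the paper's exact measures:

```python
import math

def sparsity(filters, eps=1e-3):
    """Fraction of near-zero weights across a list of 3x3 filters,
    each given as a flat list of 9 floats."""
    weights = [w for f in filters for w in f]
    return sum(abs(w) < eps for w in weights) / len(weights)

def mean_abs_cosine(filters):
    """Average pairwise |cosine similarity| between filters; lower values
    indicate more orthogonal, i.e. more diverse, filter sets."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    n = len(filters)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(abs(cos(filters[i], filters[j])) for i, j in pairs) / len(pairs)
```

Under these measures, the reported observation corresponds to robust models having lower sparsity and lower mean absolute cosine similarity than their normally trained counterparts.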
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating a low model robustness. To reveal model weaknesses, adversarial attacks are specifically optimized to generate small, barely perceivable image perturbations that flip the model prediction. Robustness against attacks can be gained by using adversarial examples during training, which in most cases reduces the measurable model attackability. Unfortunately, this technique can lead to robust overfitting, which results in non-robust models. In this paper, we analyze adversarially trained, robust models in the context of a specific network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from downsampling artifacts, known as aliasing, than baseline models. In the case of robust overfitting, we observe a strong increase in aliasing and propose a novel early stopping approach based on the measurement of aliasing.
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating a low model robustness. Adversarial attacks are thereby specifically optimized to reveal model weaknesses by generating small, barely perceivable image perturbations that flip the model prediction. Robustness against attacks can be gained, for example, by using adversarial examples during training, which effectively reduces the measurable model attackability. In contrast, research on analyzing the source of a model's vulnerability is scarce. In this paper, we analyze adversarially trained, robust models in the context of a particularly suspicious network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from aliasing than baseline models.
Running footwear is continuously being modified and improved; however, running-related overuse injury rates remain high. Nevertheless, novel manufacturing processes enable the production of individualized running shoes that can fit the individual needs of runners, with the potential to reduce injury risk. For this reason, it is essential to investigate functional groups of runners, a collective of runners who respond similarly to a footwear intervention. Therefore, the objective of this study was to develop a framework to identify functional groups based on their individual footwear response regarding injury-specific running-related risk factors for Achilles tendinopathy, tibial stress fractures, medial tibial stress syndrome, and patellofemoral pain syndrome. In this work, we quantified the footwear response patterns of 73 female and male participants when running in three different footwear conditions using unsupervised learning (k-means clustering). For each functional group, we identified the footwear conditions minimizing the injury-specific risk factors. We described differences between the functional groups regarding their running style, anthropometry, footwear perception, and demographics. The results implied that for most functional groups a single footwear condition tended to reduce most biomechanical risk factors for a specific overuse injury. Functional groups often differed in their hip and pelvis kinematics as well as their subjective rating of the footwear conditions. The footwear intervention only partially affected biomechanical risk factors attributed to more proximal joints. Due to its adaptive nature, the framework could be applied to other footwear interventions or performance-related biomechanical variables.
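The clustering step can be illustrated with a minimal k-means implementation; the feature vectors (per-condition changes in a risk factor per runner) and the two-cluster setup below are assumptions for illustration:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal Lloyd's k-means for grouping runners by their
    footwear-response vectors (tuples of equal length)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to the nearest center (squared Euclidean)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # recompute centers as cluster means (keep old center if empty)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                   else centers[i] for i, cl in enumerate(clusters)]
    return centers, clusters
```

Each resulting cluster corresponds to one functional group, which can then be characterized by the footwear condition that minimizes its risk factors.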
Solar energy plays a central role in the energy transition. Clouds generate locally large fluctuations in the generation output of photovoltaic systems, which is a major problem for energy systems such as microgrids, among others. For an optimal design of a power system, this work analyzed the variability using a spatially distributed sensor network at Stuttgart Airport. It has been shown that the spatial distribution partially reduces the variability of solar radiation. A tool was also developed to estimate the output power of photovoltaic systems using irradiation time series and assumptions about the photovoltaic sites. For days with high fluctuations of the estimated photovoltaic power, different energy system scenarios were investigated. It was found that the approach yields a more realistic representation of aggregated PV power by taking spatial smoothing into account, and that the resulting PV power generation profiles provide a good basis for energy system design considerations such as battery sizing.
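The spatial smoothing effect can be demonstrated with a toy calculation (illustrative only; the ramp-based variability measure is an assumption): averaging irradiance series from distributed sensors dampens step-to-step fluctuations that individual sensors show.

```python
import statistics

def variability(series):
    """Standard deviation of the ramps (step-to-step changes) of a time
    series -- a simple proxy for short-term fluctuation."""
    ramps = [b - a for a, b in zip(series, series[1:])]
    return statistics.pstdev(ramps)

def aggregate(sensor_series):
    """Spatially averaged irradiance across a distributed sensor network
    (one inner list per sensor, aligned time steps)."""
    return [sum(vals) / len(vals) for vals in zip(*sensor_series)]
```

When cloud-induced ramps at different sensors are not perfectly correlated, the aggregated profile fluctuates less than the individual ones, which is the effect exploited for battery sizing.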
We consider local groups of agents that exchange time-series data values and compute an approximation of the mean value over all agents. An agent, represented by a node, knows all local neighbor nodes in the same group; it also has the contact information of nodes in other groups. The nodes interact with each other in synchronous rounds to exchange updated time-series data values using the random call communication model. The amount of data exchanged between agent-based sensors in the local group network affects the accuracy of the aggregation results. At each time step, an agent-based sensor can update its input data value and send the updated value to the group head node, which forwards it to all group members of the same group. Grouping nodes in peer-to-peer networks shows an improvement in mean squared error (MSE).
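A minimal sketch of random-call averaging (a simplified, ungrouped variant, not the paper's exact protocol): in each synchronous round every node calls one random peer and both adopt the pair average, so all values converge to the global mean while the sum is preserved.

```python
import random

def gossip_mean(values, rounds=50, seed=0):
    """Push-pull random-call averaging: per round, each node contacts one
    random peer and both replace their values with the pair average."""
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)
            avg = (vals[i] + vals[j]) / 2
            vals[i] = vals[j] = avg
    return vals

def mse(vals, target):
    """Mean squared error of the nodes' estimates against the true mean."""
    return sum((v - target) ** 2 for v in vals) / len(vals)
```

Grouping with head nodes, as described above, reduces the number of calls needed for the same MSE by letting the head fan out updates within a group.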
Everyone talks about digitalization these days, and the claim that companies are under pressure to digitalize has become a commonplace. But what does that mean in concrete terms? Which tasks do managers face, and what approaches exist to solve them? To answer these questions, the book first clarifies the terms "digitalization" and "management". On this conceptual basis, important aspects of managing digitalization are then analyzed. Experts from academia and industry show how customer value can be optimized, how machine learning supports decision-making, and how virtual reality can be put to practical use in companies. New developments in regulation and reporting as well as possibilities for IT-supported economic development are also presented. Managers thus receive a wealth of valuable suggestions for making their companies even more successful in the long term.
As a university, it is more and more difficult to reach all target groups equally. Common problems such as information overload, numerous institutions with the same focus, or multi-channel communication make it hard to gain the attention of the target group. This paper is four-fold: we present an overview of the state of the art and the importance of the study (I), based on which we highlight our approach to user experience analysis. First, we identified irritations in the course of an expert evaluation (II) and verified them in a test including the target groups (III). Finally, based on the results, we were able to provide recommendations for action to improve the UX and to be used in the conception of an intranet (IV).
Bach, Gas, Strom und Wasser
(2022)
In this paper, the Bauschinger effect and latent hardening of single crystals are assessed in finite element calculations using a single crystal plasticity model with kinematic hardening. To this end, results of cyclic micro-bending experiments on single crystal Alloy 718 in different crystal orientations (single slip and multi slip) with respect to the loading direction are used to determine the slip system related material properties of the single crystal plasticity model. Two kinematic hardening laws are considered: a kinematic hardening law describing latent hardening and a kinematic hardening law without latent hardening. For the determination of material properties for both hardening laws, a gradient-based optimization method is used. The results show that the different strength levels observed for micro-bending tests on different crystal orientations can only be described well with latent kinematic hardening, whereas the pronounced Bauschinger effect is described well by both kinematic hardening laws. It is concluded that cyclic micro-bending experiments on single crystals using different crystal orientations provide an appropriate database for the determination of the slip system related material properties of the single crystal plasticity model with latent kinematic hardening.
Bildnisverwertungsklauseln
(2022)
Biodegradable metals have entered the implant market in recent years, but still do not show fully satisfactory degradation behaviour and mechanical properties. In contrast, it has been shown that pure molybdenum has an excellent combination of the required properties in this respect. We report on PM-based screen printing of thin-walled molybdenum tubes as a processing step for medical stent manufacture. We also present data on the in vivo degradation and biocompatibility of molybdenum. The degradation of molybdenum wires implanted in the aorta of rats was evaluated by SEM and EDX. Biocompatibility was assessed by histological investigation of organs and analysis of molybdenum levels in tissue extracts and body fluids. Degradation rates of up to 13.5 μm/y were observed after 12 months. No histological changes or elevated molybdenum levels in organ tissues were observed. In summary, the results further underline that molybdenum is a highly promising biodegradable metallic material.
The thermal efficiencies of power plants for electricity generation are relatively low. For example, modern coal-fired power plants today reach up to about 45%, gas turbines at most 40%, and diesel and gas engines up to about 50%. Combined-cycle power plants, i.e. gas and steam turbine processes, can achieve over 60% thermal efficiency in converting the supplied heat into mechanical or electrical energy. A similarly high value is expected from fuel cells in the future. The portion of the supplied heat that is not converted into work is released as waste heat and escapes unused into the environment. With appropriate installations, part of this waste heat can be used in all power plant processes for water heating or for generating steam for industrial purposes. For heating and process heat, a waste heat temperature of 60 to 80 °C is sufficient, whereas the generation of industrial steam requires considerably higher temperatures.
The new realities of digital-economy business models place the availability and use of large volumes of data at the center of entrepreneurial activity. Risk management, which already makes intensive use of stochastic methods, should take part in this development. This article is concerned with the appropriate framing and classification of analytics projects.
This study aimed to compare a simplified calculation of the knee abduction moment with the traditional inverse dynamics calculation when athletes perform fake-cut maneuvers of different complexities. In the simplified calculation, we multiply the force vector by its lever arm to the knee, projected onto the local coordinate system of the proximal thigh, hence neglecting the inertial contributions of the distal segments. Using Spearman's rank correlation coefficient, we found very strong ranking consistency between the simplified method and the traditional calculation. Independent of the task, the simplified method resulted in moments about 7% higher than inverse dynamics, because it neglects the counteracting moment generated by the segments' linear accelerations. This alternative to the complex calculations of inverse dynamics can be used to investigate the contributions of the GRF magnitude and its lever arm to the knee moment.
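The core of the simplified calculation is a single cross product; the sketch below (illustrative, with hypothetical coordinates and without the thigh-frame projection) computes the moment of the ground reaction force about the knee joint centre:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def simplified_knee_moment(grf, cop, knee):
    """M = r x F, with r the lever arm from the knee joint centre to the
    centre of pressure and F the ground reaction force. The abduction
    component would be this vector projected onto the proximal thigh's
    local axis; here the full moment vector is returned."""
    r = tuple(c - k for c, k in zip(cop, knee))
    return cross(r, grf)
```

Compared to full inverse dynamics, this drops the inertial terms of the shank and foot, which is what produces the roughly 7% overestimation reported above.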
Research is often conducted to investigate footwear mechanical properties and their effects on running biomechanics, but little is known about their influence on runner satisfaction, or how well the shoe is perceived. A tool to predict runner satisfaction in a shoe from its mechanical properties would be advantageous for footwear companies. Data in this study were from a database (n = 615 subject-shoe pairings) of satisfaction ratings (gathered after participants ran on a treadmill) and mechanical testing data for 87 unique subjects across 61 unique shoes. Random forest and elastic net logistic regression models were built to test whether footwear mechanical properties and subject characteristics could predict runner satisfaction in three ways: degree-of-satisfaction on a 7-point Likert scale, overall satisfaction on a 3-point Likert scale, and willingness-to-purchase the shoe (yes/no response). Data were divided into training and validation sets, using an 80–20 split, to build the models and test their accuracy, respectively. Model accuracies were compared against the no-information rate (i.e. the proportion of data belonging to the largest class). The models were not able to predict degree-of-satisfaction or overall satisfaction from footwear mechanical properties but could predict runners' willingness to purchase with 68–75% accuracy. Midsole Gmax at the heel and forefoot appeared in the top five of variable importance rankings across both willingness-to-purchase models, suggesting its role as a major factor in purchase decisions. The negative regression coefficient for both heel and forefoot Gmax indicated that softer midsoles increase the likelihood of a shoe purchase. Future models to predict satisfaction may improve accuracy with the addition of more subject-specific parameters, such as running goals or foot proportions.
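Two of the evaluation ingredients mentioned above, the 80–20 split and the no-information rate baseline, can be sketched as follows (illustrative helper names, not the study's code):

```python
import random

def train_val_split(rows, val_frac=0.2, seed=0):
    """Shuffle and split the data 80-20 into training and validation sets."""
    rng = random.Random(seed)
    idx = list(range(len(rows)))
    rng.shuffle(idx)
    n_val = int(round(val_frac * len(rows)))
    val = [rows[i] for i in idx[:n_val]]
    train = [rows[i] for i in idx[n_val:]]
    return train, val

def no_information_rate(labels):
    """Accuracy of always predicting the most frequent class -- the
    baseline a useful classifier has to beat."""
    return max(labels.count(c) for c in set(labels)) / len(labels)
```

A model's validation accuracy is only meaningful relative to this baseline; e.g. 68–75% accuracy matters because it exceeds the no-information rate of the willingness-to-purchase labels.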
This book provides a well-founded overview of change and corporate venture capital strategies in the media sector. Many media companies face the challenge of developing their (core) business further and pursuing a clearer orientation toward customer value while retaining their own creative and journalistic mission. Beyond that, new business fields have to be built up and the acquisition of companies or investments in start-ups driven forward. The balancing act this requires between strategic and financial goals, as well as the operational implementation, poses a considerable challenge.
In their contributions, the industry experts analyze the diverse potentials as well as concrete measures and best cases. Both media insiders and media-independent experts have their say, examining the change strategies, measures and case studies and pointing out new design options.
A circuit arrangement of a motor vehicle includes a high-voltage battery for storing electrical energy, an electric machine for driving the motor vehicle, a converter via which high-voltage direct current voltage provided by the high-voltage battery is convertible into high-voltage alternating current voltage for operating the electric machine, and a charging connection for providing electrical energy for charging the high-voltage battery. The converter is a three-stage converter having a first switch unit which is assigned to a first phase of the electric machine. The first switch unit has two switch groups connected in series which each have two insulated-gate bipolar transistors (IGBTs) connected in series, where a connection is disposed between the IGBTs of one of the two switch groups, which connection is electrically connected directly to a line of the charging connection.
Currently, many theoretical as well as practically relevant questions towards the transferability and robustness of Convolutional Neural Networks (CNNs) remain unsolved. While ongoing research efforts are engaging these problems from various angles, in most computer vision related cases these approaches can be generalized to investigations of the effects of distribution shifts in image data. In this context, we propose to study the shifts in the learned weights of trained CNN models. Here we focus on the properties of the distributions of dominantly used 3×3 convolution filter kernels. We collected and publicly provide a dataset with over 1.4 billion filters from hundreds of trained CNNs, using a wide range of datasets, architectures, and vision tasks. In a first use case of the proposed dataset, we can show highly relevant properties of many publicly available pre-trained models for practical applications: I) We analyze distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like visual category of the dataset, task, architecture, or layer depth. Based on these results, we conclude that model pre-training can succeed on arbitrary datasets if they meet size and variance conditions. II) We show that many pre-trained models contain degenerated filters which make them less robust and less suitable for fine-tuning on target applications. Data & Project website: https://github.com/paulgavrikov/cnn-filter-db.
Additive manufacturing with plastics enables the production of lightweight and resilient components with a high degree of design freedom. In the low-cost sector, Material Extrusion in the form of Fused Layer Modeling (FLM) has so far been the leading method, as it offers simple 3D printers and a variety of inexpensive materials. However, printing times for FLM are very long, and dimensional accuracy and surface finish are rather poor. Recently, new processes from the field of Vat Polymerization have appeared on the market, such as masked Stereolithography (mSLA), which offer a significant improvement in component quality and build speed at equally favorable machine costs.
This paper therefore analyzes the technical and economic capabilities of the two competing additive processes. For this purpose, the achievable dimensional and surface qualities are determined using a test specimen which represents various important geometry elements. In addition, the machine and material costs are determined and compared with each other. Finally, the resulting environmental impact is determined in the form of the CO2 footprint. In order to optimize the strength of the printed components, material properties of the tensile specimens produced additively with mSLA are determined. The use of ABS-like resins will also be investigated to determine optimal processing settings.
The present invention relates to open-loop and closed-loop control units for extracorporeal circulatory support, to systems comprising such an open-loop and closed-loop control unit, and to corresponding methods. An open-loop and closed-loop control unit (10) for extracorporeal circulatory support is proposed, which is configured to receive a measurement of an ECG signal (12) of a supported patient over a predefined period of time, wherein the ECG signal (12) comprises multiple data points for each time point within a heart cycle. The open-loop and closed-loop control unit (10) comprises an evaluation unit (100) which is configured to evaluate the data points for at least one time point in a spatial and/or temporal manner and to determine at least one amplitude change (14) within the heart cycle based on the evaluated data points. The open-loop and closed-loop control unit (10) is further configured to output an open-loop and/or closed-loop signal (16) for extracorporeal circulatory support at a predefined point in time after the at least one amplitude change (14).
A pandemic with new hygiene and distancing rules is, on the face of it, not a challenge specific to the education sector at its various levels. However, because our educational institutions are designed around personal contact and in-person teaching as the norm, and indeed as seemingly indispensable, every level of education was massively affected by the restrictions of 2020 and 2021. Systems that for years have been expected, in their pedagogical and/or didactic design, to meet new challenges and absorb new impulses must, in the sense of corona management, cope not only with acute risk and crisis management but also with digital transformation and strategic realignment within school and university development.
Exports secure millions of jobs in Germany. People in other countries also benefit from the positive effects of companies' international activities. Financing and risk coverage by state export credit agencies play an essential role when markets fail. This applies especially in times of crisis such as the Covid-19 pandemic. With coronavirus aid for the export industry, governments enabled foreign trade and thereby secured numerous jobs. Denmark, Germany, Poland, and Austria, among others, acted quickly and efficiently in 2020 with a wide range of measures. Substantially increased funding in some cases, new guarantee products, improved financing and insurance conditions, and simplified application procedures were central measures of European governments. It became clear that an overarching strategic orientation, a joint promotion approach, and an impact-oriented design of promotional institutions will remain important in the future.
This paper presents an extended version of a previously published Bayesian algorithm for automatically correcting the positions of equipment on a map while simultaneously localizing the trajectory of a mobile object (SLAM) in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires far less preliminary data on possible equipment positions and uses an additional resample-move step to significantly improve overall performance.
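The resample-move idea mentioned above can be illustrated with a minimal sketch: particles representing candidate positions on a graph are weighted by an observation, resampled, and then perturbed to neighboring nodes to restore diversity. The graph, the sensor model, and all parameters below are illustrative assumptions, not the algorithm from the paper.

```python
import random

# Toy corridor graph: nodes are junctions, edges are tunnel segments.
GRAPH = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

def likelihood(node, observed_node):
    # Assumed sensor model: probability decays with distance along the corridor.
    return 1.0 / (1 + abs(node - observed_node))

def resample_move(particles, observed_node, rng):
    # 1) Weight each particle by the observation likelihood.
    weights = [likelihood(p, observed_node) for p in particles]
    # 2) Resample particles proportionally to their weights.
    resampled = rng.choices(particles, weights=weights, k=len(particles))
    # 3) Move step: jump to a random graph neighbor with probability 0.5
    #    to counteract sample impoverishment after resampling.
    return [rng.choice(GRAPH[p]) if rng.random() < 0.5 else p
            for p in resampled]

rng = random.Random(42)
particles = [rng.choice(list(GRAPH)) for _ in range(200)]  # uniform prior
for obs in [2, 2, 3]:  # a short observation sequence
    particles = resample_move(particles, obs, rng)
estimate = max(set(particles), key=particles.count)  # mode of the particle set
```

The move step is what distinguishes resample-move filters from plain particle filters: it rejuvenates the particle set so that repeated resampling does not collapse it onto a few nodes.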
Synthesizing voice with the help of machine learning techniques has made rapid progress over the last years. Given the current increase in the use of conferencing tools for online teaching, we ask just how easy (in terms of required data, hardware, and skill set) it would be to create a convincing voice fake. We analyse how much training data a participant (e.g. a student) would actually need to fake another participant's voice (e.g. a professor's). We provide an analysis of the existing state of the art in creating voice deep fakes and apply the identified optimization techniques, as well as our own, to two different voice data sets. A user study with more than 100 participants shows how difficult it is to distinguish real from fake voices (on average, only 37% recognize a professor's fake voice). From a longer-term societal perspective, such voice deep fakes may lead to a disbelief by default.
Cyber Threat Intelligence
(2022)
DE\GLOBALIZE
(2022)
The artistic research cycle DE\GLOBALIZE is a media ecological search movement for the terrestrial. After examining matters of fact in India (2014-18), matters of concern in Egypt (2016-2019) and matters of care in the Upper Rhine (2018-22), the focus turns toward matters of violence in the Congo (2022). From matter to mater, mother-earth, the garden to exploitation. From science, water and climate to migration, oppression and extermination.
The long-term research is accessible through interactive web documentation. The platform serves as a continuous media-archaeological archive for a speculative ethnography. The relational structure of the videographic essay enables the forensic processing of single documents in the sense of actor-network theory.
The subject of the presentation at IFM is a field trip to the Congo planned for March 2022, which will focus on the ambivalence of violence and care in collaboration with local artists. The field trip is based on the postcolonial reflection luderitzcargo by the author from 1996, in which a freight container was transformed into a translocal cinema in Namibia.
Through the journey to the Congo, a group of media artists, a psychotherapist, a theater dramaturg, a filmmaker, and a philosopher intend to explore political, technological, and psycho-geographic borders. Through artistic interventions with locals, we want to interfere with relational string figures as part of the new Earth Politics. These interventions focus on the displaced consumption of resources that are hard-fought and guarantee prosperity in the global north. The so-called ghost acreages are repressed and justified as part of a civilizational mission. With this trip, we want to confront our self-deceptions with those of our hosts. We want to confront ourselves with the foreign, the dark, and the displaced ghosts within ourselves. In the presentation at the #IFM2022 Conference, the platform DE\GLOBALIZE will itself be problematized as an example of epistemic violence for the ethnographic memory of (Western) knowledge.
We are not missionaries but perplexed travellers. In our search movement, we engage with psychoanalysis, video, performance, and trance. As disoriented white men, we attempt the reversal of Frantz Fanon's Black Skin, White Masks without blackfacing. We will care not only about the sensitivity of our own skin but also about that of our g/hosts and of mother earth.
The identification of vulnerabilities is an important element of the software development life cycle for ensuring software security. While vulnerability identification based on source code is a well-studied field, identifying vulnerabilities in a binary executable without the corresponding source code is more challenging. Recent research has shown how such detection can be achieved with deep learning methods; however, that particular approach is limited to identifying only four types of vulnerabilities. We therefore analyze to what extent the identification of a larger variety of vulnerabilities can be covered. A supervised deep learning approach using recurrent neural networks is applied to vulnerability detection based on binary executables. The underlying basis is a dataset with 50,651 samples of vulnerable code in the form of a standardized LLVM Intermediate Representation. The vectorized features of a Word2Vec model are used to train different variations of three basic recurrent neural network architectures (GRU, LSTM, SRNN). A binary classification model was established to detect the presence of an arbitrary vulnerability, and a multi-class model was trained to identify the exact vulnerability; they achieved out-of-sample accuracies of 88% and 77%, respectively. Differences in the detection of individual vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, the presented methodology allows an accurate detection of 23 (compared to 4) vulnerabilities.
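The pipeline described, token embeddings fed through a recurrent network with a classification head, can be sketched in minimal NumPy. The gate equations below are the standard GRU cell; all sizes and the randomly initialized weights are illustrative assumptions, since the abstract does not specify the models' hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, HID = 8, 16  # embedding and hidden sizes (assumed)

# Randomly initialized GRU parameters: update (z), reset (r), candidate (h).
Wz, Uz = rng.normal(size=(HID, EMB)), rng.normal(size=(HID, HID))
Wr, Ur = rng.normal(size=(HID, EMB)), rng.normal(size=(HID, HID))
Wh, Uh = rng.normal(size=(HID, EMB)), rng.normal(size=(HID, HID))
w_out = rng.normal(size=HID)  # binary classification head

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_forward(embeddings):
    # One Word2Vec-style vector per LLVM-IR token; final state is classified.
    h = np.zeros(HID)
    for x in embeddings:
        z = sigmoid(Wz @ x + Uz @ h)              # update gate
        r = sigmoid(Wr @ x + Ur @ h)              # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
        h = (1 - z) * h + z * h_tilde
    return sigmoid(w_out @ h)  # "vulnerable" score in (0, 1)

tokens = rng.normal(size=(12, EMB))  # a toy 12-token instruction sequence
score = gru_forward(tokens)
```

In practice the weights would be learned end to end on the labeled dataset, and the multi-class variant would replace the scalar head with a softmax over the 23 vulnerability types.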
This work focuses on the dependencies between typical design parameters of surface acoustic wave (SAW) resonators and the nonlinear emitted signals of second and third order. The parameters metallization ratio and pitch serve as examples, but the approach can be extended to other design parameters as well. It is shown that the interaction between the nonlinear current generation and the linear admittance defines the measured nonlinear power signals. It is also discussed that changes in linear properties become more pronounced in the nonlinear responses; therefore, slight effects on linear parameters have a significant influence on the observed nonlinearity.
The development of a 3D-printed force sensor for a gripper was studied, using an embedded constantan wire as the sensing element. The first section explains the state of the art. The main section of the paper describes the modeling, simulation, and verification of a sensor element for a three-point bending test performed in accordance with DIN EN ISO 178. The Fused Filament Fabrication (FFF) 3D printing process used to manufacture the sensor samples in combination with an industrial robot is shown, and a comparison between theory and practice is considered in detail. Finally, an outlook is given on the integration of the sensor element into gripper jaws.
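The measurement principle can be illustrated with a back-of-the-envelope calculation: three-point bending produces an outer-fiber strain, and the embedded constantan wire converts that strain into a resistance change like a strain gauge. The beam dimensions, modulus, and gauge factor below are textbook-style assumptions, not values from the paper.

```python
def bending_strain(force_n, span_mm, width_mm, thickness_mm, e_modulus_mpa):
    # Max outer-fiber stress of a simply supported beam under a center load
    # (the DIN EN ISO 178 configuration): sigma = 3 F L / (2 b h^2).
    sigma = 3 * force_n * span_mm / (2 * width_mm * thickness_mm**2)
    return sigma / e_modulus_mpa  # Hooke's law: strain = sigma / E

def resistance_change(r0_ohm, strain, gauge_factor=2.0):
    # Constantan wire as strain gauge: dR/R = GF * strain (GF ~ 2 assumed).
    return r0_ohm * gauge_factor * strain

# Example: 10 N on a 64 mm span, 10 x 4 mm PLA-like beam (E ~ 3000 MPa),
# with a 100-ohm embedded constantan wire.
eps = bending_strain(10.0, 64.0, 10.0, 4.0, 3000.0)
dR = resistance_change(100.0, eps)
```

The resulting strain of 0.2% and resistance change of 0.4 ohm show why such sensors are typically read out with a bridge circuit rather than a direct resistance measurement.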
This paper presents the development of a capacitive level sensor for robotics applications, designed to measure liquid levels during a pouring process. The proposed sensor design applies the advantages of guard electrodes in combination with passive shielding to increase robustness against external influences. This is important for reliable operation in rapidly changing measurement environments, as they occur in the field of robotics. The non-contact liquid level sensor avoids contamination and complies with food guidelines, so the designed sensor can be used in gastronomic applications. Two versions of the sensor were simulated, fabricated, and compared: the first is based on copper electrodes, while the other is fully 3D printed with electrodes made of conductive polylactic acid (PLA).
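A minimal model hints at why capacitance tracks liquid level: a probe partly immersed in liquid behaves like two parallel-plate capacitors in parallel, one filled with liquid and one with air. The geometry and permittivity values below are illustrative assumptions, not the paper's electrode design.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def level_capacitance(level_m, height_m, width_m, gap_m, eps_r_liquid):
    # Liquid-covered section and air section act as capacitors in parallel.
    c_liquid = EPS0 * eps_r_liquid * width_m * level_m / gap_m
    c_air = EPS0 * (height_m - level_m) * width_m / gap_m
    return c_liquid + c_air

# Water (eps_r ~ 80) between 100 mm tall, 10 mm wide plates with a 2 mm gap.
c_empty = level_capacitance(0.0, 0.1, 0.01, 0.002, 80.0)
c_full = level_capacitance(0.1, 0.1, 0.01, 0.002, 80.0)
```

Because water's relative permittivity is roughly 80, the full-to-empty capacitance ratio is large, which is what makes the capacitive readout sensitive; the guard electrodes then suppress stray fields that would otherwise distort this relationship.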
In recent years, Physical Unclonable Functions (PUFs) have gained significant traction in the Internet of Things (IoT) for security applications such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of physical devices to generate unique fingerprints for security applications. One common approach to designing PUFs is to exploit the intrinsic features of sensors and actuators, such as MEMS elements, which typically exist in IoT devices. This work presents the Cantilever-PUF, a PUF based on a specific MEMS device, the aluminum nitride (AlN) piezoelectric cantilever. We show that electrical parameters of AlN cantilevers, such as resonance frequency, electrical conductivity, and quality factor, vary as a result of uncontrollable manufacturing process variations. These variations, along with high thermal and chemical stability and compatibility with silicon technology, make the AlN cantilever a suitable candidate for PUF design. We present a cantilever design that magnifies the effect of manufacturing process variations on the electrical parameters. To verify our findings, Monte Carlo simulation results are provided; they confirm the eligibility of the AlN cantilever as a basic PUF device for security applications. Finally, we present an architecture in which the designed Cantilever-PUF serves as a security anchor for PUF-enabled device authentication as well as communication encryption.
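The Monte Carlo idea can be sketched as follows: small random deviations in cantilever geometry shift the resonance frequencies, and pairwise frequency comparisons then yield device-unique bits. The first-mode formula is the standard expression for a rectangular cantilever; the dimensions, tolerances, and AlN-like material values are illustrative assumptions, not the paper's design data.

```python
import math
import random

def resonance_hz(thickness_m, length_m, e_pa=330e9, rho=3260.0):
    # First bending mode of a rectangular cantilever:
    # f1 = (1.875^2 / (2*pi)) * (t / L^2) * sqrt(E / (12 * rho))
    return (1.875**2 / (2 * math.pi)) * (thickness_m / length_m**2) \
        * math.sqrt(e_pa / (12 * rho))

def simulate_device(rng, n_cantilevers=16, tol=0.01):
    # Monte Carlo draw: thickness and length each deviate by ~1 % (assumed
    # process variation) around 1 um / 200 um nominal dimensions.
    return [resonance_hz(1e-6 * rng.gauss(1, tol), 200e-6 * rng.gauss(1, tol))
            for _ in range(n_cantilevers)]

def fingerprint(freqs):
    # Compare neighboring cantilevers: bit = 1 if f[i] > f[i+1], else 0.
    return [int(a > b) for a, b in zip(freqs, freqs[1:])]

rng = random.Random(7)
bits_a = fingerprint(simulate_device(rng))  # one "chip"
bits_b = fingerprint(simulate_device(rng))  # another "chip"
```

Because the geometric deviations are uncontrollable at fabrication time, two chips almost surely produce different bit strings, which is exactly the uniqueness property a PUF-based authentication scheme relies on.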
For some years now, additive manufacturing (AM) has offered an alternative to conventional manufacturing processes. The strengths of AM are primarily the rapid implementation of ideas into a usable product and the ability to produce geometrically complex shapes. It has also significantly advanced the lightweight design of plastic products. So far, however, the strength of printed polymer components has been very limited.
Recently, new AM processes have become available that allow the embedding of short and also long fibers in a polymer matrix, making it possible to manufacture components with significantly increased strength. In this way, both complex geometries and sophisticated applications can be realized. This paper therefore investigates how this new technology can be implemented in product development, focusing on sports equipment. An extensive literature review shows that lightweight design plays a decisive role in sports equipment. In addition, the advantages of AM in terms of individualized products and low quantities can be fully exploited.
An example of this approach is the steering system for a seat sled used by paraplegic athletes in the Olympic discipline of Nordic para-skiing. A particular challenge here is the placement and alignment of the long carbon fibers within the polymer matrix and the verification of the strength by means of Finite Element Analysis (FEA). In addition, findings from bionics are used to optimize the lightweight design of the steering system. This example shows that the weight of the steering system can be reduced drastically compared to conventional manufacturing. At the same time, a number of parts can be saved through function integration, which significantly reduces the manufacturing and assembly effort.