Seismic data often has missing traces due to technical acquisition issues or economic constraints. A complete dataset, however, is crucial for several processing and inversion techniques. Deep learning algorithms based on convolutional neural networks (CNNs) offer alternative solutions that overcome limitations of traditional interpolation methods, such as assumptions of data regularity and linearity. There are two different paradigms of CNN methods for seismic interpolation. The first one, the so-called deep prior interpolation (DPI), trains a CNN to map random noise to a complete seismic image using only the decimated image itself. The second one, referred to as the standard deep learning method, trains a CNN to map a decimated seismic image to a complete one using a dataset of complete and artificially decimated images. In this research, we systematically compare the performance of both methods for different quantities of regular and irregular missing traces using four datasets and evaluate the results with five well-known metrics. We find that the DPI method performs better than the standard method when the percentage of missing traces is low (10%), whereas the standard method performs better when the level of decimation is high (50%).
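To illustrate the DPI paradigm, the following is a minimal PyTorch sketch of the deep-prior idea as described above: a CNN is trained to map a fixed random-noise input to the decimated image, with the loss evaluated only on the recorded traces. All names (`cnn`, `decimated`, `mask`) and hyperparameters are illustrative assumptions, not the architecture or settings used in the paper.

```python
# Minimal sketch of deep prior interpolation (DPI); illustrative only.
import torch

def dpi_interpolate(cnn, decimated, mask, n_iters=3000, lr=1e-3):
    """Train a CNN to reproduce the decimated image from fixed random noise.

    decimated: (1, 1, H, W) seismic image with missing traces set to 0
    mask:      (1, 1, H, W) binary mask, 1 where traces were recorded
    """
    noise = torch.randn_like(decimated)            # fixed random input
    opt = torch.optim.Adam(cnn.parameters(), lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        out = cnn(noise)
        # Loss only on recorded traces; the network's structural prior
        # fills in the missing ones.
        loss = ((out - decimated) ** 2 * mask).mean()
        loss.backward()
        opt.step()
    return cnn(noise).detach()                     # completed estimate
```

The standard deep learning method, by contrast, would train the same kind of network on pairs of artificially decimated and complete images and then apply it to unseen decimated data.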
In this paper, the influence of the material hardening behavior on plasticity-induced fatigue crack closure is investigated for strain-controlled loading and fully plastic, large-scale yielding conditions by means of the finite element method. The strain amplitude and the strain ratio are varied for given Ramberg–Osgood material properties representing materials with different hardening behavior. The results show a pronounced influence of the hardening behavior on crack closure, while no significant effect of the considered strain amplitude and strain ratio is found. The effect of the hardening behavior on the crack opening stress cannot be described by existing crack opening stress equations.
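For reference, the cyclic Ramberg–Osgood relation commonly used for strain-controlled fatigue analysis is shown below in its standard textbook form (not quoted from the paper), where E is Young's modulus, K' the cyclic strength coefficient, and n' the cyclic hardening exponent; varying n' represents materials with different hardening behavior.

```latex
% Cyclic Ramberg--Osgood relation (standard form, for illustration only):
% total strain amplitude = elastic part + plastic part
\varepsilon_a = \frac{\sigma_a}{E} + \left(\frac{\sigma_a}{K'}\right)^{1/n'}
```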
The desire to connect more and more devices and to make them more intelligent and more reliable is driving the need for the Internet of Things more than ever. Such IoT edge systems require sound security measures against cyber-attacks, since they are interconnected, spatially distributed, and operational for extended periods of time. One of the most important security requirements in many industrial IoT applications is device authentication. In this paper, we present a mutual authentication protocol based on Physical Unclonable Functions, where challenge-response pairs are used for both device and server authentication. Moreover, a session key can be derived by the protocol in order to secure the communication channel. We show that our protocol is secure against machine learning, replay, man-in-the-middle, cloning, and physical attacks. Moreover, it is shown that the protocol has lower computational, communication, storage, and hardware overhead than similar works.
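As a rough illustration of how challenge-response pairs (CRPs) can support mutual authentication and session-key derivation, the sketch below simulates the general flow in Python. The message flow, function names, and the hash-based stand-in for the PUF are illustrative assumptions; the actual protocol in the paper includes further mechanisms (e.g., against machine-learning and replay attacks).

```python
# Simplified sketch of CRP-based mutual authentication; illustrative only.
import hashlib
import hmac
import os

# The server pre-enrolls challenge-response pairs (CRPs) per device.
server_crp_db = {}  # device_id -> list of (challenge, expected_response)

def device_puf(challenge):
    # Stand-in for the physical unclonable function: in reality the response
    # stems from device-specific hardware variations, not a stored secret.
    return hashlib.sha256(b"simulated-device-variation" + challenge).digest()

def enroll(device_id, n=4):
    crps = []
    for _ in range(n):
        challenge = os.urandom(16)
        crps.append((challenge, device_puf(challenge)))
    server_crp_db[device_id] = crps

def mutual_authenticate(device_id):
    # 1. Server picks an unused CRP and sends the challenge to the device.
    challenge, expected = server_crp_db[device_id].pop()
    # 2. Device answers with its PUF response -> server authenticates device.
    response = device_puf(challenge)
    if not hmac.compare_digest(response, expected):
        return None
    # 3. Server proves knowledge of the expected response back to the device
    #    (server authentication), here via a keyed digest over a fresh nonce.
    nonce = os.urandom(16)
    server_proof = hmac.new(expected, nonce, hashlib.sha256).digest()
    device_check = hmac.new(response, nonce, hashlib.sha256).digest()
    if not hmac.compare_digest(server_proof, device_check):
        return None
    # 4. Both sides derive a shared session key from the response and nonce.
    return hmac.new(response, b"session-key" + nonce, hashlib.sha256).digest()
```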
This paper presents a method for supporting the application of Additive Tooling (AT)-based validation environments in integrated product development. Based on a case study, relevant process steps, activities and possible barriers in the realisation of an injection-moulded product are identified and analysed. The aim of the method is to support the target-oriented application of Additive Tooling to obtain physical prototypes at an early stage and to shorten validation cycles.
Linear acceleration is a key performance determinant and major training component of many sports. Although extensive research about lower limb kinetics and kinematics is available, consistent definitions of distinctive key body positions, the underlying mechanisms and their related movement strategies are lacking. The aim of this ‘Method and Theoretical Perspective’ article is to introduce a conceptual framework which classifies the sagittal plane ‘shin roll’ motion during accelerated sprinting. By emphasising the importance of the shin segment’s orientation in space, four distinctive key positions are presented (‘shin block’, ‘touchdown’, ‘heel lock’ and ‘propulsion pose’), which are linked by a progressive ‘shin roll’ motion during swing-stance transition. The shin’s downward tilt is driven by three different movement strategies (‘shin alignment’, ‘horizontal ankle rocker’ and ‘shin drop’). The tilt’s optimal amount and timing will contribute to a mechanically efficient acceleration via timely staggered proximal-to-distal power output. Empirical data obtained from athletes of different performance levels and sporting backgrounds are required to verify the feasibility of this concept. The framework presented here should facilitate future biomechanical analyses and may enable coaches and practitioners to develop specific training programs and feedback strategies to provide athletes with a more efficient acceleration technique.
Voice user interfaces (VUIs) offer an intuitive, fast and convenient way for humans to interact with machines and computers. Yet, whether they will be truly successful and find widespread uptake in the near future depends on the user experience (UX) they offer. With this survey-based study (n = 108), we aim to identify the major annoyances German voice assistant users are facing in voice-driven human-computer interactions. The results of our questionnaire show that irritations appear in six categories: privacy issues, unwanted activation, comprehensibility, response quality, conversational design and voice characteristics. Our findings can help identify key areas of work to optimize voice user experience in order to achieve greater adoption of the technology. In addition, they can provide valuable information for the further development and standardization of voice user experience (VUX) research.
Featherweight Generic Go (FGG) is a minimal core calculus modeling the essential features of the programming language Go. It includes support for overloaded methods, interface types, structural subtyping and generics. The most straightforward semantic description of the dynamic behavior of FGG programs is to resolve method calls based on runtime type information of the receiver.
This article shows a different approach by defining a type-directed translation from FGG to an untyped lambda-calculus. The translation of an FGG program provides evidence for the availability of methods as additional dictionary parameters, similar to the dictionary-passing approach known from Haskell type classes. Then, method calls can be resolved by a simple lookup of the method definition in the dictionary.
Every program in the image of the translation has the same dynamic semantics as its source FGG program. The proof of this result is based on a syntactic, step-indexed logical relation. The step-index ensures a well-founded definition of the relation in the presence of recursive interface types and recursive methods.
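To make the dictionary-passing idea concrete, here is a small Python analogy (not FGG and not the paper's untyped lambda-calculus target): a method call on an interface-typed receiver is resolved by looking the implementation up in an explicitly passed dictionary rather than via runtime type information. All names below are illustrative.

```python
# Python analogy of the dictionary-passing translation; illustrative only.

# Implementation of a hypothetical "Stringer" interface for a Point value.
def point_string(p):
    return f"({p['x']}, {p['y']})"

# The "dictionary" passed alongside the receiver: method name -> definition.
stringer_dict_for_point = {"String": point_string}

def describe(receiver, stringer_dict):
    # Translated call site: method resolution is a plain dictionary lookup,
    # with no runtime type inspection of the receiver.
    return "value is " + stringer_dict["String"](receiver)

print(describe({"x": 1, "y": 2}, stringer_dict_for_point))  # value is (1, 2)
```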
In the BioMeth project, the approach of membrane gassing was pursued to increase the availability of dissolved hydrogen for biological methanation, with the aim of establishing a power-to-gas concept for energy storage. The overarching goal was the development of a scalable process concept suitable for utilizing CO2-containing gas volume flows. The plan was to demonstrate the process at the biogas plant of the Biokäserei Monte-Ziego cheese dairy in Teningen and to extend the existing concept of parallel wastewater treatment and energy generation there. The original structure of the work package plan is shown in the figure below.
Subspace clustering aims to find all clusters in all subspaces of a high-dimensional data space. We present a massively data-parallel approach that can be run on graphics processing units. It extends a previous density-based method that scales well with the number of dimensions. Its main computational bottleneck consists of (sequentially) generating a large number of minimal cluster candidates in each dimension and using hash collisions in order to find matches of such candidates across multiple dimensions. Our approach parallelizes this process by removing previous interdependencies between consecutive steps in the sequential generation process and by applying a very efficient parallel hashing scheme optimized for GPUs. This massive parallelization gives up to 70x speedup for the bottleneck computation when it is replaced by our approach and run on current GPU hardware. We note that depending on data size and choice of parameters, the parallelized part of the algorithm can take different percentages of the overall runtime of the clustering process, and thus, the overall clustering speedup may vary significantly between different cases. However, even in our "worst-case" test, a small dataset where the computation makes up only a small fraction of the overall clustering time, our parallel approach still yields a speedup of more than 3x for the complete run of the clustering process. Our method could also be combined with parallelization of other parts of the clustering algorithm, with an even higher potential gain in processing speed.
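The following is a small sequential Python sketch of the collision-based matching idea that the approach parallelizes on the GPU: per-dimension candidates are hashed by their point sets, and candidates that collide across several dimensions are flagged as potential subspace clusters. Names and data structures are illustrative assumptions, not the paper's GPU kernel.

```python
# Sequential sketch of hash-collision matching of per-dimension candidates;
# the paper's contribution is a GPU-parallel version of this kind of step.

def candidate_signatures(candidates):
    """candidates: iterable of (dimension, frozenset_of_point_ids)."""
    sigs = {}
    for dim, points in candidates:
        key = hash(points)                  # hash of the candidate's point set
        sigs.setdefault(key, []).append((dim, points))
    return sigs

def match_across_dimensions(candidates, min_dims=2):
    matches = []
    for collisions in candidate_signatures(candidates).values():
        dims = {dim for dim, _ in collisions}
        # A collision across enough dimensions suggests a subspace cluster
        # candidate spanning those dimensions (subject to later verification).
        if len(dims) >= min_dims:
            matches.append((sorted(dims), collisions[0][1]))
    return matches
```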
Deep learning models are intrinsically sensitive to distribution shifts in the input data. In particular, small, barely perceivable perturbations to the input data can force models to make wrong predictions with high confidence. A common defense mechanism is regularization through adversarial training, which injects worst-case perturbations back into training to strengthen the decision boundaries and to reduce overfitting. In this context, we perform an investigation of the 3×3 convolution filters that form in adversarially-trained models. Filters are extracted from 71 public models of the ℓ∞-RobustBench CIFAR-10/100 and ImageNet1k leaderboards and compared to filters extracted from models built on the same architectures but trained without robust regularization. We observe that adversarially-robust models appear to form more diverse, less sparse, and more orthogonal convolution filters than their normal counterparts. The largest differences between robust and normal models are found in the deepest layers and in the very first convolution layer, which consistently and predominantly forms filters that can partially eliminate perturbations, irrespective of the architecture.
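As a rough illustration of this kind of filter analysis, the PyTorch sketch below extracts all 3×3 convolution filters from a model and computes simple sparsity and orthogonality statistics. The specific metrics, thresholds, and layer grouping used in the paper may differ; this only shows the general procedure.

```python
# Minimal sketch: extract 3x3 convolution filters and compute simple
# sparsity / orthogonality statistics; illustrative, not the paper's metrics.
import torch
import torch.nn as nn

def filter_statistics(model, sparsity_eps=1e-2):
    stats = []
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d) and module.kernel_size == (3, 3):
            w = module.weight.detach()                 # (out, in, 3, 3)
            filters = w.reshape(-1, 9)                 # one row per filter
            # Sparsity proxy: fraction of near-zero filter weights.
            sparsity = (filters.abs() < sparsity_eps).float().mean().item()
            # Orthogonality proxy: off-diagonal mass of the normalized Gram
            # matrix (smaller means the filters are closer to orthogonal).
            normed = torch.nn.functional.normalize(filters, dim=1)
            gram = normed @ normed.T
            eye = torch.eye(gram.shape[0], device=gram.device)
            off_diag = (gram - eye).abs().mean().item()
            stats.append((name, sparsity, off_diag))
    return stats
```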