This paper investigates the maximum torque capability and torque ripple reduction achieved using asymmetric stator teeth for interior permanent magnet (IPM) synchronous machines. Traditional electric machines use an identical width for all stator teeth, so the winding function is fixed. Using different widths for different stator teeth changes the winding function and, therefore, the torque ripple components. The mathematical modeling of IPM synchronous machine torque ripple and finite element analysis simulation results for the characteristic properties of electric machines are presented. Compared with an IPM machine of similar rating, certain combinations of tooth widths can reduce the torque ripple by 80% with less than a 4% decline in average torque.
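As an illustration of the ripple metric discussed above, the peak-to-peak torque ripple relative to the average torque can be computed from a sampled torque waveform. The waveform below is a hypothetical example, not data from the paper:

```python
import numpy as np

def torque_ripple_percent(torque):
    """Peak-to-peak torque ripple as a percentage of the average torque."""
    torque = np.asarray(torque, dtype=float)
    return (torque.max() - torque.min()) / torque.mean() * 100.0

# Hypothetical torque waveform over one electrical period:
# 10 N·m average with a +/-1 N·m 6th-harmonic ripple component.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
torque = 10.0 + 1.0 * np.cos(6.0 * theta)

print(round(torque_ripple_percent(torque), 1))  # 20.0
```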
Tryptamines occur naturally in plants, mushrooms, microbes, and amphibians. Synthetic tryptamines are sold as new psychoactive substances (NPS) because of their hallucinogenic effects. For NPS, metabolism studies are of crucial importance due to the lack of pharmacological and toxicological data. Different approaches can be taken to study the in vitro and in vivo metabolism of xenobiotics. The zygomycete fungus Cunninghamella elegans (C. elegans) can be used as a microbial model for the study of drug metabolism. The current study investigated the biotransformation of four naturally occurring and synthetic tryptamines [N,N-dimethyltryptamine (DMT), 4-hydroxy-N-methyl-N-ethyltryptamine (4-HO-MET), N,N-diallyl-5-methoxytryptamine (5-MeO-DALT), and 5-methoxy-N-methyl-N-isopropyltryptamine (5-MeO-MiPT)] in C. elegans after incubation for 72 hours. Metabolites were identified using liquid chromatography-high resolution-tandem mass spectrometry (LC-HR-MS/MS) with a quadrupole time-of-flight (QqTOF) instrument. The results were compared to already published data on these substances. C. elegans was capable of performing all major biotransformation steps: hydroxylation, N-oxide formation, carboxylation, deamination, and demethylation. On average, 63% of the phase I metabolites reported in the literature could also be detected in C. elegans. Additionally, metabolites specific to C. elegans were identified. Therefore, C. elegans is a suitable complementary model to other in vitro or in vivo methods for studying the metabolism of naturally occurring or synthetic tryptamines.
Numerous 2,5-dimethoxy-N-benzylphenethylamines (NBOMe), carrying a variety of lipophilic substituents at the 4-position, are potent agonists at 5-hydroxytryptamine (5-HT2A) receptors and show hallucinogenic effects. The present study investigated the metabolism of 25D-NBOMe, 25E-NBOMe, and 25N-NBOMe using the microsomal model of pooled human liver microsomes (pHLM) and the microbial model of the fungus Cunninghamella elegans (C. elegans). Identification of metabolites was performed using liquid chromatography-high resolution-tandem mass spectrometry (LC-HR-MS/MS) with a quadrupole time-of-flight (QqTOF) instrument. In total, 36 25D-NBOMe phase I metabolites, 26 25E-NBOMe phase I metabolites, and 24 25N-NBOMe phase I metabolites were detected and identified in pHLM. Furthermore, 14 metabolites of 25D-NBOMe, 11 25E-NBOMe metabolites, and nine 25N-NBOMe metabolites could be found in C. elegans. The main biotransformation steps observed were oxidative deamination, oxidative N-dealkylation also in combination with hydroxylation, oxidative O-demethylation possibly combined with hydroxylation, oxidation of secondary alcohols, mono- and dihydroxylation, oxidation of primary alcohols, and carboxylation of primary alcohols. Additionally, oxidative di-O-demethylation for 25E-NBOMe and reduction of the aromatic nitro group and N-acetylation of the primary aromatic amine for 25N-NBOMe took place. The resulting 25N-NBOMe metabolites were unique among NBOMe compounds. For all NBOMes investigated, the corresponding 2,5-dimethoxyphenethylamine (2C-X) metabolite was detected. This study is the first to report 25X-NBOMe N-oxide metabolites, identified for 25D-NBOMe and 25N-NBOMe, and hydroxylamine metabolites, identified for all three investigated NBOMes. C. elegans was capable of generating all main biotransformation steps observed in pHLM and might therefore be an interesting model for further studies of new psychoactive substances (NPS) metabolism.
The automatic processing of handwritten forms remains a challenging task, in which the detection and subsequent classification of handwritten characters are essential steps. We describe a novel approach in which both steps, detection and classification, are executed as one task by a single deep neural network. To this end, the training data is not annotated by hand but manufactured artificially from the underlying forms and already existing datasets. We demonstrate that this single-task approach is superior to the state-of-the-art two-task approach. The current study focuses on handwritten Latin letters and employs the EMNIST dataset. However, limitations of this dataset were identified, necessitating further customization. Finally, an overall recognition rate of 88.28% was attained on real data obtained from a written exam.
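The data-manufacturing step can be sketched as follows; the function and parameters are hypothetical stand-ins for the authors' pipeline, and random patches substitute for real EMNIST glyphs:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_sample(glyphs, labels, form_size=(128, 256), n_chars=4):
    """Paste character glyphs (28x28, EMNIST-style) onto a blank form image.

    Returns the composite image plus bounding boxes and class labels, so a
    single network can be trained for detection and classification jointly.
    """
    h, w = form_size
    image = np.zeros((h, w), dtype=np.float32)
    boxes, classes = [], []
    for _ in range(n_chars):
        idx = rng.integers(len(glyphs))
        y = int(rng.integers(0, h - 28))
        x = int(rng.integers(0, w - 28))
        image[y:y + 28, x:x + 28] = np.maximum(
            image[y:y + 28, x:x + 28], glyphs[idx])
        boxes.append((x, y, x + 28, y + 28))
        classes.append(labels[idx])
    return image, boxes, classes

# Stand-in for EMNIST glyphs: random 28x28 grayscale patches.
glyphs = rng.random((10, 28, 28), dtype=np.float32)
labels = list(range(10))
image, boxes, classes = make_training_sample(glyphs, labels)
```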
A new approach for determining the distance between two or more smartphones is presented. The position of each smartphone, indoors or in the field, is determined relative to a reference point (spatial anchor point). Via a central server, the smartphones exchange their positions relative to the reference point and can compute their mutual distances from them. If the distance between two smartphones falls below a threshold (< 2 m), a corresponding notification is triggered on the smartphones.
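A minimal sketch of the distance check, assuming each smartphone already knows its 3D position relative to the shared spatial anchor point (the coordinates below are hypothetical):

```python
import math

def distance(p1, p2):
    """Euclidean distance between two anchor-relative 3D positions (meters)."""
    return math.dist(p1, p2)

# Positions of two hypothetical smartphones relative to the same
# spatial anchor point, in meters.
phone_a = (0.0, 0.0, 0.0)
phone_b = (1.2, 0.9, 0.0)

THRESHOLD_M = 2.0
d = distance(phone_a, phone_b)
too_close = d < THRESHOLD_M  # triggers the proximity notification
print(round(d, 2), too_close)  # 1.5 True
```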
Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling
(2023)
Convolutional neural networks encode images through a sequence of convolutions, normalizations, non-linearities, and downsampling operations into potentially strong semantic embeddings. Yet, previous work showed that even slight mistakes during sampling, leading to aliasing, can be directly attributed to a network's lack of robustness. To address such issues and facilitate simpler and faster adversarial training, [12] recently proposed FLC pooling, a method for provably alias-free downsampling - in theory. In this work, we conduct a further analysis through the lens of signal processing and find that current pooling methods which address aliasing in the frequency domain are still prone to spectral leakage artifacts. Hence, we propose aliasing and spectral artifact-free pooling, ASAP for short. While introducing only a few modifications to FLC pooling, networks using ASAP as their downsampling method exhibit higher native robustness against common corruptions, a property that FLC pooling was missing. ASAP also increases native robustness against adversarial attacks on high- and low-resolution data while maintaining similar clean accuracy or even outperforming the baseline.
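The spectral leakage issue that ASAP targets can be illustrated with a classic signal-processing experiment; this is a generic demonstration of leakage and windowing, not the ASAP implementation itself:

```python
import numpy as np

# A sine with a non-integer number of cycles in the analysis frame does not
# fall on a single DFT bin: its energy "leaks" into neighbouring bins.
N = 64
n = np.arange(N)
x = np.sin(2 * np.pi * 10.5 * n / N)  # 10.5 cycles -> between bins 10 and 11

def energy_concentration(signal, k=4):
    """Fraction of spectral energy contained in the k strongest rFFT bins."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    return np.sort(power)[-k:].sum() / power.sum()

plain = energy_concentration(x)                     # rectangular window
windowed = energy_concentration(x * np.hamming(N))  # Hamming window

# Windowing concentrates the energy and suppresses the leakage sidelobes.
assert windowed > plain
```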
Motivated by the recent trend towards the usage of larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be. To facilitate such study, several challenges need to be addressed, most importantly: (i) We need to provide an effective way for models to learn large filters (potentially as large as the input data) without increasing their memory consumption during training or inference, (ii) the study of filter sizes has to be decoupled from other effects such as the network width or number of learnable parameters, and (iii) the employed convolution operation should be a plug-and-play module that can replace any conventional convolution in a Convolutional Neural Network (CNN) and allow for an efficient implementation in current frameworks. To facilitate such models, we propose to learn not spatial but frequency representations of filter weights as neural implicit functions, such that even infinitely large filters can be parameterized by only a few learnable weights. The resulting neural implicit frequency CNNs are the first models to achieve results on par with the state-of-the-art on large image classification benchmarks while executing convolutions solely in the frequency domain and can be employed within any CNN architecture. They allow us to provide an extensive analysis of the learned receptive fields. Interestingly, our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters practically translate into well-localized and relatively small convolution kernels in the spatial domain.
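The core mechanism, executing a convolution solely in the frequency domain, can be sketched as below; the paper's neural implicit parameterization of the filter spectrum is not reproduced here, only the FFT-based convolution it relies on:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
x = rng.standard_normal((N, N))

# A small 3x3 kernel, zero-padded to the full image size.
k = np.zeros((N, N))
k[:3, :3] = rng.standard_normal((3, 3))

# Convolution executed purely in the frequency domain:
# element-wise product of the two spectra, then an inverse FFT.
y_freq = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k)))

# Reference: direct circular convolution via shifted copies of the image.
y_ref = np.zeros((N, N))
for i in range(3):
    for j in range(3):
        y_ref += k[i, j] * np.roll(np.roll(x, i, axis=0), j, axis=1)

assert np.allclose(y_freq, y_ref)
```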
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. To reveal model weaknesses, adversarial attacks are specifically optimized to generate small, barely perceivable image perturbations that flip the model prediction. Robustness against such attacks can be gained by using adversarial examples during training, which in most cases reduces the measurable model attackability. Unfortunately, this technique can lead to robust overfitting, which results in non-robust models. In this paper, we analyze adversarially trained, robust models in the context of a specific network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from downsampling artifacts, i.e., aliasing, than baseline models. In the case of robust overfitting, we observe a strong increase in aliasing and propose a novel early stopping approach based on the measurement of aliasing.
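The aliasing mechanism referred to above can be demonstrated on a one-dimensional signal (the frequencies chosen are arbitrary): a component above the Nyquist limit of the half-rate grid is not removed by naive strided downsampling but folds back to a lower frequency.

```python
import numpy as np

N = 64
n = np.arange(N)
# A cosine at frequency bin 24, above the Nyquist bin (16) of a
# half-rate grid.
x = np.cos(2 * np.pi * 24 * n / N)

# Naive downsampling: keep every second sample, no low-pass filtering.
x_sub = x[::2]

peak_full = int(np.argmax(np.abs(np.fft.rfft(x))))     # 24
peak_sub = int(np.argmax(np.abs(np.fft.rfft(x_sub))))  # folds back to 8

print(peak_full, peak_sub)  # 24 8
```

An anti-aliased downsampler would remove this component before striding; instead, it reappears as a spurious low-frequency signal.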
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. Adversarial attacks are thereby specifically optimized to reveal model weaknesses by generating small, barely perceivable image perturbations that flip the model prediction. Robustness against such attacks can be gained, for example, by using adversarial examples during training, which effectively reduces the measurable model attackability. In contrast, research on analyzing the source of a model's vulnerability is scarce. In this paper, we analyze adversarially trained, robust models in the context of a specifically suspicious network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from aliasing than baseline models.
Over the last years, Convolutional Neural Networks (CNNs) have been the dominant neural architecture in a wide range of computer vision tasks. From an image and signal processing point of view, this success may be a bit surprising, as the inherent spatial pyramid design of most CNNs apparently violates basic signal processing laws, i.e., the sampling theorem, in its down-sampling operations. However, since poor sampling appeared not to affect model accuracy, this issue was broadly neglected until model robustness started to receive more attention. Recent work in the context of adversarial attacks and distribution shifts showed, after all, that there is a strong correlation between the vulnerability of CNNs and aliasing artifacts induced by poor down-sampling operations. This paper builds on these findings and introduces an aliasing-free down-sampling operation which can easily be plugged into any CNN architecture: FrequencyLowCut pooling. Our experiments show that, in combination with simple Fast Gradient Sign Method (FGSM) adversarial training, our hyper-parameter-free operator substantially improves model robustness and avoids catastrophic overfitting. Our code is available at https://github.com/GeJulia/flc_pooling
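A minimal single-channel sketch of FrequencyLowCut-style pooling, assuming the simplified form of cropping the centered spectrum to its low-frequency half (the actual implementation lives in the linked repository):

```python
import numpy as np

def flc_pool(x):
    """FrequencyLowCut-style 2x downsampling (simplified, single channel).

    Transform to the frequency domain, keep only the central low-frequency
    half of the spectrum, and transform back on the smaller grid. Frequencies
    above the new Nyquist limit are cut instead of folding back as aliases.
    """
    h, w = x.shape
    spec = np.fft.fftshift(np.fft.fft2(x))
    crop = spec[h // 4: h - h // 4, w // 4: w - w // 4]
    # Rescale so the mean intensity of the image is preserved.
    out = np.fft.ifft2(np.fft.ifftshift(crop)) * (crop.size / spec.size)
    return np.real(out)

n = np.arange(64)
flat = np.full((64, 64), 3.0)                              # pure DC content
high = np.cos(2 * np.pi * 24 * n / 64)[None, :] * np.ones((64, 1))

assert np.allclose(flc_pool(flat), 3.0)             # DC is preserved
assert np.allclose(flc_pool(high), 0.0, atol=1e-9)  # would alias; removed
```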