State-of-the-art models for pixel-wise prediction tasks such as image restoration, image segmentation, or disparity estimation involve several stages of data resampling, in which the resolution of feature maps is first reduced to aggregate information and then sequentially increased to generate a high-resolution output. Several previous works have investigated the artifacts introduced during downsampling, and various remedies have been proposed that improve prediction stability and even robustness for image classification. However, the equally relevant artifacts that arise during upsampling have been discussed far less. This matters because upsampling and downsampling face fundamentally different challenges: while aliases and artifacts during downsampling can be reduced by blurring feature maps, the emergence of fine details is crucial during upsampling. Blurring is therefore not an option, and dedicated operations need to be considered. In this work, we are the first to explore the relevance of context during upsampling by employing convolutional upsampling operations with increasing kernel size while keeping the encoder unchanged. We find that increased kernel sizes can in general improve prediction stability in tasks such as image restoration or image segmentation, while a block that combines small kernels for fine details with large kernels for artifact removal and increased context yields the best results.
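To make the upsampling discussion concrete, here is a minimal, self-contained sketch (plain Python, illustrative names only, not the paper's code) of zero-insertion upsampling followed by convolution, plus a toy block that averages a small-kernel path and a large-kernel path:

```python
# Hypothetical sketch of convolutional upsampling with different kernel
# sizes; all function names are illustrative, not taken from the paper.

def upsample_conv(signal, kernel, factor=2):
    """Upsample a 1D signal by zero-insertion, then smooth with a kernel."""
    # Zero-insertion stretches the signal but introduces high-frequency artifacts.
    up = []
    for s in signal:
        up.append(s)
        up.extend([0.0] * (factor - 1))
    # Convolve with "same" padding to fill in the inserted zeros.
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + up + [0.0] * pad
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(up))]

def combined_upsample(signal, small_kernel, large_kernel, factor=2):
    """Blend a small-kernel path (fine detail) with a large-kernel path (context)."""
    a = upsample_conv(signal, small_kernel, factor)
    b = upsample_conv(signal, large_kernel, factor)
    return [0.5 * (x + y) for x, y in zip(a, b)]
```

With the linear-interpolation kernel [0.5, 1.0, 0.5], each inserted zero is filled with the average of its neighbors; a larger kernel would draw on more context at the cost of smearing fine detail, which is exactly the trade-off the combined block is meant to balance.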
In times of great change, cooperatively organized SMEs have the opportunity to respond to complex challenges with cooperative solutions, especially when the strength and creativity of the community is harnessed. True to the motto of the cooperative pioneer Friedrich Wilhelm Raiffeisen, "What one cannot achieve alone, many can", joint entrepreneurial action creates identity and motivation, which in turn can generate a self-reinforcing momentum. In this article, Prof. Dr. Tobias Popovic and Prof. Dr. Thomas Baumgärtler show how SMEs, politics, and society benefit from this.
This paper presents the new Deep Reinforcement Learning (DRL) library RL-X and its application to the RoboCup Soccer Simulation 3D League and classic DRL benchmarks. RL-X provides a flexible and easy-to-extend codebase with self-contained single directory algorithms. Through the fast JAX-based implementations, RL-X can reach up to 4.5x speedups compared to well-known frameworks like Stable-Baselines3.
Heat pumps are a key technology of the heat transition. By harnessing ambient heat and being driven by electricity that is increasingly generated from renewable sources, they can reduce the CO2 intensity of heat supply. One challenge lies in their application in larger existing multi-family buildings. Possible solutions and exemplary implementations are presented.
The majority of German companies expect AI-based data analysis to yield a significant business advantage. Yet the state of the underlying data is one of the biggest, still frequently underestimated hurdles when training and deploying AI algorithms. Below, four concrete lessons and tips for AI and data analytics projects in companies are presented.
According to an Interxion study, artificial intelligence (AI) is in use at 96 percent of Swiss companies. However, only 22 percent of Swiss IT decision-makers stated that they are already using AI for a first use case. Yet AI is very helpful, for example in data management, provided the quality and quantity of the training data are right.
"Do more PR and advertising for your school": Communication controlling in schools
(2015)
Henry Ford's bon mot on measuring advertising effectiveness is certainly the best-known sentence in the field of communication controlling: "Half the money I spend on advertising is wasted; the trouble is, I don't know which half." This critical appraisal of communication performance is still a recurring topic today, and especially in the school environment, where these processes do not yet have a long tradition, it is part of internal and external discussion. Controlling communication processes, however, requires not only quantifying communication performance but also embedding it in the overall marketing strategy and in the evaluation of individual marketing areas and the marketing goals developed there.
With economic weight shifting toward net zero, now is the time for ECAs, Exim-Banks, and PRIs to lead. Despite previous successes, aligning global economic governance with climate goals requires additional activities across export finance and investment insurance institutions. The new research project initiated by Oxford University, ClimateWorks Foundation, and Mission 2020, together with other practitioners and academics from institutions such as Atradius DSB, Columbia University, EDC, FMO, and Offenburg University, focuses on reshaping future trade and investment governance in light of climate action. The idea of a 'Berne Union Net Zero Club' is an important item in a potential package of reforms. This can include realigning mandates and corporate strategies, principles of intervention, and ECA, Exim-Bank, and PRI operating models in order to accelerate the net zero transformation. Full transparency regarding Berne Union members' activities would be an excellent starting point. We invite all interested parties in the sector to come together to chart our own path to net zero.
Objective: This paper deals with the design and the optimization of mechatronic devices.
Introduction: Compared with existing works, the design approach presented in this paper aims to integrate optimization into the design phase of complex mechatronic systems in order to increase the efficiency of the method.
Methods: To solve this problem, a novel mechatronic system design approach has been developed that takes the multidisciplinary aspect into account and treats optimization as a tool that can be used within the embodiment design process to build mechatronic solutions from a set of solution concepts designed with innovative or routine design methods.
Conclusions: This approach has then been applied to the design and optimization of a wind turbine system that can be implemented to autonomously supply a mountain cottage.
Many lecturers experience the heterogeneity of first-year students directly in introductory courses: heterogeneity not only in terms of prior subject knowledge but also in terms of available learning strategies, skills, motivation, and self-discipline. Just following a 90-minute lecture with concentration and recording the results in a structured way is a major challenge for many. This experience report examines the potential of modern tablets, tested at Offenburg University since the winter semester 2015/16, to combine the advantages of classic handwritten note-taking with a pre-structure such as that provided by PPT slides.
During pyrolysis, biomass is carbonised in the absence of oxygen to produce biochar, with heat and/or electricity as co-products, making pyrolysis one of the promising negative emission technologies for reaching climate goals worldwide. This paper presents a simplified representation of pyrolysis and analyses the impact of this technology on the energy system. Results show that pyrolysis can enable zero emissions at lower cost by changing the unit commitment of the power plants; for example, conventional power plants are operated differently because their emissions are compensated by biochar. Additionally, pyrolysis can enhance the flexibility of energy systems: the electricity generated alongside pyrolysis correlates with the installed hydrogen capacity, with hydrogen being used less when pyrolysis is deployed. The results indicate that pyrolysis, which is already available on the market, integrates well into the energy system and has promising potential to sequester carbon.
Smart Cities and Big Data
(2019)
Sharing Economy
(2019)
Entrepreneurial Resilience
(2019)
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space, which leads to a number of instabilities. Among them, mode collapse stands out as one of the most daunting. This undesirable event occurs when the model can only fit a few modes of the data distribution while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues and show that mode collapse is related to convergence towards sharp minima. In particular, we observe that the eigenvalues of the generator are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
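The link between sharp minima and large Hessian eigenvalues can be probed without ever forming the Hessian. The following sketch (pure Python, with a toy quadratic loss standing in for a GAN objective; this is not the NuGAN implementation) estimates the top Hessian eigenvalue via power iteration on finite-difference Hessian-vector products:

```python
import math

def grad(w):
    # Gradient of a toy quadratic loss f(w) = 3*w0^2 + 0.5*w1^2,
    # a stand-in for the gradient of a generator's loss.
    return [6.0 * w[0], 1.0 * w[1]]

def hvp(w, v, eps=1e-4):
    """Hessian-vector product via central finite differences of the gradient."""
    wp = [wi + eps * vi for wi, vi in zip(w, v)]
    wm = [wi - eps * vi for wi, vi in zip(w, v)]
    gp, gm = grad(wp), grad(wm)
    return [(a - b) / (2 * eps) for a, b in zip(gp, gm)]

def top_eigenvalue(w, iters=50):
    """Power iteration on the Hessian: large values indicate a sharp minimum."""
    v = [1.0, 1.0]
    lam = 0.0
    for _ in range(iters):
        hv = hvp(w, v)
        norm = math.sqrt(sum(x * x for x in hv))
        v = [x / norm for x in hv]
        lam = sum(a * b for a, b in zip(v, hvp(w, v)))
    return lam
```

For this toy loss the Hessian is diag(6, 1), so the iteration converges to 6; applied to a real generator's loss, the same estimate would flag convergence towards sharp minima.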
Generative adversarial networks are the state-of-the-art approach to learned synthetic image generation. Although early successes were mostly unsupervised, this trend has bit by bit been superseded by approaches based on labelled data. These supervised methods allow much finer-grained control of the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the necessity of annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned by representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter with unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
Deep generative models have recently achieved impressive results for many real-world applications, successfully generating high-resolution and diverse samples from complex datasets. As a consequence, fake digital content has proliferated, raising concern and spreading distrust in image content and leading to an urgent need for automated ways to detect such AI-generated fake images.
Despite the fact that many face editing algorithms seem to produce realistic human faces, upon closer examination they do exhibit artifacts in certain domains which are often hidden to the naked eye. In this work, we present a simple way to detect such fake face images - so-called DeepFakes. Our method is based on a classical frequency-domain analysis followed by a basic classifier. Compared to previous systems, which need to be fed with large amounts of labeled data, our approach shows very good results using only a few annotated training samples and even achieves good accuracies in fully unsupervised scenarios. For the evaluation on high-resolution face images, we combined several public datasets of real and fake faces into a new benchmark: Faces-HQ. On such high-resolution images, our approach reaches a perfect classification accuracy of 100% when trained on as few as 20 annotated samples. In a second experiment, on the medium-resolution images of the CelebA dataset, our method achieves 100% accuracy in the supervised and 96% in the unsupervised setting. Finally, on the low-resolution video sequences of the FaceForensics++ dataset, our method achieves 91% accuracy detecting manipulated videos.
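A frequency-domain analysis of this kind typically reduces an image to its azimuthally averaged power spectrum before classification. Here is a naive, self-contained sketch (pure Python, O(N^4) DFT, fine for tiny images; the details of the paper's actual pipeline are an assumption):

```python
import cmath, math

def power_spectrum(img):
    """Naive 2D DFT power spectrum of a square grayscale image (list of rows)."""
    n = len(img)
    spec = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0j
            for x in range(n):
                for y in range(n):
                    s += img[x][y] * cmath.exp(-2j * math.pi * (u * x + v * y) / n)
            spec[u][v] = abs(s) ** 2
    return spec

def radial_profile(spec):
    """Azimuthally averaged spectrum: the 1D feature fed to the classifier."""
    n = len(spec)
    c = n // 2
    sums, counts = {}, {}
    for u in range(n):
        for v in range(n):
            # Shift indices so the zero frequency sits at the centre of the grid.
            r = int(round(math.hypot((u + c) % n - c, (v + c) % n - c)))
            sums[r] = sums.get(r, 0.0) + spec[u][v]
            counts[r] = counts.get(r, 0) + 1
    return [sums[r] / counts[r] for r in sorted(sums)]
```

For a constant image all energy sits in the DC bin of the profile; upsampling artifacts in generated faces show up as characteristic bumps at high radial frequencies, which even a basic classifier can separate.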
The term attribute transfer refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, like changing the hair color, adding a smile, or enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator.
In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. By removing and regenerating only parts of images, our Attribute Transfer Inpainting Generative Adversarial Network (ATI-GAN) is able to utilize local context information to focus on the attributes while keeping the background unmodified, resulting in visually sound results.
Recent studies have shown remarkable success in image-to-image translation for attribute transfer applications. However, most existing approaches are based on deep learning and require an abundant amount of labeled data to produce good results, limiting their applicability. In the same vein, recent advances in meta-learning have led to successful implementations with limited available data, allowing so-called few-shot learning.
In this paper, we address this limitation of supervised methods by proposing a novel approach based on GANs. These are trained in a meta-training manner, which allows them to perform image-to-image translations using just a few labeled samples from a new target class. This work empirically demonstrates the potential of training a GAN for few-shot image-to-image translation on hair-color attribute synthesis tasks, opening the door to further research on generative transfer learning.
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g., Octave Convolutions), we propose a novel convolution scheme that stabilizes training and reduces the likelihood of mode collapse. The basic idea of our approach is to split convolutional filters into additive high- and low-frequency parts, while shifting weight updates from low to high frequencies during training. Intuitively, this method forces GANs to learn low-frequency coarse image structures before descending into fine (high-frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
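The frequency split of a filter can be illustrated with a simple blur-and-residual decomposition. This sketch (pure Python, 1D kernels, illustrative only and not the report's code) keeps the low-frequency part throughout and fades the high-frequency residual in via a schedule parameter alpha:

```python
def blur3(k):
    """Low-pass a 1D kernel with a [0.25, 0.5, 0.25] binomial filter."""
    n = len(k)
    out = []
    for i in range(n):
        left = k[i - 1] if i > 0 else 0.0
        right = k[i + 1] if i < n - 1 else 0.0
        out.append(0.25 * left + 0.5 * k[i] + 0.25 * right)
    return out

def frequency_split(kernel, alpha):
    """Additively recombine the low/high frequency parts of a filter.

    alpha=0 keeps only the low-frequency part (early training),
    alpha=1 restores the full filter (late training).
    """
    low = blur3(kernel)
    high = [k - l for k, l in zip(kernel, low)]
    return [l + alpha * h for l, h in zip(low, high)]
```

Ramping alpha from 0 to 1 over the course of training reproduces the intuition above: coarse structures are learned first, fine details later.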
Minors rightly enjoy special protection in various areas of law. These include the general contract law of the German Civil Code (BGB), the unfair competition law of the Act against Unfair Competition (UWG), and data protection law, where this is explicitly laid down in the General Data Protection Regulation (GDPR). This article discusses some of the relevant questions.
Beuys Conversation (Beuys-Gespräch)
(2022)
In safety-critical systems, no piece of code may run in production without first undergoing intensive testing. Software must also be tested for quality assurance. To check code coverage, additional instrumentation instructions are required in the source code. On small systems with little RAM, the developer then has to get creative to make this work.
Large heat flows are available in a vehicle that currently go unused. Compared with the energy of the engine/cooling system, the energy in the exhaust gas has a much higher capacity to do work. Research aims to feed this thermal energy back into the system, for example by means of a thermoelectric generator, and to use it as an energy source for electrical consumers.
Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards. However, designing effective out-of-distribution tests that encompass all possible scenarios while preserving accurate label information is a challenging task. Existing methodologies often compromise on the variety or the constraint level of their attacks, and sometimes on both. As a first step towards a more holistic robustness evaluation of image classification models, we introduce an attack based on image solarization that is conceptually straightforward yet, independent of its intensity, avoids jeopardizing the global structure of natural images. Through comprehensive evaluations of multiple ImageNet models, we demonstrate the attack's capacity to degrade accuracy significantly, provided it is not integrated into the training augmentations; interestingly, even then, no full immunity to accuracy deterioration is achieved. In other settings, the attack can often be simplified into a black-box attack with model-independent parameters. Defenses against other corruptions do not consistently extend to our specific attack.
Project website: https://github.com/paulgavrikov/adversarial_solarization
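Solarization itself is a one-line pixel operation, which is what makes the attack so cheap to run. Here is a hedged sketch (pure Python; the `model` interface and the threshold grid are illustrative assumptions, not the project's actual API):

```python
def solarize(img, threshold):
    """Invert all pixel values at or above a threshold (values in [0, 255])."""
    return [[255 - p if p >= threshold else p for p in row] for row in img]

def solarization_attack(img, model, thresholds=range(0, 256, 32)):
    """Black-box search: pick the threshold that hurts model confidence most.

    `model` maps an image to the confidence of the true class; this
    interface is a hypothetical stand-in for the real evaluation code.
    """
    return min((solarize(img, t) for t in thresholds), key=model)
```

Because only the scalar threshold is searched and no gradients are needed, the attack naturally runs in a black-box setting, matching the model-independent parameters mentioned above.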
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent, for example transformer-based, models by not only increasing model depth and width but also the kernel size. This results in large amounts of learnable model parameters that need to be handled during training. While following the convolutional paradigm with the according spatial inductive bias, we question the significance of learned convolution filters. In fact, our findings demonstrate that many contemporary CNN architectures can achieve high test accuracies without ever updating randomly initialized (spatial) convolution filters. Instead, simple linear combinations (implemented through efficient 1×1 convolutions) suffice to effectively recombine even random filters into expressive network operators. Furthermore, these combinations of random filters can implicitly regularize the resulting operations, mitigating overfitting and enhancing overall performance and robustness. Conversely, retaining the ability to learn filter updates can impair network performance. Lastly, although we only observe relatively small gains from learning 3×3 convolutions, the learning gains increase proportionally with kernel size, owing to the non-idealities of the independent and identically distributed (i.i.d.) nature of default initialization techniques.
Modern CNNs are learning the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question if this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting the notion of pointwise (1×1) convolutions as an operator to learn linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only allows us to reach high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight sharing mechanism, which allows sharing of a single weight tensor between all spatial convolution layers to massively reduce the number of weights.
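The linear-combination idea can be demonstrated in a few lines. In this sketch (pure Python, 1D convolutions, illustrative class name only) the spatial filters are frozen at random initialization and only the combination coefficients would be trained; by linearity, the output equals a convolution with the coefficient-weighted sum of the filters:

```python
import random

random.seed(0)

def conv1d(signal, kernel):
    """'Valid' 1D convolution."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

class LCConv:
    """Frozen random spatial filters recombined by learnable coefficients.

    In a real network only `self.coeffs` would receive gradient updates
    (e.g. via a 1x1 convolution); the spatial filters stay at their
    random initialization.
    """
    def __init__(self, num_filters=4, kernel_size=3):
        self.filters = [[random.gauss(0, 1) for _ in range(kernel_size)]
                        for _ in range(num_filters)]
        self.coeffs = [1.0 / num_filters] * num_filters  # learnable

    def __call__(self, signal):
        outs = [conv1d(signal, f) for f in self.filters]
        return [sum(c * o[i] for c, o in zip(self.coeffs, outs))
                for i in range(len(outs[0]))]
```

The equivalence to a single convolution with the combined kernel is what lets the coefficients alone span a rich family of effective filters, even though the spatial weights never change.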
The energy supply of Offenburg University of Applied Sciences (HS OG) was changed from separate generation to trigeneration in 2007/2008. Trigeneration was installed to supply heat, cooling, and electrical power at HS OG. In this paper, the trigeneration process and its modes of operation, along with the layout of the energy facility at HS OG, are described. Special emphasis is given to the operation schemes and control strategies of the operation modes: winter mode, transition mode, and summer mode. The components used in the energy facility are also outlined. Monitoring and data analysis of the energy system were carried out after the commissioning of trigeneration in the period from 2008 to 2011, yielding valuable performance data.
Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling
(2023)
Convolutional neural networks encode images through a sequence of convolutions, normalizations, non-linearities, and downsampling operations into potentially strong semantic embeddings. Yet previous work showed that even slight sampling mistakes, which lead to aliasing, can be directly attributed to the networks' lack of robustness. To address such issues and facilitate simpler and faster adversarial training, [12] recently proposed FLC pooling, a method for provably alias-free downsampling - in theory. In this work, we conduct a further analysis through the lens of signal processing and find that such current pooling methods, which address aliasing in the frequency domain, are still prone to spectral leakage artifacts. Hence, we propose aliasing and spectral artifact-free pooling, short ASAP. While introducing only a few modifications to FLC pooling, networks using ASAP as their downsampling method exhibit higher native robustness against common corruptions, a property that FLC pooling was missing. ASAP also increases native robustness against adversarial attacks on high- and low-resolution data while maintaining similar clean accuracy or even outperforming the baseline.
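FLC-style pooling is easiest to see in 1D: transform to the frequency domain, crop the spectrum to the lowest frequencies, and transform back. A minimal sketch (pure Python DFT; this only covers the aliasing part, not the spectral leakage that ASAP additionally addresses):

```python
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * math.pi * f * t / n) for f in range(n)) / n
            for t in range(n)]

def flc_pool1d(x):
    """Downsample by 2 by keeping only the lowest frequency components.

    Cropping the spectrum acts as an ideal low-pass filter, so no aliasing
    can occur by construction.
    """
    X = dft(x)
    n = len(x)
    m = n // 2
    # Keep the m lowest frequencies (a centred crop of the spectrum).
    kept = X[: m // 2 + m % 2] + X[n - m // 2:]
    # Rescale so amplitudes are preserved after shortening the signal.
    return [v.real for v in idft([k * m / n for k in kept])]
```

A constant signal passes through unchanged at half the length, while frequencies above the new Nyquist limit are removed exactly rather than folded back as aliases.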
Motivated by the recent trend towards the usage of larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be. To facilitate such a study, several challenges need to be addressed, most importantly: (i) we need to provide an effective way for models to learn large filters (potentially as large as the input data) without increasing their memory consumption during training or inference, (ii) the study of filter sizes has to be decoupled from other effects such as the network width or number of learnable parameters, and (iii) the employed convolution operation should be a plug-and-play module that can replace any conventional convolution in a Convolutional Neural Network (CNN) and allow for an efficient implementation in current frameworks. To enable such models, we propose to learn not spatial but frequency representations of filter weights as neural implicit functions, such that even infinitely large filters can be parameterized by only a few learnable weights. The resulting neural implicit frequency CNNs are the first models to achieve results on par with the state-of-the-art on large image classification benchmarks while executing convolutions solely in the frequency domain, and they can be employed within any CNN architecture. They allow us to provide an extensive analysis of the learned receptive fields. Interestingly, our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters practically translate into well-localized and relatively small convolution kernels in the spatial domain.
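A neural implicit filter decouples the number of learnable weights from the filter size: the same tiny network can be sampled on any frequency grid. An illustrative sketch (pure Python, random untrained weights; the paper's actual parameterization may well differ):

```python
import math, random

random.seed(1)

class ImplicitFilter:
    """A filter parameterized as a function of frequency coordinates.

    A tiny MLP maps (u, v) in [-1, 1]^2 to a filter value, so the number
    of learnable weights is independent of the materialized filter size.
    """
    def __init__(self, hidden=8):
        self.w1 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [random.gauss(0, 1) for _ in range(hidden)]

    def value(self, u, v):
        # One hidden layer with tanh activation, linear output.
        h = [math.tanh(wu * u + wv * v + b)
             for (wu, wv), b in zip(self.w1, self.b1)]
        return sum(wi * hi for wi, hi in zip(self.w2, h))

    def materialize(self, size):
        """Sample the implicit function on a size x size frequency grid."""
        coords = [2 * i / (size - 1) - 1 for i in range(size)]
        return [[self.value(u, v) for v in coords] for u in coords]
```

Materializing the same function on a 3×3 or a 65×65 grid uses the identical parameter set, which is precisely what allows "infinitely large" filters with only a few learnable weights.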