State-of-the-art models for pixel-wise prediction tasks such as image restoration, image segmentation, or disparity estimation involve several stages of data resampling, in which the resolution of feature maps is first reduced to aggregate information and then sequentially increased to generate a high-resolution output. Several previous works have investigated the artifacts that are invoked during downsampling, and diverse remedies have been proposed that help to improve prediction stability and even robustness for image classification. However, the equally relevant artifacts that arise during upsampling have been discussed far less. This is particularly relevant, as upsampling and downsampling face fundamentally different challenges: while aliasing and artifacts during downsampling can be reduced by blurring feature maps, the emergence of fine details is crucial during upsampling. Blurring is therefore not an option, and dedicated operations need to be considered. In this work, we are the first to explore the relevance of context during upsampling by employing convolutional upsampling operations with increasing kernel size while keeping the encoder unchanged. We find that increased kernel sizes can in general improve prediction stability in tasks such as image restoration or image segmentation, while a block that combines small kernels for fine details with large kernels for artifact removal and increased context yields the best results.
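The upsampling block described above can be sketched in plain NumPy. This is a minimal single-channel illustration, not the paper's implementation: the kernel sizes, the blending weight `alpha`, and the nearest-neighbour upsampling step are illustrative assumptions.

```python
import numpy as np

def conv2d(img, kernel):
    """'Same'-padded naive 2D convolution for a single channel."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

def upsample_block(feat, small_kernel, large_kernel, alpha=0.5):
    """2x nearest-neighbour upsampling followed by a blend of a small kernel
    (fine details) and a large kernel (context / artifact removal)."""
    up = feat.repeat(2, axis=0).repeat(2, axis=1)
    return alpha * conv2d(up, small_kernel) + (1 - alpha) * conv2d(up, large_kernel)

feat = np.arange(16.0).reshape(4, 4)
small = np.full((3, 3), 1 / 9)   # 3x3 averaging kernel
large = np.full((7, 7), 1 / 49)  # 7x7 averaging kernel
out = upsample_block(feat, small, large)
print(out.shape)  # (8, 8)
```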
This paper presents the new Deep Reinforcement Learning (DRL) library RL-X and its application to the RoboCup Soccer Simulation 3D League and classic DRL benchmarks. RL-X provides a flexible and easy-to-extend codebase with self-contained single directory algorithms. Through the fast JAX-based implementations, RL-X can reach up to 4.5x speedups compared to well-known frameworks like Stable-Baselines3.
Heat pumps are a key technology of the heat transition. By harnessing ambient heat and running on electricity that is increasingly generated from renewable sources, the CO2 intensity of heat supply can be reduced. A particular challenge lies in their application in larger existing multi-family buildings. Solution approaches and exemplary implementations are presented.
The majority of German companies expect AI-supported data analysis to deliver a major business advantage. Yet the state of a company's data is one of the biggest, and still frequently underestimated, hurdles when training and deploying AI algorithms. Below, four concrete lessons and tips for AI and data analytics projects in companies are presented.
According to an Interxion study, artificial intelligence (AI) is in use at 96 percent of Swiss companies. However, only 22 percent of Swiss IT decision-makers stated that they are already using AI for a first use case. Yet AI is very helpful, for instance in data management, provided the quality and quantity of the training data are right.
"Machen Sie doch mal mehr PR und Werbung für Ihre Schule": Kommunikationscontrolling in Schulen
(2015)
Henry Ford's bon mot on measuring advertising success is surely the best-known sentence in the field of communication controlling: "Half the money I spend on advertising is wasted; the trouble is I don't know which half." This critical appraisal of communication performance is still a recurring topic today, and particularly in the school environment, where such processes do not yet have a long tradition, it is part of the internal and external discussion. Managing communication processes, however, requires not only quantifying communication performance but also embedding it in the overall marketing strategy and in the evaluation of individual marketing areas and the marketing objectives developed there.
With economic weight shifting toward net zero, now is the time for ECAs, Exim-Banks, and PRIs to lead. Despite previous success, aligning global economic governance to climate goals requires additional activities across export finance and investment insurance institutions. The new research project initiated by Oxford University, ClimateWorks Foundation, and Mission 2020, together with other practitioners and academics from institutions such as Atradius DSB, Columbia University, EDC, FMO and Offenburg University, focuses on reshaping future trade and investment governance in light of climate action. The idea of a ‘Berne Union Net Zero Club’ is an important item in a potential package of reforms. This can include realigning mandates and corporate strategies, principles of intervention, as well as ECA, Exim-Bank and PRI operating models in order to accelerate the net zero transformation. Full transparency regarding Berne Union members’ activities would be an excellent starting point. We invite all interested parties in the sector to come together to chart our own path to net zero.
Objective: This paper deals with the design and optimization of mechatronic devices.
Introduction: Compared with existing works, the design approach presented in this paper aims to integrate optimization into the design phase of complex mechatronic systems in order to increase the efficiency of the method.
Methods: To this end, a novel mechatronic system design approach has been developed that takes the multidisciplinary aspect into account and treats optimization as a tool that can be used within the embodiment design process to build mechatronic solutions from a set of solution concepts created with innovative or routine design methods.
Conclusions: The approach has been applied to the design and optimization of a wind turbine system that can autonomously supply a mountain cottage.
Many lecturers experience the heterogeneity of first-year students directly in introductory courses: heterogeneity not only in terms of prior subject knowledge but also regarding available learning strategies, skills, motivation, and self-discipline. Even following a 90-minute lecture with concentration and recording the results in a structured way is a major challenge for many. This experience report examines the potential of modern tablets, trialled at Offenburg University since the winter semester 2015/16, to combine the advantages of classic handwritten note-taking with a pre-structured format such as that provided by PPT slides.
During pyrolysis, biomass is carbonised in the absence of oxygen to produce biochar, with heat and/or electricity as co-products, making pyrolysis one of the promising negative emission technologies for reaching climate goals worldwide. This paper presents a simplified representation of pyrolysis and analyses the impact of this technology on the energy system. Results show that pyrolysis can enable zero emissions at lower cost by changing the unit commitment of the power plants: conventional power plants are operated differently, as their emissions are compensated by biochar. Additionally, pyrolysis can enhance the flexibility of energy systems, as the results show a correlation between the electricity generated by pyrolysis and the installed hydrogen capacity, with hydrogen being used less when pyrolysis is present. The results indicate that pyrolysis, which is already available on the market, integrates well into the energy system, with a promising potential to sequester carbon.
Smart Cities und Big Data
(2019)
Sharing Economy
(2019)
Unternehmerische Resilienz
(2019)
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space, leading to a number of instabilities. Among them, mode collapse stands out as one of the most daunting. This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues and show that mode collapse is related to the convergence towards sharp minima. In particular, we observe how the eigenvalues of the generator are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
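The link between sharp minima and large Hessian eigenvalues can be illustrated with a generic power-iteration sketch on Hessian-vector products. This is not the NuGAN algorithm, just a minimal NumPy illustration of how the spectral (second-order) information is obtained; the toy quadratic loss and the finite-difference step sizes are assumptions.

```python
import numpy as np

def grad(f, x, eps=1e-5):
    """Central finite-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def top_hessian_eigenvalue(f, x, iters=50, eps=1e-3):
    """Power iteration on Hessian-vector products Hv ~ (grad(x + eps*v) - grad(x)) / eps.
    A large top eigenvalue indicates a sharp minimum."""
    v = np.random.default_rng(0).normal(size=x.shape)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        hv = (grad(f, x + eps * v) - grad(f, x)) / eps
        lam = float(v @ hv)                      # Rayleigh quotient estimate
        v = hv / (np.linalg.norm(hv) + 1e-12)
    return lam

# Toy quadratic with known Hessian eigenvalues 1 and 10: the sharp direction dominates.
f = lambda x: 0.5 * (1.0 * x[0] ** 2 + 10.0 * x[1] ** 2)
lam = top_hessian_eigenvalue(f, np.array([0.3, -0.2]))
print(round(lam, 2))  # 10.0
```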
Generative adversarial networks are the state-of-the-art approach to learned synthetic image generation. Although early successes were mostly unsupervised, this trend has bit by bit been superseded by approaches based on labelled data. These supervised methods allow a much finer-grained control of the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the necessity of annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned by representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter with unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
Deep generative models have recently achieved impressive results for many real-world applications, successfully generating high-resolution and diverse samples from complex datasets. As a result of this progress, fake digital content has proliferated, raising concern and spreading distrust in image content and leading to an urgent need for automated ways to detect these AI-generated fake images.
Despite the fact that many face editing algorithms seem to produce realistic human faces, upon closer examination they do exhibit artifacts in certain domains which are often hidden to the naked eye. In this work, we present a simple way to detect such fake face images - so-called DeepFakes. Our method is based on a classical frequency-domain analysis followed by a basic classifier. Compared to previous systems, which need to be fed with large amounts of labeled data, our approach shows very good results using only a few annotated training samples and even achieves good accuracies in fully unsupervised scenarios. For the evaluation on high-resolution face images, we combined several public datasets of real and fake faces into a new benchmark: Faces-HQ. Given such high-resolution images, our approach reaches a perfect classification accuracy of 100% when it is trained on as few as 20 annotated samples. In a second experiment, on the medium-resolution images of the CelebA dataset, our method achieves 100% accuracy in a supervised and 96% in an unsupervised setting. Finally, evaluating the low-resolution video sequences of the FaceForensics++ dataset, our method achieves 91% accuracy in detecting manipulated videos.
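A classical frequency-domain cue of the kind used here is the azimuthally averaged power spectrum of an image, whose high-frequency tail often behaves abnormally for generated images. The NumPy sketch below is a simplified illustration, not the paper's pipeline; the bin count, the two synthetic test images, and the final comparison are assumptions.

```python
import numpy as np

def spectral_profile(img, n_bins=20):
    """1D azimuthally averaged power spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    return np.array([power[bins == b].mean() for b in range(n_bins)])

rng = np.random.default_rng(0)
smooth = np.cumsum(np.cumsum(rng.normal(size=(64, 64)), 0), 1)  # low-frequency image
noisy = rng.normal(size=(64, 64))                               # flat (white-noise) spectrum
p_smooth, p_noisy = spectral_profile(smooth), spectral_profile(noisy)
# Relative high-frequency energy is far larger for the noisy image:
print(p_noisy[-1] / p_noisy[0] > p_smooth[-1] / p_smooth[0])  # True
```

A classifier would then be trained on such 1D profiles rather than on raw pixels.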
The term attribute transfer refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes. Prominent example applications are photo-realistic changes of facial features and expressions, like changing the hair color, adding a smile, enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator.
In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. Removing and regenerating only parts of images, our Attribute Transfer Inpainting Generative Adversarial Network (ATI-GAN) is able to utilize local context information to focus on the attributes while keeping the background unmodified, resulting in visually convincing results.
Recent studies have shown remarkable success in image-to-image translation for attribute transfer applications. However, most existing approaches are based on deep learning and require an abundant amount of labeled data to produce good results, which limits their applicability. In the same vein, recent advances in meta-learning have led to successful implementations with limited available data, allowing so-called few-shot learning.
In this paper, we address this limitation of supervised methods by proposing a novel approach based on GANs. These are trained in a meta-training manner, which allows them to perform image-to-image translations using just a few labeled samples from a new target class. This work empirically demonstrates the potential of training a GAN for few-shot image-to-image translation on hair-color attribute synthesis tasks, opening the door to further research on generative transfer learning.
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g. Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high- and low-frequency parts, while shifting weight updates from low to high during the training. Intuitively, this method forces GANs to learn low-frequency coarse image structures before descending into fine (high-frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
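The additive low/high-frequency split of a filter can be sketched with an FFT mask. This is a hedged illustration of the general idea only, not the paper's octave-convolution-based scheme; the radial cutoff and the linear blending schedule are assumptions.

```python
import numpy as np

def split_filter(kernel):
    """Split a filter into additive low- and high-frequency parts via an FFT mask."""
    f = np.fft.fftshift(np.fft.fft2(kernel))
    h, w = kernel.shape
    y, x = np.indices((h, w))
    low_mask = np.hypot(y - h // 2, x - w // 2) <= min(h, w) / 4  # assumed cutoff
    low = np.fft.ifft2(np.fft.ifftshift(f * low_mask)).real
    high = kernel - low
    return low, high

def scheduled_filter(kernel, t):
    """Blend from low- to full-frequency content as training progresses (t in [0, 1]):
    t=0 keeps only coarse structure, t=1 restores the full filter."""
    low, high = split_filter(kernel)
    return (1 - t) * low + t * (low + high)

k = np.random.default_rng(0).normal(size=(5, 5))
low, high = split_filter(k)
print(np.allclose(low + high, k))  # True: exact additive decomposition
```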
Minors rightly enjoy special protection in various areas of law. These include the general contract law of the German Civil Code (BGB), the fair trading law of the UWG, and data protection law, where this is expressly laid down in the General Data Protection Regulation (GDPR). This article discusses some of the relevant questions.
In safety-critical systems, no piece of code may run in production without first undergoing intensive testing. Software must also be tested for quality assurance. To check code coverage, additional instrumentation instructions are required in the source code. On small systems with little RAM, the developer then has to get creative to make this work.
Large heat flows are available in a vehicle that currently go unused. The energy of the exhaust gas has a considerably higher exergy than that of the engine/cooling system. Investigations aim at coupling this thermal energy back into the system, for example by means of a thermoelectric generator, and using it as an energy source for electrical consumers.
Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards. However, designing effective out-of-distribution tests that encompass all possible scenarios while preserving accurate label information is a challenging task. Existing methodologies often compromise on either the variety or the constraint level of the attacks, and sometimes on both. As a first step towards a more holistic robustness evaluation of image classification models, we introduce an attack method based on image solarization that is conceptually straightforward yet, independent of its intensity, avoids jeopardizing the global structure of natural images. Through comprehensive evaluations of multiple ImageNet models, we demonstrate the attack's capacity to degrade accuracy significantly, provided it is not integrated into the training augmentations. Interestingly, even then, no full immunity to accuracy deterioration is achieved. In other settings, the attack can often be simplified into a black-box attack with model-independent parameters. Defenses against other corruptions do not consistently extend to be effective against our specific attack.
Project website: https://github.com/paulgavrikov/adversarial_solarization
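Image solarization itself is a one-line operation: pixel values above a threshold are inverted. The sketch below also shows how it could be turned into a simple black-box search over thresholds; the `model_accuracy` callable is a hypothetical stand-in for an actual model evaluation, not part of the paper's code.

```python
import numpy as np

def solarize(img, threshold):
    """Invert every pixel value at or above the threshold (8-bit range [0, 255])."""
    img = np.asarray(img)
    return np.where(img >= threshold, 255 - img, img)

def solarization_attack(img, model_accuracy, thresholds=range(0, 256, 32)):
    """Hypothetical black-box search: try several thresholds and keep the image
    that hurts the (assumed) model_accuracy callable the most."""
    candidates = [solarize(img, t) for t in thresholds]
    scores = [model_accuracy(c) for c in candidates]
    return candidates[int(np.argmin(scores))]

img = np.array([[10, 200], [128, 255]])
print(solarize(img, 128))
```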
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent, for example transformer-based, models by not only increasing model depth and width but also the kernel size. This results in large amounts of learnable model parameters that need to be handled during training. While following the convolutional paradigm with the according spatial inductive bias, we question the significance of learned convolution filters. In fact, our findings demonstrate that many contemporary CNN architectures can achieve high test accuracies without ever updating randomly initialized (spatial) convolution filters. Instead, simple linear combinations (implemented through efficient 1×1 convolutions) suffice to effectively recombine even random filters into expressive network operators. Furthermore, these combinations of random filters can implicitly regularize the resulting operations, mitigating overfitting and enhancing overall performance and robustness. Conversely, retaining the ability to learn filter updates can impair network performance. Lastly, although we only observe relatively small gains from learning 3×3 convolutions, the learning gains increase proportionally with kernel size, owing to the non-idealities of the independent and identically distributed (i.i.d.) nature of default initialization techniques.
Modern CNNs are learning the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question if this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting the notion of pointwise (1×1) convolutions as an operator to learn linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only allows us to reach high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight sharing mechanism, which allows sharing of a single weight tensor between all spatial convolution layers to massively reduce the number of weights.
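The core idea of the two abstracts above, learning only linear combinations (1×1 convolutions) of frozen random spatial filters, can be illustrated in NumPy: with as many random 3×3 filters as filter entries, fitting only the combination coefficients can reproduce an arbitrary effective filter. The basis size, the Laplacian target filter, and the least-squares fit are illustrative assumptions, not the papers' training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_basis = 9
basis = rng.normal(size=(n_basis, 3, 3))   # frozen random 3x3 filters (never updated)

# Learning a 1x1 convolution over the basis responses is equivalent to learning
# coefficients c such that sum_i c_i * basis_i acts as the effective spatial filter.
target = np.array([[0., -1., 0.],
                   [-1., 4., -1.],
                   [0., -1., 0.]])          # e.g. a Laplacian edge filter

A = basis.reshape(n_basis, -1).T            # (9, n_basis) design matrix
c, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
effective = (c[:, None, None] * basis).sum(axis=0)
print(np.allclose(effective, target, atol=1e-6))  # True: 9 random filters span R^9
```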
The energy supply of Offenburg University of Applied Sciences (HS OG) was changed from separate generation to trigeneration in 2007/2008. Trigeneration was installed to supply heat, cooling and electrical power at HS OG. In this paper, the trigeneration process and its modes of operation, along with the layout of the energy facility at HS OG, are described. Special emphasis is given to the operation schemes and control strategies of the operation modes: winter mode, transition mode and summer mode. The components used in the energy facility are also outlined. Monitoring and data analysis of the energy system were carried out after the commissioning of trigeneration in the period from 2008 to 2011, yielding valuable performance data.
Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling
(2023)
Convolutional neural networks encode images through a sequence of convolutions, normalizations and non-linearities as well as downsampling operations into potentially strong semantic embeddings. Yet, previous work showed that even slight mistakes during sampling, leading to aliasing, can be directly attributed to the networks' lack of robustness. To address such issues and facilitate simpler and faster adversarial training, [12] recently proposed FLC pooling, a method for provably alias-free downsampling - in theory. In this work, we conduct a further analysis through the lens of signal processing and find that such current pooling methods, which address aliasing in the frequency domain, are still prone to spectral leakage artifacts. Hence, we propose aliasing and spectral artifact-free pooling, short ASAP. While introducing only a few modifications to FLC pooling, networks using ASAP as their downsampling method exhibit higher native robustness against common corruptions, a property that FLC pooling was missing. ASAP also increases native robustness against adversarial attacks on high- and low-resolution data while maintaining similar clean accuracy or even outperforming the baseline.
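FLC-style pooling downsamples by discarding high frequencies instead of subsampling in the spatial domain. Below is a minimal single-channel NumPy sketch of that idea; the cropping fractions and the normalization are assumptions, and the spectral-leakage fixes that distinguish ASAP are not included.

```python
import numpy as np

def flc_downsample(feat):
    """Alias-free 2x downsampling in the frequency domain: keep only the central
    (low-frequency) quadrant of the spectrum, then transform back."""
    h, w = feat.shape
    f = np.fft.fftshift(np.fft.fft2(feat))
    crop = f[h // 4: h // 4 + h // 2, w // 4: w // 4 + w // 2]
    # Divide by 4 so that the mean intensity is preserved at half the resolution.
    return np.fft.ifft2(np.fft.ifftshift(crop)).real / 4.0

x = np.ones((8, 8))
y = flc_downsample(x)
print(y.shape, round(float(y.mean()), 6))  # (4, 4) 1.0
```

Because no frequency above the new Nyquist limit survives the crop, the operation cannot alias by construction.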
Motivated by the recent trend towards the usage of larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be. To facilitate such study, several challenges need to be addressed, most importantly: (i) We need to provide an effective way for models to learn large filters (potentially as large as the input data) without increasing their memory consumption during training or inference, (ii) the study of filter sizes has to be decoupled from other effects such as the network width or number of learnable parameters, and (iii) the employed convolution operation should be a plug-and-play module that can replace any conventional convolution in a Convolutional Neural Network (CNN) and allow for an efficient implementation in current frameworks. To facilitate such models, we propose to learn not spatial but frequency representations of filter weights as neural implicit functions, such that even infinitely large filters can be parameterized by only a few learnable weights. The resulting neural implicit frequency CNNs are the first models to achieve results on par with the state-of-the-art on large image classification benchmarks while executing convolutions solely in the frequency domain and can be employed within any CNN architecture. They allow us to provide an extensive analysis of the learned receptive fields. Interestingly, our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters practically translate into well-localized and relatively small convolution kernels in the spatial domain.
We introduce an open-source Python framework named PHS (Parallel Hyperparameter Search) to enable hyperparameter optimization of any arbitrary Python function on numerous compute instances. This is achieved with minimal modifications inside the target function. Possible applications are expensive-to-evaluate numerical computations that strongly depend on hyperparameters, such as machine learning. Bayesian optimization is chosen as a sample-efficient method to propose the next query set of parameters.
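A minimal Bayesian-optimization loop of the kind such frameworks build on can be sketched with a Gaussian-process surrogate and a lower-confidence-bound acquisition. This is a generic NumPy illustration, not the PHS API; the kernel length scale, the acquisition rule, and the toy objective are all assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def bo_propose(x_obs, y_obs, candidates, noise=1e-6):
    """GP posterior over candidates, then pick the lower-confidence-bound minimizer."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    K_s = rbf(candidates, x_obs)
    mu = K_s @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)
    lcb = mu - 2.0 * np.sqrt(np.maximum(var, 0))
    return candidates[int(np.argmin(lcb))]

# Toy objective: minimize (lr - 0.3)^2, pretending `lr` is a learning rate.
f = lambda lr: (lr - 0.3) ** 2
xs = np.array([0.0, 1.0])
ys = f(xs)
cand = np.linspace(0, 1, 101)
for _ in range(10):
    x_next = bo_propose(xs, ys, cand)
    xs, ys = np.append(xs, x_next), np.append(ys, f(x_next))
print(round(float(xs[np.argmin(ys)]), 2))  # best query lands near 0.3
```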
Air curtain systems installed between two zones with different temperatures can prevent the air exchange caused by free convection. The influence of different operating conditions on energy consumption and thermal comfort was investigated in the laboratory. The efficiency of air curtain systems was additionally verified in field operation at a cold-storage cell. In this application, the focus is not on thermal comfort but on energy consumption, the risk of the refrigerated goods warming up, and the risk of ice formation in front of the cold-storage entrance.
Kühlen im großen Stil
(2010)
Since July 2006, Offenburg University, in cooperation with Fraunhofer ISE in Freiburg and HfT Stuttgart, has been monitoring the solar-assisted air conditioning of Festo AG & Co. KG in Esslingen. The plant was funded by the Federal Environment Ministry within the research project "Solarthermie2000plus". The existing adsorption chiller, previously driven by compressor waste heat and gas boilers, was supplemented by a solar thermal system as a third heat source.
The COVID-19 pandemic, a unique and devastating respiratory disease outbreak, has affected global populations as the disease spreads rapidly. Recent Deep Learning breakthroughs may improve COVID-19 prediction and forecasting as tools for precise and fast detection; however, current methods are still being examined to achieve higher accuracy and precision. This study analyzed a collection of 8,055 CT image samples, 5,427 of which were COVID-19 cases and 2,628 non-COVID. The 9,544 X-ray samples included 4,044 COVID-19 patients and 5,500 non-COVID cases. The most accurate models are MobileNetV3 (97.872 percent), GoogLeNet Inception V1 (97.643 percent), and DenseNet201 (97.567 percent); other metrics are likewise high for MobileNetV3 and DenseNet201. An extensive evaluation using accuracy, precision, and recall allows a comprehensive comparison; in this study, the predictive models are improved by combining loss optimization with scalable batch normalization. Our analysis shows that these tactics improve model performance and resilience for advancing COVID-19 prediction and detection, and demonstrates how Deep Learning can improve disease handling. The methods we suggest would help healthcare systems, policymakers, and researchers make informed decisions to reduce COVID-19 and other contagious diseases.
In order to make material design processes more efficient in the future, the underlying multidimensional process parameter spaces must be systematically explored using digitalisation techniques such as machine learning (ML) and digital simulation. In this paper, we briefly review essential concepts for the digitalisation of electrodeposition processes, with a special focus on chromium plating from trivalent electrolytes.
Additive manufacturing (AM), and in particular 3D multi-material printing, offers completely new production technologies thanks to the freedom in design and the simultaneous processing of several materials in one component. Today's CAD systems for product development are volume-based and therefore cannot adequately implement the multi-material approach. Voxel-based CAD systems offer the advantage that a component can be divided into many voxels, and different materials and functions can be assigned to these voxels. In this contribution, two voxel-based CAD systems are analyzed in order to simplify AM at the voxel level with different materials. To this end, a number of suitable criteria for evaluating voxel-based CAD systems are developed and applied. The results of a technical-economic comparison show the differences between the voxel-based systems and disclose their disadvantages compared to conventional CAD systems. To overcome these disadvantages, a new method is presented that enables the voxelization of a component in a simple way, based on a conventional CAD model. The process chain of this new method is demonstrated using a typical component from product design, and the results of its implementation are illustrated and analyzed.
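The voxelization idea can be illustrated by sampling an implicit solid on a regular grid and assigning a material per voxel. This is a hypothetical NumPy sketch, not the method proposed in the paper; the sphere geometry, the grid resolution, and the material assignment rule are invented for illustration.

```python
import numpy as np

def voxelize_sphere(radius, n=16):
    """Voxelize an implicit solid (here a sphere) into an n^3 occupancy grid
    and assign a material ID per voxel - the core idea of voxel-based
    multi-material CAD."""
    axis = np.linspace(-1, 1, n)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    occupied = x ** 2 + y ** 2 + z ** 2 <= radius ** 2
    material = np.zeros((n, n, n), dtype=int)     # 0 = empty
    material[occupied] = 1                        # material 1: bulk
    material[occupied & (z > 0.5 * radius)] = 2   # material 2: e.g. a stiffer cap
    return material

grid = voxelize_sphere(0.8)
print(grid.shape, int((grid > 0).sum()))  # occupancy count depends on resolution
```

A slicer for multi-material AM would then map each material ID to a print head or resin.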
The mathematical representation of data in the Spherical Harmonic (SH) domain has recently regained increasing interest in the machine learning community. This technical report gives an in-depth introduction to the theoretical foundation and practical implementation of SH representations, summarizing works on rotation-invariant and equivariant features, as well as convolutions and exact correlations of signals on spheres. These methods are then generalized from scalar SH representations to Vectorial Harmonics (VH), providing the same capabilities for 3D vector fields on spheres.
Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm either require some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present an unsupervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method builds on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the required appearance invariances for the tracking task at hand and to train an autoencoder to generate suitable latent representations. The resulting latent representations can thus serve as robust appearance cues for tracking, even over large temporal distances where no reliable spatio-temporal features can be extracted. We show that, despite being trained without using the provided annotations, our model provides competitive results on the challenging MOT Benchmark for pedestrian tracking.
Financing trade and development sustainably will be crucial for Africa. Enhanced collaboration between multilateral development banks, development finance institutions and ECAs could greatly enhance intra-regional trade. Furthermore, setting up a ‘level playing field’ on the continent will allow governments to make strategic interventions for successful export credits and trade finance solutions, fostering growth through trade. African trade is already showing signs of rebounding from the coronavirus-induced recession. Through concerted, co-operative and continent-wide efforts, drawing on the knowledge and resources of all types of institutions and policy experts, Africa will continue to grow confidently and quickly into its increasingly important role as an engine of economic growth and global trade.
Excellent organisations require targeted strategies to implement their vision and mission, deploying a stakeholder-focused approach. As part of evidence-based policy making, it is a common approach to measure the results of government financing vehicles. A state-of-the-art method in quantitative benchmarking that overcomes the challenge of considering multiple inputs and outputs is Data Envelopment Analysis (DEA). Descriptive statistics and explorative-qualitative approaches are also applied in a modern ECA benchmarking model to substantiate DEA results and put them into perspective. This enabler-result model provides a holistic view and makes it possible to identify top-performing ECAs and Exim-Banks, giving inefficient institutions the opportunity to learn from their most productive peers. This best-practice approach to strategic benchmarking enables senior management to develop and implement a cutting-edge strategy and increase value for key stakeholders.
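In the special case of a single input and a single output, DEA efficiency reduces to each unit's output/input ratio relative to the best peer. The sketch below uses hypothetical ECA figures purely for illustration; real DEA models with multiple inputs and outputs require solving a linear program per unit.

```python
def dea_efficiency(inputs, outputs):
    """Single-input, single-output special case of the DEA (CCR) model: each
    unit's output/input ratio divided by the best ratio. The most productive
    peers score 1.0; all others are measured against that frontier."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical figures: administrative cost (input) vs. business covered (output).
costs = [10.0, 20.0, 15.0]
covered = [100.0, 120.0, 150.0]
print([round(e, 2) for e in dea_efficiency(costs, covered)])  # [1.0, 0.6, 1.0]
```

Here the second unit is only 60% as productive as its best peers, which would mark it as a candidate for learning from them.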
In an extensive research project, we have assessed the application of different service models by export credit agencies (ECAs) and export-import banks (EXIMs). We conducted interviews with 35 representatives of ECAs and EXIMs from 27 countries. The question guiding this study is: How do ECAs and EXIMs adopt public service models for supporting exporters? We conducted a holistic multiple case study, investigating if and how these organisations apply public service models developed by Schedler and Guenduez, and which roles of the state are relevant. We find that there is a variety of different service models used by ECAs and EXIMs, and that the service model approaches have great potential to learn from each other and innovate existing services.
Recent advances in spiked shoe design, characterized by increased longitudinal stiffness, thicker midsole foams, and reconfigured geometry, are considered to improve sprint performance. However, no empirical data on the effects of advanced spike technology on maximal sprinting speed (MSS) have been published yet. Consequently, we assessed MSS via ‘flying 30m’ sprints of 44 trained male (PR: 10.32 s - 12.08 s) and female (PR: 11.56 s - 14.18 s) athletes, wearing both traditional and advanced spikes in a randomized, repeated-measures design. The results revealed a statistically significant increase in MSS of 1.21% on average when using advanced spike technology. Notably, 87% of participants showed improved MSS with advanced spikes. A cluster analysis revealed that athletes with higher MSS may benefit to a greater extent. However, individual responses varied widely, suggesting the influence of multiple factors that need detailed exploration. Therefore, coaches and athletes are advised to interpret the promising performance enhancements cautiously and to critically evaluate whether advanced spike technology is appropriate for their athletes.
Low-turbulence displacement flows (German: turbulenzarme Verdrängungsströmung, TAV), often also referred to as laminar flow (LF), are used in high-purity cleanroom areas to ensure that the critical zone (open product) is supplied with HEPA-filtered, essentially particle-free air. The TAV can be disturbed by disturbance variables such as thermal currents, operator interventions, flow obstacles, material transport, etc., which can lead to undesired contamination of the critical zone.
SAP S/4HANA, the new ERP system from SAP SE, is subjected to a functional check in the area of production controlling. Requirements identified for the IT support of a modern production-controlling concept are evaluated for their feasibility with SAP S/4HANA and subsequently implemented in a realistic end-to-end scenario. In the current release, functional gaps still appear in several places that can only be closed by falling back on technologies and user interfaces of the predecessor system SAP ECC.
Entity Matching (EM) is the task of learning to group objects by transferring semantic concepts from example groups (=entities) to unseen data. Despite the general availability of image data in the context of many EM problems, most currently available EM algorithms rely solely on (textual) metadata. In this paper, we introduce the first publicly available large-scale dataset for "visual entity matching", based on a production-level use case in the retail domain. Using scanned advertisement leaflets, collected over several years from different European retailers, we provide a total of ~786k manually annotated, high-resolution product images containing ~18k different individual retail products, which are grouped into ~3k entities. The annotation of these product entities is based on a price comparison task, where each entity forms an equivalence class of comparable products. In a first baseline evaluation, we show that the proposed "visual entity matching" constitutes a novel learning problem which cannot be solved sufficiently by standard image-based classification and retrieval algorithms. Instead, novel approaches that can transfer example-based visual equivalence classes to new data are needed. The aim of this paper is to provide a benchmark for such algorithms.
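A retrieval-style baseline of the kind the abstract argues is insufficient could, for instance, assign each image embedding to the nearest entity prototype. This is a hypothetical sketch for illustration (function and entity names are ours, not part of the dataset's tooling):

```python
import math

def assign_to_entity(embedding, prototypes):
    # Assign a product-image embedding to the entity whose prototype
    # embedding is closest in Euclidean distance. Such nearest-prototype
    # retrieval cannot generalize to entities unseen at training time,
    # which is the core difficulty of visual entity matching.
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(prototypes, key=lambda entity_id: dist(embedding, prototypes[entity_id]))
```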
Information about the dataset, evaluation code and download instructions are provided under https://www.retail-786k.org/.
For more than 40 years, discussions and controversies about the sense and nonsense of information technology (IT) in educational institutions have kept recurring. While past debates concerned working on and with PCs, laptops, or tablets, current discussions increasingly revolve around web-based applications with a back channel for student data. Students' behavior is evaluated by software in order to adapt teaching content automatically and in an "individualized" way. Such learning programs are complemented by applications of so-called "artificial intelligence" (AI) that act as "learning companions" and are, at least prospectively, meant to replace missing teachers. In this way, technical systems are being established in schools of which not even their developers know what exactly the algorithms do.
This calls for a critical, reflective discourse. In the present essay, Ralf Lankau argues that essential elements of education, such as fostering self-awareness, reflection, and critical citizenship, are lost with such learning programs.
Anyone who examines digitalization efforts in schools will find that only a few realize the scope of the intended transformation of educational institutions into automated learning factories by means of digital technology. Many of those involved believe, or want to believe, that it is merely about better technical equipment for teaching institutions to support teachers, and they overlook the fact that cybernetics and behaviorism, two theories that treat human beings as determined, are experiencing a renaissance. Proponents of these disciplines believe that individual human beings as well as entire societies or social communities can be programmed and controlled like a machine park. In the process, learning is redefined as an act of systematic self-disenfranchisement: the conditioning of learners towards testable competencies with the help of algorithms and software.
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions and preferences regarding the suitable visual qualities of SARs in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. Our results indicate that Israeli and German designers share similar perceptions of visual qualities and most of the robotics roles. However, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
In the modern knowledge-based and digital economy, the value of knowledge is growing relative to other assets, and new intellectual property is being created at an ever-increasing rate. Therefore, the ability to find non-trivial solutions, systematically generate new concepts, and create intellectual property is rapidly becoming crucial to achieving competitive advantage and leveraging the intellectual potential of organizations.
The paper conceptualizes a systemic approach for enhancing the innovative and competitive capacity of industrial companies (named the Advanced Innovation Design Approach, AIDA), including the analysis, optimization, and further development of the innovation process and the promotion of the innovation climate in industrial companies. The innovation process is understood as a holistic stage-gate system comprising the following typical phases with feedback loops and simultaneous auxiliary or follow-up processes: uncovering solution-neutral customer needs as well as technology and market trends; identifying needs and problems with high market potential and formulating the innovation tasks and strategy; idea generation and problem solving; evaluation and enhancement of solution ideas; creation of innovation concepts based on solution ideas; evaluation of the innovation concepts; and implementation, validation, and market launch of the chosen innovation concepts. The article presents the current state of innovation research and discusses the actual status of the innovation process in the industrial environment. It defines future research tasks for amplifying the innovation process with self-configuration, self-optimization, self-diagnostics, and intelligent information processing and communication.
The Advanced Innovation Design Approach is a holistic methodology for enhancing the innovative and competitive capability of industrial companies. AIDA can be considered an open mindset and an individually adaptable range of strong innovation techniques, such as a comprehensive front-end innovation process, advanced innovation methods, the best tools and methods of the TRIZ methodology, organizational measures for accelerating innovation, IT solutions for Computer-Aided Innovation, and other innovation methods elaborated over the recent decade in industry and academia.
The European TRIZ Association (ETRIA) acts as a connecting link between scientific institutions, universities and other educational organizations, industrial companies, and individuals concerned with conceptual and practical questions relating to the organization of the innovation process, invention methods, and innovation knowledge. In the meantime, more than 1,000 TFC papers and presentations by scientists, educators, and practitioners from all over the world are available on the official ETRIA website. Numerous research projects were supported or funded by the European Commission.
Silicon edges as one-dimensional waveguides for dispersion-free and supersonic leaky wedge waves
(2012)
Acoustic waves guided by the cleaved edge of a Si(111) crystal were studied using a laser-based angle-tunable transducer for selectively launching isolated wedge or surface modes. A supersonic leaky wedge wave and the fundamental wedge wave were observed experimentally and confirmed theoretically. Coupling of the supersonic wave to shear waves is discussed, and its leakage into the surface acoustic wave was observed directly. The velocity and penetration depth of the wedge waves were determined by contact-free optical probing. Thus, a detailed experimental and theoretical study of linear one-dimensional guided modes in silicon is presented.
Convolutional neural networks (CNNs) define the state-of-the-art solution for many perceptual tasks. However, current CNN approaches largely remain vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. In the latter case, a small “detector” is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and light-weight detector which leverages recent findings on the relation between networks’ local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state of the art in adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources available at: https://github.com/adverML/multiLID
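The LID measure underlying such detectors is commonly estimated from the distances to a point's k nearest neighbors using the standard maximum-likelihood estimator. A minimal sketch (the function name is ours; this is generic background, not code from the linked multiLID repository):

```python
import math

def lid_mle(knn_distances):
    # Maximum-likelihood estimate of local intrinsic dimensionality from
    # the sorted distances r_1 <= ... <= r_k to a point's k nearest
    # neighbors: LID = -(1/k * sum_i log(r_i / r_k))^(-1).
    r_max = knn_distances[-1]
    log_sum = sum(math.log(r / r_max) for r in knn_distances)
    return -len(knn_distances) / log_sum
```

Intuitively, adversarial examples tend to lie in regions of higher local intrinsic dimensionality than genuine data, which is what the detector's features exploit.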
In the past, running shoes were categorized as either motion control, cushioned, or minimal footwear. Today, these categories blur and are no longer clearly defined. Moreover, with the advances in manufacturing processes, it is possible to create individualized running shoes that incorporate features meeting individual biomechanical and experiential needs. However, specific ways to individualize footwear to reduce individual injury risk are poorly understood. Therefore, the purpose of this scoping review was to provide an overview of (1) footwear design features that have the potential for individualization; (2) human biomechanical variability as a theoretical foundation for individualization; (3) the literature on the differential responses to footwear design features between selected groups of individuals. These purposes focus exclusively on reducing running-related risk factors for overuse injuries. We included studies in the English language on adults that analyzed: (1) potential interaction effects between footwear design features and subgroups of runners or covariates (e.g., age, gender) for running-related biomechanical risk factors or injury incidences; (2) footwear perception for a systematically modified footwear design feature. Most of the included articles (n = 107) analyzed male runners. Several footwear design features (e.g., midsole characteristics, upper, outsole profile) show potential for individualization. However, the overall body of literature addressing individualized footwear solutions and the potential to reduce biomechanical risk factors is limited. Future studies should leverage more extensive data collections considering relevant covariates and subgroups while systematically modifying isolated footwear design features to inform footwear individualization.
We have developed a methodology for the systematic generation of a large image dataset of macerated wood references, which we used to generate image data for nine hardwood genera. This is the basis for a substantial approach to automate, for the first time, the identification of hardwood species in microscopic images of fibrous materials by deep learning. Our methodology includes a flexible pipeline for easy annotation of vessel elements. We compare the performance of different neural network architectures and hyperparameters. Our proposed method performs similarly well to human experts. In the future, this will improve controls on global wood fiber product flows to protect forests.
The bacanora industry in Sonora, Mexico, is shaped by a complex network of cultural, technological, economic, and legal factors that inhibit its development. This occurs despite institutional efforts to establish a regulatory framework that would eliminate the informal production methods which result in heterogeneous liquor qualities. Achieving this is complicated by the difficulty that actors in this industry face in implementing effective practices for verifying compliance with current standards within the geography of the Denomination of Origin. This paper describes the use of a prototype Fourier-transform Raman spectrometer for the qualitative analysis of unknown bacanora samples. The device was built using a conventional Michelson interferometer, a photon counter of our own design, and a reference photodetector. The results confirm that, given its design and construction, this measuring instrument and its effective low-cost operating technique constitute a viable alternative, easily adaptable to the needs of producers and institutions, to assist them in the production of bacanora and in the verification of its quality according to the criteria of the applicable standards.
(1) Background: Little is known about the baroque composer Domenico Scarlatti (1685-1757), whose life was centred behind closed doors at the royal court in Spain. There are no reports about his illnesses. From his compositions, mainly for harpsichord, an outstanding virtuosity can be read. (2) Case Presentation: In this case report, the only known oil painting of Domenico Scarlatti is presented, in which he is about 50 years old. In it, one recognizes conspicuous hands with hints of watch-glass nails and clubbed fingers. (3) Discussion: Whether Scarlatti had chronic hypoxia of peripheral body regions as a sign of, e.g., bronchial cancer or a severe heart disease is not known. (4) Conclusions: The above-mentioned signs recorded in the oil painting, even if they were not interpretable at that time, are clearly represented and recorded for us and are open to diagnostic discussion from today's point of view.
Project management, and with it PM processes, methods, and tools, is constantly evolving, in small, barely perceptible steps or in large, unmistakable changes. In recent years, the discourse on the pros and cons of agile approaches was so omnipresent that other aspects did not always receive the necessary attention. Recognized necessities of PM development have not yet been converted into tangible progress. The influences of globalization and IT, but also the changes in project work resulting from the growing demand for sustainability, therefore deserve closer examination. Once project staff have become sensitized to relevant trends, an updated competence profile and an extended canon of methods come within reach.
As already announced in issue 44, bwLehrpool officially launches as a state-wide service in March 2017. In addition to its original task of providing virtual teaching environments in PC rooms, the service has now been extended by the option of conducting e-examinations simply and securely, as well as by the Pool Video Switch (PVS) system. bwLehrpool is already being used successfully at numerous universities and universities of applied sciences across a wide variety of departments.
Multi-agent systems are a subject of continuously increasing interest in the applied technical sciences. Smart grids are one evolving field of application. Numerous smart grid projects with various interpretations of multi-agent systems as a new control concept arose in the last decade. Although several theoretical definitions of the term ‘agent’ exist, there is a lack of practical understanding that might be improved by clearly distinguishing agent technologies from other state-of-the-art control technologies. In this paper, we clarify the differences between controllers, optimizers, learning systems, and agents. Further, we review the most recent smart grid projects and contrast their interpretations with our understanding of agents and multi-agent systems. We point out that multi-agent systems applied in the smart grid can add value when they are understood as fully distributed networks of control entities embedded in dynamic grid environments, able to operate in a cooperative manner and to automatically (re-)configure themselves.
This article deals with the problem of wireless synchronization between the onboard computing devices of small-sized unmanned aerial vehicles (SUAVs) equipped with integrated wireless chips (IWCs). Accurate synchronization between several devices requires precise timestamping of the packets transmitted and received on each of them. The best precision is demonstrated by solutions in which timestamping is performed at the PHY level, right after modulation/demodulation of the packet. Nowadays, most currently produced IWCs are systems-on-a-chip (SoCs) that include both PHY and MAC, implemented with one or several application processor cores. SoCs allow building more cost- and energy-efficient wireless devices. At the same time, they limit the developers' direct access to internal signals and significantly complicate the precise timestamping of sent and received packets required for the mutual synchronization of industrial devices. Some modern IEEE 802.11 IWCs have built-in functions that use the internal chip clock to register timestamps. However, the high jitter of the interfaces between the external device and the IWC degrades the comparison of timestamps from the internal clock with those registered by external devices. To solve this problem, the article proposes a novel approach to synchronization based on the analysis of the IWC receiver input potential. The benefit of this approach is that there is no need to demodulate and decode the received packets, thus allowing its implementation with low-cost IWCs. In this article, the Cypress CYW43438 was taken as an example for designing hardware and software solutions for synchronization between two SUAV onboard computing devices equipped with IWCs. The results of the performed experimental studies reveal that the mutual synchronization error of the proposed method does not exceed 10 μs.
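For context, classic timestamp-based synchronization (as in NTP/PTP, not the receiver-potential method proposed in the article) derives the clock offset from a two-way message exchange; a minimal sketch with illustrative names:

```python
def clock_offset(t1, t2, t3, t4):
    # Two-way exchange between nodes A and B:
    #   t1: request sent (A's clock)    t2: request received (B's clock)
    #   t3: reply sent (B's clock)      t4: reply received (A's clock)
    # Offset of B's clock relative to A, assuming symmetric link delay.
    return ((t2 - t1) + (t3 - t4)) / 2.0

def round_trip_delay(t1, t2, t3, t4):
    # Total propagation delay, with B's processing time removed.
    return (t4 - t1) - (t3 - t2)
```

The quality of this estimate depends directly on how precisely t1..t4 are captured, which is why PHY-level timestamping matters.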
The identification of vulnerabilities is an important element in the software development life cycle to ensure the security of software. While vulnerability identification based on source code is a well-studied field, identifying vulnerabilities on the basis of a binary executable without the corresponding source code is more challenging. Recent research has shown how such detection can be achieved by deep learning methods. However, that particular approach is limited to the identification of only 4 types of vulnerabilities. We therefore analyze to what extent the identification of a larger variety of vulnerabilities can be covered. To this end, a supervised deep learning approach using recurrent neural networks for vulnerability detection based on binary executables is used. The underlying basis is a dataset with 50,651 samples of vulnerable code in the form of a standardized LLVM Intermediate Representation. The vectorised features of a Word2Vec model are used to train different variations of three basic recurrent neural network architectures (GRU, LSTM, SRNN). A binary classification model was established for detecting the presence of an arbitrary vulnerability, and a multi-class model was trained for identifying the exact vulnerability; these achieved out-of-sample accuracies of 88% and 77%, respectively. Differences in the detection of different vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, the presented methodology allows an accurate detection of 23 (compared to 4) vulnerabilities.
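The GRU architecture mentioned above updates its hidden state through gated interpolation between the previous state and a candidate state. A scalar, dependency-free sketch of a single step (weights and names are illustrative, not the trained model from the study; real GRUs operate on vectors and matrices):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h_prev, x, w):
    # One GRU step with scalar state and input.
    z = sigmoid(w["zx"] * x + w["zh"] * h_prev)               # update gate
    r = sigmoid(w["rx"] * x + w["rh"] * h_prev)               # reset gate
    h_cand = math.tanh(w["hx"] * x + w["hh"] * (r * h_prev))  # candidate state
    return (1.0 - z) * h_prev + z * h_cand
```

Running such a step over the sequence of Word2Vec token vectors of an LLVM-IR sample yields a final state that a classification head can map to "vulnerable" or to one of the vulnerability classes.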
Continuous monitoring of Ethernet cables prevents machine downtime in industry. At present, however, suitable methods for carrying out this monitoring on a large scale are lacking. In the Ko²SiBus project, a cost-effective method for the continuous monitoring of Ethernet cables was therefore developed.
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods at varying abstraction levels. XAI techniques explain model behavior at a global level and sample predictions at a local level. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals for time series classification. In particular, we adapt local XAI techniques (attributions) that were developed for traditional data types (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model's decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification at both the global and local levels. We continuously collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, and (3) identify the reasons for misclassifications.
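One generic way to obtain local attributions for a time series is occlusion: replace each time step with a neutral baseline value and measure the change in the model's score. This is a simple illustrative sketch, not the specific attribution methods used in the workflow above:

```python
def occlusion_attribution(series, score_fn, baseline=0.0):
    # Attribution of each time step: how much the model score drops when
    # that step is replaced by the baseline value.
    base = score_fn(series)
    attributions = []
    for i in range(len(series)):
        occluded = series[:i] + [baseline] + series[i + 1:]
        attributions.append(base - score_fn(occluded))
    return attributions
```

Computing such per-sample attribution vectors for an entire dataset and projecting them to two dimensions is what produces the kind of global overview described above.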
The Virtual Computer Science Lab (Virtuelles Informatiklabor) is intended to take away pupils' and students' excessive awe of computer science and to support them in learning its contents. To this end, fundamental algorithms of computer science are treated in interactive applications based on concrete tasks, enabling learners to explore them on their own. Animations are intended to foster understanding, while experiments support the independent application and implementation of what has been learned, aided by a variety of hints. The first topic area in the Virtual Computer Science Lab covers recursion, which is presented in several applications.
The energy transition (Energiewende) adopted by the German federal government poses major challenges for politics and society, business and science. Decisive for the success of the energy transition will be maintaining the competitiveness of Germany as an industrial location. This requires that a high quality of power supply continue to be ensured alongside internationally competitive electricity prices. The BDI sets out five principles on the way to a new electricity market design and shows that networking the relevant components of the energy system via information and communication technology is essential for the future system.