Refine
Document Type
- Conference Proceeding (46)
- Article (unreviewed) (16)
- Article (reviewed) (10)
- Book (2)
- Part of a Book (2)
- Contribution to a Periodical (1)
- Doctoral Thesis (1)
- Report (1)
Conference Type
- Conference paper (44)
- Conference abstract (1)
- Other (1)
Is part of the Bibliography
- yes (79)
Keywords
- Deep Learning (11)
- Machine Learning (9)
- Robustness (4)
- Data Science (3)
- Generative Adversarial Network (3)
- image classification (3)
- Aliasing (2)
- User Experience (2)
- CNNs (2)
- Computer Vision (2)
Institute
- IMLA - Institute for Machine Learning and Analytics (79)
Open Access
- Open Access (56)
- Bronze (16)
- Closed Access (11)
- Closed (10)
- Diamond (9)
- Gold (4)
- Hybrid (3)
- Green (2)
Online grocery shopping (OGS) has risen significantly in recent years, driven by accelerated retail digitization and reshaped consumer shopping behaviors. Despite this trend, the German online grocery market lags behind its international counterparts. Notably, with almost half of the German population aged over 50 and the 55–64 age group emerging as the largest user segment in e-commerce, the over-50 demographic presents an attractive yet relatively overlooked audience for the expansion of the online grocery market. However, research on OGS behavior among German over-50s is scarce. This study addresses this gap by empirically investigating OGS adoption factors within this demographic through an online survey with 179 respondents. Our findings reveal that over a third of the over-50 demographic has embraced OGS, indicating growing receptivity to OGS among the over-50s. Notably, home delivery, product variety, convenience, and curiosity emerged as primary drivers of OGS adoption among this demographic. Surprisingly, most adopters have not increased their online grocery orders since 2020, and a considerable proportion have even stopped buying groceries online again. For potential OGS adopters, regional product availability turned out to be a motivator, signaling substantial growth potential and providing online grocers with strategic opportunities to target this demographic. In light of our research, we offer practical suggestions for online grocery retailers, aiming to overcome barriers and capitalize on the key drivers identified in our study for sustained growth in the over-50 market segment.
Online retail has been growing steadily for years. Since the COVID-19 pandemic, even users who previously preferred physical channels increasingly shop online. A provider's success depends largely on its knowledge of its customers. However, a few large providers dominate the market, while smaller online shops struggle to personalize their offerings. The approach of self-sovereign identities offers a solution: it enables customers to control their own shopping data and share it selectively with online shops. This allows online shops to take customers' individual wishes and requirements into account and to offer a personalized experience and good usability. Despite the great potential of self-sovereign identities, the approach is barely established in Germany. This paper examines the use of self-sovereign identities in online retail. Using a human-centered design process, personas and as-is scenarios were created, requirements were derived from them, and potentials were identified. On this basis, a data and architecture model for integrating self-sovereign identities into online retail was developed.
This paper presents the new Deep Reinforcement Learning (DRL) library RL-X and its application to the RoboCup Soccer Simulation 3D League and classic DRL benchmarks. RL-X provides a flexible and easy-to-extend codebase with self-contained, single-directory algorithms. Through its fast JAX-based implementations, RL-X can reach speedups of up to 4.5x compared to well-known frameworks like Stable-Baselines3.
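To illustrate why JAX-based implementations can reach such speedups, the sketch below shows a jit-compiled policy-gradient update step of the kind such libraries rely on. It is a generic illustration with hypothetical names, not RL-X's actual API.

```python
# Minimal sketch (not RL-X's actual API): a jit-compiled policy-gradient
# update step of the kind that JAX-based RL libraries rely on for speed.
import jax
import jax.numpy as jnp

def init_params(key, obs_dim, act_dim, hidden=64):
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (obs_dim, hidden)) * 0.1,
        "b1": jnp.zeros(hidden),
        "w2": jax.random.normal(k2, (hidden, act_dim)) * 0.1,
        "b2": jnp.zeros(act_dim),
    }

def logits_fn(params, obs):
    h = jnp.tanh(obs @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

def pg_loss(params, obs, actions, advantages):
    logp = jax.nn.log_softmax(logits_fn(params, obs))
    chosen = jnp.take_along_axis(logp, actions[:, None], axis=1).squeeze(-1)
    return -(chosen * advantages).mean()

@jax.jit  # the whole update is compiled once and then runs on CPU/GPU/TPU
def update(params, obs, actions, advantages, lr=3e-4):
    grads = jax.grad(pg_loss)(params, obs, actions, advantages)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = init_params(key, obs_dim=8, act_dim=3)
obs = jax.random.normal(key, (256, 8))
actions = jax.random.randint(key, (256,), 0, 3)
adv = jax.random.normal(key, (256,))
params = update(params, obs, actions, adv)
```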
Modern CNNs learn the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question of whether this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting pointwise ($1\times 1$) convolutions as an operator that learns linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only allows us to reach high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight-sharing mechanism that shares a single weight tensor across all spatial convolution layers to massively reduce the number of weights.
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent, for example transformer-based, models by increasing not only model depth and width but also the kernel size. This results in large amounts of learnable model parameters that need to be handled during training. While following the convolutional paradigm with its spatial inductive bias, we question the significance of learned convolution filters. In fact, our findings demonstrate that many contemporary CNN architectures can achieve high test accuracies without ever updating randomly initialized (spatial) convolution filters. Instead, simple linear combinations (implemented through efficient 1×1 convolutions) suffice to effectively recombine even random filters into expressive network operators. Furthermore, these combinations of random filters can implicitly regularize the resulting operations, mitigating overfitting and enhancing overall performance and robustness. Conversely, retaining the ability to learn filter updates can impair network performance. Lastly, although we only observe relatively small gains from learning 3×3 convolutions, the gains from learning increase proportionally with kernel size, owing to non-idealities in the independent and identically distributed (i.i.d.) nature of default initialization techniques.
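A minimal PyTorch sketch of the linear-combination idea described in the two abstracts above, with hypothetical module and parameter names (not the authors' released code): the spatial filters are randomly initialized and frozen, and only the pointwise 1×1 convolution that recombines their responses is learned.

```python
# Sketch of the core idea (hypothetical module, not the authors' code):
# spatial k x k filters are randomly initialized and frozen; only the
# pointwise 1x1 convolution that linearly recombines their responses is learned.
import torch
import torch.nn as nn

class LCBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, expansion=2):
        super().__init__()
        mid_ch = in_ch * expansion  # number of frozen random filter responses
        self.spatial = nn.Conv2d(in_ch, mid_ch, kernel_size,
                                 padding=kernel_size // 2, bias=False)
        self.spatial.weight.requires_grad_(False)  # never updated during training
        self.pointwise = nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return torch.relu(self.bn(self.pointwise(self.spatial(x))))

block = LCBlock(64, 64)
trainable = [n for n, p in block.named_parameters() if p.requires_grad]
# only 'pointwise.weight' and the BatchNorm affine parameters remain trainable
print(trainable)
y = block(torch.randn(1, 64, 32, 32))
```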
State-of-the-art models for pixel-wise prediction tasks such as image restoration, image segmentation, or disparity estimation involve several stages of data resampling, in which the resolution of feature maps is first reduced to aggregate information and then sequentially increased to generate a high-resolution output. Several previous works have investigated artifacts that are introduced during downsampling, and various remedies have been proposed that improve prediction stability and even robustness for image classification. However, artifacts that arise during upsampling have been discussed far less, even though upsampling and downsampling face fundamentally different challenges: while aliases and artifacts can be reduced during downsampling by blurring feature maps, the emergence of fine details is crucial during upsampling, so blurring is not an option and dedicated operations need to be considered. In this work, we are the first to explore the relevance of context during upsampling by employing convolutional upsampling operations with increasing kernel size while keeping the encoder unchanged. We find that increased kernel sizes can in general improve prediction stability in tasks such as image restoration or image segmentation, while a block that combines small kernels for fine details with large kernels for artifact removal and increased context yields the best results; see the sketch below.
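A hypothetical PyTorch sketch of the kind of block the abstract describes, combining a small and a large kernel after upsampling. The names and kernel sizes are illustrative and not the paper's exact configuration.

```python
# Hypothetical sketch of the idea: upsample, then combine a small kernel
# (fine details) with a larger kernel (more context, artifact removal).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedKernelUpsample(nn.Module):
    def __init__(self, in_ch, out_ch, small_k=3, large_k=9):
        super().__init__()
        self.small = nn.Conv2d(in_ch, out_ch, small_k, padding=small_k // 2)
        self.large = nn.Conv2d(in_ch, out_ch, large_k, padding=large_k // 2)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        return self.small(x) + self.large(x)

up = MixedKernelUpsample(128, 64)
out = up(torch.randn(1, 128, 16, 16))  # -> (1, 64, 32, 32)
```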
Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling
(2023)
Convolutional neural networks encode images into potentially strong semantic embeddings through a sequence of convolutions, normalizations, and non-linearities as well as downsampling operations. Yet, previous work showed that even slight mistakes during sampling, leading to aliasing, can be directly attributed to the networks' lack of robustness. To address such issues and facilitate simpler and faster adversarial training, [12] recently proposed FLC pooling, a method for provably alias-free downsampling - in theory. In this work, we conduct a further analysis through the lens of signal processing and find that such current pooling methods, which address aliasing in the frequency domain, are still prone to spectral leakage artifacts. Hence, we propose aliasing and spectral artifact-free pooling, ASAP for short. While introducing only a few modifications to FLC pooling, networks using ASAP as their downsampling method exhibit higher native robustness against common corruptions, a property that FLC pooling was missing. ASAP also increases native robustness against adversarial attacks on high- and low-resolution data while maintaining similar clean accuracy or even outperforming the baseline.
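A minimal sketch of FLC-style pooling as described above: the feature map is downsampled by keeping only its low-frequency components in the Fourier domain. ASAP's additional modifications against spectral leakage are not reproduced here, and the function name and rescaling are illustrative.

```python
# Minimal sketch of FLC-style pooling: downsample a feature map by keeping
# only its low-frequency components in the Fourier domain. ASAP adds further
# modifications against spectral leakage that this sketch does not reproduce.
import torch

def flc_style_pool(x: torch.Tensor) -> torch.Tensor:
    """Halve the spatial resolution of x (B, C, H, W) by low-frequency cropping."""
    B, C, H, W = x.shape
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    h0, w0 = H // 4, W // 4  # central H/2 x W/2 region = lowest frequencies
    cropped = spec[..., h0:h0 + H // 2, w0:w0 + W // 2]
    pooled = torch.fft.ifft2(torch.fft.ifftshift(cropped, dim=(-2, -1))).real
    return pooled / 4.0  # roughly compensate for the smaller transform size

x = torch.randn(2, 8, 32, 32)
print(flc_style_pool(x).shape)  # torch.Size([2, 8, 16, 16])
```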
Motivated by the recent trend towards larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be. Such a study must address several challenges, most importantly: (i) we need to provide an effective way for models to learn large filters (potentially as large as the input data) without increasing memory consumption during training or inference, (ii) the study of filter sizes has to be decoupled from other effects such as the network width or the number of learnable parameters, and (iii) the employed convolution operation should be a plug-and-play module that can replace any conventional convolution in a Convolutional Neural Network (CNN) and allow for an efficient implementation in current frameworks. To enable such models, we propose to learn not spatial but frequency representations of filter weights as neural implicit functions, such that even infinitely large filters can be parameterized by only a few learnable weights. The resulting neural implicit frequency CNNs are the first models to achieve results on par with the state of the art on large image classification benchmarks while executing convolutions solely in the frequency domain, and they can be employed within any CNN architecture. They allow us to provide an extensive analysis of the learned receptive fields. Interestingly, our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters practically translate into well-localized and relatively small convolution kernels in the spatial domain.
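A hedged sketch of the idea, not the paper's exact implementation: a small MLP maps frequency coordinates to a filter spectrum, so arbitrarily large filters are parameterized by only the MLP's weights, and the convolution is executed as a pointwise multiplication in the Fourier domain. All names below are illustrative.

```python
# Hedged sketch (not the paper's exact implementation): a small MLP maps
# frequency coordinates to a per-channel filter spectrum, so even very large
# filters are parameterized by only the MLP weights; convolution becomes a
# pointwise multiplication in the Fourier domain.
import torch
import torch.nn as nn

class ImplicitFrequencyFilter(nn.Module):
    def __init__(self, channels, hidden=32):
        super().__init__()
        # 2 inputs (normalized frequency coordinates) -> per-channel real/imag response
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * channels),
        )
        self.channels = channels

    def forward(self, x):  # x: (B, C, H, W)
        B, C, H, W = x.shape
        fy = torch.fft.fftfreq(H, device=x.device)
        fx = torch.fft.rfftfreq(W, device=x.device)
        coords = torch.stack(torch.meshgrid(fy, fx, indexing="ij"), dim=-1)
        resp = self.mlp(coords)                                   # (H, W//2+1, 2C)
        resp = resp.view(H, fx.numel(), self.channels, 2).permute(2, 0, 1, 3)
        filt = torch.complex(resp[..., 0], resp[..., 1])          # (C, H, W//2+1)
        y = torch.fft.rfft2(x) * filt                             # broadcast over batch
        return torch.fft.irfft2(y, s=(H, W))

f = ImplicitFrequencyFilter(channels=16)
out = f(torch.randn(2, 16, 32, 32))  # same shape as the input
```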
Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards. However, designing effective out-of-distribution tests that encompass all possible scenarios while preserving accurate label information is a challenging task. Existing methodologies often compromise on either the variety or the constraint level of attacks, and sometimes on both. As a first step towards a more holistic robustness evaluation of image classification models, we introduce an attack based on image solarization that is conceptually straightforward yet, independent of its intensity, avoids jeopardizing the global structure of natural images. Through comprehensive evaluations of multiple ImageNet models, we demonstrate the attack's capacity to degrade accuracy significantly, provided it is not integrated into the training augmentations; interestingly, even then, no full immunity to accuracy deterioration is achieved. In other settings, the attack can often be simplified into a black-box attack with model-independent parameters. Defenses against other corruptions do not consistently extend to being effective against our specific attack.
Project website: https://github.com/paulgavrikov/adversarial_solarization
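A minimal sketch of the attack idea: solarization inverts pixel values above a threshold, and sweeping the threshold while keeping the most damaging one yields a simple black-box attack. The `model` argument and threshold grid below are placeholders, not the repository's reference implementation.

```python
# Hedged sketch of the idea: solarization inverts pixel values above a threshold;
# sweeping the threshold and keeping the most damaging one yields a simple
# black-box attack. `model` and the threshold grid are placeholders.
import torch

def solarize(images: torch.Tensor, threshold: float) -> torch.Tensor:
    """Invert all pixel values above `threshold` (images expected in [0, 1])."""
    return torch.where(images >= threshold, 1.0 - images, images)

@torch.no_grad()
def solarization_attack(model, images, labels, thresholds=None):
    thresholds = thresholds or [i / 10 for i in range(1, 10)]
    worst_images, worst_correct = images, float("inf")
    for t in thresholds:
        adv = solarize(images, t)
        correct = (model(adv).argmax(dim=1) == labels).float().sum().item()
        if correct < worst_correct:          # keep the most damaging threshold
            worst_images, worst_correct = adv, correct
    return worst_images
```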
Entity Matching (EM) is the task of learning to group objects by transferring semantic concepts from example groups (= entities) to unseen data. Despite the general availability of image data in the context of many EM problems, most currently available EM algorithms rely solely on (textual) metadata. In this paper, we introduce the first publicly available large-scale dataset for "visual entity matching", based on a production-level use case in the retail domain. Using scanned advertisement leaflets, collected over several years from different European retailers, we provide a total of ~786k manually annotated, high-resolution product images containing ~18k different individual retail products, which are grouped into ~3k entities. The annotation of these product entities is based on a price comparison task, where each entity forms an equivalence class of comparable products. Following a first baseline evaluation, we show that the proposed "visual entity matching" constitutes a novel learning problem which cannot be solved sufficiently by standard image-based classification and retrieval algorithms. Instead, novel approaches are needed that transfer example-based visual equivalence classes to new data. The aim of this paper is to provide a benchmark for such algorithms.
Information about the dataset, evaluation code, and download instructions is provided at https://www.retail-786k.org/.
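A hedged sketch of the kind of naive retrieval baseline the abstract argues is insufficient (not the benchmark's reference code): embed product images with a pretrained backbone and assign each query image the entity of its nearest annotated neighbor.

```python
# Hedged sketch of a naive retrieval baseline (not the benchmark's reference code):
# embed product images with a pretrained backbone and assign each query image
# the entity label of its nearest annotated neighbor.
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # use the pooled 2048-d features as embeddings
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    return F.normalize(backbone(images), dim=1)

@torch.no_grad()
def nearest_entity(query_images, gallery_images, gallery_entities):
    """Assign each query the entity of its most similar gallery image."""
    q, g = embed(query_images), embed(gallery_images)
    sims = q @ g.T                                   # cosine similarity
    return [gallery_entities[i] for i in sims.argmax(dim=1).tolist()]
```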