This report examines exporters’ challenges and possible public interventions to promote foreign trade. Based on fieldwork conducted in Georgia, we explore which policy approaches can help to stimulate Georgian exports further. Our findings show that exporters face substantial barriers such as complex trade regulations, a lack of knowledge about target markets, trade finance gaps, and new export promotion programs (EPPs) in competitor countries. Other upper-middle-income countries can learn from our results that exporters can benefit significantly from a comprehensive export promotion strategy combined with an ecosystem-based “team” approach. EPPs related to awareness and capacity building in Georgia should be part of this strategy, focusing on challenges such as a lack of knowledge about trade practices and international business skills. Other EPPs must help to mitigate related market failures, as information gathering is costly and firms have no incentive to share this information with competitors. Furthermore, targeted marketing support and customer matchmaking can address Georgian exporters’ challenges, such as lack of market access and low sector visibility. Our results also show that public intervention through financial support and risk mitigation is essential for firms with an international orientation. The rich findings, based on extensive interviews and document analysis, offer value to other upper-middle-income countries through an in-depth account of Georgia’s contemporary circumstances. A limitation is that our work relies primarily on qualitative data; further research could involve a quantitative study covering a diverse range of sectors.
With the expansion of IoT devices into many aspects of our lives, the security of such systems has become an important challenge. Unlike conventional computer systems, any IoT security solution must consider the constraints of these systems, such as limited computational capability, memory, connectivity, and power consumption. Physical Unclonable Functions (PUFs), with their special characteristics, were introduced to satisfy these security needs while respecting the mentioned constraints. They exploit the uncontrollable yet reproducible variations of the underlying components for security applications such as identification, authentication, and communication security. Since IoT devices are typically low cost, it is important to reuse existing elements in their hardware (for instance, sensors and ADCs) instead of adding extra cost for dedicated PUF hardware. Micro-electromechanical system (MEMS) devices are widely used in IoT systems as sensors and actuators. In this thesis, a comprehensive study of the potential application of MEMS devices as PUF primitives is provided. A MEMS PUF leverages the uncontrollable variations in the parameters of MEMS elements to derive secure keys for cryptographic applications. Experimental and simulation results show that our proposed MEMS PUFs are capable of generating enough entropy for complex key generation, while their responses show low fluctuation under different environmental conditions.
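As context for how such responses are typically assessed, the following minimal sketch (not taken from the thesis) computes two standard PUF quality metrics with NumPy: uniqueness, the mean inter-device Hamming distance (ideally close to 50%), and reliability, derived from the intra-device distance between a reference response and remeasurements under varying conditions.

```python
import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> float:
    """Fractional Hamming distance between two binary response vectors."""
    return float(np.mean(a != b))

def uniqueness(responses: np.ndarray) -> float:
    """Mean pairwise inter-device distance; responses: (devices, bits)."""
    n = responses.shape[0]
    pairs = [hamming(responses[i], responses[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pairs))

def reliability(reference: np.ndarray, remeasured: np.ndarray) -> float:
    """1 minus the mean intra-device distance; remeasured: (trials, bits)."""
    return 1.0 - float(np.mean([hamming(reference, r) for r in remeasured]))
```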
Keeping in mind that PUF responses are prone to change in the presence of noise and environmental variations, it is critical to derive reliable keys from the PUF while using the maximum entropy at the same time. In the second part of this thesis, we elaborate on different key generation schemes and their advantages and drawbacks. We propose the PUF output positioning (POP) and integer linear programming (ILP) methods, two novel methods for grouping the PUF outputs in order to maximize the extracted entropy. To implement these methods, the key enrollment and key generation algorithms are presented. The proposed methods are then evaluated by applying them to the responses of the MEMS PUF, where it is shown in practice that they outperform existing PUF key generation methods.
The final part of this thesis is dedicated to the application of the MEMS PUF as a security solution for IoT systems. We select the mutual authentication of IoT devices and their backend system and propose two lightweight authentication protocols based on MEMS PUFs. The presented protocols undergo a comprehensive security analysis to show their suitability for IoT systems. As a result, the output of this thesis is a lightweight security solution based on MEMS PUFs that introduces very low hardware cost overhead.
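To make the flavor of such protocols concrete, here is a hedged, generic challenge-response sketch in Python; it is not one of the two protocols from the thesis. The backend holds challenge-response pairs (CRPs) collected at enrollment, and `puf_eval` is a hypothetical stand-in for querying the physical MEMS PUF.

```python
import hmac, hashlib, secrets

def tag(key: bytes, *parts: bytes) -> bytes:
    return hmac.new(key, b"".join(parts), hashlib.sha256).digest()

def mutual_authenticate(crp_db: dict, device_id: bytes, puf_eval) -> bool:
    # Backend picks an enrolled challenge and a fresh nonce.
    challenge = secrets.choice(list(crp_db[device_id]))
    nonce_b = secrets.token_bytes(16)
    # Device regenerates the shared key from its PUF and proves knowledge.
    key_dev = puf_eval(challenge)
    nonce_d = secrets.token_bytes(16)
    proof_dev = tag(key_dev, nonce_b, nonce_d, device_id)
    # Backend verifies against its stored response ...
    key_srv = crp_db[device_id][challenge]
    if not hmac.compare_digest(proof_dev, tag(key_srv, nonce_b, nonce_d, device_id)):
        return False
    # ... and authenticates itself to the device in return.
    proof_srv = tag(key_srv, nonce_d, nonce_b)
    return hmac.compare_digest(proof_srv, tag(key_dev, nonce_d, nonce_b))
```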
Ultra-low-power passive telemetry systems for industrial and biomedical applications have recently gained much popularity. Reducing the power consumption and size of the circuits poses critical challenges in ultra-low-power circuit design. Biotelemetry applications such as leakage detection in silicone breast implants require small, low-power electronics. In this doctoral thesis, the design, simulation, and measurement of a programmable mixed-signal System-on-Chip (SoC) called General Application Passive Sensor Integrated Circuit (GAPSIC) are presented. Owing to its low power consumption, GAPSIC is capable of completely passive operation. Such a batteryless passive system has lower maintenance complexity and is also free from battery-related health hazards. With a die area of 4.92 mm² and a maximum analog power consumption of 592 µW, GAPSIC has one of the best figures of merit compared to similar state-of-the-art SoCs. Regarding possible applications, GAPSIC can read out and digitally transmit the signals of resistive sensors for pressure or temperature measurements. Additionally, GAPSIC can measure electrocardiogram (ECG) signals and conductivity.
The design of GAPSIC complies with the International Organization for Standardization (ISO) 15693 / NFC (near-field communication) Type 5 standard for radio frequency identification (RFID), corresponding to the frequency range of 13.56 MHz. A passive transponder developed with GAPSIC comprises an external memory and only a few other external components, such as an antenna and sensors. The passive tag antenna and reader antenna use inductive coupling for communication and energy transfer, which enables passive operation. A passive tag developed with GAPSIC can communicate with an NFC-compatible smart device or an ISO 15693 RFID reader. The external memory contains the programmable, application-specific firmware.
As a mixed-signal SoC, GAPSIC includes both analog and digital circuitry. The analog block of GAPSIC includes a power management unit, an RFID/NFC communication unit, and a sensor readout unit. The digital block includes an integrated 32-bit microcontroller, developed by the Hochschule Offenburg ASIC design center, and digital peripherals. A 16-kilobyte random-access memory and a 16-kilobyte read-only memory constitute GAPSIC’s internal memory. GAPSIC is fabricated in a one-poly, six-metal 0.18 µm CMOS process.
The design of GAPSIC comprises two stages. In the first stage, a standalone RFID/NFC frontend chip with a power management unit, an RFID/NFC communication unit, a clock regenerator unit, and a field detector unit was designed. In the second stage, the remaining functional blocks were integrated with the blocks of the RFID/NFC frontend chip for the final integration of GAPSIC. To reduce power consumption, conventional low-power design techniques, such as multiple power supplies and operating complementary metal-oxide-semiconductor (CMOS) transistors in the sub-threshold region, were applied extensively, alongside further innovative circuit designs.
An overvoltage protection circuit, a power rectifier, a bandgap reference circuit, and two low-dropout (LDO) voltage regulators constitute the power management unit of GAPSIC. The overvoltage protection circuit uses a novel method in which three stacked transistor pairs shunt the excess voltage. In the power rectifier, four rectifier units are arranged in parallel, a unique approach that provides an optimal trade-off between voltage drop and required area.
The communication unit is responsible for RFID/NFC communication and incorporates demodulation and load-modulation circuitry. The demodulator circuit comprises an envelope detector, a high-pass filter, and a comparator. Following a new approach, the bandgap reference circuit itself acts as the load for the envelope detector circuit, which minimizes circuit complexity and area. For the communication between the reader and the RFID/NFC tag, amplitude-shift keying (ASK) is used to modulate signals, where the modulation index can be as low as 10%. A novel technique involving a comparator with a preset offset voltage effectively demodulates the ASK signal. With an effective die area of 0.7 mm² and a power consumption of 107 µW, the standalone RFID/NFC frontend chip has the best figure of merit compared to the state-of-the-art frontend chips reported in the relevant literature. A passive RFID/NFC tag developed with the standalone frontend chip together with temperature and pressure sensors demonstrates the chip’s fully passive operation. An NFC reader device running custom-built Android application software reads out the sensor data from the passive tag.
The sensor readout circuit consists of a channel selector with two differential and four single-ended inputs, followed by a programmable-gain instrumentation amplifier. The entire sensor readout part remains deactivated when not in use. The internal memory stores the measured offset voltage of the instrumentation amplifier, and firmware removes this offset from the measured sensor signal. A 12-bit successive approximation register (SAR) analog-to-digital converter (ADC) based on a charge-redistribution architecture converts the measured sensor signal to a digital value. The digital peripherals include a serial peripheral interface, four timers, RFID/NFC interfaces, sensor readout unit interfaces, and the 12-bit SAR logic.
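As an illustration of the conversion principle (a behavioral sketch, not the chip’s actual circuit), a SAR ADC resolves one bit per step by comparing the input against a DAC level built from the bits decided so far:

```python
def sar_adc(v_in: float, v_ref: float, bits: int = 12) -> int:
    """Behavioral model of a successive-approximation conversion."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)                # tentatively set the next bit
        v_dac = v_ref * trial / (1 << bits)    # charge-redistribution DAC level
        if v_in >= v_dac:                      # comparator keeps or clears it
            code = trial
    return code

print(sar_adc(v_in=0.9, v_ref=1.8))  # mid-scale input -> code 2048
```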
Two sets of studies with custom-made NFC tag antennas for biomedical applications were conducted to ascertain their compatibility with GAPSIC. The first study involved link-efficiency measurements of NFC tag antennas and an NFC reader antenna with porcine tissue. In a separate experiment, the effect of a ferrite core compared to an air core on the antenna coupling factor was investigated. With the ferrite core, the coupling factor increased fourfold.
Among the state-of-the-art SoCs published in recent scientific articles, GAPSIC is the only passive programmable SoC with a power management unit, an RFID/NFC communication interface, a sensor readout circuit, a 12-bit SAR ADC, and an integrated 32-bit microcontroller. This doctoral research includes the preliminary study of three passive RFID tags designed with discrete components for biomedical and industrial applications like measurements of temperature, pH, conductivity, and oxygen concentration, along with leakage detection in silicone breast implants. Besides its small size and low power consumption, GAPSIC is suitable for each of the biomedical and industrial applications mentioned above due to the integrated high-performance microcontroller, the robust programmable instrumentation amplifier, and the 12-bit analog-to-digital converter. Furthermore, the simulation and measurement data show that GAPSIC is well suited for the design of a passive tag to monitor arterial blood pressure in patients experiencing Peripheral Artery Disease (PAD), which is proposed in this doctoral thesis as an exemplary application of the developed system.
The mathematical representation of data in the Spherical Harmonic (SH) domain has recently regained increasing interest in the machine learning community. This technical report gives an in-depth introduction to the theoretical foundation and practical implementation of SH representations, summarizing works on rotation-invariant and -equivariant features as well as convolutions and exact correlations of signals on spheres. These methods are then generalized from scalar SH representations to Vectorial Harmonics (VH), providing the same capabilities for 3D vector fields on spheres.
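As a small illustration of the scalar case (a sketch using SciPy’s sph_harm convention, where theta is the azimuthal and phi the polar angle; not code from the report), a band-limited signal on the sphere is synthesized from its SH coefficients as f = Σ_{l,m} c_{lm} Y_l^m:

```python
import numpy as np
from scipy.special import sph_harm

def sh_synthesis(coeffs: dict, theta: np.ndarray, phi: np.ndarray) -> np.ndarray:
    """coeffs maps (l, m) -> complex coefficient c_lm."""
    f = np.zeros_like(theta, dtype=complex)
    for (l, m), c in coeffs.items():
        f += c * sph_harm(m, l, theta, phi)
    return f

# Example: a single Y_1^0 mode sampled on a coarse spherical grid.
theta, phi = np.meshgrid(np.linspace(0, 2 * np.pi, 8),
                         np.linspace(0, np.pi, 4))
print(sh_synthesis({(1, 0): 1.0 + 0j}, theta, phi).real)
```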
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods on varying abstraction levels. XAI techniques on a global level explain model behavior and on a local level explain sample predictions. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals for time series classification. In particular, we adapt local XAI techniques (attributions) that were developed for traditional datasets (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model’s decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels. We continuously collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases, verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, and (3) identify the reasons for misclassifications.
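A minimal sketch of the global-overview step described above, under stated assumptions: gradient-times-input stands in for the attribution method, the model is assumed to output class logits of shape (batch, classes), and PCA stands in for the 2D projection; all names are illustrative, not the paper’s implementation.

```python
import torch
from sklearn.decomposition import PCA

def gradient_x_input(model, x: torch.Tensor) -> torch.Tensor:
    """x: (batch, length) time series; returns per-timestep attributions."""
    x = x.clone().requires_grad_(True)
    model(x).max(dim=1).values.sum().backward()   # gradient of the top logits
    return (x.grad * x).detach()

def attribution_overview_2d(model, dataset: torch.Tensor):
    """Attribute every series, then project the attribution matrix to 2D."""
    attrs = gradient_x_input(model, dataset)      # (n_samples, length)
    return PCA(n_components=2).fit_transform(attrs.cpu().numpy())
```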
Entity Matching (EM) defines the task of learning to group objects by transferring semantic concepts from example groups (=entities) to unseen data. Despite the general availability of image data in the context of many EM problems, most currently available EM algorithms rely solely on (textual) metadata. In this paper, we introduce the first publicly available large-scale dataset for "visual entity matching", based on a production-level use case in the retail domain. Using scanned advertisement leaflets collected over several years from different European retailers, we provide a total of ~786k manually annotated, high-resolution product images containing ~18k different individual retail products, which are grouped into ~3k entities. The annotation of these product entities is based on a price comparison task, where each entity forms an equivalence class of comparable products. In a first baseline evaluation, we show that the proposed "visual entity matching" constitutes a novel learning problem which cannot be solved sufficiently using standard image-based classification and retrieval algorithms. Instead, novel approaches are needed which can transfer example-based visual equivalence classes to new data. The aim of this paper is to provide a benchmark for such algorithms.
Information about the dataset, evaluation code, and download instructions is provided at https://www.retail-786k.org/.
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent, for example transformer-based, models by increasing not only model depth and width but also kernel size. This results in large numbers of learnable model parameters that need to be handled during training. While following the convolutional paradigm with its spatial inductive bias, we question the significance of learned convolution filters. In fact, our findings demonstrate that many contemporary CNN architectures can achieve high test accuracies without ever updating randomly initialized (spatial) convolution filters. Instead, simple linear combinations (implemented through efficient 1×1 convolutions) suffice to recombine even random filters into expressive network operators. Furthermore, these combinations of random filters can implicitly regularize the resulting operations, mitigating overfitting and enhancing overall performance and robustness. Conversely, retaining the ability to learn filter updates can impair network performance. Lastly, although we observe only relatively small gains from learning 3×3 convolutions, the gains from learning increase with kernel size, owing to non-idealities of the independent and identically distributed (i.i.d.) nature of default initialization techniques.
Modern CNNs learn the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question of whether this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting pointwise (1×1) convolutions as operators that learn linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only reaches high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight-sharing mechanism that shares a single weight tensor between all spatial convolution layers to massively reduce the number of weights.
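A minimal PyTorch sketch of the LC idea as described (layer sizes and the expansion factor are illustrative): the spatial 3×3 filters are randomly initialized and frozen, and only the pointwise 1×1 convolution that linearly recombines them is trained.

```python
import torch.nn as nn

class LCBlock(nn.Module):
    """Frozen random spatial filters + learned 1x1 linear combinations."""
    def __init__(self, in_ch: int, out_ch: int, expansion: int = 2):
        super().__init__()
        mid = out_ch * expansion            # controls the linear combination rate
        self.spatial = nn.Conv2d(in_ch, mid, kernel_size=3, padding=1, bias=False)
        self.spatial.weight.requires_grad_(False)   # random filters, never updated
        self.pointwise = nn.Conv2d(mid, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.spatial(x))
```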
Self-tests in learning management systems (LMS) enable students to assess their own learning progress. In contrast to the submission and correction of fully worked-out solutions, LMS predominantly use answer-selection (single-choice) input. Following the didactic approach "Physik durch Informatik" (physics through computer science), learners instead enter their solutions into the LMS in a programming language, which facilitates automated feedback and promotes the attainment of a higher competence level. Ten LMS self-tests were created in which the solutions to a textbook problem were queried either through input in a programming language or, for a control group, in single-choice format. Results from the first use of these self-tests in the physics course of the biotechnology degree program are presented.
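To illustrate the idea (a hypothetical example, not taken from the described self-tests): instead of selecting a multiple-choice option, a student might submit the following computation for the range of a projectile, which the LMS can check automatically.

```python
import math

g = 9.81                     # gravitational acceleration in m/s^2
v0, angle_deg = 20.0, 45.0   # launch speed and angle

# Student's answer, entered as code instead of a multiple-choice pick:
distance = v0**2 * math.sin(math.radians(2 * angle_deg)) / g
print(f"{distance:.1f} m")   # ~40.8 m
```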
We have developed a methodology for the systematic generation of a large image dataset of macerated wood references, which we used to generate image data for nine hardwood genera. This forms the basis of an approach to automate, for the first time, the identification of hardwood species in microscopic images of fibrous materials by deep learning. Our methodology includes a flexible pipeline for the easy annotation of vessel elements. We compare the performance of different neural network architectures and hyperparameters. Our proposed method performs on a par with human experts. In the future, this will improve controls on global wood fiber product flows to protect forests.
Gamification is used in many areas, including the education sector, to increase motivation and performance. This paper describes the design, implementation, and evaluation of a gamification concept for the lecture "Software Engineering" at Hochschule Offenburg. According to the lecturers' intention, gamification should drive continuous and deeper engagement with the lecture topics and have a positive influence on students' motivation in order to support the learning process. Central to the gamification design are voluntary participation, the perceived relevance of the learning content, and a goal-oriented use of gamification elements. The developed concept was implemented in the learning platform Moodle, used over three semesters, and evaluated in parallel. The results of these evaluations show that students used the gamified course intensively, often throughout the entire semester, and completed a large number of exercises of their own accord.
Public export credits and trade insurance require a global framework of institutions, rules and regulations to avoid subsidies and a race to the bottom. The extensive modernisation of the Arrangement on Officially Supported Export Credits (Arrangement) of the Organisation for Economic Co-operation and Development intends to re-level the playing field. This Practitioner Commentary describes the demand for adequate government interventions, considers the need for the reform and discusses key aspects of the new Arrangement. We argue that there is a breakthrough in several important areas such as tenors, repayment terms and green finance. However, we also find that the modernisation falls short in areas such as the interplay between different rulebooks, pre-shipment instruments' regulations and climate action.
Artificial intelligence (AI) is permeating our lives ever more deeply. Students are increasingly confronted with AI applications in everyday life and at universities. At Hochschule Offenburg, AI-related courses are therefore being anchored in the curricula to support students in acquiring AI competence.
This paper presents a concept for developing courses based on the idea of pedagogical making to foster AI competence in higher education. The concept is made concrete through a module on chatbots, whose teaching content is developed interdisciplinarily from different perspectives.
Background
Internal tibial loading is influenced by modifiable factors with implications for the risk of stress injury. Runners encounter varied surface steepness (gradients) when running outdoors and may adapt their speed according to the gradient. This study aimed to quantify tibial bending moments and stress at the anterior and posterior peripheries when running at different speeds on surfaces of different gradients.
Methods
Twenty recreational runners ran on a treadmill at 3 different speeds (2.5 m/s, 3.0 m/s, and 3.5 m/s) and gradients (level: 0%; uphill: +5%, +10%, and +15%; downhill: –5%, –10%, and –15%). Force and marker data were collected synchronously throughout. Bending moments were estimated at the distal third centroid of the tibia about the medial–lateral axis by ensuring static equilibrium at each 1% of stance. Stress was derived from the bending moments at the anterior and posterior peripheries by modeling the tibia as a hollow ellipse. Two-way repeated-measures analyses of variance were conducted using both functional and discrete statistical analyses.
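For illustration, a minimal sketch of the hollow-ellipse stress model just described (the dimensions below are placeholders, not subject data): peripheral bending stress is σ = M·c/I, with I the second moment of area of a hollow ellipse about the medial–lateral axis.

```python
import math

def hollow_ellipse_I(a_out, b_out, a_in, b_in):
    """a: medial-lateral semi-axis, b: anterior-posterior semi-axis (m)."""
    return math.pi / 4 * (a_out * b_out**3 - a_in * b_in**3)

def peripheral_stress(M, a_out, b_out, a_in, b_in):
    """Bending stress (Pa) at the anterior/posterior periphery, c = b_out."""
    return M * b_out / hollow_ellipse_I(a_out, b_out, a_in, b_in)

# Illustrative values: 120 N*m bending moment, semi-axes in metres.
print(peripheral_stress(120.0, 0.011, 0.013, 0.006, 0.008) / 1e6, "MPa")
```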
Results
There were significant main effects for running speed and gradient on peak bending moments and peak anterior and posterior stress. Higher running speeds resulted in greater tibial loading. Running uphill at +10% and +15% resulted in greater tibial loading than level running. Running downhill at –10% and –15% resulted in reduced tibial loading compared to level running. There was no difference between +5% or –5% and level running.
Conclusion
Running at faster speeds and uphill on gradients ≥+10% increased internal tibial loading, whereas slower running and downhill running on gradients ≥–10% reduced internal loading. Adapting running speed according to the gradient could be a protective mechanism, providing runners with a strategy to minimize the risk of tibial stress injuries.
Additive manufacturing enables the production of lightweight and resilient components with extensive design freedom. In the low-cost sector, material extrusion (e.g., Fused Deposition Modeling, FDM) has been the main method used to date, since robust 3D printers and inexpensive materials (polymer filaments) can be used. However, FDM printing times are very long, and dimensional and surface quality is limited. Recently, new processes from the field of vat polymerization have entered the market. Masked stereolithography (mSLA), for example, offers a significant improvement in component quality and build speed through the use of resins and large-area curing, at still reasonable cost. Currently, only limited knowledge is available on the optimal design of components for this young process. In this contribution, design guidelines are developed to determine the possibilities and limitations of mSLA from a design point of view. For this purpose, a number of test geometries are designed and investigated to obtain systematic insights into important design features such as wall thicknesses, grooves, and holes. In addition, typical problems in additive manufacturing, such as the design of overhangs and fits or the hollowing of components, are investigated. The evaluation of practical 3D printing tests thus provides important parameters that can be transferred into design guidelines for additively manufactured components using mSLA.
Due to its performance, the field of deep learning has gained a lot of attention, with neural networks succeeding in areas like Computer Vision (CV), Natural Language Processing (NLP), and Reinforcement Learning (RL). However, high accuracy comes at a computational cost, as larger networks require longer training times and no longer fit onto a single GPU. To reduce training costs, researchers are looking into the dynamics of different optimizers in order to find ways to make training more efficient. Resource requirements can be limited by reducing model size during training or by designing more efficient models that improve accuracy without increasing network size.
This thesis combines eigenvalue computation and high-dimensional loss-surface visualization to study different optimizers and deep neural network models. Eigenvectors of different eigenvalues are computed, and the loss landscape and optimizer trajectory are projected onto the plane spanned by those eigenvectors. A new parallelization method for the stochastic Lanczos method is introduced, resulting in faster computation and thus enabling high-resolution videos of the trajectory and second-order information during neural network training. Additionally, the thesis presents, for the first time, the loss landscape between two minima along with the eigenvalue density spectrum at intermediate points.
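As a simplified stand-in for the stochastic Lanczos computation used here (a sketch, not the thesis implementation), a top Hessian eigenpair can be obtained from Hessian-vector products and power iteration:

```python
import torch

def top_hessian_eigenpair(loss, params, iters: int = 100):
    """Power iteration on the Hessian via double backpropagation."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    v = torch.randn_like(flat_grad)
    v /= v.norm()
    eig = flat_grad.new_zeros(())
    for _ in range(iters):
        # Hessian-vector product: d/dparams (grad . v)
        hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv])
        eig = v @ hv                 # Rayleigh quotient estimate
        v = hv / hv.norm()
    return eig.item(), v
```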
Secondly, this thesis presents a regularization method for Generative Adversarial Networks (GANs) that uses second-order information. The gradient during training is modified by subtracting the eigenvector direction of the largest eigenvalue, preventing the network from falling into the steepest minima and avoiding mode collapse. The thesis also shows the full eigenvalue density spectra of GANs during training.
Thirdly, this thesis introduces ProxSGD, a proximal algorithm for neural network training that guarantees convergence to a stationary point and unifies multiple popular optimizers. Proximal gradients are used to find a closed-form solution to the problem of training neural networks with smooth and non-smooth regularizations, resulting in better sparsity and more efficient optimization. Experiments show that ProxSGD can find sparser networks while reaching the same accuracy as popular optimizers.
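A minimal sketch of the proximal idea, assuming an ℓ1 regularizer (illustrative, not the full ProxSGD algorithm): a gradient step on the smooth loss is followed by the closed-form proximal operator of the non-smooth penalty, here soft-thresholding.

```python
import torch

@torch.no_grad()
def prox_step(params, lr: float = 0.01, l1: float = 1e-4):
    for p in params:
        if p.grad is None:
            continue
        p -= lr * p.grad                  # gradient step on the smooth part
        # Proximal operator of lr * l1 * ||p||_1 (soft-thresholding):
        p.copy_(torch.sign(p) * torch.clamp(p.abs() - lr * l1, min=0.0))
```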
Lastly, this thesis unifies sparsity and neural architecture search (NAS) through the framework of group sparsity. Group sparsity is achieved through ℓ2,1-regularization during training, allowing filter and operation pruning to reduce model size with minimal sacrifice in accuracy. By grouping multiple operations together, group sparsity can also be used for NAS. This approach is shown to be more robust while still achieving competitive accuracies compared to state-of-the-art methods.
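A sketch of the ℓ2,1 penalty in PyTorch, assuming one group per output filter of a convolution (the thesis’ grouping choices may differ): the penalty sums the Euclidean norms of the groups, driving whole filters to zero so they can be pruned.

```python
import torch

def l21_penalty(conv_weight: torch.Tensor) -> torch.Tensor:
    """conv_weight: (out_ch, in_ch, k, k); one group per output filter."""
    group_norms = conv_weight.flatten(start_dim=1).norm(dim=1)  # ||w_g||_2
    return group_norms.sum()                                    # sum over groups

# Usage: loss = task_loss + lam * sum(l21_penalty(m.weight) for m in convs)
```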
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. In the latter case, a small “detector” is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and lightweight detector, which leverages recent findings on the relation between networks’ local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state-of-the-art on adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources available at: https://github.com/adverML/multiLID
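For reference, a sketch of the maximum-likelihood LID estimate that such detectors build on (the Levina-Bickel/Amsaleg estimator; the paper’s multi-layer adaptations are not reproduced): LID is estimated from the ratios of a query point’s k nearest-neighbour distances.

```python
import numpy as np

def lid_mle(query: np.ndarray, reference: np.ndarray, k: int = 20) -> float:
    """Estimate local intrinsic dimensionality around `query`."""
    dists = np.sort(np.linalg.norm(reference - query, axis=1))[:k]
    dists = dists[dists > 0]                 # guard against duplicate points
    return -1.0 / float(np.mean(np.log(dists / dists[-1])))
```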
In this paper, we describe a first publicly available fine-grained product recognition dataset based on leaflet images. Using advertisement leaflets collected over several years from different European retailers, we provide a total of 41.6k manually annotated product images in 832 classes. Further, we investigate three different approaches for this fine-grained product classification task: classification by image, by text, and by image and text. The approach "classification by text" uses the text extracted directly from the leaflet product images. We show that combining image and text as input improves the classification of visually hard-to-distinguish products. The final model achieves an accuracy of 96.4% with a top-3 score of 99.2%. We release our code at https://github.com/ladwigd/Leaflet-Product-Classification.
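A hedged sketch of the image-plus-text idea (backbones omitted and dimensions illustrative, not the paper’s exact models): embeddings from both modalities are concatenated and fed to a classification head over the 832 classes.

```python
import torch
import torch.nn as nn

class ImageTextClassifier(nn.Module):
    """Late-fusion head over precomputed image and text embeddings."""
    def __init__(self, img_dim: int = 512, txt_dim: int = 384, n_classes: int = 832):
        super().__init__()
        self.head = nn.Linear(img_dim + txt_dim, n_classes)

    def forward(self, img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([img_emb, txt_emb], dim=-1))
```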
Artificial Intelligence (AI) can potentially transform many aspects of modern society in various ways, including the automation of tasks, personalization of products and services, diagnosis and treatment of diseases, transportation, and safety and security in public spaces. Recently, AI technology has been transforming the financial industry, offering new ways to analyse data, automate processes, reduce costs, increase efficiency, and provide more personalized services to customers. However, it has also raised important ethical and regulatory questions that need to be addressed by the industry and society as a whole. The aim of the Erasmus+ project Transversal Skills in Applied Artificial Intelligence - TSAAI (KA220-HED - Cooperation Partnerships in higher education) has been to establish a training platform incorporating teaching guidelines based on a curriculum covering different areas of application of AI technology. In this work, we focus on applying AI models in the financial and insurance sectors.
High-tech running shoes and spikes ("super footwear") are currently being debated in sports. There is direct evidence that distance-running super shoes improve running economy; however, it is not well established to what extent world-class performances are affected across the range of track and road running events.
This study examined publicly available performance datasets of annual best track and road performances for evidence of potential systematic performance effects following the introduction of super footwear. The analysis was based on the 100 best performances per year for men and women in outdoor events from 2010 to 2022, provided by the world governing body of athletics (World Athletics).
We found evidence of progressing improvements in track and road running performances after the introduction of super distance running shoes in 2016 and super spike technology in 2019. This evidence is more pronounced for distances longer than 1500 m in women and longer than 5000 m in men. Women seem to benefit more from super footwear in distance running events than men.
While the observational study design limits causal inference, this study provides a database on potential systematic performance effects following the introduction of super shoes/spikes in track and road running events in world-class athletes. Further research is needed to examine the underlying mechanisms and, in particular, potential sex differences in the performance effects of super footwear.