This paper describes the Sweaty II humanoid adult size robot trying to qualify for the RoboCup 2018 adult size humanoid competition. Sweaty came 2nd in RoboCup 2017 adult size league. The main characteristics of Sweaty are described in the Team Description Paper 2017. The improvements that have been made or are planned to be implemented for RoboCup 2018 are described in this paper.
In this TDP we describe a new tool created for testing the strategy layer of our soccer playing agents. It is a complete 2D simulator that simulates the games based on the decisions of 22 agents. With this tool, debugging the decision and strategy layer of our agents is much more efficient than before due to various interaction methods and complete control over the simulation.
In the future, the tool could also serve as a means to run simulations of game series much faster than with the 3D simulator. This way, the impact of different playing strategies could be evaluated much more quickly than before.
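The tool itself is not reproduced here, but the core of such a headless 2D game loop can be sketched as follows; a minimal sketch, assuming a chase-the-ball placeholder where the real strategy layer would plug in (all names are hypothetical, not the team's actual code):

```python
import math
from dataclasses import dataclass

@dataclass
class Agent:
    x: float
    y: float
    speed: float = 1.0  # metres per second

def step_towards(agent, tx, ty, dt):
    """Move an agent one discrete time step towards a target point."""
    dx, dy = tx - agent.x, ty - agent.y
    dist = math.hypot(dx, dy)
    if dist > 1e-9:
        frac = min(agent.speed * dt, dist) / dist
        agent.x += dx * frac
        agent.y += dy * frac

def simulate(agents, ball, steps=100, dt=0.1):
    """Headless game loop: every agent simply chases the ball.
    A real decision/strategy layer would be plugged in here instead."""
    for _ in range(steps):
        for a in agents:
            step_towards(a, ball[0], ball[1], dt)
    return agents

agents = simulate([Agent(-5.0, 0.0), Agent(3.0, 4.0)], ball=(0.0, 0.0))
```

Because no rendering or real-time clock is involved, thousands of such game steps can run per second, which is what makes batch evaluation of strategies feasible.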
Team description papers of magmaOffenburg are incremental in the sense that each year we address a different topic of our team and the tools around our team. In this year’s team description paper we focus on the architecture of the software. It is a main factor in keeping the code maintainable even after 15 years of development. We also describe how we make sure that the code follows this architecture.
The majority of anterior cruciate ligament (ACL) injuries in team sports are non-contact injuries, with cutting maneuvers identified as high-risk tasks. Young female handball players have been shown to be at greater risk of ACL injury than males. One risk factor for ACL injuries is the magnitude of the knee abduction moment (KAM). Cutting technique variables related to foot placement, the overall approach, and knee kinematics have been shown to influence the KAM. Since injury risk is believed to increase with task complexity, the purpose of the study was to test the effect of task complexity on technique variables that influence the KAM in female handball players during fake-and-cut tasks.
Landing heel first has been associated with elevated external knee abduction moments (KAM), thereby potentially increasing the risk of sustaining a non-contact ACL injury. Apart from the foot strike angle, knee valgus angle (VAL) and vertical center of mass velocity at initial ground contact (IC) have been associated with increased KAM in females across different sidestep cuts. While real-time biofeedback training has been proven effective for gait retraining [4], the highly dynamic, non-cyclical nature of cutting maneuvers makes real-time feedback unsuitable and alternative approaches necessary. This study aimed at assessing the efficacy of immediate software-aided feedback on cutting technique in reducing KAM during handball-specific cutting maneuvers.
The purpose of this study was to 1) compare knee joint kinematics and kinetics of fake-and-cut tasks of varying complexity in 51 female handball players and 2) present a case study of one athlete who ruptured her ACL three weeks after data collection. External knee joint moments and knee joint angles in all planes at the instant of the peak external knee abduction moment (KAM), as well as moment and angle time curves, were analyzed. Peak KAMs and knee internal rotation moments were substantially higher than published values obtained during simple change-of-direction tasks and, along with flexion angles, differed significantly between the tasks. Introducing a ball reception and a static defender increased joint loads, while these loads partially decreased again when anticipation was lacking. Our results suggest using game-specific assessments of injury risk, although higher complexity levels do not directly increase knee loading. The extreme values of several risk factors observed in the athlete injured after testing highlight the need for, and usefulness of, appropriate screenings.
Digital libraries provide an increasing amount of data, which is normally structured in a classical way by documents and described by metadata such as keywords. The data, even in scientific systems such as digital libraries and virtual research environments, contain a great amount of noise or information unnecessary for our personal interests. Although there has been a lot of progress in the fields of information retrieval, search techniques, and other content-finding methods, much remains to be done in information retrieval based on user behavior. This paper presents an approach deployed in the Humboldt Digital Library (HDL) to facilitate the retrieval of relevant information for the users of the system by recommending paragraphs based on their profile and the behavior of other users with similar profiles. The Humboldt Digital Library is an innovative system providing open access to the legacy of Alexander von Humboldt in digital form on the Internet (www.avhumboldt.net). It contributes to the key question of how to present interconnected data in a proper form using information technologies.
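Recommendations based on the behavior of users with similar profiles are commonly computed with user-based collaborative filtering; a minimal sketch under that assumption (the profile format and scoring are illustrative, not the HDL implementation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts (item -> score)."""
    common = set(u) & set(v)
    num = sum(u[k] * v[k] for k in common)
    du = math.sqrt(sum(x * x for x in u.values()))
    dv = math.sqrt(sum(x * x for x in v.values()))
    return num / (du * dv) if du and dv else 0.0

def recommend(target, others, top_n=2):
    """Rank paragraphs unseen by `target`, weighting each other user's
    ratings by how similar that user's profile is to the target's."""
    scores = {}
    for other in others:
        sim = cosine(target, other)
        for para, rating in other.items():
            if para not in target:
                scores[para] = scores.get(para, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

target = {"p1": 1.0, "p2": 1.0}
others = [{"p1": 1.0, "p2": 1.0, "p3": 1.0},
          {"p2": 1.0, "p4": 1.0}]
recs = recommend(target, others)
```

Here the dictionaries map paragraph IDs to implicit feedback scores; the unseen paragraph favored by the most similar user ranks first.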
In this paper, we propose a unified approach for network pruning and one-shot neural architecture search (NAS) via group sparsity. We first show that group sparsity via the recent Proximal Stochastic Gradient Descent (ProxSGD) algorithm achieves new state-of-the-art results for filter pruning. Then, we extend this approach to operation pruning, directly yielding a gradient-based NAS method based on group sparsity. Compared to existing gradient-based algorithms such as DARTS, the advantages of this new group sparsity approach are threefold. Firstly, instead of a costly bilevel optimization problem, we formulate the NAS problem as a single-level optimization problem, which can be optimally and efficiently solved using ProxSGD with convergence guarantees. Secondly, due to the operation-level sparsity, discretizing the network architecture by pruning less important operations can be safely done without any performance degradation. Thirdly, the proposed approach finds architectures that are both stable and well-performing on a variety of search spaces and datasets.
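The core mechanism can be illustrated with the proximal operator of the group-lasso penalty, which a proximal SGD step applies after the gradient update; a simplified single-group sketch (the actual ProxSGD algorithm includes momentum and adaptive step sizes not shown here):

```python
import math

def prox_group_l2(w, lam):
    """Proximal operator of lam * ||w||_2 (group soft-thresholding):
    shrinks the whole parameter group towards zero and zeroes it out
    entirely once its norm falls below lam -- the mechanism that
    prunes whole filters or candidate operations."""
    norm = math.sqrt(sum(x * x for x in w))
    if norm <= lam:
        return [0.0] * len(w)
    scale = 1.0 - lam / norm
    return [x * scale for x in w]

def proxsgd_step(w, grad, lr, lam):
    """One simplified proximal SGD step: gradient step, then prox."""
    return prox_group_l2([wi - lr * gi for wi, gi in zip(w, grad)], lr * lam)
```

Because a group is either shrunk or set exactly to zero, discretizing the architecture afterwards amounts to dropping the already-zero groups, which is why pruning causes no performance cliff.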
Flashcards are a well-known and proven method for learning and memorising. This way of learning is perfectly suited to “learning on the way,” but carrying all the flashcards around can be awkward. In this scenario, a mobile device (mobile phone) is an adequate solution. Google’s new mobile operating system, Android, allows for writing multimedia-enriched applications.
“Today’s network landscape consists of quite different network technologies, wide range of end-devices with large scale of capabilities and power, and immense quantity of information and data represented in different formats” [9]. A great deal of effort is being invested in establishing open, scalable, and seamless integration of various technologies and in presenting content for different devices, including mobile ones, while taking the individual situation of the end user into account. This is difficult because different kinds of devices are used by different users, at different times, or in parallel by the same user; this usage is not predictable, and the system has to recognize each device in order to know its capabilities. Not only the devices but also the content and user interfaces are major issues, because they may involve different data formats such as text, images, audio, video, 3D virtual reality data, and other upcoming formats. The Language Learning Game (LLG) is an example of a device-independent application in which different kinds of devices and data formats, delivered as the content of a flashcard, are used for collaborative learning. The idea of this game is to create a short story in a foreign language by using mobile devices. The story is developed by a group of participants exchanging sentences via a flashcard system. In this way, the participants can learn from each other through knowledge sharing without fear of making mistakes, because the group members are anonymous. Moreover, they do not need constant support from a teacher.
The title expresses goals the Kansas Geological Survey (KGS) has been working toward for some time. This report extends concepts and objectives developed while working on an earlier effort for effective interactive digital maps on the Internet. That work was reported to the 1998 DMT Workshop in Champaign, Illinois (Ross, 1998). The current project goes beyond previous efforts that focused on methods for serving the contents of a geographic information system (GIS); the points, lines, and polygons representing features of the digital geologic map and the data in the attribute tables of the GIS describing those features.
Existing approaches to solving multi-vehicle pickup and delivery problems with soft time windows typically use common benchmark sets to verify their performance. However, there is a gap between these benchmark sets and real-world problems with respect to instance size and problem complexity. In this paper we show that a combination of existing approaches together with improved heuristics is able to deal with the instance sizes and complexity of real-world problems. The cost-saving potential of the heuristics is compared to human dispatching plans generated from the data of a European carrier.
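A soft time window means that lateness is penalized in the objective rather than forbidden outright; a minimal sketch of such a route cost function (the penalty weight, the waiting-is-free policy, and the instance data are illustrative assumptions, not the paper's model):

```python
def route_cost(route, travel, windows, penalty=10.0):
    """Cost of one vehicle's route under soft time windows: travel
    time plus a linear lateness penalty; arriving early means
    waiting (free here). The first stop is the depot, with no window."""
    t, cost, prev = 0.0, 0.0, route[0]
    for stop in route[1:]:
        leg = travel[(prev, stop)]
        t += leg
        cost += leg
        open_t, close_t = windows[stop]
        if t < open_t:
            t = open_t                       # wait for the window to open
        elif t > close_t:
            cost += penalty * (t - close_t)  # soft-window violation
        prev = stop
    return cost

travel = {("depot", "a"): 2.0, ("a", "b"): 3.0}
```

Insertion and exchange heuristics then search over stop orders to minimize exactly this kind of cost, trading lateness penalties against extra travel time.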
This paper describes the new Sweaty II humanoid adult size robot trying to qualify for the RoboCup 2016 adult size humanoid competition. Based on experiences during RoboCup 2014, the Sweaty robot has been completely redesigned to a new robot Sweaty II. A major change is the use of linear actuators for the legs. Another characteristic is its indirect actuation by means of rods. This allows a variable transmission ratio depending on the angle of a joint.
This paper describes the new Sweaty humanoid adult size robot trying to qualify for the RoboCup 2014 adult size humanoid competition. The robot is built from scratch to eventually allow it to run. One characteristic is that water evaporation is used for cooling to prevent the motors from overheating. The robot is literally sweating, which has given it its name. Another characteristic is that the motors are not connected to the frame directly but by means of beams. This allows a variable transmission ratio depending on the joint angle.
This paper describes the Sweaty II humanoid adult size robot trying to qualify for the RoboCup 2017 adult size humanoid competition. Sweaty came 2nd in RoboCup 2016 adult size league. The paper describes the main characteristics of Sweaty that made this success possible, and improvements that have been made or are planned to be implemented for RoboCup 2017.
Technology and computer applications influence our daily lives, and questions arise concerning the role of artificial intelligence and decision-making algorithms. There are warning voices that computers can, in theory, emulate human intelligence, and even exceed it. This paper points out that a replacement of humans by computers is unlikely, because human thinking is characterized by cognitive heuristics and emotions, which cannot simply be implemented in machines operating with algorithms, procedural data processing, or artificial neural networks. However, we are going to share our responsibilities with superior computer systems, which track and survey all of our digital activities, while we have no idea of the decision-making processes inside the machines. It is shown that we need a new digital humanism defining rules of computer responsibility to avoid digital totalitarianism and the comprehensive monitoring and controlling of individuals on planet Earth.
This article focuses on methods of information technology in the Humboldt Portal, an ongoing research project to develop a virtual research environment on the Internet for the legacy of Alexander von Humboldt. Based on the experience of developing and providing the Humboldt Digital Library (www.avhumboldt.net) for more than a decade, we defined a working plan to create an Internet portal for comprehensive access to Humboldt’s writings, no matter whether documents are provided as PDF files, scan images, or XML-TEI documents on external archives (Google Books, Internet Archive, Deutsches Textarchiv, Bibliothèque nationale de France). Going far beyond the services of a digital library, we will provide an information network with multimedia assets containing objects like terms, paragraphs, data tables, scan images, or illustrations, together with correlated properties such as thematic links to other objects, relevant keywords with optional synonyms, and dynamic hyperlinks to related translations in different languages. In this way, the Humboldt Portal can contribute to the key question of how to present interconnected data in an appropriate form using information technologies on the Web.
More than 200 years ago, the scientist Alexander von Humboldt, fascinated by nature and the phenomena he observed, noted in his travel diaries that "everything is interconnectedness". Knowledge of phenomena and natural processes has since made the view of nature much more detailed, leading to the more precise view of nature shaped by Humboldt. Technological progress and the artificial intelligence of highly developed computer systems are upsetting this view and changing the established world view through a new, unprecedented interaction between man and machine. Thus we need digital axioms and comprehensive rules and laws for such autonomously acting systems, governing the interaction between cybernetic systems and biological individuals. This digital humanism should encompass our relationship to nature, our handling of the complexity and diversity of nature, and the technological influences on society, in order to avoid technical colonialism through supercomputers.
This paper describes the magmaOffenburg 3D simulation team trying to qualify for RoboCup 2009. It focuses on two distinctive features of the team: decision making using extended behavior networks, and its software architecture and implementation in Java to open the simulation to the Java community.
This paper describes the magmaOffenburg 3D simulation team trying to qualify for RoboCup 2010. While last year’s TDP focused on decision making using extended behavior networks and on its software architecture and implementation, this year we describe the tool set that was created for RoboCup 3D. It contains a GUI for agent and world state visualization and for the evaluation of localization algorithms and benchmarks in general, a visual editor for creating and debugging Extended Behavior Networks, a live movement tool to interact with the joints, and finally a tool for editing behavior motor files.
After having described many different aspects of our team software in previous years, in this paper we take the liberty to describe the magmaChallenge framework provided by the magmaOffenburg team. The framework is used as a benchmark tool to run different challenges such as the running challenge in 2014 or the kick accuracy challenge in 2015. This description should serve as documentation to simplify maintenance by the community and to allow new benchmarks to be added in the future.
This paper describes the magmaOffenburg 3D simulation team trying to qualify for RoboCup 2012. While last year’s TDP focused on the tool set created for 3D simulation and the support for heterogeneous robot models, this year we focus on the different ways in which robot behavior can be defined in the magmaOffenburg framework and how those behaviors can be improved by learning.
This paper describes the magmaOffenburg 3D simulation team trying to qualify for RoboCup 2013. While last year’s TDP focused on the different ways in which robot behavior can be defined in the magmaOffenburg framework, this year we focus on how we statistically evaluate new features on distributed systems. We also show some results gained through such analysis.
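The kind of statistical check behind evaluating a new feature over a distributed game series can be sketched as a comparison of win-rate confidence intervals (a deliberately crude normal-approximation stand-in; the team's actual analysis is not specified here):

```python
import math

def win_rate_ci(wins, games, z=1.96):
    """Normal-approximation 95% confidence interval for a win rate
    estimated from a series of simulated games."""
    p = wins / games
    se = math.sqrt(p * (1 - p) / games)
    return p - z * se, p + z * se

def improves(old, new):
    """Accept the new feature only if its CI lies entirely above the
    old one, i.e. the intervals do not overlap."""
    return win_rate_ci(*new)[0] > win_rate_ci(*old)[1]
```

The non-overlap criterion is conservative; with enough distributed games the intervals shrink and smaller real improvements become detectable.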
This paper describes the magmaOffenburg 3D simulation team trying to qualify for RoboCup 2011. While last year’s TDP focused on the tool set created for 3D simulation, this year we describe further improvements to these tools as well as some new features we implemented, focusing on heterogeneous robot models, which seem likely to be used in RoboCup 2012.
An additional tool was written to conveniently generate situation-dependent strategies. Furthermore, some tools described last year are now integrated into a single GUI to simplify their use.
Sweaty has already participated several times in RoboCup soccer competitions (Adult Size). The work is now focused on stabilizing the gait. Moreover, we would like to overcome the constraints of a ZMP algorithm that requires a horizontal footplate as a precondition for simplifying the equations. In addition, we would like to switch between impedance and position control with a fuzzy-like algorithm, which might help to minimize jerks when Sweaty’s feet touch the ground.
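The fuzzy-like switching idea can be sketched as a membership function over the measured contact force that blends the two controllers' commands instead of hard-switching between them (the thresholds and the scalar joint command are illustrative assumptions, not Sweaty's controller):

```python
def blend_weight(contact_force, low=5.0, high=20.0):
    """Fuzzy-style membership in 'firm ground contact' (thresholds in
    newtons are made up): 0 -> pure position control (foot in the air),
    1 -> pure impedance control, with a linear ramp in between."""
    if contact_force <= low:
        return 0.0
    if contact_force >= high:
        return 1.0
    return (contact_force - low) / (high - low)

def command(q_pos, q_imp, contact_force):
    """Blend the position and impedance controllers' joint commands so
    the handover at touchdown is smooth rather than a hard switch."""
    w = blend_weight(contact_force)
    return (1.0 - w) * q_pos + w * q_imp
```

Because the blend weight varies continuously with contact force, the commanded joint value has no discontinuity at touchdown, which is precisely what suppresses the jerk a binary switch would cause.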
Sweaty has already participated four times in RoboCup soccer competitions (Adult Size) and came second three times. While in 2016 Sweaty needed a lot of luck to reach the final, in 2017 it was a serious adversary in the preliminary rounds. In 2018, Sweaty reached the final with some lack of experience and room for improvement, but not without a chance. This paper describes the intended improvements of the humanoid adult size robot Sweaty in order to qualify for the RoboCup 2019 adult size competition.
This paper shows the results of the evaluation of two sets of mobile web design guidelines concerning mobile learning. The first set of guidelines is concerned with the usage of text on mobile device screens; the second with the usage of images on mobile devices. The evaluation is performed by eye tracking (objective) as well as questionnaires and interviews (subjective).
The idea of this game is to use a flashcard system to create a short story in a foreign language. The story is developed by a group of people by exchanging sentences via a flashcard system. This way, people can learn from each other without fear of making mistakes because the group members are anonymous.
Electrode modelling and simulation of diagnostic and pulmonary vein isolation in atrial fibrillation (2022)
Recent work has investigated the distributions of learned convolution filters through a large-scale study containing hundreds of heterogeneous image models. Surprisingly, on average, the distributions only show minor drifts in comparisons of various studied dimensions including the learned task, image domain, or dataset. However, among the studied image domains, medical imaging models appeared to show significant outliers through "spikey" distributions, and, therefore, learn clusters of highly specific filters different from other domains. Following this observation, we study the collected medical imaging models in more detail. We show that instead of fundamental differences, the outliers are due to specific processing in some architectures. Quite the contrary, for standardized architectures, we find that models trained on medical data do not significantly differ in their filter distributions from similar architectures trained on data from other domains. Our conclusions reinforce previous hypotheses stating that pre-training of imaging models can be done with any kind of diverse image data.
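Comparing filter distributions across models requires normalizing the coefficients first; a crude sketch of such a comparison on flattened filters (the study's actual pipeline is far more elaborate, this only illustrates the idea of measuring distribution drift):

```python
def standardize(filters):
    """Flatten a model's conv filters and zero-mean/unit-variance
    normalize the coefficients, so distributions are comparable
    across models; returns the sorted (empirical quantile) values."""
    vals = [c for f in filters for c in f]
    n = len(vals)
    mu = sum(vals) / n
    sd = (sum((v - mu) ** 2 for v in vals) / n) ** 0.5 or 1.0
    return sorted((v - mu) / sd for v in vals)

def drift(a, b):
    """Crude distribution distance between two equally sized models:
    mean absolute difference of matched quantiles."""
    sa, sb = standardize(a), standardize(b)
    return sum(abs(x - y) for x, y in zip(sa, sb)) / len(sa)

model_a = [[1.0, -1.0, 0.5], [0.2, 0.0, -0.3]]   # two toy 'filters'
model_b = [[5.0, 5.0, 5.0], [0.0, 0.0, -10.0]]
```

Under this kind of measure, the "spikey" medical-imaging outliers would show large drift against standard models, while standardized architectures would not.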
In this contribution, we present a novel 3D printed multi-material, electromagnetic vibration harvester. The harvester is based on a cantilever design and utilizes an embedded constantan wire within a matrix of polyethylene terephthalate glycol (PETG). A prototype has been manufactured with a combination of a fused filament fabrication (FFF) printer and a robot with a custom-made tool.
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world is still facing fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks, and ideally a better model generalization ability, by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side effect: it leads to models that are significantly less overconfident in their decisions, even on clean data, than non-robust models. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences. Data & project website: https://github.com/GeJulia/robustness_confidences_evaluation
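The "specific but small amounts of noise" refers to attacks such as the Fast Gradient Sign Method (FGSM); on a toy logistic model the mechanism, and the drop in prediction confidence it causes, can be shown in a few lines (model and numbers are illustrative, unrelated to the paper's experiments):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    """Confidence of a toy logistic 'network' for class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def sign(v):
    return (v > 0) - (v < 0)

def fgsm(w, x, y, eps):
    """Fast Gradient Sign Method on the logistic model: for the
    cross-entropy loss the input gradient is (p - y) * w, so each
    coordinate is pushed eps in the loss-increasing direction."""
    p = predict(w, x)
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

w, x = [2.0, -1.0], [1.0, 0.0]
x_adv = fgsm(w, x, y=1.0, eps=0.5)
```

Even this linear toy shows the effect: a perturbation bounded by eps per coordinate measurably lowers the model's confidence in the true class, which is the behavior adversarial training is meant to suppress.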
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world is still facing fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Adversarial training (AT) is often considered a remedy for training more robust networks. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side effect: it leads to models that are significantly less overconfident in their decisions, even on clean data, than non-robust models. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences.