In this paper, we describe the first publicly available fine-grained product recognition dataset based on leaflet images. Using advertisement leaflets collected over several years from different European retailers, we provide a total of 41.6k manually annotated product images in 832 classes. Further, we investigate three different approaches to this fine-grained product classification task: classification by image, by text, and by image and text combined. The "Classification by Text" approach uses the text extracted directly from the leaflet product images. We show that combining image and text as input improves the classification of visually hard-to-distinguish products. The final model reaches an accuracy of 96.4% with a Top-3 score of 99.2%. We release our code at https://github.com/ladwigd/Leaflet-Product-Classification.
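The abstract does not specify the fusion architecture, so the following is only a minimal sketch of the general idea of image-text fusion: concatenate normalized feature vectors from both modalities and classify in the joint space. The feature vectors, class names, and nearest-centroid classifier are invented for illustration and are not the paper's model.

```python
import math

def l2_normalize(v):
    # Scale a feature vector to unit length (guard against all-zero vectors).
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def fuse(img_feat, txt_feat):
    # Hypothetical late fusion: concatenate normalized image and text vectors.
    return l2_normalize(img_feat) + l2_normalize(txt_feat)

def predict(fused, centroids):
    # Nearest-centroid assignment in the fused feature space.
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return min(centroids, key=lambda c: dist(fused, centroids[c]))

# Two toy classes with near-identical image features but distinct leaflet
# text -- the text component is what disambiguates them:
centroids = {
    "cola_0.5l": fuse([1.0, 0.0], [1.0, 0.0]),
    "cola_1.5l": fuse([1.0, 0.1], [0.0, 1.0]),
}
query = fuse([1.0, 0.05], [0.1, 0.9])
```

With image features alone the query sits between both centroids; adding the text vector moves it clearly toward one class, mirroring the paper's observation that text helps with visually similar products.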
In medical applications, wireless technologies are not yet widespread. Today they are mainly used in non-latency-critical applications, where reliability can be guaranteed through retransmission protocols and error-correction mechanisms. Retransmission protocols on a disturbed, shared wireless channel, however, increase latency. They are therefore not sufficient for replacing latency-critical wired connections in operating rooms, such as foot switches. Today's research aims to improve reliability through the physical characteristics of the wireless channel, using diversity methods and more robust modulation. In this paper, an architecture for building a reliable network is presented. The architecture allows devices with different reliability, latency, and energy-consumption requirements to participate. Furthermore, reliability, latency, and energy consumption are scalable for every single participant.
Three quotations serve as an entry point into the discourse on civilian network technologies, mobile devices, online services, and the question of how the "church of the future" can position itself (at least from a media-studies perspective). The juxtaposition of the positions they represent is meant to show the benefits and consequences, for the individual as well as for society, of the increasingly complete penetration of (almost) all areas of life by digital technology.
Do you belong to the "generation upload"? Do you upload your private pictures to Flickr and post videos on YouTube? Do you download MPEG files to your handheld, or do you keep putting new, really funny apps on your smartphone? Do you click your friends together on Facebook, MySpace, or StudiVZ in order to chat and blog around the clock? Or do you rather tweet, and already have followers for your tweets? Do you "gruschel" people whose photo you like and block the contact with a mouse click when he or she turns out not to be so nice after all? Do you get software and films from your peers via BitTorrent trackers like Pirate Bay? Do you find "flash mobs" funny, but "cyber mobs" less so? Or are you the rougher type who hacks other people's computers, spams, and places "Google bombs"? Or are you wondering what I am even talking about? Welcome to the "brave new world – of media".
The question of the structure and function of "universities" cannot sensibly be considered in isolation, without a look at schools. Universities are part of the overall school system and are embedded in a (for the moment still) highly differentiated and diverse German "educational landscape" that has crystallized over centuries. Tradition and evolutionary genesis are one constant of educational institutions; constant change and steady pressure to reform are another. It seems that schools and universities must be tinkered with ever anew, even though the possible spectrum of attitudes and methods, at least as far as learning and teaching concepts are concerned, has been known since antiquity.
This text is therefore divided into three sections:
• A brief look back derives the central concepts.
• An analysis of the current state, taking into account the reforms implemented since 1998 under the name "Bologna" (harmonization of European degree programmes, conversion of programmes to new degrees (Bachelor, Master), and much more), reveals current undesirable developments and names their causes and protagonists.
• The concluding look ahead shows what schools and universities could (once again) become if teachers and students were to grow bolder.
Controlling is a term from business administration and denotes not control but process steering. Defined goals are pursued by logging, minutely measuring, and permanently monitoring all work steps and actions of the people involved, and are continuously optimized. In "educational controlling", this concept of planning, coordination, and control tasks is transferred to schools and universities. The goal, in line with Gary Becker's human capital theory, is the production of human capital with validated competences. There are two problems with this: learning, and above all understanding, can neither be automated nor be tested automatically. And: social systems under the regime of the key figures of Quality Management (QM) or Total Quality Management (TQM) lose their character as social systems.
During the day-to-day operation of localization systems in mines, the technical staff tends to rearrange radio equipment incorrectly: device positions may not be accurately marked on a map, or the marked positions may not correspond to the truth. This situation can lead to positioning inaccuracies and errors in the operation of the localization system. This paper presents two Bayesian algorithms for the automatic correction of equipment positions on the map, using trajectories restored by inertial measurement units mounted on mobile objects such as pedestrians and vehicles. As a basis, a predefined map of the mine, represented as an undirected weighted graph, was used as input. The algorithms were implemented using the Simultaneous Localization and Mapping (SLAM) approach. The results prove that both methods are capable of detecting misplaced access points and providing corresponding corrections. The discrete Bayesian filter outperforms the unscented Kalman filter, which moreover requires more computational power.
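As a toy illustration of the discrete Bayesian filtering idea described above, the sketch below performs one Bayesian update over candidate access-point positions on a graph. The node names, distances, and Gaussian range likelihood are assumptions for illustration, not the paper's actual measurement model.

```python
import math

def bayes_update(prior, observed, likelihood):
    # One discrete Bayesian filter step over candidate device positions:
    # multiply the prior by the measurement likelihood, then renormalize.
    posterior = {node: p * likelihood(node, observed) for node, p in prior.items()}
    z = sum(posterior.values()) or 1.0
    return {node: p / z for node, p in posterior.items()}

# Toy example: three candidate graph nodes at known distances along a tunnel;
# an IMU-restored trajectory suggests the access point is about 5.2 m away.
node_distance = {"A": 0.0, "B": 5.0, "C": 12.0}

def range_likelihood(node, observed, sigma=1.0):
    # Hypothetical Gaussian likelihood of the IMU-derived range measurement.
    d = node_distance[node] - observed
    return math.exp(-d * d / (2 * sigma * sigma))

prior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
posterior = bayes_update(prior, 5.2, range_likelihood)
```

Repeating such updates as more trajectory segments arrive concentrates the probability mass on the node where the device actually sits, which is the essence of correcting a misplaced map entry.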
This paper presents an extended version of a previously published Bayesian algorithm for the automatic correction of the positions of equipment on the map, with simultaneous localization of the mobile object's trajectory (SLAM), in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires much less preliminary data on possible equipment positions and uses an additional resample-move algorithm to significantly improve overall performance.
Finding clusters in high-dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of a dataset, where a subspace is a subset of the dimensions of the data. However, the exponential growth in the number of subspaces with the dimensionality of the data renders most algorithms inefficient as well as ineffective. Moreover, these algorithms have data dependencies ingrained in the clustering process, so parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm that scales with the number of dimensions and contains independent processing steps that can be exploited through parallelism. In this paper, we aim to leverage, firstly, the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. Experimental evaluation has shown linear speedup. Secondly, we are developing an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
Subspace clustering aims to find all clusters in all subspaces of a high-dimensional data space. We present a massively data-parallel approach that can be run on graphics processing units. It extends a previous density-based method that scales well with the number of dimensions. Its main computational bottleneck consists of (sequentially) generating a large number of minimal cluster candidates in each dimension and using hash collisions to find matches of such candidates across multiple dimensions. Our approach parallelizes this process by removing the interdependencies between consecutive steps in the sequential generation process and by applying a very efficient parallel hashing scheme optimized for GPUs. This massive parallelization gives up to 70x speedup for the bottleneck computation when it is replaced by our approach and run on current GPU hardware. We note that, depending on data size and choice of parameters, the parallelized part of the algorithm can take different percentages of the overall runtime of the clustering process, and thus the overall clustering speedup may vary significantly between cases. However, even in our "worst-case" test, a small dataset where the computation makes up only a small fraction of the overall clustering time, our parallel approach still yields a speedup of more than 3x for the complete run of the clustering process. Our method could also be combined with parallelization of other parts of the clustering algorithm, with an even higher potential gain in processing speed.
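The collision-based candidate matching described above can be sketched roughly as follows. This is a sequential Python illustration of the general idea, not the GPU kernel: each point gets a large random label, a candidate's signature is the sum of its points' labels, and identical point sets from different dimensions then collide on the same signature. The data and the signature scheme are illustrative assumptions.

```python
import random
from collections import defaultdict

# Assign each point a large random label once, up front.
random.seed(0)
labels = {p: random.getrandbits(64) for p in range(100)}

def signature(point_ids):
    # A candidate's signature: the sum of its points' random labels.
    return sum(labels[p] for p in point_ids)

def match_across_dims(candidates_per_dim):
    # candidates_per_dim: {dimension: [frozenset of point ids, ...]}
    table = defaultdict(list)
    for dim, candidates in candidates_per_dim.items():
        for cand in candidates:
            table[signature(cand)].append((dim, cand))
    # Signatures hit in more than one dimension mark candidates that share
    # the same point set across dimensions, i.e. multi-dimensional clusters.
    return {sig: hits for sig, hits in table.items() if len(hits) > 1}

matches = match_across_dims({
    0: [frozenset({1, 2, 3}), frozenset({7, 8})],
    1: [frozenset({1, 2, 3})],
    2: [frozenset({4, 5})],
})
```

Because each candidate's signature can be computed independently, this step is naturally data-parallel, which is what the GPU version exploits.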
Radio frequency identification (RFID) antennas are popular for high-frequency (HF) RFID, energy transfer, and near-field communication (NFC) applications. Particularly for wireless measurement systems, RFID/NFC technology is a good option for implementing a wireless communication interface. In this context, the design of the corresponding reader and transmitter antennas plays a major role in achieving suitable transmission quality. This work proves the feasibility of rapid prototyping of an RFID/NFC antenna used for wireless communication and energy harvesting at the required frequency of 13.56 MHz. A novel, low-cost direct ink writing (DIW) technology utilizing highly viscous silver nanoparticle ink is used for this process. This paper describes the development and analysis of low-cost printed flexible RFID/NFC antennas on cost-effective substrates for a microelectronic vital-parameter measurement system. Furthermore, we compare the measured technical parameters with those of existing copper-based counterparts on an FR4 substrate.
Significant improvements in module performance are possible via the implementation of multi-wire electrodes. This is economically sound as long as the mechanical yield of the production is maintained. While flat ribbons have a relatively large contact area over which to exert forces onto the solar cell, wires with a round cross section reduce this contact area considerably, in theory to an infinitesimally thin line. The local stresses induced by the electrodes might therefore increase to a point where mechanical production yields suffer unacceptably.
In this paper, we assess this issue with an analytical mechanical model as well as experiments with an encapsulant-free N.I.C.E. test setup. From these, we derive estimates of the relationship between lay-up accuracy and expected breakage losses. This paves the way for cost-optimized choices of handling equipment in industrial N.I.C.E.-wire production lines.
A polarization mode dispersion measurement set-up based on a Mach-Zehnder interferometer was realized. Measurements were carried out on short highly birefringent fibers and on long standard telecommunication single-mode fibers. In order to ensure highly accurate results, special emphasis was placed on the evaluation of the interference pattern. The procedure is described in detail and practical measurement results are presented.
Interaction with and capturing information from the surroundings is dominated by vision and hearing. Haptics, on the other hand, widens the bandwidth and could also replace senses (sense switching) for the impaired. Haptic technologies are often limited to point-wise actuation. Here we show that actuation in two-dimensional matrices instead creates a richer input. We describe the construction of a full-body garment for haptic communication with a distributed actuating network. The garment is divided into attachable/detachable panels, or add-ons, each of which can carry a two-dimensional matrix of actuating haptic elements. Each panel adds to the enhanced sensory capability of the human-garment system, so that together a 720° system is formed. The spatial separation of the panels across different body locations supports the semantic and thematic separation of conversations conveyed by haptics. It also achieves directional faithfulness, that is, maintaining any directional information about a distal stimulus in the haptic input.
Cross-industry innovation is commonly understood as the identification of analogies and the interdisciplinary transfer or copying of technologies, processes, technical solutions, working principles, or models between industrial sectors. In general, creative thinking in analogies is one of the efficient ideation techniques. However, engineering graduates and specialists frequently lack the skills to think systematically across industry boundaries. To overcome this drawback, an easy-to-use method based on five analogies has been evaluated through its application by students and engineers in numerous experiments and industrial case studies. The proposed analogies help to identify and resolve engineering contradictions and to apply approaches of the Theory of Inventive Problem Solving (TRIZ) and biomimetics. The paper analyses the outcomes of the systematized analogies-based ideation and shows that its performance grows continuously with engineering experience. It defines metrics for ideation efficiency and an ideation performance function.
The paper addresses the needs of universities regarding the qualification of students, as future R&D specialists, in efficient techniques for successfully running the innovation process. It briefly describes the programme of a novel one-semester course of 150 hours on new product development and inventive problem solving with the TRIZ methodology, offered to master's students at the Beuth University of Applied Sciences in Berlin. The paper outlines a multi-source educational approach, which includes a new product development project (about 50% of the complete course), theory, practical work, and self-learning with software tools for computer-aided innovation, and demonstrates examples of the students' work. The research part analyses the learning experience, identifies the factors that impact the innovation and problem-solving performance of the students, and underlines the main difficulties faced by the students in the course. It describes a method for measuring education efficiency and compares the results with educational experience in industry. The presented results can help universities to establish education in new product development or to improve its performance.
CONTEXT
The paper addresses the needs of medium-sized and small businesses regarding the qualification of R&D specialists in interdisciplinary cross-industry innovation, which promises a considerable reduction of investments and R&D expenditures. Cross-industry innovation is commonly understood as the identification of analogies and the transfer of technologies, processes, technical solutions, working principles, or business models between industrial sectors. However, engineering graduates and specialists frequently lack the advanced skills and knowledge required to run interdisciplinary innovation across industry boundaries.
PURPOSE
The study compares the efficiency of cross-industry innovation methods in a one-semester project-oriented course. It identifies the individual challenges and preferred working techniques of students with different prior knowledge, sets of experiences, and cultural contexts, which require attention from engineering educators.
APPROACH
Two parallel one-semester courses were offered to the mechanical and process engineering students enrolled in bachelor's and master's degree programmes at the faculty of mechanical and process engineering. The students, from different years of study, worked in 12 teams of 3-6 persons each on different innovation projects, spending two hours a week in the classroom and on average an additional two hours weekly on their project research. Students' feedback and self-assessments concerning gained skills, the efficiency of the learned tools, and intermediate findings were documented, analysed, and discussed regularly throughout the course.
RESULTS
The analysis of numerous student projects allows us to compare and select the tools most appropriate for finding cross-industry solutions, such as thinking in analogies, web monitoring, function-oriented search, databases of technological effects and processes, special creativity techniques, and others. The utilization of the learned skills in practical innovation work strengthens the motivation of students and enhances their entrepreneurial competences. The suggested learning course and the given recommendations help facilitate the sustainable education of ambitious specialists.
CONCLUSIONS
Structured cross-industry innovation can be successfully run as a systematic process and learned in a one-semester course. The choice of preferred working techniques made by the students is affected by their prior knowledge in science, practical experience, and cultural contexts. Major outcomes of the students' innovation projects, such as the feasibility, novelty, and customer value of the concepts, are primarily influenced by the students' engineering design skills, prior knowledge of the technologies, and industrial or business experience.
The comprehensive assessment method includes 80 innovation performance parameters and 10 key indicators of innovation capability, such as innovation process performance, innovating system performance, market and customer orientation, technology orientation, creativity, leadership, communication and knowledge management, risk and cost management, innovative climate, and innovation competences. The cross-industry study identifies parameters critical for innovation success and reveals different innovation performance patterns in companies.
Internal crowdsourcing-based ideation within a company can be defined as the involvement of its staff, i.e. specialists, managers, and other employees, in proposing solution ideas for a pre-defined problem. This paper addresses the question of how many participants of the company-internal ideation process are required to come close to the ideation limit for problems with a finite number of workable solutions. To answer the research question, the author proposes a set of metrics and a non-linear ideation performance function with a positive decreasing slope and an ideation limit for closed-ended problems. Three series of experiments helped to explore the relationships between the metric attributes and resulted in a mathematical model that allows companies to predict the productivity metrics of their crowdsourcing ideation activities, such as the number of different ideas and the ideation limit, as a function of the number of contributors, their average personal creativity, and the ideation efficiency of the contributors' group.
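The abstract does not reproduce the paper's concrete performance function. One common saturating form with a positive decreasing slope and a finite limit could look like the sketch below; the functional form, the parameter names, and all values are hypothetical stand-ins, not the author's model.

```python
def expected_unique_ideas(n, limit, creativity):
    # Hypothetical saturating ideation curve: each of n contributors proposes
    # `creativity` ideas drawn independently from a pool of `limit` workable
    # solutions; repeats across contributors add no new ideas, so the curve
    # rises with a positive decreasing slope toward the ideation limit.
    return limit * (1 - (1 - creativity / limit) ** n)

def contributors_needed(limit, creativity, coverage=0.95):
    # Smallest group size whose expected idea count reaches the target
    # fraction of the ideation limit.
    n = 0
    while expected_unique_ideas(n, limit, creativity) < coverage * limit:
        n += 1
    return n
```

Under such a model, a company could plug in its estimated idea pool size and the average ideas per contributor to predict how many participants a campaign needs before additional contributors stop adding new ideas.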
The proposed method includes: identification and documentation of elementary inventive principles from the TRIZ body of knowledge; extension and enhancement of the inventive principles through analysis of patents and technologies, avoiding overlapping and redundant principles; classification and adaptation of the principles to at least the following categories: working medium, target object, useful action, harmful effect, environment, information, field, substance, time, and space; and assignment of the elementary inventive principles to at least the following underlying engineering domains: universal, design, mechanical, acoustic, thermal, chemical, electromagnetic, intermolecular, biological, and data processing. The method further includes classification of the abstraction level of the elementary principles; definition of statistical rankings of the principles for different problem types and for specific engineering or non-technical domains; definition of strategies for selecting principle sets with high solution potential for predefined problems; automated semantic transformation of the elementary inventive principles into solution ideas; and evaluation of the automatically generated ideas and their transformation into innovation or inventive concepts.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes a new teaching method and appropriate learning materials in the field of eco-innovation and evaluates the learning experience and outcomes. The programme is aimed at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in cases where additional environmental problems appear as negative side effects of eco-friendly solutions.
Based on a literature analysis and their own investigations, the authors propose to introduce a manageable number of eco-innovation tools into a standard one-semester design course in process engineering, with particular focus on the identification of eco-problems in existing technologies, the selection of appropriate new process intensification technologies (knowledge-based engineering), and systematic ideation and problem solving (knowledge-based innovation and invention).
The proposed educational approach equips students with advanced knowledge, skills, and competences in the field of eco-innovation. Analysis of the students' work allows one to recommend simple-to-use tools for fast application in process engineering, such as process mapping, a database of eco-friendly process intensification technologies, and up to 20 of the strongest inventive operators for solving environmental problems. For the majority of students in the survey, even the small workload strengthened their self-confidence and skills in eco-innovation.
Economic growth and ecological problems have pushed industries to switch to eco-friendly technologies. However, environmental impact is still often neglected since production efficiency remains the main concern. Patent analysis in the field of process engineering shows that, on the one hand, some eco-issues appear as secondary problems of the new technologies, and on the other hand, eco-friendly solutions often show lower efficiency or performance capability. The study categorizes typical environmental problems and eco-contradictions in the field of process engineering involving solids handling and identifies underlying inventive principles that have a higher value for environmental innovation. Finally, 42 eco-innovation methods adapting TRIZ are chronologically presented and discussed.
Environmentally friendly implementation of new technologies and eco-innovative solutions often faces additional secondary ecological problems. On the other hand, existing biological systems show a lower environmental impact compared to human-made products or technologies. The paper defines a research agenda for the identification of the underlying eco-inventive principles used in natural systems created through evolution. Finally, the paper proposes a comprehensive method for capturing eco-innovation principles in biological systems, complementary to the existing biomimetic methods and the TRIZ methodology, and illustrates it with an example.
Sustainable design of equipment for process intensification requires a comprehensive and correct identification of relevant stakeholder requirements, design problems and tasks crucial for innovation success. Combining the principles of the Quality Function Deployment with the Importance-Satisfaction Analysis and Contradiction Analysis of requirements gives an opportunity to define a proper process innovation strategy more reliably and to develop an optimal process intensification technology with less secondary engineering and ecological problems.
The 40 Altshuller Inventive Principles with their numerous sub-principles have remained for decades the most frequently applied tool of the Theory of Inventive Problem Solving (TRIZ) for systematic idea generation. However, their application often requires a concentrated, creative, and abstract way of thinking that can be fairly challenging for newcomers to TRIZ. This paper describes an approach to reduce the abstraction level of the inventive sub-principles and presents the results of an idea generation experiment conducted with three groups of undergraduate and graduate students from different years of study in mechanical and process engineering. The students were asked to generate and record their individual ideas for three design problems using a pre-defined set of classical and modified sub-principles within 10 minutes. The overall outcomes of the experiment support the assumption that the less abstract wording of the modified sub-principles leads to a higher number of ideas. The distribution of ideas across the fields of MATCHEM-IBD (Mechanical, Acoustic, Thermal, Chemical, Electrical, Magnetic, Intermolecular, Biological and Data processing) differs significantly between the groups using modified and abstract sub-principles.
Classification of TRIZ Inventive Principles and Sub-Principles for Process Engineering Problems
(2019)
The paper proposes a classification approach of 40 Inventive Principles with an extended set of 160 sub-principles for process engineering, based on a thorough analysis of 155 process intensification technologies, 200 patent documents, 6 industrial case studies applying TRIZ, and other sources. The authors define problem-specific sub-principles groups as a more precise and productive ideation technique, adaptable for a large diversity of problem situations, and finally, examine the anticipated variety of ideation using 160 sub-principles with the help of MATCEM-IBD fields.
Growing demands for cleaner production and higher eco-efficiency in process engineering require a comprehensive analysis of the technical and environmental outcomes for customers and society. Moreover, unexpected additional technical or ecological drawbacks may appear as negative side effects of new environmentally friendly technologies. The paper conceptualizes a comprehensive approach for the analysis and ranking of engineering and ecological requirements in process engineering, in order to anticipate secondary problems in eco-design and to avoid compromising the environmental or technological goals. For this purpose, the paper presents a method based on the integration of the Quality Function Deployment approach with the Importance-Satisfaction Analysis for requirements ranking. The proposed method comprehensively identifies and classifies the potential engineering and eco-engineering contradictions through analysis of correlations within requirements groups such as stakeholder requirements (SRs) and technical requirements (TRs), and additionally through cross-relationships between SRs and TRs.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes new learning materials and educational tools in the field of eco-innovation and evaluates the learning experience and outcomes. The programme is aimed at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in cases where additional environmental problems appear as negative side effects of eco-friendly solutions. The paper evaluates the efficiency of the proposed interdisciplinary tool for systematic eco-innovation, including creative semi-automatic knowledge-based idea generation and concept development. It analyses the learning experience and identifies the factors that impact the eco-innovation performance of the students.
Process engineering industries are now facing growing economic pressure and societal demands to improve their production technologies and equipment, making them more efficient and environmentally friendly. However, unexpected additional technical and ecological drawbacks may appear as negative side effects of the new environmentally friendly technologies. Thus, in their efforts to intensify upstream and downstream processes, industrial companies require systematic aid to avoid compromising their ecological impact. The paper conceptualises a comprehensive approach for eco-innovation and eco-design in process engineering. The approach combines the advantages of Process Intensification as Knowledge-Based Engineering (KBE), inventive tools of Knowledge-Based Innovation (KBI), and the main principles and best practices of Eco-Design and Sustainable Manufacturing. It includes a correlation matrix for the identification of eco-engineering contradictions, a process mapping technique for problem definition, a database of Process Intensification methods and equipment, as well as a set of the strongest inventive operators for eco-ideation.
The paper recommends an approach to effectively estimate the probability of buffer overflow in high-speed communication networks that carry diverse traffic, including self-similar teletraffic, and support diverse levels of quality of service. Simulations with stochastic, long-range-dependent self-similar traffic source models are conducted. A new efficient algorithm, based on a variant of the RESTART/LRE method, is developed and applied to accelerate the buffer overflow simulation in a finite-buffer single-server model under long-range-dependent self-similar traffic load with different buffer sizes. Numerical examples and simulation results are shown.
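To illustrate the quantity being estimated, the sketch below runs a plain Monte Carlo simulation of a discrete-time finite-buffer single-server queue. It is only a baseline: it uses simple Bernoulli arrivals rather than a self-similar traffic source, and it does not implement the accelerated RESTART/LRE method from the paper; all parameters are illustrative.

```python
import random

def overflow_probability(buffer_size, arrival_p, service_p, steps, seed=1):
    # Plain Monte Carlo estimate of the packet-loss (overflow) probability:
    # in each time slot, a packet arrives with probability `arrival_p` and,
    # if the queue is non-empty, one packet departs with probability
    # `service_p`. Arrivals to a full buffer are counted as losses.
    rng = random.Random(seed)
    q, arrivals, losses = 0, 0, 0
    for _ in range(steps):
        if rng.random() < arrival_p:
            arrivals += 1
            if q < buffer_size:
                q += 1
            else:
                losses += 1  # buffer full: packet lost
        if q > 0 and rng.random() < service_p:
            q -= 1
    return losses / arrivals if arrivals else 0.0

p_loss = overflow_probability(buffer_size=5, arrival_p=0.7,
                              service_p=0.5, steps=200_000)
```

For rare overflows (large buffers, low load), such naive simulation needs prohibitively many steps to observe any losses at all, which is exactly the problem that importance-splitting methods like RESTART/LRE address.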
In anisotropic media, the existence of leaky surface acoustic waves is a well-known phenomenon. Very recently, their analogs at the apex of an elastic silicon wedge have been found in experiments using laser-ultrasonics. In addition to a wedge-wave (WW) pulse with low speed, a pseudo-wedge wave (p-WW) pulse was found with a velocity higher than the velocity of shear bulk waves, propagating in the same direction. With a probe-beam-deflection technique, the propagation of the WW pulses was monitored on one of the faces of the wedge at variable distance from the apex. In this way, their depth structure and the leakage of the p-WW could be visualized directly. Calculations were carried out using a method based on a representation of the displacement field in Laguerre functions. This method has been validated by calculating the surface density of states in anisotropic media and comparing the results with those obtained from the surface Green's tensor. The approach has then been extended to the continuum of acoustic modes in infinite wedges with fixed wave-vector along the apex. These calculations confirmed the measured speeds of the WW and p-WW pulses.
Cardiac resynchronization therapy with biventricular pacing is an established therapy for heart failure patients with electrical left ventricular desynchronization. The aim of this study was to evaluate left atrial conduction delay, intra left atrial conduction delay, left ventricular conduction delay and intra left ventricular conduction delay in heart failure patients using novel signal averaging transesophageal left heart ECG software.
Methods: 8 heart failure patients with dilated cardiomyopathy (DCM), age 68 ± 9 years, New York Heart Association (NYHA) class 2.9 ± 0.2, 24.8 ± 6.7 % left ventricular ejection fraction, 188.8 ± 15.5 ms QRS duration, and 8 heart failure patients with ischaemic cardiomyopathy (ICM), age 67 ± 8 years, NYHA class 2.9 ± 0.3, 32.5 ± 7.4 % left ventricular ejection fraction and 167.6 ± 19.4 ms QRS duration were analysed with transesophageal and transthoracic ECG using the Bard LabDuo EP system and novel National Instruments LabVIEW signal averaging ECG software.
Results: The electrical left atrial conduction delay was 71.3 ± 17.6 ms in ICM versus 72.3 ± 12.4 ms in DCM, intra left atrial conduction delay 66.8 ± 8.6 ms in ICM versus 63.4 ± 10.9 ms in DCM and left cardiac AV delay 180.5 ± 32.6 ms in ICM versus 152.4 ± 30.4 ms in DCM. The electrical left ventricular conduction delay was 40.9 ± 7.5 ms in ICM versus 42.6 ± 17 ms in DCM and intra left ventricular conduction delay 105.6 ± 19.3 ms in ICM versus 128.3 ± 24.1 ms in DCM.
Conclusions: Left heart signal averaging ECG can be utilized to analyse left atrial conduction delay, intra left atrial conduction delay, left ventricular conduction delay and intra left ventricular conduction delay to improve patient selection for cardiac resynchronization therapy.
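The signal-averaging principle underlying such ECG software can be illustrated in a few lines: averaging N repetitions aligned to a common trigger suppresses uncorrelated noise by roughly √N while preserving the deflection of interest. A minimal numpy sketch with a synthetic waveform (illustrative only, not the actual LabVIEW software):

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 500)
template = np.exp(-((t - 0.5) ** 2) / 0.002)   # idealized "beat" deflection

# 100 noisy repetitions, all aligned to the same trigger point
beats = template + rng.normal(0, 0.5, size=(100, t.size))
averaged = beats.mean(axis=0)

noise_single = np.std(beats[0] - template)
noise_avg = np.std(averaged - template)
# uncorrelated noise shrinks by about sqrt(100) = 10
print(noise_single / noise_avg)
```

In practice the hard part is the alignment (trigger detection and rejection of ectopic beats), which this sketch assumes away.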
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
(2023)
Diffusion models have recently been applied successfully to the visual synthesis of strikingly realistic images. This raises strong concerns about their potential for malicious purposes. In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality (multiLID) measure, originally developed in the context of detecting adversarial examples, for the automatic detection of synthetic images and the identification of the corresponding generator networks. In contrast to many existing detection approaches, which often only work for GAN-generated images, the proposed method provides close to perfect detection results in many realistic use cases. Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach exhibits superiority in diffusion detection and model identification. Since the empirical evaluations of recent publications on the detection of generated images often focus mainly on the "LSUN-Bedroom" dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes. The code for our experiments is provided at https://github.com/deepfake-study/deepfake-multiLID.
Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn a lot of attention. While AutoAttack has shown a very high attack success rate, most defense approaches focus on network hardening and robustness enhancements, such as adversarial training. This way, the currently best-reported method can withstand about 66% of adversarial examples on CIFAR10. In this paper, we investigate the spatial and frequency domain properties of AutoAttack and propose an alternative defense. Instead of hardening a network, we detect adversarial attacks during inference and reject manipulated inputs. Based on a rather simple and fast analysis in the frequency domain, we introduce two different detection algorithms: first, a black-box detector that operates only on the input images and achieves a detection accuracy of 100% on the AutoAttack CIFAR10 benchmark and 99.3% on ImageNet, for epsilon = 8/255 in both cases; second, a white-box detector using an analysis of CNN feature maps, leading to detection rates of 100% and 98.7% on the same benchmarks.
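As an illustration of the kind of frequency-domain statistic such a black-box detector might threshold, one can measure the fraction of spectral energy above a radial frequency cutoff; l∞-bounded perturbations tend to add broadband high-frequency content. The following is a hypothetical feature, not the detection algorithm published in the paper:

```python
import numpy as np

def high_freq_energy_ratio(image, cutoff=0.25):
    """Fraction of 2D spectral energy above a normalized radial frequency
    cutoff (an illustrative feature, not the paper's actual detector)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)  # normalized radial freq
    return spec[radius > cutoff].sum() / spec.sum()

# a smooth image has little high-frequency energy; a small additive
# perturbation raises the ratio noticeably
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 64)
smooth = np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * x))
noisy = smooth + rng.normal(0, 0.1, smooth.shape)
print(high_freq_energy_ratio(smooth), high_freq_energy_ratio(noisy))
```

A real detector would calibrate the cutoff and decision threshold on clean data, and the paper's white-box variant applies a related analysis to CNN feature maps rather than the raw input.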
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. In the latter case, a small “detector” is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and lightweight detector, which leverages recent findings on the relation between networks’ local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state of the art in adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources available at: https://github.com/adverML/multiLID
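The LID measure referenced here is commonly estimated with the Hill-type maximum-likelihood estimator from k-nearest-neighbour distances, LID ≈ −(1/k · Σᵢ log(rᵢ/rₖ))⁻¹. A minimal sketch of that standard estimator (function name and parameters are illustrative, not the repository's code):

```python
import numpy as np

def lid_mle(query, reference, k=20):
    """Maximum-likelihood estimate of local intrinsic dimensionality (LID)
    at `query` from its k nearest neighbours in `reference` (the standard
    Hill-type estimator; multiLID-style detectors build a feature vector of
    such estimates across network layers)."""
    dists = np.sort(np.linalg.norm(reference - query, axis=1))
    dists = dists[dists > 0][:k]           # drop the query itself if present
    return -1.0 / np.mean(np.log(dists / dists[-1]))

# sanity check: points drawn uniformly from a 3-dimensional region should
# yield LID estimates near 3
rng = np.random.default_rng(1)
data = rng.uniform(-1, 1, size=(5000, 3))
print(lid_mle(data[0], data[1:], k=50))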
Recently, RobustBench (Croce et al. 2020) has become a widely recognized benchmark for the adversarial robustness of image classification networks. In its most commonly reported sub-task, RobustBench evaluates and ranks the adversarial robustness of trained neural networks on CIFAR10 under AutoAttack (Croce and Hein 2020b) with l∞ perturbations limited to ϵ = 8/255. With leading scores of the currently best-performing models at around 60% of the baseline, it is fair to characterize this benchmark as quite challenging. Despite its general acceptance in recent literature, we aim to foster discussion about the suitability of RobustBench as a key indicator for robustness that could be generalized to practical applications. Our line of argumentation against this is two-fold and supported by extensive experiments presented in this paper: We argue that I) the alteration of data by AutoAttack with l∞, ϵ = 8/255 is unrealistically strong, resulting in close to perfect detection rates of adversarial samples even by simple detection algorithms and human observers, while other attack methods are much harder to detect at similar success rates; and II) results on low-resolution datasets like CIFAR10 do not generalize well to higher-resolution images, as gradient-based attacks appear to become even more detectable with increasing resolution.
Smart Home and Smart Building applications are a growing market. An increasing challenge is to design energy-efficient Smart Home applications to achieve sustainable and green homes. Using the example of the development of an Indoor Smart Gardening system with wireless monitoring and automated watering, this paper discusses in particular the design of energy-autonomous sensors and actuators for home automation. The most important part of the presented Smart Gardening system is a 3D-printed smart flower pot for single plants. The smart flower pot integrates a water reservoir for automated plant irrigation and electronics for monitoring important plant parameters and the water level of the reservoir. Energy harvesting with solar cells enables energy-autonomous operation of the flower pot. A low-power wireless interface, also integrated in the flower pot, and an external gateway based on a Raspberry Pi 3 enable the wireless networking of multiple such flower pots. The gateway is used for evaluating the plant parameters and as a user interface. The architecture of the energy-autonomous wireless flower pot is considered in particular, because fully energy-autonomous sensors and actuators for home automation cannot be implemented without special concepts for the energy supply and the overall electronics.
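Energy autonomy, as pursued in the flower pot design above, boils down to a simple balance: the energy harvested per day must exceed the energy consumed per day across sleep and active phases. A back-of-the-envelope sketch with purely illustrative numbers (not measurements of the presented system):

```python
# Energy-autonomy check for a solar-powered sensor node.
# All numbers are illustrative assumptions, not device measurements.

def daily_energy_budget_mwh(solar_power_mw, sun_hours,
                            sleep_current_ma, active_current_ma,
                            active_seconds_per_day, voltage=3.3):
    """Return the daily energy surplus in mWh; a positive value means the
    node is, on average, energy-autonomous."""
    harvested = solar_power_mw * sun_hours
    active_h = active_seconds_per_day / 3600
    consumed = voltage * (sleep_current_ma * (24 - active_h)
                          + active_current_ma * active_h)
    return harvested - consumed

# e.g. a 50 mW panel with 3 effective sun hours, 5 µA sleep current, and
# 15 mA during 60 s of measuring/watering/transmitting per day
surplus = daily_energy_budget_mwh(50, 3, 0.005, 15, 60)
print(f"daily surplus: {surplus:.1f} mWh")
```

Such a budget also dictates the design choices mentioned in the abstract: duty cycling the wireless interface and keeping the pump's active time short are what keep the consumption side of the balance small.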
The following paper presents the results of a feasibility study on Bluetooth Low Energy (BLE) based wireless sensors. The development of industrial wireless sensors places important demands on the wireless technology, such as low energy consumption and a resource-saving, simple protocol stack. Bluetooth Low Energy is a relatively new wireless standard that fulfills these fundamental requirements. A self-designed BLE sensor system has been used to explore the general applicability of BLE for wireless sensor systems. The evaluation results of various analyses with this BLE sensor system are presented in this paper.
During the last ten years the development of wireless sensing applications has become more and more attractive. A major reason for this trend is the large number of available wireless technologies. The growing demand for wireless technologies is mainly driven by the industrial wireless sensor market. Requirements such as low energy consumption, a resource-saving simple protocol stack, and short switching delays between the different states of the wireless transceiver are particularly important for wireless sensors. Bluetooth Low Energy (BLE) is a relatively new wireless standard alongside traditional Bluetooth (basic rate and enhanced data rate, BR/EDR) [1], and it fulfills these fundamental requirements. First BLE transceiver chips and modules are available and have been tested and implemented in products. In this paper the performance analysis results of a BLE sensor system based on the TI CC2540F transceiver [5] are presented. The results can be used for further important investigations such as lifetime calculations or BLE simulation models.
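One of the follow-up investigations mentioned, lifetime calculation, reduces to an average-current model over the transceiver's duty cycle: the measured active and sleep currents are weighted by the fraction of time spent in each state. A minimal sketch with illustrative numbers (assumed values, not CC2540F datasheet figures):

```python
def battery_lifetime_days(capacity_mah, sleep_ua, active_ma,
                          active_ms, interval_s):
    """Estimate the lifetime of a duty-cycled BLE sensor node from a
    simple average-current model (illustrative currents, not measured
    CC2540F values)."""
    duty = (active_ms / 1000.0) / interval_s          # fraction of time active
    avg_ma = active_ma * duty + (sleep_ua / 1000.0) * (1 - duty)
    return capacity_mah / avg_ma / 24.0               # mAh / mA = hours

# CR2032 coin cell (~230 mAh), 1 µA sleep current, 10 mA for a 3 ms
# advertising event every 2 s
print(battery_lifetime_days(230, 1, 10, 3, 2))
```

Real lifetime estimates additionally account for battery self-discharge, peak-current derating of coin cells, and the more complex current profiles of connection events, which this sketch omits.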
This paper presents the competence-, business-, and research-oriented education approach Fit4PracSis (Fit for Practice and Sciences). Fit4PracSis is designed for freshman students in interdisciplinary engineering degree programs. It is an education concept that establishes a relationship to the future profession and to scientific work during the introductory study phase. Freshman students are trained early in important skills that are necessary for successfully achieving the final degree and for future business and research activities.
This paper presents a practice- and science-oriented education approach for freshman students of interdisciplinary bachelor engineering degree programs, meant to enhance the motivation and success of freshman students throughout their studies. The approach, called Fit4PracSis (Fit for Practice and Sciences), was developed, set up, and established to build a relationship to the students' future profession and to scientific work during the introductory study phase. Freshman students are trained early in important skills that are necessary for successfully achieving the final degree and for handling future business and research activities.
Smart home and smart building applications are a steadily growing market. Smart gardening is one example of providing users with more comfort and a better quality of life at home or in office buildings. This contribution presents the development of an indoor smart gardening system with a focus on energy-autonomous operation. The heart of the system is a 3D-printed flower pot for single plants with integrated electronics for monitoring the most important plant parameters and an integrated water reservoir with a submersible pump for automated plant irrigation. Energy harvesting via solar cells enables energy-autonomous operation of the flower pot. A self-developed low-power wireless interface in the flower pot and an external gateway enable the wireless networking of several plants. The gateway serves to evaluate the plant parameters, to control the flower pots in the network, and as a user interface.