This project aims to improve the operations of a micro-enterprise trading in food ingredients, with an emphasis on streamlining procedures and implementing effective strategies. Using tools such as SWOT analysis and structured strategy development, the company's strengths, weaknesses, opportunities, and threats were assessed. Based on the findings, business-level and functional-level strategies were developed to accelerate growth and meet objectives. In addition, specific recommendations were made to reduce the number of SKUs and optimize operations. The work highlights the value of a process map for streamlining operations, boosting efficiency, and raising customer satisfaction. By implementing these recommendations and strategies, the company can position itself for success in the highly competitive food ingredients industry.
Scientists in sports science, clinical analysis, and computer animation have been studying motion sequences for quite some time. While marker-based motion capture systems have dominated movement evaluation over the last decades, interest in markerless systems is growing steadily. In clinical analysis, however, markerless methods have not yet proven their value, partly because few studies have evaluated the quality of the data they produce. This study therefore aims to validate two markerless motion capture software packages from Simi Reality Motion Systems: Simi Shape, which combines traditional image-based tracking with an artificial intelligence net (AI net), and Crush, which uses a completely AI-based method. All motion data were recorded with two in-house motion capture systems: one for marker-based evaluation as the gold standard and one for markerless tracking. Within a laboratory environment, eight cameras per system were mounted around the capture volume. Because two cameras were placed in the same position and shared the same calibration, deviations between the image data used for marker-based and markerless tracking were minimal. Marker-based tracking was performed with the Simi Motion program; markerless tracking was performed with Simi Shape and with Crush, the latest software from Simi Reality Motion Systems. Comparing the markerless data with the marker-based data yielded an average root mean square error of 0.038 m for Simi Shape and 0.037 m for Crush. A direct comparison of the two markerless systems yielded a root mean square error of 0.019 m. Based on these data, conclusions could be drawn about the accuracy of the two markerless systems.
The kinematic tracking data thus fall within the range of high accuracy, which the literature defines as a deviation of less than 0.05 m.
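The accuracy comparison above reduces to a root-mean-square error between two sets of 3D joint trajectories. Below is a minimal sketch of one common RMSE definition, using synthetic NumPy arrays in place of the real capture data; the array shapes and the ~2 cm noise level are purely illustrative assumptions, not the study's data.

```python
import numpy as np

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    """Root mean square error between two (n_frames, n_joints, 3)
    trajectory arrays, in the same unit as the input (here metres)."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Hypothetical data: 100 frames, 20 joints, 3D coordinates.
rng = np.random.default_rng(0)
gold = rng.normal(size=(100, 20, 3))                          # marker-based reference
markerless = gold + rng.normal(scale=0.02, size=gold.shape)   # tracking with ~2 cm noise

print(round(rmse(gold, markerless), 3))
```

With zero-mean Gaussian noise of standard deviation 0.02 m, the RMSE comes out close to 0.02 m, i.e. in the same order of magnitude as the deviations reported above.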
In 2015, Google engineer Alexander Mordvintsev presented DeepDream as a technique to visualise the feature analysis capabilities of deep neural networks trained on image classification tasks. For a brief moment, the technique enjoyed some popularity among scientists, artists, and the general public because of its capability to create seemingly hallucinatory synthetic images. Soon after, however, research moved on to generative models capable of producing more diverse and more realistic synthetic images. At the same time, the means of interaction with these models shifted away from direct manipulation of algorithmic properties towards a predominance of high-level controls that obscure the model's internal workings. In this paper, we present research that returns to DeepDream to assess its suitability as a method for sound synthesis. We consider this research necessary for two reasons: it tackles a perceived lack of research on musical applications of DeepDream, and it addresses DeepDream's potential to combine data-driven and algorithmic approaches. Our research includes a study of how the model architecture, the choice of audio datasets, and the method of audio processing influence the acoustic characteristics of the synthesised sounds. We also look into the potential application of DeepDream in a live-performance setting. For this reason, the study limits itself to models consisting of small neural networks that process time-domain representations of audio; these models are resource-friendly enough to operate in real time. We hope that the results obtained so far highlight the attractiveness of DeepDream for musical approaches that combine algorithmic investigation with curiosity-driven and open-ended exploration.
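The core of DeepDream is gradient ascent on the input rather than on the network weights. A minimal sketch of that idea applied to a time-domain audio buffer follows, using PyTorch and a small untrained stand-in network; the architecture, channel choice, and step count are illustrative assumptions and do not reproduce the models or datasets used in the study.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small, untrained 1-D convnet standing in for an audio classifier.
net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64, stride=4), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=32, stride=4), nn.ReLU(),
)
for p in net.parameters():
    p.requires_grad_(False)               # only the audio buffer is optimised

audio = torch.randn(1, 1, 16384, requires_grad=True)   # time-domain buffer
opt = torch.optim.Adam([audio], lr=1e-2)

act0 = net(audio)[0, 7].mean().item()     # activation before "dreaming"
for _ in range(50):                       # gradient *ascent* on one feature map
    opt.zero_grad()
    (-net(audio)[0, 7].mean()).backward() # maximise mean activation of channel 7
    opt.step()
    with torch.no_grad():
        audio.clamp_(-1.0, 1.0)           # keep the signal in a sane range
act1 = net(audio)[0, 7].mean().item()
```

The same loop over a trained network, with the buffer played back each iteration, is what makes the technique attractive for real-time use: the cost per step is one forward and one backward pass through a small model.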
Progress in machine learning has produced advanced deep neural networks that are widely used in computer vision tasks and safety-critical applications. The automotive industry in particular has been transformed by the integration of deep learning techniques, which contribute to the realization of autonomous driving systems. Object detection is a crucial element of autonomous driving: it allows vehicles to perceive and identify their surroundings, detecting objects such as pedestrians, vehicles, road signs, and obstacles, and thereby contributes to vehicular safety and operational efficiency. It has evolved from a conceptual necessity into an integral part of advanced driver assistance systems (ADAS) and the foundation of autonomous driving technologies, enabling vehicles to make real-time decisions based on their understanding of the environment and improving both safety and the driving experience. However, the increasing reliance on deep neural networks for object detection has drawn attention to potential vulnerabilities in these systems. Recent research has highlighted their susceptibility to adversarial attacks: carefully designed inputs that exploit weaknesses in the underlying deep learning models. Successful attacks can cause misclassifications and critical errors, significantly compromising the reliability and safety of autonomous vehicles. In this study, we analyze adversarial attacks on state-of-the-art object detection models. We create adversarial examples to test the models' robustness and check whether the attacks transfer to a different object detection model intended for similar tasks. Additionally, we extensively evaluate recent defense mechanisms to determine how effectively they protect deep neural networks (DNNs) from adversarial attacks, and we provide a comprehensive overview of the most commonly used defense strategies, highlighting how they can be implemented practically in real-world situations.
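As an illustration of the class of attack under study, the Fast Gradient Sign Method (FGSM) perturbs an input by a small step in the direction of the loss gradient's sign. The following PyTorch sketch runs it against a toy stand-in classifier; the model, image size, and epsilon are illustrative assumptions and do not reproduce the detection models or attacks evaluated in the study.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 0.03) -> torch.Tensor:
    """FGSM: shift x by eps along the sign of the loss gradient, the
    direction that most increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid image range

# Toy stand-in classifier; a real evaluation would use a trained detector.
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)      # batch of images in [0, 1]
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(model, x, y)
```

The perturbation is bounded by eps per pixel, which is why such examples can remain visually indistinguishable from the original while still flipping the model's prediction.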
Ultra-low-power passive telemetry systems for industrial and biomedical applications have gained much popularity lately. Reducing the power consumption and size of the circuits poses critical challenges in ultra-low-power circuit design. Biotelemetry applications such as leakage detection in silicone breast implants require small, low-power electronics. In this doctoral thesis, the design, simulation, and measurement of a programmable mixed-signal System-on-Chip (SoC) called General Application Passive Sensor Integrated Circuit (GAPSIC) are presented. Owing to its low power consumption, GAPSIC is capable of completely passive operation. Such a batteryless passive system has lower maintenance complexity and is also free from battery-related health hazards. With a die area of 4.92 mm² and a maximum analog power consumption of 592 µW, GAPSIC has one of the best figures of merit compared to similar state-of-the-art SoCs. Regarding possible applications, GAPSIC can read out and digitally transmit the signals of resistive sensors for pressure or temperature measurements. Additionally, GAPSIC can measure electrocardiogram (ECG) signals and conductivity.
The design of GAPSIC complies with the International Organization for Standardization (ISO) 15693 / NFC (near field communication) Type 5 standard for radio frequency identification (RFID), corresponding to the operating frequency of 13.56 MHz. A passive transponder developed with GAPSIC comprises an external memory and very few other external components, such as an antenna and sensors. The passive tag antenna and reader antenna use inductive coupling for communication and energy transfer, which enables passive operation. A passive tag developed with GAPSIC can communicate with an NFC-compatible smart device or an ISO 15693 RFID reader. The external memory contains the programmable application-specific firmware.
As a mixed-signal SoC, GAPSIC includes both analog and digital circuitry. The analog block comprises a power management unit, an RFID/NFC communication unit, and a sensor readout unit. The digital block includes an integrated 32-bit microcontroller, developed by the Hochschule Offenburg ASIC design center, and digital peripherals. A 16-kilobyte random-access memory and a 16-kilobyte read-only memory constitute the internal memory of GAPSIC. For fabrication, a one-poly, six-metal 0.18 µm CMOS process is used.
The design of GAPSIC proceeded in two stages. In the first stage, a standalone RFID/NFC frontend chip with a power management unit, an RFID/NFC communication unit, a clock regenerator unit, and a field detector unit was designed. In the second stage, the remaining functional blocks were integrated with the blocks of the RFID/NFC frontend chip to form the final GAPSIC. To reduce power consumption, conventional low-power design techniques such as multiple power supplies and the operation of complementary metal-oxide-semiconductor (CMOS) transistors in the sub-threshold region were applied extensively, alongside further innovative circuit designs.
An overvoltage protection circuit, a power rectifier, a bandgap reference circuit, and two low-dropout (LDO) voltage regulators constitute the power management unit of GAPSIC. The overvoltage protection circuit uses a novel method in which three stacked transistor pairs shunt the excess voltage. In the power rectifier, four rectifier units are arranged in parallel, which is a unique approach. The four parallel rectifier units offer the best trade-off between voltage drop and required die area.
The communication unit is responsible for RFID/NFC communication and incorporates demodulation and load modulation circuitry. The demodulator circuit comprises an envelope detector, a high-pass filter, and a comparator. Following a new approach, the bandgap reference circuit itself acts as the load for the envelope detector circuit, which minimizes the circuit complexity and area. For the communication between the reader and the RFID/NFC tag, amplitude-shift keying (ASK) is used to modulate signals, where the smallest modulation index can be as low as 10%. A novel technique involving a comparator with a preset offset voltage effectively demodulates the ASK signal. With an effective die area of 0.7 mm² and a power consumption of 107 µW, the standalone RFID/NFC frontend chip has the best figures of merit compared to the state-of-the-art frontend chips reported in the relevant literature. A passive RFID/NFC tag developed with the standalone frontend chip, together with temperature and pressure sensors, demonstrates the full passive operational capability of the frontend chip. An NFC reader device using a custom-built Android-based application reads out the sensor data from the passive tag.
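The demodulation scheme described above (envelope detection followed by a comparator with a preset offset) can be sketched in a few lines; the waveform, carrier period, and offset value below are illustrative assumptions, not the chip's actual parameters:

```python
import math

def ask_demodulate(samples, carrier_period, offset=0.05):
    # Envelope detector: peak absolute value within each carrier period.
    envelope = [max(abs(s) for s in samples[i:i + carrier_period])
                for i in range(0, len(samples), carrier_period)]
    # Comparator with a preset offset: the offset biases the decision so
    # that even a shallow amplitude dip resolves cleanly.
    avg = sum(envelope) / len(envelope)
    return [1 if e + offset > avg else 0 for e in envelope]

def carrier(amplitude, n):
    return [amplitude * math.sin(2 * math.pi * k / 16) for k in range(n)]

# Shallow ASK: the carrier amplitude dips by 10 % during a "0" symbol.
signal = carrier(1.0, 64) + carrier(0.9, 64) + carrier(1.0, 64)
bits = ask_demodulate(signal, 16)
```

The preset offset shifts the comparator's decision point relative to the running average, which is what makes a low-modulation-index ASK signal recoverable without a precisely tuned threshold.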
The sensor readout circuit consists of a channel selector with two differential and four single-ended inputs, followed by a programmable-gain instrumentation amplifier. The entire sensor readout part remains deactivated when not in use. The internal memory stores the measured offset voltage of the instrumentation amplifier, and the firmware removes the offset voltage from the measured sensor signal. A 12-bit successive approximation register (SAR) type analog-to-digital converter (ADC) based on a charge redistribution architecture converts the measured sensor data to a digital value. The digital peripherals include a serial peripheral interface, four timers, RFID/NFC interfaces, sensor readout unit interfaces, and the 12-bit SAR logic.
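The successive approximation principle of the 12-bit SAR ADC, together with the firmware-side offset removal, can be illustrated as follows; the reference voltage and the stored offset code are hypothetical, and the charge-redistribution capacitor array is abstracted into an ideal trial-voltage comparison:

```python
def sar_adc(v_in, v_ref=1.8, bits=12):
    """Successive approximation: starting from the MSB, each bit is
    tentatively set and kept only if the resulting trial voltage does
    not exceed the input (reference value of 1.8 V is an assumption)."""
    code = 0
    for b in range(bits - 1, -1, -1):
        trial = code | (1 << b)
        if trial * v_ref / (1 << bits) <= v_in:
            code = trial
    return code

# Firmware-style offset correction: the instrumentation amplifier's
# stored offset code is subtracted from the raw conversion result
# (offset value hypothetical).
stored_offset_code = 3
corrected = sar_adc(0.9) - stored_offset_code
```

A mid-scale input of 0.9 V against a 1.8 V reference resolves to the half-scale code, after which only the stored offset needs to be subtracted in software rather than trimmed in hardware.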
Two sets of studies with custom-made NFC tag antennas for biomedical applications were conducted to ascertain their compatibility with GAPSIC. The first study involved link efficiency measurements of NFC tag antennas and an NFC reader antenna with porcine tissue. In a separate experiment, the effect of a ferrite core compared to an air core on the antenna coupling factor was investigated. With the ferrite core, the coupling factor increased fourfold.
Among the state-of-the-art SoCs published in recent scientific articles, GAPSIC is the only passive programmable SoC with a power management unit, an RFID/NFC communication interface, a sensor readout circuit, a 12-bit SAR ADC, and an integrated 32-bit microcontroller. This doctoral research includes the preliminary study of three passive RFID tags designed with discrete components for biomedical and industrial applications like measurements of temperature, pH, conductivity, and oxygen concentration, along with leakage detection in silicone breast implants. Besides its small size and low power consumption, GAPSIC is suitable for each of the biomedical and industrial applications mentioned above due to the integrated high-performance microcontroller, the robust programmable instrumentation amplifier, and the 12-bit analog-to-digital converter. Furthermore, the simulation and measurement data show that GAPSIC is well suited for the design of a passive tag to monitor arterial blood pressure in patients experiencing Peripheral Artery Disease (PAD), which is proposed in this doctoral thesis as an exemplary application of the developed system.
This study investigates the impact of global payroll outsourcing on organizational efficiency and cost reduction, based on an analysis of the diverse implications emerging from thirty-one (31) survey responses. The findings reveal multifaceted challenges and benefits associated with outsourcing global payroll processing.
The research also identifies the main benefits of global payroll outsourcing. Notably, there is a consensus on the reduction in time to process payroll, lower cost per payroll processed, and an improved payroll accuracy rate. Outsourcing streamlines processes, enhances operational efficiency, and contributes to faster, more accurate financial reporting.
Despite these benefits and challenges, statistical analysis reveals only weak correlations between outsourcing global payroll and cost reduction or improved efficiency across the parameters examined, indicating a lack of a significant relationship. Consequently, the results suggest no substantial correlation between global payroll outsourcing and enhanced efficiency or cost reduction based on this study's data.
Decarbonisation Strategies in Energy Systems Modelling: APV and e-tractors as Flexibility Assets
(2023)
This work presents an analysis of the impact of introducing Agrophotovoltaic technologies and electric tractors into Germany’s energy system. Agrophotovoltaics involves installing photovoltaic systems in agricultural areas, allowing for dual usage of the land for both energy generation and food production. Electric tractors, which are agricultural machinery powered by electric motors, can also function as energy storage units, providing flexibility to the grid. The analysis includes a sensitivity study to understand how the availability of agricultural land influences Agrophotovoltaic investments, followed by the examination of various scenarios that involve converting diesel tractors to electric tractors. These scenarios are based on the current CO2 emission reduction targets set by the German Government, aiming for a 65% reduction below 1990 levels by 2030 and achieving zero emissions by 2045. The results indicate that approximately 3% of available agricultural land is necessary to establish a viable energy mix in Germany. Furthermore, the expansion of electric tractors tends to reduce the overall system costs and enhances the energy-cost-efficiency of Agrophotovoltaic investments.
Introduction: Subjects with mild to moderate hearing loss today often receive hearing aids (HA) with open-fitting (OF). In OF, direct sound reaches the eardrums with minimal damping. Due to the required processing delay in digital HA, the amplified HA sound follows some milliseconds later. In bilateral HA provision, this process occurs symmetrically in both ears and is likely to have no or only a minor detrimental effect on binaural hearing. However, in cases of unilateral hearing loss provided with one HA, the delayed, amplified sound is present in only one ear. This processing alters the interaural timing differences in the resulting ear signals.
Methods: In the present study, an experiment with normal-hearing subjects was performed to investigate speech intelligibility in noise with direct and delayed sound, mimicking unilateral and bilateral HA provision with OF.
Results: The outcomes reveal that these delays affect speech reception thresholds (SRT) in the unilateral OF simulation when presenting speech and noise from different spatial directions. A significant decrease in the median SRT from –18.1 to –14.7 dB SNR is observed when typical HA processing delays are applied. On the other hand, SRT was independent of the delay between direct and delayed sound in the bilateral OF simulation.
Discussion: The significant effect underscores the need to develop rapid processing algorithms for unilateral HA provision.
CNN-based deep learning models for disease detection have become popular recently. We compared eight prominent deep learning models (DenseNet121, DenseNet169, DenseNet201, EfficientNet-b0, EfficientNet-lite4, GoogLeNet, MobileNet, and ResNet18) for their binary classification performance on a combined pulmonary chest X-ray dataset. Despite their widespread application to medical images across different fields, there remains a knowledge gap in determining their relative performance when applied to the same dataset, a gap this study aimed to address. The dataset combined the Shenzhen, China (CH) and Montgomery, USA (MC) collections. We trained each model for binary classification, calculated different performance parameters for the mentioned models, and compared them. All models were trained with the same training parameters to maintain a controlled comparison environment. At the end of the study, we found distinct performance differences among the models when applied to the pulmonary chest X-ray image dataset, with DenseNet169 reaching 89.38 percent and MobileNet 92.2 percent precision.
The COVID-19 pandemic, a unique and devastating respiratory disease outbreak, has affected global populations as the disease spreads rapidly. Recent deep learning breakthroughs may improve COVID-19 prediction and forecasting as tools for precise and fast detection; however, current methods are still being examined to achieve higher accuracy and precision. This study analyzed a collection of 8,055 CT image samples, 5,427 of which were COVID cases and 2,628 non-COVID. The 9,544 X-ray samples included 4,044 COVID patients and 5,500 non-COVID cases. The most accurate models are MobileNet V3 (97.872 percent), DenseNet201 (97.567 percent), and GoogLeNet Inception V1 (97.643 percent). The high accuracy indicates that these models can make many accurate predictions; precision and recall are also high for MobileNet V3 and DenseNet201. An extensive evaluation using accuracy, precision, and recall allows a comprehensive comparison, and this study improves the predictive models by combining loss optimization with scalable batch normalization. Our analysis shows that these tactics improve model performance and resilience, advancing COVID-19 prediction and detection, and demonstrates how deep learning can improve disease handling. The methods we suggest would help healthcare systems, policymakers, and researchers make educated decisions to reduce COVID-19 and other contagious diseases.
This paper presents the new Deep Reinforcement Learning (DRL) library RL-X and its application to the RoboCup Soccer Simulation 3D League and classic DRL benchmarks. RL-X provides a flexible and easy-to-extend codebase with self-contained single-directory algorithms. Through its fast JAX-based implementations, RL-X can reach up to 4.5x speedups compared to well-known frameworks like Stable-Baselines3.
The use of artificial intelligence continues to impact a broad variety of domains, application areas, and people. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results - all crucial for increasing humans' trust in these systems - are still largely missing. The purpose of this seminar is to understand how these components factor into a holistic view of trust. Further, this seminar seeks to identify design guidelines and best practices for building interactive visualization systems to calibrate trust.
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods on varying abstraction levels. XAI techniques on a global level explain model behavior and on a local level explain sample predictions. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals on time series classification. In particular, we adapt local XAI techniques (attributions) that are developed for traditional datasets (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels. We constantly collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases, verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, and (3) understand the reasons for misclassifications.
There is an ongoing debate about the use and scope of Clayton M. Christensen's idea of disruptive innovation, including the question of whether it is a management buzz phrase or a valuable theory. This discussion considers the general question of how innovation in the field of management theories and concepts finds its way to the different target groups. This conceptual paper combines the different concepts of the creation and dissemination of management trends in a basic framework based on a short review of models for the dissemination of management ideas. This framework allows an analysis of the character of new management ideas such as disruptive innovation. By measuring the impact of the theory on the academic sphere using bibliometric statistics of the number of academic publications on Google Scholar and Scopus and a meta-analysis of research papers, we show the significant influence of disruptive innovation beyond pure management fads.
Variable refrigerant flow (VRF) and variable air volume (VAV) systems are considered among the best heating, ventilation, and air conditioning (HVAC) systems thanks to their ability to provide cooling and heating in different thermal zones of the same building, as well as their ability to recover the heat rejected from spaces requiring cooling and reuse it to heat other spaces. Nevertheless, these systems are also among the most energy-consuming systems in a building, so it is crucial to size the system correctly according to the building's cooling and heating needs and the indoor temperature fluctuations. This study compares these two energy systems by conducting an energy model simulation of a real building under a semi-arid climate for cooling and heating periods. The developed building energy model (BEM) was validated and calibrated using measured and simulated indoor air temperature and energy consumption data. The study aims to evaluate the effect of these HVAC systems on energy consumption and the indoor thermal comfort of the building. The numerical model was based on the EnergyPlus simulation engine. The approach used in this paper has allowed us to reach significant quantitative energy savings along with a high level of indoor thermal comfort using the VRF system compared to the VAV system. The findings show that the VRF system provides 46.18% annual total heating energy savings and 6.14% annual cooling and ventilation energy savings compared to the VAV system.
Modern CNNs are learning the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question if this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting the notion of pointwise ($1\times 1$) convolutions as an operator to learn linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only allows us to reach high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight sharing mechanism, which allows sharing of a single weight tensor between all spatial convolution layers to massively reduce the number of weights.
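The linear-combination (LC) idea described above can be sketched minimally, assuming single-channel maps and a naive convolution loop (the paper's actual blocks operate on multi-channel feature tensors):

```python
import numpy as np

rng = np.random.default_rng(0)

def lc_block(x, frozen_filters, lc_weights):
    """Frozen random k x k spatial filters followed by a learned
    pointwise (1x1) combination; shapes simplified for illustration."""
    n, k, _ = frozen_filters.shape
    h, w = x.shape
    xp = np.pad(x, k // 2)
    # Responses of the frozen spatial filters (never updated in training).
    feats = np.empty((n, h, w))
    for f in range(n):
        for i in range(h):
            for j in range(w):
                feats[f, i, j] = np.sum(xp[i:i + k, j:j + k] * frozen_filters[f])
    # The 1x1 convolution is a per-pixel linear combination of the n maps;
    # lc_weights are the only trainable parameters in this block.
    return np.tensordot(lc_weights, feats, axes=1)

filters = rng.standard_normal((4, 3, 3))  # frozen, random
weights = rng.standard_normal(4)          # learned combination weights
x = rng.standard_normal((8, 8))
y = lc_block(x, filters, weights)
```

Only `weights` would be updated during training; the spatial `filters` stay frozen at their random initialization, which is the extreme case the paper investigates.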
Learning programming fundamentals is considered one of the most challenging and complex learning activities. Some authors have proposed visual programming language (VPL) approaches to address part of this inherent complexity [1]. A visual programming language lets users develop programs by combining program elements, such as loops, graphically rather than by specifying them textually. Visual expressions, i.e., spatial arrangements of text and graphic symbols, are used either as syntax elements or as secondary notation. VPLs are typically used for educational multimedia, video games, system development, and data warehousing/business analytics purposes. For example, Scratch, a platform developed at the Massachusetts Institute of Technology, is designed for kids and after-school programs.
Designing mobile software applications is considered one of the most challenging application domains due to the sensors built into a mobile device, such as GPS, camera, or Near Field Communication (NFC). Sensors enable the creation of context-aware mobile applications that can discover and take advantage of contextual information, such as the user's location, nearby people and objects, and the current user activity. As a consequence, context-aware mobile applications can sense clues about the situational environment, making mobile devices more intelligent, adaptive, and personalized. Such context-aware mobile applications seem to be motivating and attractive case studies, especially for programming beginners ("my own first app").
In this work, we introduce a use-case-centered approach as well as a clear separation of user interface design and sensor-based program development. We provide an in-depth discussion of a new VPL-based teaching method, a step-by-step development process that enables programming beginners to create context-aware mobile applications. Finally, we argue that addressing the challenges programming beginners face through our teaching approach could make programming teaching more motivating, with an additional impact on the final software quality and scalability.
The key contributions of our study are the following:
- An overview of existing attempts to use VPL approaches for mobile applications
- A use-case-centered teaching approach based on a clear separation of user interface design and sensor-based program development
- A teaching case study enabling beginners to create context-aware mobile applications step by step, based on the MIT App Inventor (a platform of the Massachusetts Institute of Technology)
- Open research challenges and perspectives for further development of our teaching approach
References:
[1] Idrees, M., Aslam, F. (2022). A Comprehensive Survey and Analysis of Diverse Visual Programming Languages. VFAST Transactions on Software Engineering, 10(2), pp. 47–60.
During pyrolysis, biomass is carbonised in the absence of oxygen to produce biochar, with heat and/or electricity as co-products, making pyrolysis one of the promising negative emission technologies for reaching climate goals worldwide. This paper presents a simplified representation of pyrolysis and analyses the impact of this technology on the energy system. Results show that the use of pyrolysis can achieve zero emissions at lower cost by changing the unit commitment of the power plants; e.g., conventional power plants are used differently, as their emissions are compensated by biochar. Additionally, the process of pyrolysis can enhance the flexibility of energy systems: there is a correlation between the electricity generated by pyrolysis and the installed hydrogen capacity, with hydrogen being used less when pyrolysis is present. The results indicate that pyrolysis, which is available on the market, integrates well into the energy system with a promising potential to sequester carbon.
3D Bin Picking with an innovative powder filled gripper and a torque controlled collaborative robot
(2023)
A new and innovative powder-filled gripper concept is introduced for picking parts out of a box without a camera system guiding the robot to the part. The gripper is a combination of an inflatable skin and a powder inside. In the unjammed condition, the powder is soft and can adjust to the geometry of the part to be handled. By applying a vacuum to the inflatable skin, the powder jams and solidifies into the shape the gripper was pressed into before the vacuum was applied. This physical principle is used to pick parts. The flexible skin of the gripper adjusts to all kinds of shapes and can therefore be used to realize 3D bin picking. With the help of a force-controlled robot, the gripper can be pushed with a consistent force onto varying positions depending on the filling level of the box. A KUKA LBR iiwa with joint torque sensors in all of its seven axes was used to achieve a constant contact pressure. This is the basic criterion for achieving a robust picking process.
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions and preferences regarding the suitable visual qualities of SARs in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. Our results indicate that Israeli and German designers share similar perceptions of visual qualities and most of the robotics roles. However, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
Recent advances in spiked shoe design, characterized by increased longitudinal stiffness, thicker midsole foams, and reconfigured geometry, are considered to improve sprint performance. However, no empirical data on the effects of advanced spike technology on maximal sprinting speed (MSS) have been published so far. Consequently, we assessed MSS via 'flying 30 m' sprints of 44 trained male (PR: 10.32 s - 12.08 s) and female (PR: 11.56 s - 14.18 s) athletes, wearing both traditional and advanced spikes in a randomized, repeated measures design. The results revealed a statistically significant increase in MSS of 1.21% on average when using advanced spike technology. Notably, 87% of participants showed improved MSS with the use of advanced spikes. A cluster analysis unveiled that athletes with higher MSS may benefit to a greater extent. However, individual responses varied widely, suggesting the influence of multiple factors that need detailed exploration. Therefore, coaches and athletes are advised to interpret the promising performance enhancements cautiously and to critically evaluate the appropriateness of advanced spike technology for their athletes.
High-tech running shoes and spikes ("super-footwear") are currently being debated in sports. There is direct evidence that distance running super shoes improve running economy; however, it is not well established to what extent world-class performances are affected over the range of track and road running events.
This study examined publicly available performance datasets of annual best track and road performances for evidence of potential systematic performance effects following the introduction of super footwear. The analysis was based on the 100 best performances per year for men and women in outdoor events from 2010 to 2022, provided by the world governing body of athletics (World Athletics).
We found evidence of progressing improvements in track and road running performances after the introduction of super distance running shoes in 2016 and super spike technology in 2019. This evidence is more pronounced for distances longer than 1500 m in women and longer than 5000 m in men. Women seem to benefit more from super footwear in distance running events than men.
While the observational study design limits causal inference, this study provides a database on potential systematic performance effects following the introduction of super shoes/spikes in track and road running events in world-class athletes. Further research is needed to examine the underlying mechanisms and, in particular, potential sex differences in the performance effects of super footwear.
We revisit the quantitative analysis of the ultrafast magnetoacoustic experiment in a freestanding nickel thin film by Kim and Bigot [J.-W. Kim and J.-Y. Bigot, Phys. Rev. B 95, 144422 (2017)] by applying our recently proposed approach of magnetic and acoustic eigenmode decomposition. We show that the application of our modeling to the analysis of time-resolved reflectivity measurements allows for the determination of amplitudes and lifetimes of standing perpendicular acoustic phonon resonances with unprecedented accuracy. The acoustic damping is found to scale as ∝ω² for frequencies up to 80 GHz, and the peak amplitudes reach 10⁻³. The experimentally measured magnetization dynamics for different orientations of an external magnetic field agrees well with numerical solutions of magnetoelastically driven magnon harmonic oscillators. Symmetry-based selection rules for magnon-phonon interactions predicted by our modeling approach allow for the unambiguous discrimination between spatially uniform and nonuniform modes, as confirmed by comparing the resonantly enhanced magnetoelastic dynamics simultaneously measured on opposite sides of the film. Moreover, the separation of timescales for (early) rising and (late) decreasing precession amplitudes provides access to magnetic (Gilbert) and acoustic damping parameters in a single measurement.
While most ultrafast time-resolved optical pump-probe experiments in magnetic materials reveal the spatially homogeneous magnetization dynamics of ferromagnetic resonance (FMR), here we explore the magneto-elastic generation of GHz-to-THz frequency spin waves (exchange magnons). Using analytical magnon oscillator equations, we apply time-domain and frequency-domain approaches to quantify the results of ultrafast time-resolved optical pump-probe experiments in free-standing ferromagnetic thin films. Simulations show excellent agreement with the experiment, provide acoustic and magnetic (Gilbert) damping constants and highlight the role of symmetry-based selection rules in phonon-magnon interactions. The analysis is extended to hybrid multilayer structures to explore the limits of resonant phonon-magnon interactions up to THz frequencies.
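Under standard assumptions, the magnon oscillator picture used here amounts to one damped, strain-driven harmonic oscillator per magnon mode (generic notation, not necessarily the authors' exact symbols):

```latex
\ddot{m}_n(t) + 2\Gamma_n\,\dot{m}_n(t) + \omega_n^2\, m_n(t) = \sigma_n\,\varepsilon_n(t)
```

Here Γ_n is the magnetic (Gilbert-type) damping rate of mode n, ω_n its frequency, and σ_n ε_n(t) the projection of the time-dependent magnetoelastic strain onto that mode. The symmetry-based selection rules decide for which modes this projection is nonzero, and resonant enhancement occurs when the driving strain carries spectral weight at ω_n.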
The technique of laser ultrasonics perfectly meets the need for noncontact, noninvasive, nondestructive mechanical probing of nanometer- to millimeter-size samples. However, this technique is limited to the excitation of low-amplitude strains, below the threshold for optical damage of the sample. In the context of strain engineering of materials, alternative optical techniques enabling the excitation of high-amplitude strains in a nondestructive optical regime are needed. We introduce here a nondestructive method for laser-shock wave generation based on additive superposition of multiple laser-excited strain waves. This technique enables strain generation up to mechanical failure of a sample at pump laser fluences below optical ablation or melting thresholds. We demonstrate the ability to generate nonlinear surface acoustic waves (SAWs) in Nb-SrTiO3 substrates, with associated strains in the percent range and pressures up to 3 GPa at 1 kHz repetition rate and close to 10 GPa for several hundred shocks. This study paves the way for the investigation of a host of high-strain SAW-induced phenomena, including phase transitions in conventional and quantum materials, plasticity and a myriad of material failure modes, chemistry and other effects in bulk samples, thin layers, and two-dimensional materials.
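The additive-superposition principle behind this method can be sketched numerically; the pulse shape and timings below are illustrative, not measured values:

```python
def superpose(pulse, arrivals, n):
    """Additive superposition of identical low-amplitude strain pulses:
    each pump shot adds one delayed copy of the same waveform."""
    wave = [0.0] * n
    for start in arrivals:
        for t, a in enumerate(pulse):
            if start + t < n:
                wave[start + t] += a
    return wave

pulse = [0.0, 0.5, 1.0, 0.5, 0.0]  # one sub-damage-threshold strain pulse

# Pump pulses timed so the copies arrive in phase: peak strains add up.
in_phase = superpose(pulse, [4, 4, 4, 4], 16)
# Mistimed pulses interleave instead of adding at the peak.
mistimed = superpose(pulse, [0, 4, 8, 12], 16)
```

Only in-phase copies build amplitude beyond the single-shot limit, which is why the pump timing is the key to reaching percent-range strains while each individual pulse stays below the optical damage threshold.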
The utilisation of artificial intelligence (AI) is progressively emerging as a significant mechanism for innovation in human resource management (HRM), with the capacity to transform employee performance across numerous responsibilities. Despite rapid AI development, there remains a dearth of comprehensive exploration into the potential opportunities it presents for enhancing workplace performance among employees. To bridge this gap in knowledge, the present work carried out a survey with 300 participants and utilises a fuzzy set-theoretic method that is grounded in the conceptualisation of AI, KS, and HRM. The findings of our study indicate that the exclusive adoption of AI technologies does not adequately enhance HRM engagements. In contrast, the integration of AI and KS offers a more viable HRM approach for achieving optimal performance in a dynamic digital society. This approach has the potential to enhance employees' proficiency in executing their responsibilities and to cultivate a culture of creativity inside the firm.
Purpose
Although start-ups have gained increasing scholarly attention, we lack sufficient understanding of their entrepreneurial strategic posture (ESP) in emerging economies. The purpose of this study is to examine the processes of ESP of new technology venture start-ups (NTVs) in an emerging market context.
Design/methodology/approach
In line with grounded theory guidelines and the inductive research traditions, the authors adopted a qualitative approach involving 42 in-depth semi-structured interviews with Ghanaian NTV entrepreneurs to gain a comprehensive analysis at the micro-level on the entrepreneurs' strategic posturing. A systematic procedure for data analysis was adopted.
Findings
From the authors' analysis of Ghanaian NTVs, the authors derived a three-stage model to elucidate the nature and process of ESP: Phase I, spotting and exploiting market opportunities; Phase II, identifying initial advantages; and Phase III, ascertaining and responding to change.
Originality/value
The study contributes to advancing research on ESP by explicating the process through which informal ties and networks are utilised by NTVs and their founders to overcome extreme resource constraints and information vacuums in contexts of institutional voids. The authors depart from past studies in demonstrating how such ties can be harnessed in spotting and exploiting market opportunities by NTVs. On this basis, the paper makes original contributions to ESP theory and practice.
Purpose
Although recent literature has examined diverse measures adopted by SMEs to navigate the COVID-19 turbulence, there is a shortage of evidence on how crisis-time strategy creation behaviour and digitalization activities increase (1) sales and (2) cash flow. Thus, predicated on a novel strategy creation perspective, this inquiry aims to investigate the crisis behaviour, sales and cash flow performance of 528 SMEs in Morocco.
Design/methodology/approach
Novel links between (1) aggregate wage cuts, (2) variable operating hours, (3) deferred payment to suppliers, (4) deferred payment to tax authorities and (5) sales performance are developed and tested. A further link between sales performance and cash flow is also examined and the analysis is conducted using a non-linear structural equation modelling technique.
Findings
While there is a significant association between strategy creation behaviours and sales performance, only variable operating hours have a positive effect. Also, sales performance increases cash flow and this relationship is substantially strengthened by e-commerce digitalization and innovation.
Originality/value
Theoretically, to the best of the authors’ knowledge, this is one of the first inquiries to espouse the strategy creation view to explain SMEs' crisis-time behaviour and digitalization. For practical purposes, to supplement Moroccan SMEs' propensity to seek tax deferrals, it is argued that debt and equity support measures are also needed to boost sales performance and cash flow.
In the past ten years, applications of artificial neural networks have changed dramatically, outperforming earlier predictions in domains such as robotics, computer vision, natural language processing, healthcare, and finance. Future research and advancements in CNN architectures, algorithms, and applications are expected to further transform various industries and daily life. Our task is to find current products that resemble a given product image and description. Deep learning-based automatic product identification is a multi-step process that starts with data collection and continues with model training, deployment, and continuous improvement. The quality and variety of the dataset, the chosen architecture, and ongoing testing and refinement all affect the model's effectiveness. We achieved 81.47% training accuracy and 72.43% validation accuracy for our combined text and image classification model. Additionally, we discuss the outcomes on a second dataset and several methods for building a suitable model.
As the population grows, so does the amount of biowaste. As demand for energy grows, biogas is a promising solution to the problem. Lignocellulosic materials are difficult to degrade owing to polymers such as cellulose, lignin and hemicellulose. Several pretreatment methods are available to enhance the degradability of such materials, including enzymatic pretreatment. Two parameters chiefly influence the results of enzymatic pretreatment: the enzyme-to-solid ratio and the solid-to-liquid ratio. During this project, experiments were conducted to determine the optimal conditions for those two factors. It was discovered that a solid-to-liquid ratio of 31 g of buffer per 1 g of organic dry matter, combined with 34 mg of protein per 1 g of organic dry matter, produced the highest reducing-sugar release in flasks. Additionally, another experiment was carried out to investigate the impact of enzymatic pretreatment on biogas production using artificial biowaste as a substrate. Artificial biowaste produced 577.9 NL/kg oDM, while enzymatically pretreated biowaste produced 639.3 NL/kg oDM, a 10.6% increase in cumulative biogas production compared to the untreated substrate. By the conclusion of the investigation, specific cumulative dry methane yields of 364.7 NL/kg oDM and 426.3 NL/kg oDM were obtained from artificial biowaste without and with enzymatic pretreatment, respectively, corresponding to a 16.9% increase in methane production. In addition, in the reactors with enzymatically pretreated substrate, the kinetic constant was reduced by more than half while the maximum biogas volume increased, compared with the reactors without enzymatic pretreatment.
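Cumulative biogas yields from batch tests like these are often summarized by fitting a first-order kinetic model, from which a rate constant and an ultimate yield are read off. The following is a minimal sketch of such a fit; the model form is standard, but the time series and parameter values below are illustrative assumptions, not data from the study.

```python
# Hedged sketch: fitting a first-order kinetic model to cumulative biogas
# yield data, as commonly done for batch digestion tests. All numbers here
# are synthetic and illustrative, not values from the study.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, y_max, k):
    """Cumulative yield y(t) = y_max * (1 - exp(-k * t))."""
    return y_max * (1.0 - np.exp(-k * t))

# Synthetic measurements: days vs. NL biogas per kg oDM (assumed values).
t = np.array([0, 2, 5, 10, 15, 20, 25, 30], dtype=float)
rng = np.random.default_rng(0)
y = first_order(t, 640.0, 0.15) + rng.normal(0, 5, t.size)  # noisy "data"

popt, _ = curve_fit(first_order, t, y, p0=[500.0, 0.1])
y_max_fit, k_fit = popt
print(f"fitted y_max = {y_max_fit:.1f} NL/kg oDM, k = {k_fit:.3f} 1/d")
```

A halved kinetic constant, as reported above, would show up directly as a smaller fitted `k` for the untreated reactors.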
Polyarticulated active prostheses constitute a promising solution for upper limb amputees. The bottleneck for their adoption, though, is the lack of intuitive control. In this context, machine learning algorithms based on pattern recognition from electromyographic (EMG) signals represent a great opportunity for naturally operating prosthetic devices, but their performance is strongly affected by the selection of input features. In this study, we investigated different combinations of 13 EMG-derived features obtained from EMG signals of healthy individuals performing upper limb movements and tested their performance for movement classification using an artificial neural network. We found that the set of input features can be reduced by more than 50% without any loss in accuracy, while also reducing the computing time required to train the classifier. Our results indicate that input features must be properly selected in order to optimize prosthetic control.
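Feature sets like the one studied here typically build on classic time-domain EMG descriptors. As a minimal sketch, the function below computes four such features often used as classifier inputs (mean absolute value, RMS, zero crossings, waveform length); this is not the study's exact 13-feature set, just common representatives from the EMG literature.

```python
# Hedged sketch: four classic time-domain EMG features commonly fed to
# movement classifiers. The study's exact feature set is not reproduced.
import numpy as np

def emg_features(x):
    """Return a small feature vector for one EMG channel window."""
    mav = np.mean(np.abs(x))               # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))         # root mean square
    zc = np.sum(np.diff(np.sign(x)) != 0)  # zero-crossing count
    wl = np.sum(np.abs(np.diff(x)))        # waveform length
    return np.array([mav, rms, zc, wl])

# Example: features of a clean 10 Hz sine sampled at 200 Hz (synthetic).
t = np.linspace(0, 1, 200, endpoint=False)
x = np.sin(2 * np.pi * 10 * t + 0.1)
feats = emg_features(x)
print(feats)
```

Feature selection then amounts to comparing classifier accuracy across subsets of such vectors, dropping features that do not contribute.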
The main focus of this chapter is the theoretical and instrumental processes that underpin densitometric methods widely used in thin-layer chromatography (TLC). Densitometric methods include UV–vis, luminescence, and fluorescence optical measurements as well as infrared and Raman spectroscopic measurements. The chapter is divided into two general parts: a theoretical part and a practical part. Systems for direct radioactivity measurements and the combination of TLC with mass spectrometry are also discussed. All these systems allow measuring an intensity distribution directly on a TLC plate. We call this "in situ detection" because no analyte is removed from the plate.
Recently, photovoltaic (PV) systems with energy storage systems (ESS) have been widely adopted in buildings to meet growing power demands and earn financial benefits. The overall energy cost can be optimized by combining a well-sized hybrid PV/ESS system with an efficient energy management system (EMS). Generally, the EMS is implemented within the overall functions of the Building Automation System (BAS). However, due to its limited computing resources, the BAS cannot handle complex algorithms that aim to optimize energy use in real time under different operating conditions. Furthermore, islanding the building's local network to maximize the PV energy share represents a challenging task due to the potential technical risks. In this context, this article presents an improved approach based on upgrading the BAS data analytics capability by means of edge computing technology. The edge device communicates with the BAS low-level controller using a serial communication protocol. Taking advantage of the high computing ability of the edge device, an optimization-based EMS for the PV/ESS hybrid system is implemented. Different testing scenarios have been carried out on a real prototype under different weather conditions, and the results show the implementation feasibility and technical performance of such an advanced EMS for the management of building energy resources. It has also been proven feasible and advantageous to operate the local energy network in island mode while ensuring system safety. Additionally, an estimated energy saving improvement of 6.23% has been achieved using the optimization-based EMS compared to the classical rule-based EMS, with better fulfillment of the ESS constraints.
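An optimization-based EMS of the kind described above is often posed as a scheduling problem over a short horizon. The toy linear program below sketches this idea with scipy: it minimizes the grid energy cost of a PV/battery building subject to storage limits. All numbers (tariff, load, PV profile, battery parameters) are illustrative assumptions, and losses/efficiencies are ignored for brevity; the article's actual EMS formulation may differ.

```python
# Hedged sketch: toy optimization-based EMS dispatch for a PV/battery
# system as a linear program. All parameter values are assumptions.
import numpy as np
from scipy.optimize import linprog

T = 6                                                    # horizon (hours)
price = np.array([0.30, 0.30, 0.10, 0.10, 0.35, 0.35])   # EUR/kWh tariff
load = np.array([2.0, 2.0, 1.5, 1.5, 3.0, 3.0])          # kWh demand
pv = np.array([0.0, 1.0, 3.0, 3.0, 0.5, 0.0])            # kWh PV yield
e0, e_max, p_max = 2.0, 5.0, 2.0     # initial/max stored energy, power limit

# Decision variables x = [g_1..g_T, b_1..b_T]:
# g_t = grid import (>= 0); b_t = battery discharge (negative = charging).
c = np.concatenate([price, np.zeros(T)])  # only grid energy costs money

A_ub, b_ub = [], []
for t in range(T):
    # Supply balance: g_t + b_t >= load_t - pv_t (PV surplus may charge).
    row = np.zeros(2 * T); row[t] = -1.0; row[T + t] = -1.0
    A_ub.append(row); b_ub.append(-(load[t] - pv[t]))
    # State of charge within [0, e_max]: e0 - cumsum(b) bounded both ways.
    row = np.zeros(2 * T); row[T:T + t + 1] = 1.0
    A_ub.append(row); b_ub.append(e0)                    # SoC >= 0
    row = np.zeros(2 * T); row[T:T + t + 1] = -1.0
    A_ub.append(row); b_ub.append(e_max - e0)            # SoC <= e_max

bounds = [(0, None)] * T + [(-p_max, p_max)] * T
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
grid, battery = res.x[:T], res.x[T:]
print("grid import:", np.round(grid, 2), "cost:", round(res.fun, 2))
```

A rule-based EMS, by contrast, would fix the battery behaviour with if/else thresholds; the LP instead shifts charging into the cheap mid-day hours automatically.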
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent, for example transformer-based, models by not only increasing model depth and width but also the kernel size. This results in large amounts of learnable model parameters that need to be handled during training. While following the convolutional paradigm with the according spatial inductive bias, we question the significance of learned convolution filters. In fact, our findings demonstrate that many contemporary CNN architectures can achieve high test accuracies without ever updating randomly initialized (spatial) convolution filters. Instead, simple linear combinations (implemented through efficient 1×1 convolutions) suffice to effectively recombine even random filters into expressive network operators. Furthermore, these combinations of random filters can implicitly regularize the resulting operations, mitigating overfitting and enhancing overall performance and robustness. Conversely, retaining the ability to learn filter updates can impair network performance. Lastly, although we only observe relatively small gains from learning 3×3 convolutions, the learning gains increase proportionally with kernel size, owing to the non-idealities of the independent and identically distributed (i.i.d.) nature of default initialization techniques.
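The mechanism described above can be sketched in a few lines: a bank of frozen random spatial filters produces feature maps, and a learnable 1×1 convolution, which is just a per-pixel channel-mixing matrix, recombines them. The pure-numpy forward pass below is a minimal illustration with assumed shapes, not the paper's training setup.

```python
# Hedged sketch: frozen random 3x3 filters followed by a learnable 1x1
# convolution (a per-pixel linear recombination of filter responses).
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))

# Bank of frozen, randomly initialized spatial filters (never trained).
n_filters = 8
random_filters = rng.standard_normal((n_filters, 3, 3))

# Filter responses: one feature map per random filter.
responses = np.stack(
    [convolve2d(image, f, mode="same") for f in random_filters]
)  # shape (8, 32, 32)

# Learnable 1x1 convolution = channel-mixing matrix applied at each pixel;
# these would be the only weights updated during training.
n_out = 4
mix = rng.standard_normal((n_out, n_filters))
out = np.einsum("oc,chw->ohw", mix, responses)
print(out.shape)  # -> (4, 32, 32)
```

Because the spatial filters stay fixed, only the small `mix` matrix is learned, which is exactly why the parameter count stays low even for large kernel banks.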
We have developed a methodology for the systematic generation of a large image dataset of macerated wood references, which we used to generate image data for nine hardwood genera. This is the basis for a substantial approach to automate, for the first time, the identification of hardwood species in microscopic images of fibrous materials by deep learning. Our methodology includes a flexible pipeline for easy annotation of vessel elements. We compare the performance of different neural network architectures and hyperparameters. Our proposed method performs similarly well to human experts. In the future, this will improve controls on global wood fiber product flows to protect forests.
State-of-the-art models for pixel-wise prediction tasks such as image restoration, image segmentation, or disparity estimation involve several stages of data resampling, in which the resolution of feature maps is first reduced to aggregate information and then sequentially increased to generate a high-resolution output. Several previous works have investigated the effect of artifacts introduced during downsampling, and diverse remedies have been proposed that improve prediction stability and even robustness for image classification. However, the equally relevant artifacts that arise during upsampling have been discussed less. This is particularly relevant because upsampling and downsampling face fundamentally different challenges: while aliases and artifacts during downsampling can be reduced by blurring feature maps, the emergence of fine details is crucial during upsampling, so blurring is not an option and dedicated operations need to be considered. In this work, we are the first to explore the relevance of context during upsampling by employing convolutional upsampling operations with increasing kernel size while keeping the encoder unchanged. We find that increased kernel sizes can generally improve the prediction stability in tasks such as image restoration or image segmentation, while a block that combines small kernels for fine details with large kernels for artifact removal and increased context yields the best results.
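The kernel-size effect discussed above can be illustrated with the simplest form of convolutional upsampling: zero insertion followed by a smoothing convolution, where the kernel size controls how much context each output pixel aggregates. The sketch below uses normalized box kernels as a stand-in; it is illustrative only and not the learned upsampling blocks from the paper.

```python
# Hedged sketch: 2x upsampling as zero insertion plus a spatial convolution.
# The kernel size sets how much context each output pixel sees; box
# kernels are used here purely for illustration.
import numpy as np
from scipy.signal import convolve2d

def upsample(x, kernel_size):
    """Zero-insert 2x, then convolve with a normalized box kernel."""
    up = np.zeros((2 * x.shape[0], 2 * x.shape[1]))
    up[::2, ::2] = x                 # zero insertion (3 of 4 pixels are 0)
    k = np.full((kernel_size, kernel_size), 4.0 / kernel_size ** 2)
    return convolve2d(up, k, mode="same")  # factor 4 compensates the zeros

x = np.arange(16.0).reshape(4, 4)
small = upsample(x, kernel_size=2)   # local, nearest-neighbour-like output
large = upsample(x, kernel_size=6)   # smoother output with more context
print(small.shape, large.shape)      # (8, 8) (8, 8)
```

In a trained network the kernels are learned rather than fixed boxes, but the trade-off is the same: small kernels preserve fine detail, large kernels suppress checkerboard-style upsampling artifacts.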
Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling
(2023)
Convolutional neural networks encode images through a sequence of convolutions, normalizations and non-linearities as well as downsampling operations into potentially strong semantic embeddings. Yet, previous work showed that even slight mistakes during sampling, leading to aliasing, can be directly attributed to a network's lack of robustness. To address such issues and facilitate simpler and faster adversarial training, [12] recently proposed FLC pooling, a method for provably alias-free downsampling - in theory. In this work, we conduct a further analysis through the lens of signal processing and find that such current pooling methods, which address aliasing in the frequency domain, are still prone to spectral leakage artifacts. Hence, we propose aliasing and spectral artifact-free pooling, short ASAP. While introducing only a few modifications to FLC pooling, networks using ASAP as a downsampling method exhibit higher native robustness against common corruptions, a property that FLC pooling was missing. ASAP also increases native robustness against adversarial attacks on high and low resolution data while maintaining similar clean accuracy or even outperforming the baseline.
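The core of frequency-domain downsampling in the spirit of FLC pooling can be sketched compactly: transform to the frequency domain, keep only the low-frequency portion, and transform back at half resolution. The optional window below is only a generic illustration of how spectral leakage can be damped; the exact ASAP construction is in the paper.

```python
# Hedged sketch of FLC-style downsampling: crop low frequencies, then
# invert. The Hamming window is a generic leakage-damping illustration,
# not the exact ASAP method.
import numpy as np

def flc_downsample(x, window=False):
    """Halve resolution of a 2D signal by cropping its low frequencies."""
    h, w = x.shape
    f = np.fft.fftshift(np.fft.fft2(x))          # DC moved to the center
    ch, cw = h // 2, w // 2
    crop = f[ch - h // 4: ch + h // 4, cw - w // 4: cw + w // 4]
    if window:  # damp the edges of the retained spectrum
        win = np.outer(np.hamming(crop.shape[0]), np.hamming(crop.shape[1]))
        crop = crop * win
    # Divide by 4 so the mean intensity (DC level) is preserved.
    return np.real(np.fft.ifft2(np.fft.ifftshift(crop))) / 4.0

x = np.random.default_rng(0).standard_normal((16, 16))
y = flc_downsample(x)
print(y.shape)  # -> (8, 8)
```

Because frequencies above the new Nyquist limit are discarded exactly, this downsampling is alias-free by construction; the leakage issue the paper analyzes comes from the hard cut at the crop boundary.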
Motivated by the recent trend towards the usage of larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be. To facilitate such a study, several challenges need to be addressed, most importantly: (i) we need to provide an effective way for models to learn large filters (potentially as large as the input data) without increasing their memory consumption during training or inference, (ii) the study of filter sizes has to be decoupled from other effects such as the network width or number of learnable parameters, and (iii) the employed convolution operation should be a plug-and-play module that can replace any conventional convolution in a Convolutional Neural Network (CNN) and allow for an efficient implementation in current frameworks. To enable such models, we propose to learn not spatial but frequency representations of filter weights as neural implicit functions, such that even infinitely large filters can be parameterized by only a few learnable weights. The resulting neural implicit frequency CNNs are the first models to achieve results on par with the state-of-the-art on large image classification benchmarks while executing convolutions solely in the frequency domain, and they can be employed within any CNN architecture. They allow us to provide an extensive analysis of the learned receptive fields. Interestingly, our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters practically translate into well-localized and relatively small convolution kernels in the spatial domain.
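The mechanism can be sketched as follows: a tiny function of frequency coordinates produces a filter's frequency response, which is then applied as a pointwise product in the frequency domain (convolution theorem). In the sketch below the "implicit function" is a two-layer perceptron with random fixed weights, purely to show the parameter-count argument; the paper's actual architecture and training are not reproduced.

```python
# Hedged sketch: a filter as an implicit function over frequency
# coordinates, applied via the convolution theorem. The tiny "MLP" here
# has random fixed weights and only illustrates the mechanism.
import numpy as np

rng = np.random.default_rng(0)
H = W = 32

# Frequency-coordinate grid (shifted so DC sits at the center).
u = np.fft.fftshift(np.fft.fftfreq(H))
coords = np.stack(np.meshgrid(u, u, indexing="ij"), axis=-1).reshape(-1, 2)

# Implicit function 2 -> 16 -> 1: a few dozen weights instead of H*W.
w1, b1 = rng.standard_normal((2, 16)), rng.standard_normal(16)
w2 = rng.standard_normal((16, 1))
freq_response = (np.tanh(coords @ w1 + b1) @ w2).reshape(H, W)

# Convolution as multiplication in the frequency domain: the effective
# spatial kernel is as large as the input itself.
x = rng.standard_normal((H, W))
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.ifftshift(freq_response)))
print(y.shape)  # -> (32, 32)
```

Note that the filter's spatial support is unconstrained, which is exactly what lets the analysis above ask how large the learned receptive fields turn out to be.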
Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards. However, designing effective out-of-distribution tests that encompass all possible scenarios while preserving accurate label information is a challenging task. Existing methodologies often entail a compromise between the variety and the constraint levels of attacks, and sometimes sacrifice both. As a first step towards a more holistic robustness evaluation of image classification models, we introduce an attack method based on image solarization that is conceptually straightforward yet, independent of its intensity, avoids jeopardizing the global structure of natural images. Through comprehensive evaluations of multiple ImageNet models, we demonstrate the attack's capacity to degrade accuracy significantly, provided it is not integrated into the training augmentations. Interestingly, even then, no full immunity to accuracy deterioration is achieved. In other settings, the attack can often be simplified into a black-box attack with model-independent parameters. Defenses against other corruptions do not consistently extend to be effective against our specific attack.
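Solarization itself is a classic photographic transform: every pixel above a threshold is inverted, which scrambles intensities while leaving edges and global structure intact. The sketch below shows just the transform on a uint8 image; the attack in the paper additionally searches over thresholds, which is not reproduced here.

```python
# Hedged sketch: classic image solarization, inverting all pixel values
# at or above a threshold. The paper's attack searches over thresholds;
# only the basic transform is shown here.
import numpy as np

def solarize(img, threshold=128):
    """Invert every pixel >= `threshold` in a uint8 image."""
    img = np.asarray(img, dtype=np.uint8)
    return np.where(img >= threshold, 255 - img, img).astype(np.uint8)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
out = solarize(img, threshold=128)
print(out.min(), out.max())  # all values now fold below the threshold
```

Because the transform is deterministic and parameterized by a single threshold, it lends itself to the black-box, model-independent use mentioned above.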
Project website: https://github.com/paulgavrikov/adversarial_solarization
Entity Matching (EM) defines the task of learning to group objects by transferring semantic concepts from example groups (=entities) to unseen data. Despite the general availability of image data in the context of many EM problems, most currently available EM algorithms rely solely on (textual) meta data. In this paper, we introduce the first publicly available large-scale dataset for "visual entity matching", based on a production-level use case in the retail domain. Using scanned advertisement leaflets, collected over several years from different European retailers, we provide a total of ~786k manually annotated, high resolution product images containing ~18k different individual retail products which are grouped into ~3k entities. The annotation of these product entities is based on a price comparison task, where each entity forms an equivalence class of comparable products. Following a first baseline evaluation, we show that the proposed "visual entity matching" constitutes a novel learning problem which cannot be sufficiently solved using standard image-based classification and retrieval algorithms. Instead, novel approaches that can transfer example-based visual equivalence classes to new data are needed to address the proposed problem. The aim of this paper is to provide a benchmark for such algorithms.
Information about the dataset, evaluation code and download instructions are provided under https://www.retail-786k.org/.
For the treatment of bone defects, biodegradable, compression-resistant biomaterials are needed as replacements that degrade as the bone regenerates. The problem with existing materials has been either their insufficient mechanical strength or the excessive differences in their elastic modulus, leading to stress shielding and eventual failure. In this study, the compressive strength of 3D-printed CPC scaffolds with more than 12 layers was compared with sintered β-TCP ceramics. It was assumed that as the number of layers increased, the mechanical strength of 3D-printed scaffolds would approach the value of sintered ceramics. In addition, the influence of the needle inner diameter on the mechanical strength was investigated. Circular scaffolds with 20, 25, 30, and 45 layers were 3D printed using a 3D bioplotter, solidified in a water-saturated atmosphere for 3 days, and then tested for compressive strength together with a β-TCP sintered ceramic using a Zwick universal testing machine. The 3D-printed scaffolds had a compressive strength of 41.56 ± 7.12 MPa, which was significantly higher than that of the sintered ceramic (24.16 ± 4.44 MPa). The 3D-printed scaffolds with round geometry reached or exceeded the upper limit of the compressive strength of cancellous bone toward substantia compacta. In addition, CPC scaffolds exhibited more bone-like compressibility than the comparable β-TCP sintered ceramic, demonstrating that the mechanical properties of CPC scaffolds are more similar to bone than those of sintered β-TCP ceramics.
Differentiating between human and non-human objects can increase the efficiency of human-robot collaborative applications. This paper proposes using convolutional neural networks to classify objects in robotic applications. The body temperature of human beings is used to identify humans and to estimate their distance to the sensor. Using image classification with convolutional neural networks, it is possible to detect humans in the surroundings of a robot at distances of up to five meters with low-cost, low-weight thermal cameras. Using transfer learning, we trained GoogLeNet and MobileNetV2, which achieved accuracies of 99.48% and 99.06%, respectively.
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
(2023)
Diffusion models have recently been applied successfully to the visual synthesis of strikingly realistic-looking images. This raises strong concerns about their potential for malicious purposes. In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality (multiLID) method, originally developed in the context of detecting adversarial examples, for the automatic detection of synthetic images and the identification of the corresponding generator networks. In contrast to many existing detection approaches, which often only work for GAN-generated images, the proposed method provides close to perfect detection results in many realistic use cases. Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach exhibits superiority in diffusion detection and model identification. Since the empirical evaluations in recent publications on the detection of generated images are often focused mainly on the "LSUN-Bedroom" dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes. The code for our experiments is provided at https://github.com/deepfake-study/deepfake-multiLID.
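The building block behind multiLID is the maximum-likelihood Local Intrinsic Dimensionality estimate computed from k-nearest-neighbour distances. In the paper these features are computed on network activations; the sketch below only illustrates the bare estimator on synthetic 2-D data, where the estimate should land near the true dimension of 2.

```python
# Hedged sketch: the maximum-likelihood LID estimate from k-NN radii,
# the building block behind multiLID. Shown here on synthetic 2-D data,
# not on network activations as in the paper.
import numpy as np

def lid_mle(query, data, k=50):
    """LID at `query`: -1 / mean(log(r_i / r_k)) over the k-NN radii."""
    dists = np.linalg.norm(data - query, axis=1)
    r = np.sort(dists)[:k]   # k smallest distances to the reference data
    r = r[r > 0]             # guard against exact duplicates of the query
    return -1.0 / np.mean(np.log(r / r[-1]))

rng = np.random.default_rng(0)
data = rng.uniform(-1, 1, size=(5000, 2))  # points on a 2-D manifold
query = np.zeros(2)
print(round(lid_mle(query, data), 2))      # estimate should be near 2
```

The detection idea is that real and generated images occupy regions of different local intrinsic dimensionality in feature space, so vectors of such estimates at multiple depths separate the two classes.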