000 General Works, Computer Science, Information Science
Most machine learning methods require careful selection of hyper-parameters in order to train a high-performing model with good generalization abilities. Hence, several automatic selection algorithms have been introduced to replace the tedious manual trial-and-error tuning of these parameters. Due to its very high sample efficiency, Bayesian optimization over a Gaussian-process model of the parameter space has become the method of choice. Unfortunately, this approach suffers from cubic computational complexity due to the underlying Cholesky factorization, which makes it very hard to scale beyond a small number of sampling steps. In this paper, we present a novel, highly accurate approximation of the underlying Gaussian process. Reducing its computational complexity from cubic to quadratic allows efficient strong scaling of Bayesian optimization while outperforming the previous approach in optimization accuracy. First experiments show speedups of a factor of 162 on a single node and a further speedup by a factor of 5 in a parallel environment.
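The exact Gaussian-process step the abstract refers to can be sketched as follows. This is a generic, illustrative Bayesian-optimization loop over a toy one-dimensional objective; the RBF kernel, expected-improvement acquisition, and all parameter values are our assumptions, and the paper's quadratic-cost approximation is not shown. The Cholesky factorization marked below is the cubic-cost step the paper targets.

```python
import numpy as np
from math import erf

def rbf(a, b, ls=0.3):
    # Squared-exponential (RBF) kernel between two 1-D point sets
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)          # O(n^3): the bottleneck discussed above
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    k_star = rbf(x_train, x_query)
    mu = k_star.T @ alpha
    v = np.linalg.solve(L, k_star)
    var = rbf(x_query, x_query).diagonal() - np.sum(v ** 2, axis=0)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # Standard EI acquisition for minimization (erf avoids a scipy dependency)
    sigma = np.sqrt(var)
    z = (best - mu) / sigma
    cdf = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (best - mu) * cdf + sigma * pdf

def objective(x):
    # Toy stand-in for a hyper-parameter loss, minimum at x = 0.6
    return (x - 0.6) ** 2

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 3)
y_train = objective(x_train)
grid = np.linspace(0, 1, 200)
for _ in range(10):
    mu, var = gp_posterior(x_train, y_train, grid)
    ei = expected_improvement(mu, var, y_train.min())
    x_next = grid[np.argmax(ei)]       # sample where EI is largest
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, objective(x_next))
print("best x found:", x_train[np.argmin(y_train)])
```

Each iteration refits the GP on all samples so far, which is why the per-step Cholesky cost dominates as the number of sampling steps grows.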
Classification of TRIZ Inventive Principles and Sub-Principles for Process Engineering Problems
(2019)
The paper proposes a classification approach for the 40 Inventive Principles with an extended set of 160 sub-principles for process engineering, based on a thorough analysis of 155 process intensification technologies, 200 patent documents, 6 industrial case studies applying TRIZ, and other sources. The authors define problem-specific sub-principle groups as a more precise and productive ideation technique, adaptable to a large diversity of problem situations, and finally examine the anticipated variety of ideation with the 160 sub-principles with the help of MATCEM-IBD fields.
Machine Learning as a Key Technology for Digitalization: How Does Machine Learning Work?
(2019)
Data Science
(2019)
Like no other term, data science currently stands for the analysis of large volumes of data with analytical concepts from machine learning and artificial intelligence. Following the conscious recognition of big data, and in particular its availability within companies, technologies and methods for analysis are needed where classical business intelligence reaches its limits.
This book offers a comprehensive introduction to data science and its practical relevance for companies. The integration of data science into an existing business-intelligence ecosystem is also addressed. Various contributions explain fields of work and methods as well as role and organizational models that, in interplay with concepts and architectures, shape data science. In addition to the fundamentals, the topics covered include:
- Data science and artificial intelligence
- Conception and development of data-driven products
- Deep learning
- Self-service in the data-science environment
- Data privacy and questions of digital ethics
- Customer churn with Keras/TensorFlow and H2O
- Profitability considerations in the selection and development of data science
- Predictive maintenance
- Scrum in data-science projects
Numerous use cases and practical examples provide insights into current experience with data-science projects and allow readers a direct transfer to their daily work.
Introduction
(2019)
This contribution explains fundamental aspects and methods of data science. Following the CRISP-DM process model, the Data Understanding and Data Preparation phases mainly involve techniques for data selection, data preprocessing, and exploratory data analysis. In modeling, the core task of data science, one can distinguish supervised and unsupervised methods as well as reinforcement learning. The evaluation of a model's quality using quality measures is then discussed. The contribution closes with an outlook on further topics such as cognitive computing.
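The model evaluation step mentioned above can be illustrated with a minimal sketch. The labels are hypothetical, and accuracy, precision, recall, and F1 are standard quality measures for classification; they are our example choices, not necessarily those used in the contribution.

```python
# Hypothetical ground-truth labels and predictions for a binary classifier
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Confusion-matrix counts: true/false positives and negatives
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy = (tp + tn) / len(y_true)          # fraction of correct predictions
precision = tp / (tp + fp)                  # how many predicted 1s are real 1s
recall = tp / (tp + fn)                     # how many real 1s were found
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)      # 0.75 0.75 0.75 0.75
```

Which measure matters most depends on the application, e.g. recall is often prioritized when missing a positive case is costly.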
Walking interfaces offer advantages for navigating virtual environment (VE) systems over other types of locomotion. However, VR headsets have the disadvantage that users cannot see their immediate surroundings. Our publication describes the prototypical implementation of a VE system capable of detecting possible obstacles using an RGB-D sensor. To warn users of potential collisions with real objects while they move through the VE tracking area, we designed four different visual warning metaphors: Placeholder, Rubber Band, Color Indicator, and Arrow. A small pilot study was carried out in which the participants had to solve a simple task and avoid arbitrarily placed physical obstacles while crossing the virtual scene. Our results show that the Placeholder metaphor (in this case: trees), compared to the other variants, seems best suited for correctly estimating the position of obstacles and for the ability to evade them.
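The core warning logic such a system needs can be sketched as follows. This is a hypothetical simplification, not the paper's implementation: the obstacle points stand in for an RGB-D point cloud, and the distance thresholds and metaphor mapping are illustrative assumptions.

```python
import math

def nearest_obstacle_distance(user_pos, obstacle_points):
    # Distance from the user to the closest detected obstacle point
    # (in a real system, obstacle_points would come from the RGB-D sensor)
    return min(math.dist(user_pos, p) for p in obstacle_points)

def warning_level(distance, warn_at=1.5, critical_at=0.5):
    # Thresholds in metres are illustrative assumptions
    if distance < critical_at:
        return "critical"  # e.g. render a Placeholder object at the obstacle
    if distance < warn_at:
        return "warn"      # e.g. fade in a Color Indicator or Arrow
    return "none"

points = [(2.0, 0.0, 1.0), (0.4, 0.1, 0.2)]
print(warning_level(nearest_obstacle_distance((0.0, 0.0, 0.0), points)))
# second point is ~0.46 m away, so a critical warning is triggered
```

In practice this check would run every frame against the tracked head position, with the metaphor chosen per the study's findings.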
Many sectors, like finance, medicine, manufacturing, and education, use blockchain applications to profit from the unique bundle of characteristics of this technology. Blockchain technology (BT) promises benefits in trustability, collaboration, organization, identification, credibility, and transparency. In this paper, we conduct an analysis in which we show how open science can benefit from this technology and its properties. For this, we determined the requirements of an open science ecosystem and compared them with the characteristics of BT to show that the technology is suitable as an infrastructure. We also review the literature and promising blockchain-based projects for open science to describe the current research situation. To this end, we examine the projects in particular for their relevance and contribution to open science and then categorize them according to their primary purpose. Several of them already provide functionalities that can have a positive impact on current research workflows. BT thus offers promising possibilities for its use in science, but why is it then not used on a large scale in that area? To answer this question, we point out various shortcomings, challenges, unanswered questions, and research potentials that we found in the literature and identified during our analysis. These topics shall serve as starting points for future research to foster BT for open science and beyond, especially in the long term.