Current training methods for deep neural networks boil down to very high-dimensional, non-convex optimization problems, which are usually solved by a wide range of stochastic gradient descent methods. While these approaches tend to work in practice, there are still many gaps in the theoretical understanding of key aspects such as convergence and generalization guarantees, which are induced by the properties of the optimization surface (loss landscape). To gain deeper insights, a number of recent publications have proposed methods to visualize and analyze these optimization surfaces. However, the computational cost of these methods is very high, making it hardly possible to use them on larger networks. In this paper, we present the GradVis Toolbox, an open-source library for efficient and scalable visualization and analysis of deep neural network loss landscapes in TensorFlow and PyTorch. Introducing more efficient mathematical formulations and a novel parallelization scheme, GradVis allows plotting 2D and 3D projections of optimization surfaces and trajectories, as well as computing high-resolution second-order gradient information for large networks.
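The 2D projections mentioned above are typically built by perturbing the trained parameter vector along two normalized random directions and evaluating the loss on a grid. The following is a minimal NumPy sketch of that idea using a toy loss function; the names (`loss`, `theta_star`) are illustrative stand-ins, not the GradVis API.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Toy non-convex "loss" standing in for a network's training loss.
    return np.sum(theta ** 2) + 0.5 * np.sin(3.0 * theta[0])

theta_star = rng.normal(size=10)   # stand-in for trained weights
d1 = rng.normal(size=10)           # two random probe directions
d2 = rng.normal(size=10)
d1 /= np.linalg.norm(d1)           # normalize so both axes of the
d2 /= np.linalg.norm(d2)           # plot share a comparable scale

alphas = np.linspace(-1.0, 1.0, 25)
betas = np.linspace(-1.0, 1.0, 25)

# Evaluate the loss on the 2D slice spanned by d1 and d2.
surface = np.array([[loss(theta_star + a * d1 + b * d2)
                     for a in alphas] for b in betas])

print(surface.shape)  # (25, 25) grid ready for a contour or 3D plot
```

For a real network, every grid point requires a full forward pass over the data, which is exactly the cost that makes naive implementations impractical for large models and motivates the more efficient formulations and parallelization described in the abstract.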