
Alternative optimization methods for training of large deep neural networks

Due to its performance, the field of deep learning has gained a lot of attention, with neural networks succeeding in areas like Computer Vision (CV), Natural Language Processing (NLP), and Reinforcement Learning (RL). However, high accuracy comes at a computational cost, as larger networks require longer training time and no longer fit onto a single GPU. To reduce training costs, researchers are looking into the dynamics of different optimizers in order to find ways to make training more efficient. Resource requirements can be limited by reducing model size during training or by designing more efficient models that improve accuracy without increasing network size.

This thesis combines eigenvalue computation and high-dimensional loss surface visualization to study different optimizers and deep neural network models. Eigenvectors of different eigenvalues are computed, and the loss landscape and optimizer trajectory are projected onto the plane spanned by those eigenvectors. A new parallelization method for the stochastic Lanczos method is introduced, resulting in faster computation and thus enabling high-resolution videos of the trajectory and second-order information during neural network training. Additionally, the thesis presents, for the first time, the loss landscape between two minima along with the eigenvalue density spectrum at intermediate points.

Secondly, this thesis presents a regularization method for Generative Adversarial Networks (GANs) that uses second-order information. The gradient during training is modified by subtracting the eigenvector direction of the largest eigenvalue, preventing the network from falling into the steepest minima and avoiding mode collapse. The thesis also shows the full eigenvalue density spectra of GANs during training.

Thirdly, this thesis introduces ProxSGD, a proximal algorithm for neural network training that guarantees convergence to a stationary point and unifies multiple popular optimizers. Proximal gradients are used to find a closed-form solution to the problem of training neural networks with smooth and non-smooth regularizations, resulting in better sparsity and more efficient optimization. Experiments show that ProxSGD can find sparser networks while reaching the same accuracy as popular optimizers.

Lastly, this thesis unifies sparsity and neural architecture search (NAS) through the framework of group sparsity. Group sparsity is achieved through ℓ2,1-regularization during training, allowing for filter and operation pruning that reduces model size with minimal sacrifice in accuracy. By grouping multiple operations together, group sparsity can also be used for NAS. This approach is shown to be more robust than state-of-the-art methods while still achieving competitive accuracies.
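To illustrate the visualization idea described in the abstract, the following sketch projects a toy optimizer trajectory onto the plane spanned by the two leading Hessian eigenvectors. It uses SciPy's Lanczos-based eigsh solver on a matrix-free Hessian-vector product; the hessian_vector_product placeholder, the parameter count, and the random trajectory are assumptions made for the sake of a runnable example, not the parallelized stochastic Lanczos implementation from the thesis.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, eigsh

    # Hypothetical Hessian-vector product of the training loss; in practice this
    # would come from automatic differentiation (a double backward pass). Here a
    # small diagonal matrix stands in for the Hessian.
    def hessian_vector_product(v):
        H = np.diag(np.linspace(0.1, 5.0, v.size))
        return H @ v

    n_params = 50
    op = LinearOperator((n_params, n_params),
                        matvec=hessian_vector_product,
                        dtype=np.float64)

    # Lanczos-type iteration (ARPACK) for the two largest-magnitude eigenpairs.
    eigvals, eigvecs = eigsh(op, k=2, which="LM")
    v1, v2 = eigvecs[:, 0], eigvecs[:, 1]

    # Project a trajectory of parameter snapshots onto the plane spanned by
    # v1 and v2, relative to the final iterate, for 2-D plotting.
    trajectory = [np.random.randn(n_params) for _ in range(10)]  # placeholder
    center = trajectory[-1]
    coords = np.array([[(theta - center) @ v1, (theta - center) @ v2]
                       for theta in trajectory])
    print(coords.shape)  # (10, 2)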
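The ProxSGD and group-sparsity contributions both rest on the fact that the proximal operator of a group-wise ℓ2 penalty (the ℓ2,1 regularizer) has a closed form, namely group soft-thresholding, which sets an entire filter or operation group to exactly zero when its norm falls below the threshold. The sketch below shows a generic proximal gradient step built on that closed-form prox; the function names, toy parameter groups, and plain (non-stochastic, non-adaptive) update rule are illustrative assumptions, not the exact ProxSGD algorithm of the thesis.

    import numpy as np

    def prox_group_l21(w, step):
        """Closed-form prox of step * ||w||_2 for one parameter group:
        shrink the group's norm, zeroing the whole group if it is small."""
        norm = np.linalg.norm(w)
        if norm <= step:
            return np.zeros_like(w)
        return (1.0 - step / norm) * w

    def proximal_step(groups, grads, lr, lam):
        """Gradient step on the smooth loss, then the exact prox of the
        non-smooth group-sparsity regularizer, applied group by group."""
        return [prox_group_l21(w - lr * g, lr * lam)
                for w, g in zip(groups, grads)]

    # Toy usage: three "filter" groups; the third is pruned to exactly zero.
    groups = [np.array([0.8, -0.6]), np.array([1.5, 0.2]), np.array([0.01, -0.02])]
    grads = [np.zeros_like(w) for w in groups]
    print(proximal_step(groups, grads, lr=0.1, lam=0.5))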

Metadata
Document Type: Doctoral Thesis
Citation link: https://opus.hs-offenburg.de/8234
Bibliographic Information
Title (English): Alternative optimization methods for training of large deep neural networks
Author: Avraam Chatzimichailidis
Advisor: Janis Keuper
Referee: Janis Keuper, Nicolas R. Gauger
Year of Publication: 2023
Date of final exam: 2023/03/29
Publishing Institution: Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau (RPTU)
Granting Institution: Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau (RPTU)
Page Number: xiv, 178
DOI: https://doi.org/10.26204/KLUEDO/7241
URL: https://kluedo.ub.rptu.de/frontdoor/deliver/index/docId/7241/file/PhD_Thesis_Avraam_Chatzimichailidis.pdf
Language: English
Content Information
Institutes: Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (from 04/2019)
Research / IMLA - Institute for Machine Learning and Analytics
Institutes: Bibliography
DDC classes: 000 Generalities, Computer science, Information science
GND Keyword: Deep learning
Tags: Neural Architecture Search; Optimization
Formal Information
Relevance: Dissertation
Open Access: Diamond Open Access
Licence: Creative Commons - CC BY - Attribution 4.0 International