
FMI-IMAR Scientific Conference, Friday, 19.07.2024, 15:00, Prof. Arnulf Jentzen

Friday, 19.07.2024, 15:00, Google room (214)

Prof. Arnulf Jentzen, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen) & University of Münster, will give the following lecture:

Title: Stochastic gradient descent optimization methods with adaptive learning rates

Abstract: Deep learning algorithms – typically consisting of a class of deep neural networks (DNNs) trained by a stochastic gradient descent (SGD) optimization method – are nowadays the key ingredients of many artificial intelligence (AI) systems and have revolutionized our ways of working and living in modern societies. For example, SGD methods are used to train powerful large language models (LLMs) such as versions of ChatGPT and Gemini, they are employed to create successful generative AI-based text-to-image models such as Midjourney, DALL-E, and Stable Diffusion, and they are also used to train DNNs to approximately solve scientific models such as partial differential equation (PDE) models from physics and biology and optimal control and stopping problems from engineering. It is known that the plain vanilla standard SGD method fails to converge, even for several convex optimization problems, if the learning rates are bounded away from zero. However, in many practically relevant training scenarios it is often not the plain vanilla standard SGD method but adaptive SGD methods such as the RMSprop and Adam optimizers, in which the learning rates are modified adaptively during the training process, that are employed. This naturally raises the question of whether such adaptive optimizers converge in the situation of non-vanishing learning rates. In this talk we answer this question negatively by proving that adaptive SGD methods such as the popular Adam optimizer fail to converge to any possible random limit point if the learning rates are asymptotically bounded away from zero. Moreover, we propose and study a learning-rate-adaptive approach for SGD optimization methods in which the learning rate is adjusted based on empirical estimates of the values of the objective function of the considered optimization problem (the function that one intends to minimize). In particular, we propose a learning-rate-adaptive variant of the Adam optimizer and implement it for several artificial neural network (ANN) learning problems, in particular in the context of deep learning approximation methods for PDEs such as deep Kolmogorov methods (DKMs), physics-informed neural networks (PINNs), and deep Ritz methods (DRMs). In each of the presented learning problems the proposed learning-rate-adaptive variant of the Adam optimizer reduces the value of the objective function faster than the Adam optimizer with the default learning rate. For a simple class of quadratic minimization problems we also rigorously prove that a learning-rate-adaptive variant of the SGD optimization method converges to the minimizer of the considered minimization problem.
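
To make the learning-rate-adaptive idea concrete, here is a minimal NumPy sketch of such an approach for a toy quadratic stochastic minimization problem. The specific adaptation rule (periodically comparing the current rate against a halved candidate via Monte Carlo estimates of the objective value), the sample sizes, and the phase lengths are illustrative assumptions and not the exact algorithm analyzed in [2].

```python
# Illustrative sketch: learning-rate-adaptive SGD for the toy objective
# f(theta) = E[ ||theta - X||^2 ] with X ~ N(mu, I), whose minimizer is mu.
# The adaptation rule (keep or halve the rate based on empirical loss
# estimates) is a simplified stand-in, not the scheme of reference [2].

import numpy as np

rng = np.random.default_rng(0)
d = 10
mu = np.ones(d)                   # true minimizer
theta = rng.normal(size=d)        # initial parameter
lr = 1e-1                         # initial learning rate

def loss_estimate(theta, n=256):
    """Monte Carlo estimate of f(theta) from n fresh samples."""
    x = mu + rng.normal(size=(n, d))
    return np.mean(np.sum((theta - x) ** 2, axis=1))

for epoch in range(20):
    # --- SGD phase with the current learning rate ---
    for _ in range(100):
        x = mu + rng.normal(size=d)    # one stochastic sample
        grad = 2.0 * (theta - x)       # unbiased gradient estimate
        theta -= lr * grad

    # --- learning-rate adaptation via empirical objective values ---
    # Run short trial phases with the current rate and a halved candidate,
    # then keep whichever ends with the smaller estimated loss.
    candidates = [lr, lr / 2.0]
    trial_losses = []
    for cand in candidates:
        t = theta.copy()
        for _ in range(50):
            x = mu + rng.normal(size=d)
            t -= cand * 2.0 * (t - x)
        trial_losses.append(loss_estimate(t))
    lr = candidates[int(np.argmin(trial_losses))]

    print(f"epoch {epoch:2d}  lr={lr:.4f}  loss={loss_estimate(theta):.4f}")
```

In this simplified sketch the rate can only stay constant or shrink; the point is merely to show how empirical objective-value estimates, rather than gradient statistics alone, can drive the learning-rate choice.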

References:
[1] Steffen Dereich, Robin Graeber, & Arnulf Jentzen, Non-convergence of Adam and other adaptive stochastic gradient descent optimization methods for non-vanishing learning rates, arXiv:2407.08100 (2024), 54 pages, https://arxiv.org/abs/2407.08100
[2] Steffen Dereich, Arnulf Jentzen, & Adrian Riekert, Learning rate adaptive stochastic gradient descent optimization methods: numerical simulations for deep learning methods for partial differential equations and convergence analyses, arXiv:2406.14340 (2024), 68 pages, https://arxiv.org/abs/2406.14340

Short bio:

Arnulf Jentzen (born November 1983) is a presidential chair professor at the Chinese University of Hong Kong, Shenzhen (since 2021) and a full professor at the University of Münster (since 2019). He began his undergraduate studies in mathematics at Goethe University Frankfurt in Germany in 2004, received his diploma degree there in 2007, and completed his PhD in mathematics there in 2009. The core research topics of his research group are machine learning approximation algorithms, computational stochastics, numerical analysis for high-dimensional partial differential equations (PDEs), stochastic analysis, and computational finance. Currently, he serves on the editorial boards of several scientific journals such as the Annals of Applied Probability, the Journal of Machine Learning, the SIAM Journal on Scientific Computing, the SIAM Journal on Numerical Analysis, and the SIAM/ASA Journal on Uncertainty Quantification. His research activities have been recognized through several major awards such as the Felix Klein Prize of the European Mathematical Society (EMS) (2020), an ERC Consolidator Grant from the European Research Council (ERC) (2022), the Joseph F. Traub Prize for Achievement in Information-Based Complexity (2022), and a Frontier of Science Award in Mathematics (jointly with Jiequn Han and Weinan E) from the International Congress of Basic Science (ICBS) (2024). Further details on the activities of his research group can be found at http://www.ajentzen.de.

(presentation within the Probability and Statistics Seminar)
