Yujia Feng, Nikita Izmailov
Obtaining Estimates for the Rate of Convergence of the Gradient Descent via Machine Learning
Abstract. This study proposes a novel approach to estimating the convergence rate of optimization algorithms, using gradient descent as an example. Specifically, we aim to establish the decay law of the function value as a function of the parameters of the optimized function, the starting point of the iterative process, and the number of iterations. The proposed technique avoids the complicated theoretical calculations that usually arise in convergence analysis, using a trained ML model in place of a theoretical estimate of the convergence rate of the studied algorithm.
Keywords: convergence rate, machine learning, CatBoost model, optimization, gradient descent
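The idea described in the abstract can be sketched as follows: generate gradient-descent runs on a family of test functions, record features (problem parameters, starting point, iteration count) together with the attained function value, and fit a regression model to predict the decay law. This is a minimal illustrative sketch, not the authors' implementation: the paper uses a CatBoost model, while here a simple least-squares fit in log-space (via numpy) stands in for it, and the quadratic test family, learning rate, and feature set are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_gd(mu, L, x0, n_iters, lr):
    """Gradient descent on the quadratic f(x) = 0.5 * x^T diag(d) x
    with eigenvalues spread over [mu, L]; returns the final value f(x_k)."""
    d = np.linspace(mu, L, x0.size)
    x = x0.copy()
    for _ in range(n_iters):
        x = x - lr * d * x  # gradient of f is diag(d) @ x
    return 0.5 * np.sum(d * x**2)

# Build a dataset: features = (mu, L, ||x0||, k), target = log f(x_k).
X, y = [], []
for _ in range(200):
    mu = rng.uniform(0.1, 1.0)
    L = rng.uniform(1.0, 10.0)
    x0 = rng.normal(size=5)
    k = int(rng.integers(1, 50))
    f_k = run_gd(mu, L, x0, k, lr=1.0 / L)
    X.append([mu, L, np.linalg.norm(x0), k])
    y.append(np.log(f_k + 1e-300))  # tiny offset guards against log(0)
X = np.array(X)
y = np.array(y)

# Fit a linear model in log-space by least squares
# (a stand-in for the CatBoost regressor used in the paper).
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_log_f(mu, L, norm_x0, k):
    """Predicted log of the function value after k iterations."""
    return np.array([mu, L, norm_x0, k, 1.0]) @ coef
```

Even this crude surrogate recovers the qualitative decay law: the fitted coefficient on the iteration count k is negative, i.e. the predicted function value shrinks as iterations accumulate, which is the kind of dependence the trained model is meant to capture without a hand-derived bound.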