Global Convergence of Conjugate Gradient Methods without Line Search
Received: October 27, 2017    Revised: June 06, 2018
Key Words: unconstrained optimization; conjugate gradient method; line search; global convergence
Fund Project: Supported by the National Natural Science Foundation of China (Grant No. 11761014), the Natural Science Foundation of Guangxi Zhuang Autonomous Region (Grant No. 2017GXNSFAA198243), the Guangxi Basic Ability Improvement Project for Middle-Aged and Young Teachers of Colleges and Universities (Grant Nos. 2017KY0068, KY2016YB069), and the Guangxi Higher Education Undergraduate Course Teaching Reform Project (Grant No. 2017JGB147).
Authors and Affiliations:
Cuiling CHEN: College of Mathematics and Statistics, Guangxi Normal University, Guangxi 541004, P. R. China; School of Computing and Information, University of Pittsburgh, Pittsburgh 15238, U. S. A.
Yu CHEN: College of Mathematics and Statistics, Guangxi Normal University, Guangxi 541004, P. R. China
Abstract:
In this paper, a new steplength formula for unconstrained optimization is proposed. It determines the step size in a single computation and thereby avoids the line search step. Global convergence of five well-known conjugate gradient methods equipped with this formula is analyzed, with the following results: (1) the DY method converges globally for a strongly convex $LC^1$ objective function; (2) the CD, FR, PRP and LS methods converge globally for a general, not necessarily convex, $LC^1$ objective function.
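For reference, the five conjugate gradient methods named above differ only in the conjugacy parameter $\beta_k$ used in the direction update $d_k = -g_k + \beta_k d_{k-1}$, where $g_k = \nabla f(x_k)$. Their standard formulas are

$$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \quad \beta_k^{PRP} = \frac{g_k^T(g_k - g_{k-1})}{\|g_{k-1}\|^2}, \quad \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T(g_k - g_{k-1})},$$
$$\beta_k^{CD} = -\frac{\|g_k\|^2}{d_{k-1}^T g_{k-1}}, \quad \beta_k^{LS} = -\frac{g_k^T(g_k - g_{k-1})}{d_{k-1}^T g_{k-1}}.$$

Below is a minimal Python sketch of a conjugate gradient iteration that fixes the steplength in one step instead of performing a line search. The steplength rule $\alpha_k = \delta\,|g_k^T d_k| / \|d_k\|^2$ is an illustrative placeholder, not the formula proposed in the paper, and the function and parameter names (cg_without_line_search, delta) are hypothetical.

```python
import numpy as np

def cg_without_line_search(grad, x0, rule="FR", delta=0.3, tol=1e-6, max_iter=5000):
    """Conjugate gradient sketch with a fixed, one-step steplength.

    The steplength alpha = delta * |g.d| / ||d||^2 is an illustrative
    placeholder, NOT the steplength formula proposed in the paper;
    delta must be small relative to the gradient's Lipschitz constant.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # steplength computed in one step, no line search
        alpha = delta * abs(g @ d) / (d @ d)
        x = x + alpha * d
        g_new = grad(x)
        y = g_new - g
        # standard conjugacy parameters for the five methods in the abstract
        if rule == "FR":
            beta = (g_new @ g_new) / (g @ g)
        elif rule == "PRP":
            beta = (g_new @ y) / (g @ g)
        elif rule == "DY":
            beta = (g_new @ g_new) / (d @ y)
        elif rule == "CD":
            beta = -(g_new @ g_new) / (d @ g)
        else:  # "LS"
            beta = -(g_new @ y) / (d @ g)
        d = -g_new + beta * d
        g = g_new
    return x

# Usage: minimize the strongly convex quadratic f(x) = x^T A x / 2,
# whose gradient is A @ x; the minimizer is the origin.
A = np.diag([1.0, 2.0])
x_star = cg_without_line_search(lambda x: A @ x, np.array([3.0, -2.0]), rule="DY")
```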
DOI: 10.3770/j.issn:2095-2651.2018.05.011