
Hans De Sterck
Extending GMRES to Nonlinear Optimization, with Application to Tensor Optimization

Department of Applied Mathematics, University of Waterloo
200 University Avenue West, Waterloo, ON N2L 3G1, Canada
hdesterck@uwaterloo.ca

GMRES is a powerful method for accelerating the convergence of simple iterative solvers for linear systems, such as Jacobi or Gauss-Seidel (and other preconditioning processes). We extend the concept of preconditioned GMRES to the general class of nonlinear optimization problems, proposing a general mechanism that accelerates simple iterative methods for nonlinear optimization and does so in a genuinely nonlinear way.
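
To make the linear analogue concrete, the following minimal NumPy sketch (ours, not part of the abstract) accelerates Jacobi in GMRES fashion: after each sweep, the stored iterates are recombined through a small least-squares problem that minimizes the residual norm over their affine span. Because the residual is linear in x, this recombination step is exact rather than approximate.

    import numpy as np

    def jacobi_step(A, b, x):
        # One Jacobi sweep: x_new = x + D^{-1} (b - A x), with D = diag(A).
        return x + (b - A @ x) / np.diag(A)

    def gmres_accelerated_jacobi(A, b, x0, sweeps=20):
        # After each Jacobi sweep, recombine all stored iterates so that the
        # residual norm is minimized over their affine span -- the essence of
        # GMRES acceleration of a simple iterative process.
        X, R = [x0], [b - A @ x0]
        x = x0
        for _ in range(sweeps):
            x = jacobi_step(A, b, x)
            r = b - A @ x
            dX = np.column_stack([Xi - x for Xi in X])
            dR = np.column_stack([Ri - r for Ri in R])
            # The residual is linear in x, so r(x + dX a) = r + dR a exactly;
            # choose a to minimize || r + dR a ||_2.
            a, *_ = np.linalg.lstsq(dR, -r, rcond=None)
            x = x + dX @ a
            X.append(x)
            R.append(b - A @ x)
        return x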

Each iteration of the method consists of three steps. In the first step, a tentative new iterate is generated by a stand-alone one-step process (which can be nonlinear). In the second step, an accelerated iterate is generated by a nonlinear GMRES approach that recombines previous iterates in an optimal way, essentially using the stand-alone one-step process as a preconditioner. In particular, we use the nonlinear extension of GMRES proposed by Washio and Oosterlee in [ETNA Vol. 6 (1997), pp. 271-290] for nonlinear partial differential equation problems (itself related to other existing acceleration methods for systems of nonlinear equations). In the third step, a line search is performed for globalization.
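
The abstract gives no formulas, but one iteration might be sketched as follows. This is our reading of the Washio-Oosterlee recombination, with hypothetical user-supplied callables f, grad, precondition, and line_search, and with X and G holding a window of previous iterates and their gradients; the least-squares choice of weights is an assumption consistent with the description above.

    import numpy as np

    def n_gmres_step(f, grad, precondition, line_search, X, G, x_k):
        # Step 1: tentative iterate from the (possibly nonlinear) one-step
        # stand-alone process, acting as the nonlinear preconditioner.
        x_bar = precondition(x_k)
        g_bar = grad(x_bar)
        # Step 2: recombine the window of previous iterates X (gradients G)
        # so that the *linearized* gradient norm of the combination is
        # minimized:  min_a || g_bar + sum_i a_i (G[i] - g_bar) ||_2 .
        dX = np.column_stack([Xi - x_bar for Xi in X])
        dG = np.column_stack([Gi - g_bar for Gi in G])
        a, *_ = np.linalg.lstsq(dG, -g_bar, rcond=None)
        x_hat = x_bar + dX @ a
        # Step 3: line search along the accelerated direction, for
        # globalization.
        d = x_hat - x_bar
        t = line_search(f, grad, x_bar, d)
        return x_bar + t * d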

The resulting nonlinear GMRES (N-GMRES) optimization algorithm is applied to a set of standard nonlinear optimization test problems, using steepest descent as a universally applicable preconditioner (the analogue of the identity preconditioner in GMRES for linear systems). For the model problems considered, steepest-descent preconditioned N-GMRES is shown to be competitive with the standard nonlinear conjugate gradient (N-CG) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) methods. A simple global convergence proof is provided for the N-GMRES optimization algorithm with steepest-descent preconditioner and line search, under mild standard conditions on the objective function and the line search processes.
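
One possible realization of the steepest-descent preconditioner is a single damped gradient step; the abstract does not specify a step-length rule, so the Armijo backtracking in this sketch is an assumption.

    import numpy as np

    def steepest_descent_precondition(f, grad, x, beta=1.0, c=1e-4, tau=0.5):
        # One damped gradient step with Armijo backtracking; this plays the
        # role that the identity preconditioner plays in linear GMRES.
        g = grad(x)
        if g @ g == 0.0:
            return x  # already stationary
        while f(x - beta * g) > f(x) - c * beta * (g @ g):
            beta *= tau  # backtrack until sufficient decrease holds
        return x - beta * g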

Finally, the performance of the N-GMRES optimization algorithm is illustrated for the problem of computing a canonical rank-R tensor approximation with minimal distance to a given tensor in the Frobenius norm, where the canonical rank-R tensor is a sum of R rank-one tensors. For this application we use alternating least squares (ALS) minimization as the nonlinear preconditioning process, and we find that ALS-preconditioned N-GMRES is significantly faster than N-CG and L-BFGS on this difficult nonlinear optimization problem. This illustrates how the N-GMRES optimization framework allows the use of powerful problem-dependent nonlinear preconditioners, and we anticipate that this strategy may lead to efficient numerical methods for a variety of nonlinear optimization problems.
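
For illustration only, here is a minimal NumPy sketch of one ALS sweep for a third-order tensor: each factor matrix is updated by a linear least-squares solve with the other two held fixed. The factor names A, B, C, the unfolding convention, and the pseudoinverse-based normal-equation solves are our assumptions, not details from the abstract.

    import numpy as np

    def khatri_rao(U, V):
        # Columnwise Kronecker product: column r is kron(U[:, r], V[:, r]).
        (I, R), (J, _) = U.shape, V.shape
        return (U[:, None, :] * V[None, :, :]).reshape(I * J, R)

    def als_sweep(T, A, B, C):
        # One ALS sweep for the model T ~ sum_r A[:,r] o B[:,r] o C[:,r],
        # updating each factor in turn via the mode-n unfolded least-squares
        # problem, with normal equations formed by Hadamard products.
        I, J, K = T.shape
        T1 = T.reshape(I, J * K, order='F')                     # mode-1 unfolding
        A = T1 @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        T2 = np.moveaxis(T, 1, 0).reshape(J, I * K, order='F')  # mode-2 unfolding
        B = T2 @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        T3 = np.moveaxis(T, 2, 0).reshape(K, I * J, order='F')  # mode-3 unfolding
        C = T3 @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
        return A, B, C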



