Keiichi Morikuni
Stationary inner-iteration preconditioned GMRES method for least squares problems

The Graduate University for Advanced Studies
2-1-2 Hitotsubashi
Chiyoda-ku
Tokyo 101-8430 Japan
morikuni@nii.ac.jp
Ken Hayami

Consider solving large sparse linear least squares problems

$\displaystyle \min_{\boldsymbol{x} \in \mathbf{R}^n}{\left\Vert \boldsymbol{b} - A \boldsymbol{x} \right\Vert _2},$ (1)

where $ A \in \mathbf{R}^{m \times n}$ , $ \boldsymbol{b} \in \mathbf{R}^m$ , and $ A$ is not necessarily of full rank. The least squares problem (1) is equivalent to the following normal equation

$\displaystyle A^\mathsf{T} A \boldsymbol{x} = A^\mathsf{T} \boldsymbol{b}.$ (2)

The standard iterative methods are the (preconditioned) CGLS [M. R.
Hestenes and E. Stiefel, J. Research Nat. Bur. Standards, 49 (1952), pp. 409-436] and LSQR [C. C. Paige and M. A. Saunders, ACM Trans. Math. Software, 8 (1982), pp. 43-71] methods. However, since $ \kappa_2 (A^\mathsf{T}A) = {\kappa_2 (A)}^2$ , iterative methods may be slow to converge. Here, $ \kappa_2 (A) = \sigma_\mathrm{max} / \sigma_\mathrm{min}$ is the condition number, where $ \sigma_\mathrm{max}$ and $ \sigma_\mathrm{min}$ are the largest and smallest positive singular values of $ A$ . Hence, when iterative methods are used, good preconditioners are necessary to achieve better convergence.
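As a small numerical illustration of the relation $ \kappa_2 (A^\mathsf{T}A) = {\kappa_2 (A)}^2$ , the following NumPy sketch (with an arbitrary random test matrix, not one from the paper) compares the two condition numbers directly.

```python
# Illustrative check that kappa_2(A^T A) = kappa_2(A)^2.
# The random matrix here is only a stand-in for a real test problem.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))

s = np.linalg.svd(A, compute_uv=False)
kappa_A = s[0] / s[-1]                  # kappa_2(A) = sigma_max / sigma_min
kappa_AtA = np.linalg.cond(A.T @ A)     # kappa_2(A^T A)

assert np.isclose(kappa_AtA, kappa_A**2, rtol=1e-8)
```

This squaring of the condition number is what makes CGLS- and LSQR-type methods sensitive to ill-conditioning, motivating preconditioning.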

In this paper, we propose using stationary inner-iteration preconditioners for the generalized minimal residual method (GMRES) for solving least squares problems, and present theoretical justifications for using inner iterations as preconditioners even for rank-deficient matrices. For the preconditioners, the Jacobi overrelaxation and successive overrelaxation (SOR)-type iterative methods designed for least squares problems are employed. These inner iterations are efficient in terms of computational work and memory. A significant advantage of such inner-iteration preconditioners is that one can avoid explicitly computing and storing the preconditioning matrix. Numerical experiments on large sparse least squares problems, including ill-conditioned and rank-deficient ones, show that the proposed methods outperform previous methods.

The main motivation for proposing the new preconditioners is to reduce CPU time and memory significantly, and to broaden the scope of problems that can be solved. Note that, previously, GMRES preconditioned by the robust incomplete factorization (RIF) [M. Benzi and M. Tuma, SIAM J. Sci. Comp., 25 (2003), pp. 499-512] was comparable with, but not definitely superior to, reorthogonalized CGLS with RIF in terms of CPU time for ill-conditioned problems [K. Hayami, J.-F. Yin and T. Ito, SIAM J. Matrix Anal. Appl., 31 (2010), pp. 2400-2430]. Moreover, RIF for least squares problems will break down for rank-deficient matrices. We aim to improve on these points.

The left-preconditioned GMRES method for least squares problems proposed in [Hayami et al., 2010] is called the BA-GMRES method. It applies GMRES to $ \displaystyle \min_{\boldsymbol{x} \in
\mathbf{R}^n}\Vert B \boldsymbol{b} - B A \boldsymbol{x} \Vert _2$ , where $ B \in
\mathbf{R}^{n \times m}$ denotes the preconditioning matrix. Conditions for the preconditioner $ B$ for BA-GMRES were given in [Hayami et al., 2010].
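A minimal sketch of the BA-GMRES idea, using SciPy's GMRES and the simplest admissible choice $ B = A^\mathsf{T}$ (so that GMRES is applied to $ A^\mathsf{T} A \boldsymbol{x} = A^\mathsf{T} \boldsymbol{b}$ ), is given below. The test matrix is arbitrary; in the paper, $ B$ is instead realized implicitly by inner iterations.

```python
# Sketch of BA-GMRES with the illustrative choice B = A^T,
# i.e. GMRES applied to the n x n system B A x = B b.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)

B = A.T                                                   # B in R^{n x m}
BA = LinearOperator((10, 10), matvec=lambda v: B @ (A @ v))
x, info = gmres(BA, B @ b)                                # solve B A x = B b
```

Note that $ BA$ need never be formed explicitly; only matrix-vector products with $ A$ and $ B$ are required, which is what makes implicit (inner-iteration) choices of $ B$ attractive.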

Instead of explicitly forming and storing a preconditioning matrix $ B$ , we propose applying several steps of a certain stationary iterative method for solving the normal equation (2) of the form

$\displaystyle \boldsymbol{x}^{(l+1)} = M^{-1} N \boldsymbol{x}^{(l)} + M^{-1} A^\mathsf{T} \boldsymbol{b}, \quad l = 0, 1, \dots, \quad \boldsymbol{x}^{(0)} = \boldsymbol{0},$ (3)

inside BA-GMRES, where $ A^\mathsf{T} A = M - N$ , $ M$ is nonsingular, and $ l$ is the number of inner iterations. Define the iteration matrix by $ H = M^{-1} N$ . Tanabe [Numer. Math., 22 (1974), pp. 349-359] defined the following.
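For concreteness, the iteration (3) can be sketched with the Jacobi splitting $ M = \mathrm{diag}(A^\mathsf{T} A)$ , $ N = M - A^\mathsf{T} A$ (an illustrative choice; the paper employs JOR- and SOR-type variants tailored to least squares). The update is written in the mathematically equivalent residual form $ \boldsymbol{x}^{(l+1)} = \boldsymbol{x}^{(l)} + M^{-1} A^\mathsf{T} (\boldsymbol{b} - A \boldsymbol{x}^{(l)})$ .

```python
# Sketch of iteration (3) with the Jacobi splitting M = diag(A^T A).
# Convergence is not guaranteed for this splitting in general; it is
# shown only to make the form of the inner iteration concrete.
import numpy as np

def inner_iteration(A, b, l):
    """l steps of x <- M^{-1} N x + M^{-1} A^T b with x^(0) = 0."""
    m_diag = np.einsum('ij,ij->j', A, A)        # diag(A^T A); assumed nonzero
    x = np.zeros(A.shape[1])
    for _ in range(l):
        x = x + (A.T @ (b - A @ x)) / m_diag    # equivalent residual form
    return x
```

Applied with a fixed small $ l$ to the GMRES residual vector at each outer step, this map plays the role of $ B$ without $ B$ ever being formed.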

Definition 1   $ H$ is convergent if $ \displaystyle \lim_{l \rightarrow \infty} H^l$ exists, and divergent otherwise.

Then, we obtain the following theorem for the stationary inner-iteration BA-GMRES method.

Theorem 1   Assume that $ H$ is convergent. Then, BA-GMRES with the inner-iteration preconditioning of the form (3) determines a least squares solution of $ \displaystyle \min_{\boldsymbol{x} \in \mathbf{R}^{n}} \Vert \boldsymbol{b} - A \boldsymbol{x} \Vert _2$ without breakdown for all $ \boldsymbol{b} \in \mathbf{R}^m$ .

According to [A. Dax, SIAM Review, 32 (1990), pp. 611-635], the Cimmino-NR and NR-SOR methods [Y. Saad, 2nd ed., SIAM, Philadelphia, 2003] yield convergent iteration matrices, so these methods can be used as inner iterations for BA-GMRES.
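One sweep of NR-SOR (SOR applied column-wise to the normal equation $ A^\mathsf{T} A \boldsymbol{x} = A^\mathsf{T} \boldsymbol{b}$ ) can be sketched as follows, maintaining the residual $ \boldsymbol{r} = \boldsymbol{b} - A \boldsymbol{x}$ as it goes. The relaxation parameter here is a placeholder; the paper proposes a tuning strategy for it.

```python
# One forward NR-SOR sweep over the columns of A, keeping r = b - A x
# up to date. omega = 1.0 is an illustrative default, not a tuned value.
import numpy as np

def nr_sor_step(A, x, r, omega=1.0):
    for j in range(A.shape[1]):
        aj = A[:, j]
        d = omega * (aj @ r) / (aj @ aj)    # assumes column a_j is nonzero
        x[j] += d                           # update the j-th unknown
        r -= d * aj                         # keep r = b - A x consistent
    return x, r
```

Each sweep costs about as much as two matrix-vector products with $ A$ , which is why a few such sweeps make an inexpensive implicit preconditioner.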

Moreover, we show that a bound on the residual of the stationary inner-iteration BA-GMRES method depends on the spectral radius of $ H$ .

Finally, we will present numerical results that show BA-GMRES preconditioned by the NR-SOR inner iteration outperforms conventional methods for ill-conditioned, rank-deficient, and practical problems. A strategy for tuning the parameters of the NR-SOR inner iteration will also be proposed and shown to be effective.



