===affil2: ===firstname: Nikolaos, N M ===firstname4: ===firstname3: ===lastname2: Louka ===lastname: Missirlis ===firstname5: ===affil6: ===lastname3: ===email: nmis@di.uoa.gr ===lastname6: ===affil5: ===otherauths: ===lastname4: ===affil4: ===lastname7: ===affil7: ===firstname7: ===postal: Department of Informatics and Telecommunications, University of Athens, Panepistimiopolis, 15784 Ilisia, Athens, Greece ===firstname6: ===ABSTRACT: Let $A \in \mathbb{R}^{m\times m}$ be a symmetric positive definite matrix and $B\in\mathbb{R}^{m\times n}$ be a matrix of full column rank, where $m\geq n$. Then, the augmented linear system has the form %\cite{ArHurUz58},\cite{BaiParlWang2005},\cite{BraPasVas77},\cite{ElmGol94},\cite{GolWuYuan2001} \begin{equation} \tilde Au=b\label{eq:01} \end{equation} where \begin{equation}\label{eq:02} \tilde A=\left( \begin{array}{cc} A & B\\ -B^{T} & 0 \end{array} \right), \;\; u= \left( \begin{array}{c} x\\ y \end{array} \right),\;\; b= \left( \begin{array}{c} b_1\\ -b_2 \end{array} \right) \end{equation} with $B^{T}$ denoting the transpose of the matrix $B$. When $A$ and $B$ are large and sparse, iterative methods for solving (\ref{eq:01}) are more effective and attractive than direct methods because of their lower storage requirements and preservation of sparsity. %Such systems arise in areas of computational fluid dynamics \cite{Glow84}, constrained optimization \cite{Wright92}, image processing \cite{Hall79}, finite element approximations and elsewhere \cite{BenGolLie2005}. \\ \noindent The difficulty in applying iterative methods such as the successive overrelaxation (SOR) method \cite{Young} to system (\ref{eq:01}) is the singularity of the block diagonal part of the coefficient matrix. Several methods have been developed to overcome this difficulty, such as the Uzawa and the Preconditioned Uzawa methods \cite{ArHurUz58}. In 2001, Golub et al.
\cite{GolWuYuan2001} generalized the Uzawa and the Preconditioned Uzawa methods for the augmented system (\ref{eq:01}) by introducing an additional acceleration parameter, producing the SOR-like method. Other well-known methods, competitive and especially attractive for solving (\ref{eq:01}), are the preconditioned MINRES and GMRES methods \cite{FiRaSiWa98}. In the case where the matrix $A$ is symmetric and positive definite, the Preconditioned Conjugate Gradient (PCG) method \cite{HestSt52} is the most suitable. In 2003, Li, Evans and Zhang \cite{LiEvansZha2003} proved that the PCG method is at least as accurate as the SOR-like method. In 2005, Bai et al. \cite{BaiParlWang2005} proposed the Generalized SOR (GSOR) method and proved that it has the same rate of convergence as, but lower complexity than, the PCG method. Furthermore, the Generalized Modified Extrapolated SOR (GMESOR) method, which generalizes the GSOR method by introducing one additional parameter, was also proposed for further study. \\ \noindent In this paper, we study the Generalized Modified Extrapolated SOR (GMESOR) method and the Generalized Modified Preconditioned Simultaneous Displacement (GMPSD) method. The latter is derived by using as preconditioning matrix the product of a lower and an upper triangular part of $\tilde A$ \cite{Louka}, \cite{LoukMis2}. %Furthermore, as a by-product of the analysis of the aforementioned methods and for different values of their parameters, we study the Generalized Extrapolated SOR (GESOR), the Generalized Modified PSD with three parameters (GMPSD(3)), the Generalized Modified SSOR (GMSSOR), the Generalized PSD (GPSD) and the Generalized SSOR (GSSOR) methods, as well as the backward schemes of the GSOR, GMESOR and GESOR methods, that is the GBSOR, GMEBSOR and GEBSOR methods, respectively.
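To make the discussion concrete, the following is a minimal numerical sketch of a GSOR-type iteration for the augmented system (\ref{eq:01}), written from our reading of \cite{BaiParlWang2005}; the random matrices, the parameter values $\omega=\tau=1$ and the choice $Q=B^{T}A^{-1}B$ are illustrative assumptions, not the optimal configuration studied in this paper.

```python
import numpy as np

# Sketch of a GSOR-type sweep for A x + B y = b1, B^T x = b2
# (our paraphrase of the scheme in Bai et al.; omega, tau and Q
# below are illustrative choices, not the optimum ones).
rng = np.random.default_rng(1)
m, n = 8, 3

M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)        # symmetric positive definite
B = rng.standard_normal((m, n))    # full column rank (generically)
Q = B.T @ np.linalg.solve(A, B)    # illustrative choice: Q = B^T A^{-1} B

x_true = rng.standard_normal(m)
y_true = rng.standard_normal(n)
b1 = A @ x_true + B @ y_true
b2 = B.T @ x_true

omega, tau = 1.0, 1.0              # illustrative relaxation parameters
x = np.zeros(m)
y = np.zeros(n)
for _ in range(100):
    # x-update: relax towards A^{-1}(b1 - B y)
    x = (1 - omega) * x + omega * np.linalg.solve(A, b1 - B @ y)
    # y-update: correct y with the preconditioned constraint residual
    y = y + tau * np.linalg.solve(Q, B.T @ x - b2)

print(np.allclose(x, x_true) and np.allclose(y, y_true))
```

With this particular $Q$ and $\omega=\tau=1$ the $y$-update reduces to an exact Schur-complement step, so the iteration converges after a couple of sweeps; cheaper choices of $Q$ trade this exactness for more iterations, which is where the optimal parameter analysis matters.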
In an attempt to generalize our theory, we introduce an additional parameter $a$ in the structure of the matrix $\tilde A$, with the hope of increasing the rate of convergence of the aforementioned methods. Our starting point for studying these methods is the derivation of functional relations which relate the eigenvalues of their iteration matrices to those of the key matrix $J=Q^{-1}B^TA^{-1}B$. Assuming that the matrix $Q$ is symmetric and either positive or negative definite, the eigenvalues of the matrix $J$ are real: positive if $Q$ is positive definite and negative if $Q$ is negative definite. Under these assumptions we find sufficient conditions for the convergence of the aforementioned methods and determine the optimum values of their parameters. For the determination of the optimum values of the parameters in the GMESOR and GMPSD methods we follow an approach similar to that of Varga \cite{Var1}, p. 110, for the optimum $\omega$ of SOR. %\\ %The paper is organized as follows. In section 2 we study the convergence analysis of the GMESOR method. In particular, we find sufficient conditions for GMESOR to converge under the assumption that the eigenvalues of the $J$ matrix are real. We also determine optimum values for its parameters. A similar convergence analysis for the GMPSD method is developed in section 3. In section 4, we present our numerical results and finally in section 5 we state our remarks and conclusions. %It is important to note that we made a considerable effort to study our methods and to determine their optimum parameters following an approach similar to that in \cite{BaiParlWang2005}. However, because of the complexity of this approach, it was necessary to follow a geometric approach \cite{Var1}, p. 110. Our main difference is that in our methods we have to optimize two parameters instead of one as in \cite{Var1}.
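The spectral assumption on the key matrix $J$ can be verified numerically: when $Q$ is symmetric positive definite, $J=Q^{-1}B^{T}A^{-1}B$ is similar to the symmetric positive definite matrix $Q^{-1/2}(B^{T}A^{-1}B)Q^{-1/2}$, so its eigenvalues are real and positive. The sketch below illustrates this on randomly generated matrices (an illustrative assumption, not data from the paper).

```python
import numpy as np

# Check the spectral assumption on J = Q^{-1} B^T A^{-1} B:
# with A symmetric positive definite, B of full column rank and
# Q symmetric positive definite, the eigenvalues of J are real
# and positive (J is similar to an SPD matrix).
rng = np.random.default_rng(0)
m, n = 6, 3

M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)        # symmetric positive definite
B = rng.standard_normal((m, n))    # full column rank (generically)
N = rng.standard_normal((n, n))
Q = N @ N.T + n * np.eye(n)        # symmetric positive definite

J = np.linalg.solve(Q, B.T @ np.linalg.solve(A, B))
eigs = np.linalg.eigvals(J)

print(np.allclose(np.asarray(eigs).imag, 0))  # eigenvalues are real
print(np.all(np.asarray(eigs).real > 0))      # and positive, since Q > 0
```

Replacing $Q$ by a symmetric negative definite matrix (e.g. its negative) flips the sign of every eigenvalue, matching the second case above.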
In the following, we use this geometric approach to determine the optimum parameter values of the aforementioned methods more easily than in \cite{BaiParlWang2005}. %In this paper, we propose a unified approach for determining optimum parameters in iterative methods for solving the augmented linear system (\ref{eq:01}). We develop the convergence analysis of the GMESOR and Generalized Modified Preconditioned Simultaneous Displacement (GMPSD) methods and we find that both methods have the same rate of convergence as the Preconditioned Conjugate Gradient (PCG) method. To study their convergence, it was necessary to find their functional equations which relate the eigenvalues of their iteration matrices to those of the key matrix J. We also study some other methods by changing the number of parameters, such as the Generalized Extrapolated SOR (GESOR) method, the Generalized Modified PSD method with three parameters (GMPSD(3)), the Generalized PSD method with two parameters (GPSD) and the Generalized Modified SSOR method (GMSSOR), or by modifying the iteration matrix, such as the backward schemes of the GSOR, GESOR and GMESOR methods, namely the GBSOR, GEBSOR and GMEBSOR methods, respectively. %The main result of our analysis is that all these methods have the same rate of convergence as the PCG method. This result shows that the introduction of parameters in an iterative method has the same effect on the rate of convergence as the Conjugate Gradient (CG) method. %\noindent \begin{thebibliography}{4} \bibitem{ArHurUz58} K. Arrow, L. Hurwicz and H. Uzawa, Studies in Nonlinear Programming, Stanford University Press, Stanford, 1958. \bibitem{BaiParlWang2005} Z.-Z. Bai, B. N. Parlett and Z.-Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102, (2005), 1-38. %\bibitem{BraPasVas77} J. H. Bramble, J. E. Pasciak and A. T. Vassilev, Analysis of the inexact Uzawa algorithm for saddle point problems, SIAM J.
Numer. Anal. 34, (1997), 1072-1092. %\bibitem{BenGolLie2005} M. Benzi, G. H. Golub and J. Liesen, Numerical solution of saddle point problems, Acta Numerica, (2005), 1-137. %\bibitem{ElmGol94} H. C. Elman and G. H. Golub, Inexact and preconditioned Uzawa algorithms for saddle point problems, SIAM J. Numer. Anal. 31, (1994), 1645-1661. % %\bibitem{EvMis1} D. J. Evans and N. M. Missirlis, The preconditioned simultaneous displacement method (PSD method) for elliptic difference equations, Mathematics and Computers in Simulation 22, (1980), 256-263. \bibitem{FiRaSiWa98} B. Fischer, A. Ramage, D. J. Silvester and A. J. Wathen, Minimal residual methods for augmented systems, BIT 38, (1998), 527-543. %\bibitem{Glow84} R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer Series in Computational Physics, Springer, New York, 1984. \bibitem{GolWuYuan2001} G. H. Golub, X. Wu and J.-Y. Yuan, SOR-like methods for augmented systems, BIT 41, (2001), 71-85. %\bibitem{Hall79} E. L. Hall, Computer Image Processing and Recognition, Academic Press, New York, 1979. \bibitem{HestSt52} M. R. Hestenes and E. Stiefel, Methods of Conjugate Gradients for Solving Linear Systems, Journal of Research of the National Bureau of Standards, Vol. 49, No. 6, (1952), 409-436. %\bibitem{LiLiEv1998} C.-J. Li, B.-J. Li and D. J. Evans, A generalized successive overrelaxation method for least squares problems, BIT 38, (1998), 347-356. \bibitem{LiEvansZha2003} C.-J. Li, Z. Li, D. J. Evans and T. Zhang, A note on an SOR-like method for augmented systems, IMA J. Numer. Anal. 23, (2003), 581-592. \bibitem{Louka} M. A. Louka, Iterative Methods for Solving Linear Systems, Ph.D. thesis (in Greek), 2011. \bibitem{LoukMis2} M. A. Louka, N. M. Missirlis and F. I. Tzaferis, Is modified PSD equivalent to modified SOR for two-cyclic matrices?, Lin. Alg. and its Appl., Vol. 432, No. 11, (2010), 2798-2815. %\bibitem{MisEv2} N. M. Missirlis and D. J.
Evans, The modified preconditioned simultaneous displacement (MPSD) method, Mathematics and Computers in Simulation, Vol. 26, (1984), 257-262. % %\bibitem{Teman84} R. Temam, Navier-Stokes Equations: Theory and Numerical Analysis, Vol. 2 of Studies in Mathematics and its Applications, 3rd Ed., North-Holland, Amsterdam, 1984. \bibitem{Var1} R. S. Varga, Matrix Iterative Analysis, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1962. %\bibitem{Wright92} M. H. Wright, Interior methods for constrained optimization, Acta Numerica, Vol. 1, Cambridge University Press, (1992), 341-407. %\bibitem{Wright97} S. J. Wright, Primal-Dual Interior Point Methods, SIAM, Philadelphia, PA, 1997. %\bibitem{YePs90} A. Yeyios and A. Psimarni, Convergence Analysis of the Modified SOR (MSOR) Method, Intern. J. Computer Math., Vol. 35, (1990), 231-244. \bibitem{Young} D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971. \end{thebibliography} ===affil3: ===lastname5: ===affilother: ===title: Preconditioning augmented linear systems ===firstname2: Maria