===affil2: ===firstname: Nikolaos, N M ===firstname4: ===firstname3: ===lastname2: Louka ===lastname: Missirlis ===firstname5: ===affil6: ===lastname3: ===email: nmis@di.uoa.gr ===lastname6: ===affil5: ===otherauths: ===lastname4: ===affil4: ===lastname7: ===affil7: ===firstname7: ===postal: Department of Informatics and Telecommunications, University of Athens, Panepistimiopolis, 15784 Ilisia, Athens, Greece ===firstname6: ===ABSTRACT: Let $A \in \mathbb{R}^{m\times m}$ be a symmetric positive definite matrix and $B\in\mathbb{R}^{m\times n}$ be a matrix of full column rank, where $m\geq n$. Then, the augmented linear system is of the form
%\cite{ArHurUz58},\cite{BaiParlWang2005},\cite{BraPasVas77},\cite{ElmGol94},\cite{GolWuYuan2001}
\begin{equation}
\ca u=b\label{eq:01}
\end{equation}
where
\begin{equation}\label{eq:02}
\ca=\left(
\begin{array}{cc}
A & B\\
-B^{T} & 0
\end{array}
\right), \;\; u=
\left(
\begin{array}{cc}
x\\
y
\end{array}
\right),\;\; b=
\left(
\begin{array}{cc}
b_1\\
-b_2
\end{array}
\right)
\end{equation}
where $B^T$ denotes the transpose of the matrix $B$. When $A$ and $B$ are large and sparse, iterative methods for solving (\ref{eq:01}) are more effective and attractive than direct methods, because of their lower storage requirements and their preservation of sparsity.
%Such systems arise in areas of computational fluid dynamics \cite{Glow84}, constrained optimization \cite{Wright92}, image processing \cite{Hall79}, finite element approximations and elsewhere \cite{BenGolLie2005}.
\\ \noindent The difficulty in applying iterative methods such as the successive overrelaxation (SOR) method \cite{Young} to system (\ref{eq:01}) is the singularity of the block diagonal part of the coefficient matrix. Several methods have been developed to overcome this difficulty, such as the Uzawa and the Preconditioned Uzawa methods \cite{ArHurUz58}. In 2001, Golub et al.
\cite{GolWuYuan2001} generalized the Uzawa and the Preconditioned Uzawa methods for the augmented system (\ref{eq:01}) by introducing an additional acceleration parameter, producing the SOR-like method. Other well-known methods, competitive and especially attractive for solving (\ref{eq:01}), are the preconditioned MINRES and GMRES methods \cite{FiRaSiWa98}. In the case where the matrix $A$ is symmetric positive definite, the Preconditioned Conjugate Gradient (PCG) method \cite{HestSt52} is the most suitable. In 2003, Li, Evans and Zhang \cite{LiEvansZha2003} proved that the PCG method is at least as accurate as the SOR-like method. In 2005, Bai et al. \cite{BaiParlWang2005} proposed the Generalized SOR (GSOR) method and proved that it has the same rate of convergence as, but less complexity than, the PCG method. Furthermore, the Generalized Modified Extrapolated SOR (GMESOR) method, which generalizes the GSOR method by introducing one additional parameter, was also proposed for further study.
\\ \noindent In this paper, we study the Generalized Modified Extrapolated SOR (GMESOR) method and the Generalized Modified Preconditioned Simultaneous Displacement (GMPSD) method. The latter is derived by using as preconditioning matrix the product of the lower and upper triangular parts of $\ca$ \cite{Louka}, \cite{LoukMis2}.
%Furthermore, as a by-product of the analysis of the aforementioned methods and for different values of their parameters, we study the Generalized Extrapolated SOR (GESOR), the Generalized Modified PSD with three parameters (GMPSD(3)), the Generalized Modified SSOR (GMSSOR), the Generalized PSD (GPSD), the Generalized SSOR (GSSOR) methods as well as the backward schemes of the GSOR, GMESOR and GESOR methods, that is the GBSOR, GMEBSOR and GEBSOR methods, respectively.
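\\ \noindent To fix ideas, recall that for the system (\ref{eq:01}) the classical Uzawa method and its preconditioned variant \cite{ArHurUz58} take the form
\begin{equation*}
x^{(k+1)} = A^{-1}\left(b_1 - B y^{(k)}\right), \qquad
y^{(k+1)} = y^{(k)} + \tau\, Q^{-1}\left(B^T x^{(k+1)} - b_2\right),
\end{equation*}
where $\tau$ is a relaxation parameter and $Q\in\mathbb{R}^{n\times n}$ is a symmetric preconditioning matrix; the choice $Q=I_n$ recovers the classical Uzawa method.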
In an attempt to generalize our theory, we introduce an additional parameter $a$ in the structure of the matrix $\ca$, with the hope of increasing the rate of convergence of the aforementioned methods. Our starting point for studying these methods is the derivation of functional relations which relate the eigenvalues of their iteration matrices to those of the key matrix $J=Q^{-1}B^TA^{-1}B$. Assuming that the matrix $Q$ is symmetric positive or negative definite, the eigenvalues of the matrix $J$ are real, and they are positive if $Q$ is positive definite or negative if $Q$ is negative definite. Under these assumptions we find sufficient conditions for the convergence of the aforementioned methods and determine the optimum values of their parameters. For the determination of the optimum parameter values of the GMESOR and GMPSD methods we follow an approach similar to that of Varga \cite{Var1}, p. 110, for the optimum $\omega$ of SOR.
%\\
%The paper is organized as follows. In section 2 we study the convergence analysis of the GMESOR method. In particular, we find sufficient conditions for GMESOR to converge under the assumption that the eigenvalues of the $J$ matrix are real. We also determine optimum values for its parameters. A similar convergence analysis for the GMPSD method is developed in section 3. In section 4, we present our numerical results and finally in section 5 we state our remarks and conclusions.
%It is important to note that we made a great effort to study our methods and to determine their optimum parameters following a similar approach as in \cite{BaiParlWang2005}. However, because of the complexity of this approach, it was necessary to follow a geometric approach \cite{Var1}, pp. 110. Our main difference is that in our methods we have to optimize two parameters instead of one in \cite{Var1}.
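\\ \noindent The reality of the eigenvalues of $J$ can be seen by a short similarity argument. When $Q$ is symmetric positive definite, $J$ is similar to the symmetric matrix
\begin{equation*}
Q^{1/2} J Q^{-1/2} = Q^{-1/2} B^T A^{-1} B Q^{-1/2},
\end{equation*}
which is positive definite, since $A^{-1}$ is symmetric positive definite and $B Q^{-1/2}$ has full column rank; hence the eigenvalues of $J$ are real and positive. When $Q$ is symmetric negative definite, applying the same argument to $-Q$ shows that the eigenvalues of $J$ are real and negative.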
In the following, we use this geometric approach to determine the optimum parameter values of the aforementioned methods more easily than in \cite{BaiParlWang2005}.
%In this paper, we propose a unified approach for determining optimum parameters in iterative methods for solving the augmented linear system (\ref{eq:01}). We develop the convergence analysis of the GMESOR and Generalized Modified Preconditioned Simultaneous Displacement (GMPSD) methods and we find that both methods have the same rate of convergence as the Preconditioned Conjugate Gradient (PCG) method. To study their convergence, it was necessary to find their functional equations which relate the eigenvalues of their iteration matrices with those of the key matrix J. We also study some other methods by changing the number of parameters, such as the Generalized Extrapolated SOR (GESOR) method, the Generalized Modified PSD method with three parameters (GMPSD(3)), the Generalized PSD method with two parameters (GPSD) and the Generalized Modified SSOR method (GMSSOR), or by modifying the iteration matrix such as the backward schemes of GSOR, GESOR and GMESOR methods, the GBSOR method, the GEBSOR method and the GMEBSOR method, respectively.
%The main result of our analysis is that all these methods have the same rate of convergence with the PCG method. This result proves that the introduction of parameters in an iterative method has the same effect on the increase of the rate of convergence as the Conjugate Gradient (CG) method.\\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\section{The Generalized Modified Preconditioned Simultaneous Displacement (GMPSD) method}
%
%%
%%In this paper we consider different types of preconditioning the augmented linear system (\ref{eq:01}) with the hope of increasing the convergence rate of the induced iterative methods.
%%\\ \noindent %Let the coefficient matrix $\ca$ of (\ref{eq:01}) be defined by the splitting %\begin{equation}\label{eq:03} %\ca=\cd-\cl-\cu %\end{equation} %where %\begin{equation}\cd= %\left( %\begin{array}{ccc} %A && 0 \\ %0 && Q \\ %\end{array}\right),\ %\cl=\left( %\begin{array}{ccc} %0 & &0 \\ %B^T & &aQ \\ %\end{array}\right),\ %\cu=\left( %\begin{array}{ccc} %0 & &-B \\ %0 & &(1-a)Q \\ %\end{array}\right), %\label{eq:04} %\end{equation} %with $Q\in\mathbb{R}^{n \times n}$ be a prescribed nonsingular and %symmetric matrix and ${a\in\mathbb{R}}$. Furthermore, we denote by T, the diagonal matrix $T=diag(\tau_1 I_m, \tau_2 I_n)$ with $\tau_1,\tau_2\in \mathbb{R}-{\{0\}},\; I_m \in\mathbb{R}^{m \times m}$ and $I_n\in\mathbb{R}^{n \times n}$ be identity matrices. %\\ \noindent For the numerical solution of (\ref{eq:01}), we consider the following iterative scheme %\begin{equation}\left( %\begin{array}{c} %x^{(k+1)} \\ %y^{(k+1)}\\ %\end{array}\right)=\ch{(\tau_1, \tau_2)} %\left( %\begin{array}{c} %x^{(k)} \\ %y^{(k)}\\ %\end{array}\right)+\eta{(\tau_1, \tau_2)} %\left(\begin{array}{c} %b_1 \\ %-b_2\\ %\end{array}\right) %\label{eq:05} %\end{equation} %where %\begin{equation}\label{eq:06} %\ch{(\tau_1, \tau_2)}=I-R^{-1}T\ca, \;\;\eta{(\tau_1, \tau_2)}=R^{-1}Tb, %\end{equation} %R is a nonsingular matrix to be defined and $I=diag(I_m, I_n)$. %In this section, we let the preconditioning matrix $R$ be the product of the lower times the upper triangular part of $\ca$ in an attempt to obtain a better approximation of $\ca$ and consequently an increase in the rate of convergence of the corresponding iterative method. Let %%We will consider different forms for the preconditioned matrix R. We begin by letting R to be the parametrized diagonal and lower triangular part of $\ca$, that is %\begin{equation}\label{eq:21} %R=(\cd-\Omega\cl)\cd^{-1}(\cd-\Omega\cu). 
%\end{equation} %From (\ref{eq:06}) and (\ref{eq:21}) it follows that the iteration matrix of (\ref{eq:05}) is %\begin{equation}\label{eq:22} %\cg{(\tau_1,\tau_2,\omega_1,\omega_{2},a)}=I-(\cd-\Omega \cu)^{-1}D(\cd-\Omega \cl)^{-1}T\ca %\end{equation} %whereas $\eta(\tau_1, \tau_2)$ corresponds to %\begin{equation}\label{eq:23} %\gamma{(\tau_1,\tau_2,\omega_1,\omega_{2},a)}=(\cd-\Omega \cu)^{-1}D(\cd-\Omega \cl)^{-1}Tb. %\end{equation} %Note that this method has four parameters $\tau_1, \tau_2, \omega_1\; \mbox{and}\; \omega_2$ instead of three in the GMESOR method. %The iterative scheme given by (\ref{eq:05}), (\ref{eq:22}) and (\ref{eq:23}) will be referred to as the Generalized Modified Preconditioned Simultaneous Displacement (GMPSD) method. For $(\cd-\Omega \cu)^{-1}D(\cd-\Omega \cl)^{-1}$ to exist we require %\begin{equation}\label{eq:5_06bbbblu} %\det[(\cd-\Omega\cl)\cd^{-1}(\cd-\Omega\cu)]\neq 0. %\end{equation} %Because of (\ref{eq:04}) %\begin{equation}\label{eq:5_06bbbclu} %R=(\cd-\Omega\cl)\cd^{-1}(\cd-\Omega\cu)=\left( %\begin{array}{cc} %A & -\omega_1 B\\ %-\omega_2 B^T & (1-a\omega_2)[1-(1-a)\omega_2] Q %\end{array}\right). %\end{equation} %Therefore, %\begin{equation*}%\label{eq:5_10} %\det(\cd-\Omega\cl)\cd^{-1}(\cd-\Omega\cu)=(1-a\omega_2)^{n}[1-(1-a)\omega_2]^{n}\det{A}\det{Q}\neq 0 %\end{equation*} %or %\begin{equation}\label{eq:5_100} %a\neq \frac12 \;\;\mbox{and}\;\; \omega_2\neq 2 %\end{equation} %since the matrix $A$ is symmetric positive definite and the matrix $Q$ is nonsingular. %\\\\ %\noindent {\sc The GMPSD Method:}{\em{ Let $Q\in\mathbb{R}^{n\times n}$ be a nonsingular and symmetric matrix. %Given initial vectors $x^{(0)}\in\mathbb{R}^m$ and %$y^{(0)}\in\mathbb{R}^n$, and relaxation factors $\tau_1,\ \tau_2\neq 0, \ %\omega_1, \omega_2, a\in\mathbb{R}$ with $a\neq \frac12 \;\; \mbox{and}\;\; \omega_2\neq 2$. 
%For $k=0,1,2,...$ until the %iteration sequence %$\{({x^{(k)}}^T, {y^{(k)}}^T)^T\}$ is convergent, compute %\\ \newline %%\vspace{0.3cm} %\hspace{0.5cm}\footnotesize{${y^{(k+1)}=y^{(k)}+ %\frac{1}{(1-a\omega_2)[1-(1-a)\omega_2]}Q^{-1}\left\{B^T[(\tau_2-\tau_1\omega_2)x^{(k)}+\tau_1\omega_2A^{-1}(b-By^{(k)})]+\tau_2b_2\right\}}$} %\\ %\hspace{0.3cm}$ {x^{(k+1)}=(1-\tau_1)x^{(k)}+ A^{-1}\left\{B\left[(\omega_1-\tau_1)y^{(k)}-\omega_1y^{(k+1)}\right]+\tau_1b_1\right\}}, %$ \\ \newline %\noindent where Q is an approximate (preconditioning) %matrix of the Schur complement matrix $B^TA^{-1}B$.}} %\\\\ %Note that in the above algorithm we first compute $y^{(k+1)}$ and then $x^{(k+1)}$, whereas in the GMESOR method we had the reverse computations. %\noindent In addition, comparing the algorithmic structures of the GMPSD method and the GSOR method, we see that both methods have exactly the same computational complexity. %This is due to the fact that $\cl(\omega_1, \omega_2)\equiv \cg(\omega_1, \omega_2, \omega_1, \omega_2, 0)$, where $\cl(\omega_1, \omega_2)$ is the iteration matrix of the GSOR method. %For different values of the parameters of the GMPSD method we obtain various new methods. %If $\tau=\tau_1=\tau_2$ we have the Generalized Modified PSD method with three parameters %(GMPSD(3)) whereas if $\tau=\tau_1=\tau_2=\hat\omega$, where $\hat\omega=\omega_1+\omega_2-\omega_1\omega_2$ we have the GMSSOR method. Also, if $\tau=\tau_1=\tau_2$ and $\omega=\omega_1=\omega_2$ we have the GPSD method and if $\tau=\tau_1=\tau_2=\hat\omega$, where $\hat\omega=\omega(2-\omega)$ with $\omega=\omega_1=\omega_2$ we have the GSSOR method. %Therefore, if $a=0$, the GMPSD method has less complexity than the PCG method, since the same holds for the GSOR method \cite{BaiParlWang2005}. 
%\begin{cor} %\\ \noindent If $\omega_2=0$ then the algorithmic form of the GMPSD method simplifies to %%\begin{eqnarray*} %%y^{(k+1)}=y^{(k)}+\tau_2 Q^{-1}(B^T x^{(k)}+b_2)&\\ %%x^{(k+1)}=(1-\tau_1)x^{(k)}+\tau_1 A^{-1}(b_1-By^{(k+1)}) %%\end{eqnarray*} %\\ \newline %%\vspace{0.3cm} %\hspace{0.5cm}$y^{(k+1)}=y^{(k)}+\tau_2 Q^{-1}(B^T x^{(k)}+b_2)$ %\\ %\hspace{0.3cm}$ x^{(k+1)}=(1-\tau_1)x^{(k)}+\tau_1 A^{-1}(b_1-By^{(k+1)}). %$ \\ \newline The above GMPSD form is the same as that of the GSOR algorithm if we use $\cd-\Omega \cu$ instead of $\cd-\Omega \cl$ in the GSOR preconditioning matrix, since $\cm(\omega_1, \omega_2, a)\equiv\cg(\tau_1, \tau_2,\tau_1,0,a)$. %%\end{cor} %\noindent In the following theorem we find the functional relationship for the GMPSD method between the eigenvalues $\lambda$ of the iteration matrix $\cg(\tau_1, \tau_2, \omega_1, \omega_2, a)$ and the eigenvalues $\mu$ of the matrix J. %\begin{theo}\label{theo:func_rel2} %Let $A\in\mathbb{R}^{m\times m}$ be symmetric positive definite, $B\in\mathbb{R}^{m\times n}$ be of full column rank and $Q\in\mathbb{R}^{n\times n}$ be %nonsingular and symmetric. If $\lambda\neq 1-\tau_1$ is an eigenvalue of the matrix %$\cg(\tau_1, \tau_2, \omega_1, \omega_2, a)$ and if $\mu$ satisfies %\footnotesize{\begin{equation} %\lambda^2+\lambda\left(\tau_1-2+\frac{\tau_1\omega_2+\tau_2\omega_1-\tau_1\omega_1\omega_2}{(1-a\omega_2)[1-(1-a)\omega_2]}\mu\right)+1-\tau_1+\frac{\tau_1\tau_2-\tau_1\omega_2-\tau_2\omega_1+\tau_1\omega_1\omega_2}{(1-a\omega_2)[1-(1-a)\omega_2]}\mu=0, \label{eq:24} %\end{equation}} %\noindent where $a\neq \frac12 \; \mbox{and}\; \omega_2\neq 2$, then $\mu$ is an eigenvalue of the key matrix $J=Q^{-1}B^TA^{-1}B$. %Conversely, if $\mu$ is an eigenvalue of J and if %$\lambda\neq1-\tau_1$ satisfies (\ref{eq:24}), then $\lambda$ is an %eigenvalue of $\cg(\tau_1, \tau_2, \omega_1, \omega_2, a)$. 
%In addition, $\lambda=1-\tau_1$ is an %eigenvalue of $\cg(\tau_1, \tau_2, \omega_1, \omega_2, a)$ (if $m>n$) with the corresponding %eigenvector $(x^T,0)^T$, where $x\in\cn(B^T)$. %\end{theo} %\proof Following a similar approach as in the proof of Theorem \ref{theo:func_rel} using now the iteration matrix $\cg(\tau_1,\tau_2, \omega_1, \omega_2, a)$, given by (\ref{eq:22}), %we find the functional relationship (\ref{eq:24}).\qed %\begin{corollary}Under the hypothesis of theorem \ref{theo:func_rel2}, %\ \newline \noindent %1. The nonzero eigenvalues of the iteration matrix $\cg(\tau, \omega_1, \omega_2, a)$ of the %GMPSD(3) method are given by $\lambda=1-\tau$ or if $a\neq \frac12 \;\; \mbox{and}\;\; \omega_2\neq 2$ by %\begin{equation} %\lambda^2+\lambda\left({\tau-2}+\frac{\tau\hat\omega}{(1-a\omega_2)[1-(1-a)\omega_2]}\mu\right) +1-\tau+\frac{\tau(\tau-\hat\omega)}{(1-a\omega_2)[1-(1-a)\omega_2]}\mu=0 %\label{eq:26} %\end{equation} %where %\begin{equation}\label{eq:26b} %\hat\omega=\omega_1+\omega_2-\omega_1\omega_2. %\end{equation} %2. The nonzero eigenvalues of the iteration matrix $\mathcal{S}(\omega_1, \omega_2, a)$ of the %GMSSOR method are given by $\lambda=1-\hat\omega$ or if $a\neq \frac12 \;\; \mbox{and}\;\; \omega_2\neq 2$ by %%If $\tau=\tau_1=\tau_2=\hat\omega$, then we have the GMSSOR method. Then, %\begin{equation} %\lambda^2+\lambda\left({\hat\omega-2}+\frac{\hat\omega^2}{(1-a\omega_2)[1-(1-a)\omega_2]}\mu\right)+1-\hat\omega=0 \label{eq:27} %\end{equation} %where $\hat\omega$ is given by (\ref{eq:26b}).\\ \noindent %3. The nonzero eigenvalues of the iteration matrix $\cg(\tau, \omega, a)$ of the %GPSD method are given by $\lambda=1-\tau$ or if $a\neq \frac12 \;\; \mbox{and}\;\; \omega\neq 2$ by %%If $\tau=\tau_1=\tau_2$ and $\omega=\omega_1=\omega_2$, and we have the GPSD method.
%Then, %\begin{equation} %\lambda^2+\lambda\left({\tau-2}+\frac{\tau\hat\omega}{(1-a\omega)[1-(1-a)\omega]}\mu\right) +1-\tau+\frac{\tau(\tau-\hat\omega)}{(1-a\omega)[1-(1-a)\omega]}\mu=0 %\label{eq:28} %\end{equation} %where now %\begin{equation}\label{eq:28b} %\hat\omega=\omega(2-\omega) %\end{equation} %%and %%\begin{equation}\label{eq:28c} %% a\omega\neq 1 \;\;\mbox{and} \;\; (1-a)\omega\neq 1 %%\end{equation} %4. The nonzero eigenvalues of the iteration matrix $\mathcal{S}(\omega, a)$ of the %GSSOR method are given by $\lambda=1-\hat\omega$ or if $a\neq \frac12 \;\; \mbox{and}\;\; \omega\neq 2$ by %%If $\tau=\tau_1=\tau_2=\hat\omega$ and $\omega=\omega_1=\omega_2$, then we have the GSSOR method. Then %\begin{equation} %\lambda^2+\lambda\left({\hat\omega-2}+\frac{\hat\omega^2}{(1-a\omega)[1-(1-a)\omega]}\mu\right)+1-\hat\omega=0 \label{eq:29} %\end{equation} %where $\hat\omega$ is given by (\ref{eq:28b}). %\end{corollary} %\proof The iteration matrix $\cg(\tau,\omega_1,\omega_2, a)$ of the GMPSD(3) is obtained by letting $\tau=\tau_1=\tau_2$ in $\cg(\tau_1, \tau_2, \omega_1, \omega_2, a)$ given by (\ref{eq:22}). %Using the matrix $\cg(\tau_1, \tau_2, \omega_1, \omega_2, a)$ and following a similar approach as in the proof of Theorem \ref{theo:func_rel2} we find the functional relationship (\ref{eq:26}). Similarly, we find (\ref{eq:27}), (\ref{eq:28}) and (\ref{eq:29}).\qed %\subsection{Convergence} %If the matrix $Q$ is positive definite and $a=0$ sufficient conditions for the GMPSD method to converge are given by the following theorem. %\begin{theo}\label{theo:conv1bb} %\footnotesize{Let $A\in\mathbb{R}^{m\times m}$ and $Q\in\mathbb{R}^{n\times n}$ be %symmetric positive definite and $B\in\mathbb{R}^{m\times n}$ be of %full column rank. %Denote the minimum and the maximum eigenvalues of the matrix $J=Q^{-1}B^TA^{-1}B$ by $\mu_{min}$ and $\mu_{max}$, respectively.
%Then, %$\rho({\cg}{(\tau_1,\tau_2,\omega_1,\omega_2)})<1$ if the %parameters $\tau_1,\tau_2,\omega_1\; \mbox{and}\; \omega_2$ lie in any case of Table \ref{tab10}} with $0<\tau_1<2$ and %where %\begin{table}[htbp] %%\begin{center} %\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} %\hline % $\mbox{Cases}$&$\omega_2 - \mbox{Domain}$ &$\omega_1 - \mbox{Domain}$ % &${\tau_2 - \mbox{Domain} }$\\ % \hline\hline %1& $ \omega_{21}^*<\omega_2<\omega_{22}^*(\mu_{max}) $& $\omega_{11}^*(\mu_{max})<\omega_1<\omega_{12}^*(\mu_{min})$ & % \\ % \cline{1-3} 2&$\omega_2<\omega_{21}^* $ % & &$0<\tau_2<{\frac{4\tau_1 }{4+\tau_1^2\mmax} %}$ % \\ \cline{1-2} \cline{4-4} 3& $\omega_2<\omega_{22}^*(\mu_{max}) $ % & $\omega_{12}^*(\mu_{min})<\omega_1<\omega_{11}^*(\mu_{max})$ & ${\frac{4\tau_1}{4+\tau_1^2\mmin} %}<\tau_2$ % \\ \cline{1-2} \cline{4-4} 4&$1<\omega_2<\omega_{22}^*(\mmin)$ % & &$\tau_2<0$ % \\ \hline %\end{tabular} %%\vspace{0.1cm} %\caption{Sufficient conditions for the % GMPSD method to converge. } \label{tab10} %%\end{center} %\end{table} %\begin{equation}\label{eq:conv_gmpsd} %\begin{array}{ll} %\omega_{11}^*(\mu)=\frac{\tau_1(2\omega_2-\tau_2)}{2(\tau_1\omega_2-\tau_2)}+\frac{(\tau_1-2)(1-\omega_2)}{\tau_1\omega_2-\tau_2}\frac{1}{\mu}, \;\;\;&\;\;\; \omega_{12}^*(\mu)= \frac{\tau_1(\omega_2-\tau_2)}{\tau_1\omega_2-\tau_2}+\frac{\tau_1(1-\omega_2)}{\tau_1\omega_2-\tau_2}\frac{1}{\mu}, \\ %\omega_{21}^*=\frac{\tau_2}{\tau_1},\;\;\;& \;\;\;\omega_{22}^*(\mu)=1-\frac{\tau_1\tau_2\mu}{4}. %\end{array} %\end{equation} %\end{theo} %\proof Recall that $\lambda=1-\tau_1\neq 0$ is an eigenvalue of $\cg{(\tau_1,\tau_2,\omega_1,\omega_{2})}$ and if $\lambda\neq 1-\tau_1$ then the eigenvalues of $\cg{(\tau_1,\tau_2,\omega_1,\omega_{2})}$ are given by (\ref{eq:24}). If $\lambda=1-\tau_1\neq 0$, then the GMPSD method is convergent if $|\lambda|<1$, that is $|1-\tau_1|<1$, or %\begin{equation} \label{eq:45} %0<\tau_1<2.
%\end{equation} %If $\lambda\neq 1-\tau_1$, then (\ref{eq:24}) holds and by Lemma 2.1 page 171 of \cite{Young}, it follows that the GMPSD method is convergent if and only if (\ref{eq:32}) holds, where %\begin{equation} \label{eq:46} %c=1-\tau_1+\frac{\tau_1\omega_1\omega_2-\tau_1\omega_2-\tau_2\omega_1+\tau_1\tau_2}{1-\omega_2}\mu %\end{equation} %and %\begin{equation} \label{eq:47} %b=2-\tau_1+\frac{\tau_1\omega_1\omega_2-\tau_1\omega_2-\tau_2\omega_1}{1-\omega_2}\mu. %\end{equation} %Recall that (\ref{eq:35}) holds. From the second inequality of (\ref{eq:32}), because of (\ref{eq:47}), we have %% %% and (\ref{eq:47}) we have that (\ref{eq:45}) holds and that %\begin{equation} \label{eq:48} %0<\frac{\tau_1\tau_2\mu}{2(1-\omega_2)}<1+c. %\end{equation} %Combining (\ref{eq:35}) and (\ref{eq:48}) it follows that %\begin{equation} \label{eq:49} %0<\frac{\tau_1\tau_2\mu}{2(1-\omega_2)}<1+c<2. %\end{equation} %In order for (\ref{eq:49}) to hold we must have that %\begin{equation*} %\label{eq:49} %0<\frac{\tau_1\tau_2\mu}{2(1-\omega_2)}<2, %\end{equation*} %or because of (\ref{eq:45}) %\begin{equation} \label{eq:50} %0<\frac{\tau_2}{1-\omega_2}<\frac{4}{\tau_1\mu}. %\end{equation} %Inequalities (\ref{eq:49}), because of (\ref{eq:46}), become %\begin{equation} \label{eq:51} %\frac{\tau_1(2\omega_2-\tau_2)\mu}{2(1-\omega_2)}+\tau_1-2<\omega_1\frac{\tau_1\omega_2-\tau_2}{1-\omega_2}\mu<\tau_1+\frac{\tau_1(\omega_2-\tau_2)\mu}{1-\omega_2}. %\end{equation} %In the sequel we distinguish the following two cases to study (\ref{eq:51}). Case I: $\tau_2>0$ and $1-\omega_2>0$ and Case II: $\tau_2<0$ and $1-\omega_2<0$. In addition, we distinguish the following two subcases for each of the above cases. (i): $\tau_1\omega_2-\tau_2>0$ and (ii): $\tau_1\omega_2-\tau_2<0$. Next, we will study only the subcase (i) of Case I, since the other cases can be treated similarly. 
%\noindent For this case, we have that %\begin{equation} \label{eq:52} %\frac{\tau_2}{\tau_1}<\omega_2<1, \;\;\; \mbox{if}\;\;\; 0<\tau_2<\tau_1 %\end{equation} %and from the second part of (\ref{eq:50}) %\begin{equation} \label{eq:53} %\omega_2<1-\frac{\tau_1\tau_2\mu}{4}. %\end{equation} %From (\ref{eq:52}) and (\ref{eq:53}) it follows that %\begin{equation*} %\label{eq:54} %\frac{\tau_2}{\tau_1}<\omega_2<\min{\left\{1, 1-\frac{\tau_1\tau_2\mu}{4}\right\}},\;\;\; 0<\tau_2<\tau_1 %\end{equation*} %or %\begin{equation} \label{eq:54} %\frac{\tau_2}{\tau_1}<\omega_2<1-\frac{\tau_1\tau_2\mu}{4},\;\;\; 0<\tau_2<\tau_1 %\end{equation} %which holds if $\frac{\tau_2}{\tau_1}<1-\frac{\tau_1\tau_2\mu}{4}$. Therefore, we have that (\ref{eq:54}) holds if %\begin{equation} \label{eq:55} %\omega_{21}^*<\omega_2<\omega_{22}^*(\mu), \;\;\; 0<\tau_2<\frac{4\tau_1}{4+\tau_1^2\mu} %\end{equation} %where $\omega_{21}^*, \omega_{22}^*(\mu)$ are given by (\ref{eq:conv_gmpsd}). Furthermore, from (\ref{eq:51}), we have that %\begin{equation} \label{eq:56} %\omega_{11}^*(\mu)<\omega_1<\omega_{12}^*(\mu) %\end{equation} %where $\omega_{11}^*(\mu), \omega_{12}^*(\mu)$ are given by (\ref{eq:conv_gmpsd}). Studying the monotonicity of $\omega_{22}^*(\mu), \omega_{11}^*(\mu)$ and $\omega_{12}^*(\mu)$ with respect to $\mu$ we have that %$sign\frac{\partial\omega_{22}^*(\mu)}{\partial\mu}=-1$, $sign\frac{\partial\omega_{11}^*(\mu)}{\partial\mu}=+1$ and $sign\frac{\partial\omega_{12}^*(\mu)}{\partial\mu}=+1$. Hence, case 1 of Table \ref{tab10} is proved. Treating similarly subcase (ii) of Case I and subcases (i) and (ii) for Case II, we can prove the rest of the cases in Table \ref{tab10}.\qed %\\ \noindent The convergence conditions for the GMPSD(3) are given by the following. 
%\begin{corollary}\label{cor:conv1b} %Under the hypothesis of Theorem \ref{theo:conv1bb} then %${\rho({\cg}{(\tau,\omega_1,\omega_2)})<1}$ if %\begin{equation}\label{eq:57} % 0<\tau<2,\ \ \omega_2<\omega_2^*(\mu_{max}) \ \ \mbox{and}\ \ % \omega_{13}^*(\mu_{max})<\omega_1<\omega_{14}^*(\mu_{max}) %\end{equation} %where %\begin{equation}\label{eq:58} %\begin{array}{cccc} %\omega_2^*(\mu)=1-\frac{\tau^2\mu}{4},\;\;& \omega_{13}^*(\mu)=\frac{\tau-\omega_2}{1-\omega_2}-\frac{1}{\mu},\;\;&\omega_{14}^*(\mu)=\frac{2-\tau}{\tau\mu}+\frac{\tau-2\omega_2}{2(1-\omega_2)}. %\end{array} %\end{equation} %\end{corollary} %\proof Letting $a=0$ in the functional relationship (\ref{eq:26}) and following a similar approach as in the proof of Theorem \ref{theo:conv1bb}.\qed %%In this case, where $\tau=\tau_1=\tau_2$, the eigenvalues of ${\cg}{(\tau,\omega_1,\omega_2,0)}$ are given by (\ref{eq:26}). Next, we can follow an analogous analysis as in the proof of theorem \ref{theo:conv1bb}, to verify (\ref{eq:57}) and (\ref{eq:58}).\qed %\\ \noindent Note that analogous results hold when $Q\in\mathbb{R}^{n\times n}$ is symmetric negative definite. % % %\section{Remarks and Conclusions} %In this paper we studied the convergence analysis of various generalized iterative methods for the solution of the augmented linear system (\ref{eq:01}) when the coefficient matrix $\ca$ is of the form (\ref{eq:02}). We assumed that $A\in \mathbb{R}^{m\times m}$ was a symmetric positive definite matrix and $B\in\mathbb{R}^{m\times n}$ was a matrix of full column rank, where $m\geq n$, in order to have a unique solution, whereas $Q$ was a symmetric positive or negative definite matrix. %Under these assumptions we were able to find %sufficient conditions for the GMESOR and GMPSD iterative methods as well as for their counterparts to converge and determine their optimum rate of convergence in the general case where $a\neq 0$. 
From our analysis it is shown that all these methods are equivalent, since they have the same spectral radius, which is given by (\ref{eq:4u2ux}). It is also shown that the introduction of the parameter $a$ in the structure of $\ca$, and hence in the preconditioning matrix $R$, does not have any impact on the convergence rate of these methods, as one might have expected. Furthermore, all these methods have the same spectral radius as the Preconditioned Conjugate Gradient (PCG) method but less complexity. Therefore, it will be interesting to study the behavior of the GSOR, GMESOR and GMPSD methods on problems where the PCG method is the best solver. Note that the GMPSD method behaves similarly to the Modified PSD (MPSD) method for two-cyclic matrices \cite{LoukMis2}. Indeed, in \cite{LoukMis2} we proved the equivalence of the MPSD and MSOR methods in the case where the eigenvalues of the Jacobi matrix are either all real or all imaginary. An interesting research direction is the study of all these methods in the case of nonsymmetric augmented linear systems, where the eigenvalues of the matrix $J$ are complex.
\noindent
\begin{thebibliography}{4}
\bibitem{ArHurUz58} K. Arrow, L. Hurwicz and H. Uzawa, Studies in Linear and Non-Linear Programming, Stanford University Press, Stanford, 1958.
\bibitem{BaiParlWang2005} Z.-Z. Bai, B. N. Parlett and Z.-Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102, (2005), 1-38.
%\bibitem{BraPasVas77} J. H. Bramble, J. E. Pasciak and A. T. Vassilev, Analysis of the inexact Uzawa algorithm for saddle point problems, SIAM J. Numer. Anal. 34, (1997), 1072-1092.
%\bibitem{BenGolLie2005} M. Benzi, G. H. Golub and J. Liesen, Numerical solution of saddle point problems, Acta Numerica, (2005), 1-137.
%\bibitem{ElmGol94} H. C. Elman and G. H. Golub, Inexact and preconditioned Uzawa algorithms for saddle point problems, SIAM J. Numer. Anal. 31, (1994), 1645-1661.
%
%\bibitem{EvMis1} D. J. Evans and N. M.
%Missirlis, The preconditioned simultaneous displacement method (PSD method) for elliptic difference equations, Mathematics and Computers in Simulation 22, (1980), 256-263.
\bibitem{FiRaSiWa98} B. Fischer, A. Ramage, D. J. Silvester and A. J. Wathen, Minimum residual methods for augmented systems, BIT 38, (1998), 527-543.
%\bibitem{Glow84} R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer Series in Computational Physics, Springer, New York, 1984.
\bibitem{GolWuYuan2001} G. H. Golub, X. Wu and J.-Y. Yuan, SOR-like methods for augmented systems, BIT 41, (2001), 71-85.
%\bibitem{Hall79} E. L. Hall, Computer Image Processing and Recognition, Academic Press, New York, 1979.
\bibitem{HestSt52} M. R. Hestenes and E. Stiefel, Methods of Conjugate Gradients for Solving Linear Systems, Journal of Research of the National Bureau of Standards, Vol. 49, No. 6, (1952), 409-436.
%\bibitem{LiLiEv1998} C.-J. Li, B.-J. Li and D. J. Evans, A generalized successive overrelaxation method for least squares problems, BIT 38, (1998), 347-356.
\bibitem{LiEvansZha2003} C.-J. Li, Z. Li, D. J. Evans and T. Zhang, A note on an SOR-like method for augmented systems, IMA J. Numer. Anal. 23, (2003), 581-592.
\bibitem{Louka} M. A. Louka, Iterative Methods for Solving Linear Systems, Ph.D. thesis (in Greek), 2011.
\bibitem{LoukMis2} M. A. Louka, N. M. Missirlis and F. I. Tzaferis, Is modified PSD equivalent to modified SOR for two-cyclic matrices?, Linear Algebra Appl., Vol. 432, No. 11, (2010), 2798-2815.
%\bibitem{MisEv2} N. M. Missirlis and D. J. Evans, The modified preconditioned simultaneous displacement (MPSD) method, Mathematics and Computers in Simulation, Vol. 26, (1984), 257-262.
%
%\bibitem{Teman84} R. Temam, Navier-Stokes Equations: Theory and Numerical Analysis, Vol. 2 of Studies in Mathematics and its Applications, 3rd Ed., North Holland, Amsterdam, 1984.
\bibitem{Var1} R. S. Varga, Matrix Iterative Analysis, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1962.
%\bibitem{Wright92} M. H. Wright, Interior methods for constrained optimization, Acta Numerica, Vol. 1, Cambridge University Press, (1992), 341-407.
%\bibitem{Wright97} S. J. Wright, Primal-Dual Interior Point Methods, SIAM, Philadelphia, PA, 1997.
%\bibitem{YePs90} A. Yeyios and A. Psimarni, Convergence Analysis of the Modified SOR (MSOR) Method, Intern. J. Computer Math., Vol. 35, (1990), 231-244.
\bibitem{Young} D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.
\end{thebibliography}
===affil3: ===lastname5: ===affilother: ===title: Preconditioning augmented linear systems ===firstname2: Maria