
Hilari C. Tiedeman
Multilevel Schur Complement Preconditioning for Multi-Physics Simulations

Southern Methodist University
Department of Mathematics
PO Box 750156
Dallas, Texas 75275-0156
htiedeman@smu.edu
Daniel R. Reynolds

Advection-diffusion PDEs arise in models of many physical applications in science and engineering. In this talk, we focus on a scalar-valued advection-diffusion-reaction equation of the form

$\displaystyle \partial_t u = \nabla\cdot(\beta u) + \mu\,\nabla\cdot(D\nabla u) + f,$

where $\beta$ is a vector-valued advection coefficient that may depend on $u$, $\mu$ is a scalar diffusion parameter, $D$ is a matrix of diffusion coefficients, and $f(u,t)$ is a forcing term. Within different parameter regimes there exist optimally scalable solvers for problems of this type; however, no single solver yet applies well within all regimes of physical interest.

Domain decomposition methods display nearly optimal parallel scalability within the advection-dominated regime, but typically do not scale well for diffusion-dominated problems; the converse holds for multigrid methods. Our goal is a single method that is scalable in both regimes, obtained through a hybrid approach combining restricted additive Schwarz (domain decomposition) and geometric multigrid.

Our approach begins with a Schur complement formulation of the linearized implicit system $Au=f$, as in the FETI (Farhat and Roux, 1991), BDD (Mandel, 1993), and BDDC (Dohrmann, 2003) algorithms, where the unknowns $u$ are split into two sets: those residing in a subdomain interior, $u_I$, and those residing at inter-processor boundaries, $u_\Gamma$. With this decomposition, one may similarly decompose the full Jacobian matrix into four associated blocks,

$\displaystyle A = \left[\begin{array}{cc} A_{I,I} & A_{I,\Gamma} \\ A_{\Gamma,I} & A_{\Gamma,\Gamma} \end{array}\right].$
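To make this splitting concrete, the sketch below (our illustration, not part of the talk; plain Python with SciPy, where the index arrays `interior` and `interface` are hypothetical outputs of a domain partitioner) extracts the four blocks from an assembled sparse Jacobian.

```python
import numpy as np
import scipy.sparse as sp

def split_blocks(A, interior, interface):
    """Partition a sparse Jacobian A into the 2x2 block form induced by
    the interior (u_I) and inter-processor boundary (u_Gamma) index sets."""
    A = sp.csr_matrix(A)
    A_II = A[interior, :][:, interior]    # interior-interior coupling
    A_IG = A[interior, :][:, interface]   # interior-interface coupling
    A_GI = A[interface, :][:, interior]   # interface-interior coupling
    A_GG = A[interface, :][:, interface]  # interface-interface coupling
    return A_II, A_IG, A_GI, A_GG
```

In the parallel setting each processor owns one diagonal block of $A_{I,I}$, so this partition is never formed globally; the serial sketch is purely illustrative.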

Since the $A_{I,I}$ matrix is itself block-diagonal, with each block corresponding to the Jacobian matrix dependencies within a single processor, we can apply $A^{-1}_{I,I}$ using a standard sparse direct solver. Elimination of this block results in a Schur complement system for the interface nodes,

$\displaystyle \left(A_{\Gamma,\Gamma}-A_{\Gamma,I}A^{-1}_{I,I}A_{I,\Gamma}\right)u_\Gamma = g_\Gamma,$

where $g_\Gamma = f_\Gamma - A_{\Gamma,I}A^{-1}_{I,I}f_I$ is formed during the first elimination step, and the Schur complement matrix is given by $S = A_{\Gamma,\Gamma}-A_{\Gamma,I}A^{-1}_{I,I}A_{I,\Gamma}$.
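A minimal serial sketch of this elimination, under the block layout assumed above (in practice each processor factors only its own diagonal block of $A_{I,I}$): SciPy's sparse LU factorization (`splu`) stands in for the per-processor direct solver, and a `LinearOperator` realizes the matrix-free action of $S$.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def schur_system(A_II, A_IG, A_GI, A_GG, f_I, f_G):
    """Build a matrix-free operator for S = A_GG - A_GI A_II^{-1} A_IG
    and the reduced right-hand side g_Gamma = f_G - A_GI A_II^{-1} f_I."""
    lu = spla.splu(sp.csc_matrix(A_II))  # sparse direct factorization of A_II
    n = A_GG.shape[0]
    S = spla.LinearOperator(
        (n, n), matvec=lambda x: A_GG @ x - A_GI @ lu.solve(A_IG @ x))
    g = f_G - A_GI @ lu.solve(f_I)       # first elimination step
    return S, g, lu
```

Once $S u_\Gamma = g_\Gamma$ has been solved, the interior unknowns are recovered by back-substitution, $u_I = A^{-1}_{I,I}(f_I - A_{I,\Gamma}u_\Gamma)$, i.e. `lu.solve(f_I - A_IG @ u_G)`.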

While traditional domain-decomposition methods do not solve this global Schur complement system directly (and instead solve an interface system based on only a small fraction of the unknowns), we solve the full interface system using a multilevel technique similar to multigrid. Here, our fine-grid problem consists of the entire Schur complement (interface) system. We then proceed through a traditional set of V-cycle iterations, where each level operates on an increasingly coarse subset of the full interface system. However, due to the $A^{-1}_{I,I}$ term in $S$, all residual corrections are evaluated in a matrix-free fashion, using FGMRES as a smoother.
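The following two-level sketch illustrates one V-cycle in this spirit; it is a schematic under stated assumptions, not the authors' implementation. SciPy provides no FGMRES, so its restarted `gmres` stands in for the smoother, `P` is a hypothetical prolongation matrix from a coarse interface subset, and `S_coarse` is the corresponding coarse-level operator.

```python
import scipy.sparse.linalg as spla

def two_level_vcycle(S, b, u0, P, S_coarse, n_smooth=5):
    """One V-cycle on the interface system S u = b, with S applied
    matrix-free and a few GMRES iterations used as the smoother."""
    # Pre-smooth: n_smooth Krylov iterations on the fine interface system.
    u, _ = spla.gmres(S, b, x0=u0, restart=n_smooth, maxiter=1)
    r = b - S @ u                  # fine-level residual (matrix-free apply of S)
    r_c = P.T @ r                  # restrict to the coarse interface subset
    e_c, _ = spla.gmres(S_coarse, r_c, restart=30, maxiter=5)  # coarse correction
    u = u + P @ e_c                # prolong the correction to the fine level
    # Post-smooth.
    u, _ = spla.gmres(S, b, x0=u, restart=n_smooth, maxiter=1)
    return u
```

A flexible method such as FGMRES is attractive here precisely because the smoother itself is an inner Krylov iteration, so the effective preconditioner changes from one outer iteration to the next.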

After detailing our algorithm, we present simulation results obtained on the Ranger supercomputer at TACC. We investigate problems in both the advection- and diffusion-dominated regimes, and examine the scalability of both iteration count and wall-clock time.



