Julianne Chung
Computing Optimal Low-Rank Regularized Inverse Matrices for Inverse Problems

Department of Mathematics
McBryde
Virginia Tech
225 Stanger Street
Blacksburg, VA 24061
jmchung@vt.edu
Matthias Chung

Inverse problems arise in scientific applications such as biomedical imaging, computer graphics, computational biology, and geophysics, and computing accurate solutions to inverse problems can be both mathematically and computationally challenging.

Assume that $ {\bf A} \in \mathbb{R}^{m \times n}$ and $ {\bf b} \in \mathbb{R}^{m}$ are given; then a linear inverse problem can be written as

$\displaystyle {\bf A} \boldsymbol{\xi} + \boldsymbol{\delta} = {\bf b},$ (1)

where $ \boldsymbol{\xi}\in\mathbb{R}^n$ is the desired solution and $ \boldsymbol{\delta}\in\mathbb{R}^m$ is additive noise. We assume the underlying problem (1) is ill-posed. A main challenge of ill-posed inverse problems is that small errors in the data may result in large errors in the reconstruction. In order to obtain meaningful reconstructions, regularization is needed to stabilize the inversion process. This is typically done by incorporating prior knowledge of the unknown solution and of the noise in the data. Here, we incorporate probabilistic information and assume that the true parameters $ \boldsymbol{\xi} \in \boldsymbol{\Xi}$ and the noise $ \boldsymbol{\delta} \in \boldsymbol{\Delta}$ are realizations of random variables from some (possibly unknown) probability distributions $ \mathcal{P}_{\boldsymbol{\Xi}}$ and $ \mathcal{P}_{\boldsymbol{\Delta}},$ both with finite second moments.

In this work, we are interested in finding a low-rank optimal regularized inverse matrix $ {\bf\hat Z} \in \mathbb{R}^{n \times m}$ that gives a small reconstruction error. That is, $ \rho({\bf\hat Z} {\bf b} - \boldsymbol{\xi})$ should be small for some given error measure $ \rho:\mathbb{R}^n \to \mathbb{R}^+_0$, e.g., $ \rho({\bf x}) = \left\Vert{\bf x}\right\Vert _{p}$. The particular choices of $ \rho$, $ \mathcal{P}_{\boldsymbol{\Xi}}$, and $ \mathcal{P}_{\boldsymbol{\Delta}}$ determine the regularization matrix $ {\bf\hat Z}$. Notice that once $ {\bf\hat Z}$ is found, we can efficiently compute an approximation to $ \boldsymbol{\xi}$ by the simple matrix-vector multiplication $ \boldsymbol{\hat\xi} = {\bf\hat Z} {\bf b}$. Our approach is especially suitable for large-scale problems where (1) must be solved repeatedly for various $ {\bf b}$.
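
As a minimal sketch of this reconstruction step (continuing the synthetic example above), suppose some matrix $ {\bf\hat Z}$ has already been computed offline; applying it is a single matrix-vector product, and many data vectors can be handled at once as one matrix-matrix product. The stand-in Z_hat below is a crude truncated pseudoinverse used only so that the lines execute; it is not the optimal matrix discussed here.

    # Z_hat stands in for a precomputed regularized inverse (assumption:
    # a crude truncated pseudoinverse, NOT the optimal matrix from the talk).
    Z_hat = np.linalg.pinv(A, rcond=1e-2)

    xi_hat = Z_hat @ b                    # reconstruction: one matrix-vector product
    err = np.linalg.norm(xi_hat - xi)     # rho chosen here as the 2-norm

    B_new = A @ rng.standard_normal((n, 50)) + 0.01 * rng.standard_normal((m, 50))
    Xi_hat = Z_hat @ B_new                # 50 hypothetical data vectors, handled at once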

In this talk, we focus on efficient approaches for numerically computing optimal low-rank regularized inverse matrices. In real-life applications, the probability distributions $ \mathcal{P}_{\boldsymbol{\Xi}}$ and $ \mathcal{P}_{\boldsymbol{\Delta}}$ are typically not known explicitly. However, in many applications, calibration or training data are readily available, and these data can be used to compute a good regularization matrix. Let $ {\bf b}^{(k)}= {\bf A} \boldsymbol{\xi}^{(k)} + \boldsymbol{\delta}^{(k)}$ for $ k = 1,\ldots, K$, where $ \boldsymbol{\xi}^{(k)}$ and $ \boldsymbol{\delta}^{(k)}$ are independently drawn from the corresponding probability distributions. Then the goal is to solve the empirical Bayes risk minimization problem,

$\displaystyle \min_{{\rm rank}({\bf Z})\leq r} \frac{1}{K} \sum_{k = 1}^K \rho({\bf Z b}^{(k)} - \boldsymbol{\xi}^{(k)}) .$ (2)
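
As a concrete (and deliberately naive) illustration of (2), the sketch below continues the Python example: it takes $ \rho$ to be the squared 2-norm, draws training pairs from an assumed prior, fits the unconstrained least-squares matrix, and truncates its SVD to rank $ r$. This is only a simple baseline under those assumptions; it is not the rank-update algorithm discussed in the talk, and the truncation step is a heuristic rather than a provably optimal solution of (2).

    # Sketch of (2) with rho(x) = ||x||_2^2, using generic dense linear algebra.
    # Training pairs are stacked as columns:
    #   B_train  = [b^(1), ..., b^(K)],  Xi_train = [xi^(1), ..., xi^(K)].
    def low_rank_inverse(B_train, Xi_train, r):
        """Heuristic rank-r Z for min (1/K) sum ||Z b^(k) - xi^(k)||_2^2:
        solve the unconstrained least-squares problem, then truncate its SVD.
        (A baseline only, not the talk's rank-update algorithm.)"""
        Z_ls = Xi_train @ np.linalg.pinv(B_train)       # unconstrained minimizer
        U, s, Vt = np.linalg.svd(Z_ls, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r, :]           # keep the r largest singular values

    # Assumed prior for training signals: smooth random walks (an assumption)
    K = 200
    Xi_train = np.cumsum(rng.standard_normal((n, K)), axis=0) / np.sqrt(n)
    B_train = A @ Xi_train + 0.01 * rng.standard_normal((m, K))

    Z_hat_r = low_rank_inverse(B_train, Xi_train, r=20)
    xi_rec = Z_hat_r @ b    # reconstruct the earlier data b with the trained matrix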

By using an empirical Bayes risk framework to compute an optimal regularized inverse matrix directly, we are able to avoid including $ {\bf A}$ in the problem formulation and instead learn the necessary information from training data. Once the matrix is computed, only a simple matrix-vector multiplication is required to solve the inverse problem. In this talk, we discuss an algorithm that uses a rank-update technique to efficiently calculate an optimal low-rank regularized inverse matrix $ {\bf\hat Z}$.



