Julianne Chung
Computing Optimal Low-Rank Regularized Inverse Matrices for Inverse Problems
Department of Mathematics
McBryde
Virginia Tech
225 Stanger Street
Blacksburg
VA 24061
jmchung@vt.edu
Matthias Chung
Inverse problems arise in scientific applications such as biomedical
imaging, computer graphics, computational biology, and geophysics, and
computing accurate solutions to such problems can be both
mathematically and computationally challenging.
Assume that $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^{m}$ are given; then a linear inverse problem can be written as

$$ b = A x + \varepsilon, \qquad (1) $$

where $x \in \mathbb{R}^{n}$ is the desired solution and $\varepsilon \in \mathbb{R}^{m}$ is additive noise.
We assume the underlying problem (1) is
ill-posed. A main challenge of ill-posed inverse problems is that
small errors in the data may result in large errors in the
reconstruction. In order to obtain meaningful reconstructions,
regularization is needed to stabilize the inversion process. This is
typically done by incorporating prior knowledge of the unknown
solution and of the noise in the data. Here, we incorporate probabilistic
information and assume that the true parameters $x$ and the noise
$\varepsilon$ are realizations of random variables from some
(possibly unknown) probability distributions $\pi_x$ and $\pi_\varepsilon$,
both with finite second moments.
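To illustrate the difficulty, the following minimal Python sketch (the test problem and all sizes are hypothetical choices for illustration) builds a severely ill-conditioned forward matrix and shows that a tiny perturbation of the data destroys a naive, unregularized inversion:

import numpy as np

# Hypothetical ill-posed test problem: the Hilbert matrix is a classic
# severely ill-conditioned forward operator.
rng = np.random.default_rng(0)
n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true
b_noisy = b + 1e-8 * rng.standard_normal(n)  # tiny additive noise

x_naive = np.linalg.solve(A, b_noisy)        # unregularized inversion
print(f"cond(A): {np.linalg.cond(A):.1e}")   # roughly 1e16
# The reconstruction error is many orders of magnitude larger than the
# perturbation of the data.
print(f"relative error: {np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true):.1e}")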
In this work, we are interested in finding a low-rank optimal
regularized inverse matrix $Z \in \mathbb{R}^{n \times m}$
that gives a small reconstruction error. That is, $\rho(Z b, x)$
should be small for some given error measure $\rho$, e.g.,
$\rho(Z b, x) = \| Z b - x \|_2^2$. The particular choice of $\rho$
and of the distributions $\pi_x$ and $\pi_\varepsilon$ determines the
regularization matrix $Z$. Notice that once $Z$ is found, we can
efficiently compute a reconstruction $\widehat{x}$ by simple
matrix-vector multiplication, $\widehat{x} = Z b$. Our
approach is especially suitable for large-scale problems
where (1) is solved repeatedly for various $b$.
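To make the computational advantage concrete, here is a minimal sketch (all sizes and the stand-in factors are hypothetical; in practice the factors would come from the training procedure below) of reconstructing with a rank-$r$ matrix kept in factored form:

import numpy as np

rng = np.random.default_rng(1)
m, n, r = 2000, 1500, 20

# Stand-in rank-r factors; a trained low-rank Z would be stored as Z = Z1 @ Z2.
Z1 = rng.standard_normal((n, r))
Z2 = rng.standard_normal((r, m))

b_new = rng.standard_normal(m)   # a new data vector
x_hat = Z1 @ (Z2 @ b_new)        # two thin matvecs: O(r (m + n)) flops
                                 # instead of O(m n) for a dense n-by-m Z
print(x_hat.shape)               # (1500,)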
In this talk, we focus on efficient approaches to numerically compute
optimal low-rank regularized inverse matrices. In real-life applications,
the probability distributions $\pi_x$ and $\pi_\varepsilon$ are typically
not known explicitly.
However, in many applications, calibration or training data are readily
available, and these data can be used to compute a good regularization
matrix.
Let $b^{(k)} = A x^{(k)} + \varepsilon^{(k)}$ for $k = 1, \ldots, N$,
where $x^{(k)}$ and $\varepsilon^{(k)}$ are
independently drawn from the corresponding probability distributions.
Then the goal is to solve the empirical Bayes risk minimization
problem,

$$ \min_{\operatorname{rank}(Z) \leq r} \; \frac{1}{N} \sum_{k=1}^{N} \left\| Z b^{(k)} - x^{(k)} \right\|_2^2. \qquad (2) $$
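For the squared two-norm error measure, (2) is a rank-constrained linear least-squares problem. As a point of reference, the following minimal Python sketch computes a minimizer via the classical truncated-SVD-based closed form $Z = [X B^{+} B]_r B^{+}$, where $B$ and $X$ collect the training data as columns and $[\cdot]_r$ denotes the best rank-$r$ approximation. This is a dense baseline, not the rank-update algorithm discussed in the talk, and all sizes and distributions in the test are illustrative:

import numpy as np

def low_rank_empirical_bayes_Z(B, X, r):
    """Rank-r minimizer of (1/N) sum_k ||Z b^(k) - x^(k)||_2^2.
    B: m-by-N noisy data b^(k) as columns; X: n-by-N true signals x^(k).
    """
    B_pinv = np.linalg.pinv(B)
    XP = X @ (B_pinv @ B)                  # project X onto the row space of B
    U, s, Vt = np.linalg.svd(XP, full_matrices=False)
    XP_r = (U[:, :r] * s[:r]) @ Vt[:r, :]  # best rank-r approximation of X P
    return XP_r @ B_pinv                   # Z = [X P]_r B^+

# Small synthetic check with illustrative data:
rng = np.random.default_rng(0)
m, n, N, r = 60, 50, 400, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
X = rng.standard_normal((n, N))                  # training draws of x
B = A @ X + 0.05 * rng.standard_normal((m, N))   # b^(k) = A x^(k) + eps^(k)
Z = low_rank_empirical_bayes_Z(B, X, r)
print(np.linalg.matrix_rank(Z))                  # at most r
print(np.linalg.norm(Z @ B - X) / np.linalg.norm(X))  # relative empirical risk

Storing $Z$ in its thin factored form also recovers the cheap matrix-vector reconstruction sketched above.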
By using an empirical Bayes risk framework to compute an optimal
regularized inverse matrix directly, we are able to avoid
including the distributions $\pi_x$ and $\pi_\varepsilon$ explicitly
in the problem formulation and instead learn the
necessary information from training data. Once the matrix is computed, only a
simple matrix-vector multiplication is required to solve the inverse
problem. In this talk, we discuss an algorithm that uses a rank-update
technique to efficiently calculate an optimal low-rank regularized
inverse matrix $\widehat{Z}$.
Copper Mountain
2014-02-24