Iterative methods are often used to solve inverse problems, but slow convergence can mean that many iterations are needed to obtain a reasonable solution. Preconditioners have been proposed to accelerate the convergence of iterative methods; however, inverse problems, and ill-posed inverse problems in particular, are notoriously difficult to precondition. Optimal regularized inverse matrices were recently proposed and studied for computing solutions to inverse problems. In this talk, we investigate their use as preconditioners for solving inverse problems. We consider an alternating optimization update approach for Bayes risk minimization and show that our approach not only accelerates the convergence of iterative methods but also stabilizes the inversion, since it inherently incorporates regularization. Theoretical results provide insight into the approach, and numerical results on image processing examples demonstrate the benefits of the preconditioner.
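As a rough illustration of the underlying idea, and not of the optimal regularized inverse matrices or the alternating optimization approach of the talk, the sketch below uses a simple Tikhonov-based approximate inverse as a preconditioner for a Richardson iteration on a toy one-dimensional deblurring problem. All names, the test problem, and the parameter values (`alpha`, `omega`, the noise level) are illustrative assumptions.

```python
import numpy as np

# Toy 1-D deblurring problem: A is a severely ill-conditioned Gaussian
# blurring matrix, and b is blurred data with a little additive noise.
rng = np.random.default_rng(0)
n = 64
grid = np.arange(n)
A = np.exp(-0.1 * (grid[:, None] - grid[None, :]) ** 2)
x_true = np.zeros(n)
x_true[20:40] = 1.0                      # piecewise-constant "image"
b = A @ x_true + 1e-3 * rng.standard_normal(n)

# A Tikhonov-based approximate inverse P = (A^T A + alpha I)^{-1} A^T,
# built from the SVD of A, stands in for a regularized inverse matrix;
# alpha is an assumed regularization parameter.
U, s, Vt = np.linalg.svd(A)
alpha = 1e-2
P = Vt.T @ np.diag(s / (s ** 2 + alpha)) @ U.T

def richardson(M, rhs, iters, omega=1.0):
    """Richardson iteration x_{k+1} = x_k + omega * (rhs - M @ x_k)."""
    x = np.zeros_like(rhs)
    for _ in range(iters):
        x += omega * (rhs - M @ x)
    return x

# Unpreconditioned iteration on A x = b (omega chosen small enough to
# converge) versus the preconditioned iteration on P A x = P b: since
# P A is close to the identity on the well-determined subspace,
# convergence is fast, while the Tikhonov damping built into P keeps
# the noisy, small-singular-value components from being amplified.
x_plain = richardson(A, b, iters=20, omega=0.3)
x_pre = richardson(P @ A, P @ b, iters=20)

err_plain = np.linalg.norm(x_plain - x_true) / np.linalg.norm(x_true)
err_pre = np.linalg.norm(x_pre - x_true) / np.linalg.norm(x_true)
print(f"relative error, plain:          {err_plain:.3f}")
print(f"relative error, preconditioned: {err_pre:.3f}")
```

In this toy setting the preconditioned iteration reaches a smaller reconstruction error in the same number of iterations, and the regularization inside the preconditioner prevents the noise amplification that an exact inverse would cause.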