We are concerned with the solution of linear systems of equations $Ax = b$, where $A$ is a nonsingular $n \times n$ matrix.
On many emerging computer architectures, the use of single precision arithmetic (by which we mean working with 32-bit floating-point numbers) is faster than using double precision. In fact, on the Cell processor, single precision working can be more than ten times as fast as double precision [2]. In addition, half the storage is required when using single precision, and the movement of data between memory hierarchies, cache, and processing units is much reduced. However, many applications require higher accuracy than single precision (with a value of machine precision around $10^{-8}$), or the matrix can be so ill-conditioned that single precision working is unable to obtain accuracy to even one significant figure; that is, the results are meaningless.
We show how the selective use of double precision post-processing can enable solutions with a backward error (scaled residual) of double precision accuracy (around $10^{-16}$) even when the factorization is computed in single precision. We show that iterative refinement in double precision may fail when the matrix is ill-conditioned, and then show that, even for such badly behaved matrices, the use of FGMRES [3] can produce answers to the desired level of accuracy; that is, the solution process using FGMRES is backward stable at the level of double precision. In [1], we prove that, under realistic assumptions on the matrix and the factorization, the computation in mixed precision, where the LU factorization is computed in single precision and the FGMRES iteration in double precision, gives a backward stable algorithm.
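To make the scheme concrete, the following is a minimal MATLAB sketch of mixed-precision iterative refinement, not the code used in our experiments: the LU factorization is computed once in single precision, while residuals and corrections are handled in double precision. The parameters maxit and tol are illustrative.

% Mixed-precision iterative refinement (illustrative sketch):
% factorize once in single precision, refine in double precision.
[L, U, P] = lu(single(A));                 % LU factorization in single precision
x = double(U \ (L \ (P * single(b))));     % initial solve, promoted to double
for k = 1:maxit
    r = b - A*x;                           % residual computed in double precision
    % stop once the scaled residual (backward error) is small
    if norm(r, inf) <= tol*(norm(A, inf)*norm(x, inf) + norm(b, inf))
        break
    end
    d = double(U \ (L \ (P * single(r)))); % correction via the single precision factors
    x = x + d;                             % update the solution in double precision
end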
We perform an extensive series of tests using MATLAB on random dense matrices constructed to have specific condition numbers and singular value distributions. We compute the key quantities that appear in our theoretical analysis and show that the results support our theory. Since a main future development of this work is to study the efficient solution of large sparse systems on multicore architectures, we then perform numerical experiments on a set of sparse matrices using a combination of Fortran and MATLAB.
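As an illustration of how dense test matrices of this kind can be generated, MATLAB's gallery function provides random matrices with a prescribed condition number and singular value distribution; the dimension, condition number, and mode below are illustrative choices, not those of our test set.

n     = 100;                  % matrix dimension
kappa = 1e7;                  % prescribed 2-norm condition number
mode  = 3;                    % geometrically distributed singular values
A = gallery('randsvd', n, kappa, mode);   % random dense test matrix
b = A * ones(n, 1);           % right-hand side with known solution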
We show the results of runs on some rather ill-conditioned sparse matrices in the accompanying table. In this case we use restarted FGMRES, since the cost of keeping too many vectors can be high for systems of these larger dimensions. We note that, although iterative refinement essentially converges on the first three examples, it is still not as accurate as FGMRES and requires many more iterations. On the last example, the convergence of iterative refinement was so slow that we stopped it after 53 iterations.
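As a sketch of the restarted approach, the single precision factors can be wrapped as a preconditioner for MATLAB's built-in gmres; note that gmres implements preconditioned GMRES with a fixed preconditioner, whereas our analysis and experiments use FGMRES [3], and the values of restart, tol, and maxit here are illustrative.

[L, U, P] = lu(single(A));                        % single precision factorization
mfun = @(r) double(U \ (L \ (P * single(r))));    % apply the factors in single precision
restart = 30; tol = 1e-14; maxit = 20;            % illustrative parameters
[x, flag, relres, iters] = gmres(A, b, restart, tol, maxit, mfun);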