Iterative solvers for eigenvalue computation are widely used in physics and numerical applications. As with linear solvers, they usually account for a major share of the overall computing time of an application. For many years, such numerical kernels have therefore been optimized for architectures like massively parallel computers. More recently, new technologies based on accelerators, such as the Cell processor or GPGPUs, have emerged. These new multicore and hybrid architectures raise new issues, including arithmetic accuracy, efficient programming, and hybrid parallelization. In this talk, we will study the numerical behavior of heterogeneous systems, such as a CPU combined with a GPU or an IBM Cell processor, for several orthogonalization processes used in the Arnoldi method, which are crucial for eigensolvers. We focus on the influence of the accelerators' different floating-point arithmetic handling on Gram-Schmidt orthogonalizations (classical, modified, and re-orthogonalized versions) in single and double precision. We will present performance results on dense and sparse matrices and discuss hybrid parallelization combining classical message passing with highly multithreaded GPGPU parallelization.
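
To illustrate why the choice of Gram-Schmidt variant and floating-point precision matters, the following is a minimal CPU-only NumPy sketch (not from the talk, and not the accelerator implementations studied there): it compares the loss of orthogonality of classical versus modified Gram-Schmidt on a moderately ill-conditioned test matrix in single and double precision. The function names, matrix sizes, and conditioning are illustrative assumptions.

```python
# Hypothetical illustration: classical vs. modified Gram-Schmidt,
# single vs. double precision, measured by ||I - Q^T Q||.
import numpy as np

def classical_gram_schmidt(A):
    """Orthonormalize the columns of A with classical Gram-Schmidt (CGS)."""
    m, n = A.shape
    Q = np.zeros_like(A)
    for j in range(n):
        v = A[:, j].copy()
        # All projection coefficients are taken against the original column,
        # which is cheaper to parallelize but less numerically stable.
        for i in range(j):
            v -= (Q[:, i] @ A[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def modified_gram_schmidt(A):
    """Orthonormalize the columns of A with modified Gram-Schmidt (MGS)."""
    m, n = A.shape
    Q = A.copy()
    for j in range(n):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        # Projections are removed from the already-updated columns,
        # which improves stability at the cost of more synchronization.
        for k in range(j + 1, n):
            Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
    return Q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Moderately ill-conditioned matrix (illustrative choice) to expose
    # the loss of orthogonality in reduced precision.
    A = rng.standard_normal((500, 50)) @ np.diag(np.logspace(0, -6, 50))
    for dtype in (np.float32, np.float64):
        B = A.astype(dtype)
        for name, gs in (("CGS", classical_gram_schmidt),
                         ("MGS", modified_gram_schmidt)):
            Q = gs(B)
            err = np.linalg.norm(np.eye(B.shape[1], dtype=dtype) - Q.T @ Q)
            print(f"{name} {np.dtype(dtype).name}: ||I - Q^T Q|| = {err:.2e}")
```

In such an experiment, classical Gram-Schmidt typically loses noticeably more orthogonality than the modified variant, and the gap widens in single precision; re-orthogonalized versions trade extra work for recovering orthogonality, which is one of the trade-offs discussed in the talk.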