The Sparse Inverse Covariance Estimation problem arises in many statistical applications in Machine Learning and Signal Processing. In this problem, the inverse of the covariance matrix of a multivariate normal distribution is estimated, under the assumption that it is sparse. Such matrices are computed by solving an l1-regularized log-determinant optimization problem. Because of memory limitations, most existing algorithms cannot handle large-scale instances of this problem. In this talk we present a new block-coordinate descent approach for solving the problem on large-scale data sets. Our method treats the sought matrix block by block using quadratic approximations (Newton's method), and we show that this approach has several advantages over existing methods. Numerical experiments demonstrate the potential of the approach, especially for large-scale problems.
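As a minimal illustration of the problem being solved (not the block-coordinate Newton method presented in the talk), the following sketch uses scikit-learn's `GraphicalLasso` to estimate a sparse precision matrix from samples; the data dimensions and the regularization strength `alpha` are arbitrary choices for the example.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Draw samples from a multivariate normal whose true precision
# (inverse covariance) matrix is sparse (tridiagonal here).
rng = np.random.default_rng(0)
p = 5
true_prec = (np.eye(p)
             + np.diag(0.4 * np.ones(p - 1), k=1)
             + np.diag(0.4 * np.ones(p - 1), k=-1))
cov = np.linalg.inv(true_prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=2000)

# Solve the l1-regularized log-determinant problem
#   min_Theta  -log det(Theta) + tr(S Theta) + alpha * ||Theta||_1
# where S is the sample covariance. This off-the-shelf solver only
# illustrates the objective; it is not the large-scale method of the talk.
model = GraphicalLasso(alpha=0.05).fit(X)
precision = model.precision_  # estimated sparse inverse covariance
print(precision.shape)
```

The l1 penalty drives small off-diagonal entries of the estimate to exactly zero, which is what makes the estimated precision matrix sparse.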