We propose statistically optimal and computationally efficient dimensionality reduction techniques for finite-dimensional, goal-oriented Bayesian linear-Gaussian inverse problems in which the quantity of interest (QoI) is a linear function of the inversion parameters. In particular, we study approximations of the posterior covariance of the QoI as a low-rank negative update of its prior covariance, and prove optimality of this update with respect to the natural geodesic distance on the manifold of symmetric positive definite matrices. Assuming exact knowledge of the posterior mean, these optimality results extend to optimality in distribution with respect to the Kullback-Leibler divergence and the Hellinger distance between the associated distributions. We also propose approximations of the posterior mean of the QoI as a low-rank linear function of the data, and prove optimality of this approximation with respect to a weighted Bayes risk for squared-error loss, with weights given by the posterior precision matrix of the QoI. These optimal approximations avoid the explicit computation of the full posterior distribution of the parameters and focus instead on directions that are simultaneously well informed by the data and relevant to the QoI. These directions are obtained from the leading generalized eigenvectors of a suitable matrix pencil, and reflect a careful balance among all the ingredients of the goal-oriented inverse problem: prior information, forward model, measurement noise, and ultimate goals. We illustrate the theory with a high-dimensional inverse problem in heat transfer.
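To make the construction concrete, the following minimal NumPy/SciPy sketch assembles a synthetic goal-oriented linear-Gaussian problem and forms a rank-r negative update of the prior QoI covariance from generalized eigenpairs. All names here (G, O, Gamma_pr, Gamma_obs, r) are hypothetical stand-ins, and the pencil used, (O Gamma_pos O^T, O Gamma_pr O^T), is one natural choice consistent with the description above; the precise pencil and optimality statements are those established in the paper.

```python
# A minimal sketch of a goal-oriented low-rank covariance update,
# under assumed notation: y = G x + eps, QoI z = O x (all synthetic).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, m, p, r = 50, 30, 5, 3            # parameter, data, QoI dims; update rank

G = rng.standard_normal((m, n))      # hypothetical forward model
O = rng.standard_normal((p, n))      # hypothetical goal operator
A = rng.standard_normal((n, n))
Gamma_pr = A @ A.T + n * np.eye(n)   # prior covariance (SPD)
Gamma_obs = np.eye(m)                # observation noise covariance

# Exact posterior covariance of the parameters, then of the QoI z = O x.
H = G.T @ np.linalg.solve(Gamma_obs, G)              # data-misfit Hessian
Gamma_pos = np.linalg.inv(H + np.linalg.inv(Gamma_pr))
Gz_pr = O @ Gamma_pr @ O.T
Gz_pos = O @ Gamma_pos @ O.T

# Generalized eigenpairs of the pencil (Gz_pos, Gz_pr): sigma_i in (0, 1],
# with small sigma_i marking QoI directions most informed by the data.
# SciPy normalizes the eigenvectors so that W.T @ Gz_pr @ W = I.
sigma, W = eigh(Gz_pos, Gz_pr)       # ascending order
U = Gz_pr @ W                        # then Gz_pr = U @ U.T exactly

# Rank-r negative update of the prior QoI covariance.
Gz_hat = Gz_pr - U[:, :r] @ np.diag(1.0 - sigma[:r]) @ U[:, :r].T

# Geodesic (affine-invariant) distance on the SPD manifold between the
# rank-r approximation and the exact QoI posterior covariance.
lam = eigh(Gz_hat, Gz_pos, eigvals_only=True)
print("geodesic distance of rank-%d update: %.3e"
      % (r, np.sqrt(np.sum(np.log(lam) ** 2))))
```

Keeping all p eigenpairs reproduces the exact QoI posterior covariance, since the decrement Gz_pr - Gz_pos expands exactly as a sum of the rank-one terms (1 - sigma_i) u_i u_i^T; truncating to the r smallest sigma_i retains the directions in which the data are most informative about the goal.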