===affil2: ARM Ltd., R\&D, Cambridge CB1 9NJ, UK ===firstname: Jakub ===firstname4: ===firstname3: ===lastname2: Kershaw ===lastname: Mare{\v c}ek ===firstname5: ===affil6: ===lastname3: ===email: jakub.marecek@ed.ac.uk ===lastname6: ===affil5: ===otherauths: ===lastname4: ===affil4: ===lastname7: ===affil7: ===firstname7: ===postal: School of Mathematics The University of Edinburgh James Clerk Maxwell Building The King's Buildings Edinburgh EH9 3JZ ===firstname6: ===ABSTRACT: \newcommand{\R}{\ensuremath{\mathbbm{R}}} \newcommand{\N}{\ensuremath{\mathbbm{N}}} \newcommand{\prob}[2]{\underset{#1}{\rm Prob}\left[{#2}\right]} \newcommand{\norm}[1]{\left\| {#1} \right\|} \DeclareMathOperator{\st}{s. t.} \DeclareMathOperator{\kron}{\,\otimes\,} \DeclareMathOperator{\diag}{diag} Least squares problems rank among the most fundamental in Scientific Computing, Signal Processing, and Statistics. Over the years, numerous variants of the problem have been studied. One variant, which has attracted much attention in recent years, is: {\sc Box-constrained Integer Least Squares (BILS)}: Given integers $m, n > 0$, a subset of integers $Q$, a matrix $H \in \R^{m \times n}$, and $y \in \R^m$, return $x\in Q^n$ minimising $\norm{y - Hx}^2$. Solving {\sc BILS} to optimality is hard, in a number of precise senses. Notably, it is NP-hard to approximate within any constant factor \cite{MR1462727}. We present an iterative solver for the problem, based on novel convex programming relaxations. These relaxations strengthen the least squares relaxation obtained by omitting the integrality constraints. In each iteration, a small number of violated valid inequalities is added, and the relaxation is reoptimised. Numerous convex programming relaxations of {\sc BILS} have been proposed. (See Table~\ref{tab:relaxations} in the Appendix.) They seem, however, to be either too expensive to compute (e.g., high-dimensional semidefinite programming) or too weak (e.g., linear programming).
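As a point of reference for the relaxations below, a minimal sketch in pure Python (with hypothetical data $H$, $y$, $Q$) contrasting the exact solution of {\sc BILS} by exhaustive enumeration with the plain least squares relaxation:

```python
# A minimal sketch (hypothetical data) of BILS: exact solution by exhaustive
# enumeration over Q^n versus the unconstrained least squares relaxation.
from itertools import product

H = [[2.0, 1.0],
     [1.0, 3.0],
     [1.0, 1.0]]           # m = 3, n = 2
y = [3.1, 4.9, 2.2]
Q = [0, 1, 2, 3]           # the finite alphabet ("box") of BILS

def residual_sq(x):
    """||y - Hx||^2 for a candidate x."""
    return sum((yi - sum(Hij * xj for Hij, xj in zip(row, x))) ** 2
               for row, yi in zip(H, y))

# Exact BILS: enumerate all |Q|^n candidates (exponential in n, hence the
# interest in convex relaxations for larger instances).
x_bils = min(product(Q, repeat=2), key=residual_sq)

# Least squares relaxation: drop x in Q^n and solve the 2x2 normal
# equations H^T H x = H^T y by Cramer's rule.
A = [[sum(H[k][i] * H[k][j] for k in range(3)) for j in range(2)]
     for i in range(2)]
b = [sum(H[k][i] * y[k] for k in range(3)) for i in range(2)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x_ls = [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
        (A[0][0] * b[1] - A[1][0] * b[0]) / det]

print("exact BILS:", x_bils, "residual^2 =", residual_sq(x_bils))
print("LS relaxation:", [round(v, 3) for v in x_ls])
```

On this toy instance, rounding the relaxed solution componentwise happens to recover the exact optimum; in general it need not, which is what motivates the stronger relaxations proposed here.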
We propose a hierarchy of progressively stronger relaxations in second-order cone programming (SOCP): \begin{align} \min c^T x \\ \st \left\| A_i x + b_i \right\|_2 & \leq c_i^T x + d_i, \quad i = 1,\dots,m \notag \\ Ax & = b. \notag \end{align} Notice that solving an SOCP is only moderately more expensive than solving a linear program, yet allows one to express the least-squares objective. Our relaxations are based on the reformulation attributed to Atamt{\" u}rk: \begin{align} & \min_{x \in Q^n} \norm{y - Hx}^2 \label{eqn:ils} \\ = & \min_{x \in Q^n} x^T H^T H x - 2 y^T H x + y^T y \\ = & \min_{x \in Q^n, t \in \R^n} t^T t \st Hx - y \le t, \; y - Hx \le t \\ = & \min_{x \in Q^n, t_0 \in \R, t \in \R^n} t_0 \st |y - Hx| \le t, \; t^T t \le t_0. \label{eqn:reformulated} \end{align} By removing the constraint $x \in Q^n$ in the original problem (\ref{eqn:ils}), we would obtain the usual least squares relaxation. By removing the same constraint in the reformulated problem (\ref{eqn:reformulated}), we obtain the single-cone SOCP relaxation: \begin{align} \label{socp} z_r = \min_{x \in \R^n, t_0 \in \R, t \in \R^n } t_0 \st |Hx - y| \le t, \; t^T t \le t_0. \end{align} This relaxation (\ref{socp}) can be strengthened by the addition of violated valid inequalities (``cuts''), including variants of Chv{\'a}tal-Gomory cuts \cite{Cezik2005} and conic mixed-integer rounding (MIR) cuts \cite{Atamturk2007}. Let us define the conic MIR function. For $0 \le f \le 1$, the conic rounding function $\phi_f : \R \to \R$ is given, for each $n \in \Z$, by: $$\phi_f(a) = \begin{cases} (1 - 2f)n - (a - n) & \text{if $n \le a \le n + f$,} \\ (1 - 2f)n + (a - n) - 2f &\text{if $n + f \le a \le n + 1$.} \end{cases}$$ The extension to vectors is element-wise. For a constraint $\sum_{j=1}^{n} a_j x_j - b \le t$ of the pure integer program (\ref{socp}) and $\alpha \ne 0$, the conic rounding cut hence is: $$\sum_{j = 1}^{n} \phi_{f_{\alpha}}(a_j / \alpha) \, x_j - \phi_{f_{\alpha}}(b / \alpha) \le t / |\alpha|.$$ It is easy to see that for $\alpha \ne 0$ and $f_{\alpha} = b/\alpha - \left\lfloor {b / \alpha} \right\rfloor$, conic rounding cuts are valid for pure integer programs (\ref{socp}). With each cut added, we obtain a new relaxation. The choice of a cut has to take into account both its violation and its numerical safety \cite{Cook2009}. After at most three cuts are added in a row, the newly obtained relaxation is reoptimised, possibly warm-started from the previous solution. The optimisation of the second-order cone program is, again, performed using an iterative method, such as a primal-dual interior point method. The complexity of each iteration of the interior point method is dominated by the complexity of inverting an $(m+c) \times (n+ m + 1)$ matrix, where $m, n$ are the dimensions of the original matrix $H$ and $c$ is the number of cuts added. The Appendix details the promising computational performance of this approach. This approach has led to the development of low-power application-specific integrated circuits for the problem, which is of great importance in mobile communications. Systolic digital designs for much of the interior point method are already available. This could be seen as an extension of Boyd's notion of real-time convex programming to real-time integer convex programming. ===affil3: ===lastname5: ===affilother: ===title: Iterative Methods for Integer Least Squares ===firstname2: Danny