CM2024: 18TH COPPER MOUNTAIN CONFERENCE ON ITERATIVE METHODS
PROGRAM FOR WEDNESDAY, APRIL 17TH

07:00-08:15 Breakfast [Copper Conference Center]
08:00-10:05 Session 8A: Efficient Training of Neural Networks [Bighorn B]
Chair:
08:00
Progress in Layer-Parallel Neural Networks Training and Inference
PRESENTER: Eric Cyr

ABSTRACT. Training of deep neural networks is computationally costly due to slow convergence of optimizers like stochastic gradient descent, coupled with limited parallelism in the forward- and backward-propagation dimension. The latter point implies that there is an upper bound on the strong scaling of these methods using readily available data and model parallelism. This talk presents recent advances in a layer-parallel approach that distributes layers across processors to overcome the inherent serialization in forward and backward propagation. Results are presented with and without GPU acceleration for the Tiny ImageNet and MNIST image classification data sets, as well as for recurrent neural networks. Overall, layer-parallel enables fast training of DNNs in both strong and weak scaling contexts. Several new advances in applying layer-parallel algorithms are also detailed. Integration of layer-parallel with data-parallel algorithms is presented for the first time, showing the computational advantages of the combination. Standard deep learning techniques, like batch normalization, are developed for layer-parallel training. Finally, a new approach combining layer-parallel with spatial coarsening to accelerate training for 3D image classification shows roughly a 10x speedup over serial execution.

08:25
Efficient Subproblem Solves for Convergent Neural Network Training
PRESENTER: Aurya Javeed

ABSTRACT. Neural networks with piecewise linear activations (like ReLUs) are nonconvex and nonsmooth. The stochastic gradient algorithms used to train them do not have convergence guarantees. In contrast, Cui, He, and Pang have a training algorithm for such networks that converges to a directional stationary point. Their approach pulls apart the neural network, not unlike parallel in time methods. Their key computational kernels are dual subproblems that they solve with a combination of projected gradient and semi-smooth Newton. We present promising results when instead solving these subproblems with a quadratically-convergent proximal Newton method by Baraldi and Kouri.

08:50
Training Neural Networks with PyROL: Algorithms and Examples
PRESENTER: Robert Baraldi

ABSTRACT. The Rapid Optimization Library (ROL) is a high-performance C++ library for large-scale numerical optimization. Recently, we have developed PyROL, a Python interface that has been used to solve problems in both PDE-constrained optimization and scientific machine learning (SciML). We utilize PyROL's many resources to overcome training barriers present in SciML problems, such as nonsmooth activation functions. We examine performance using a variety of methods, including constrained optimization and reduced-space approaches with parallel-in-time capabilities, and provide tutorials for broader use.

09:15
Deep learning without global optimization by random Fourier features
PRESENTER: Owen Davis

ABSTRACT. Deep neural networks are conventionally trained using global gradient-based optimization algorithms, and while this variety of neural network training has been empirically successful, it is not without drawbacks. Global gradient-based optimization is expensive, and the simultaneous optimization of many network parameters can be inconsistent and especially difficult when training data is limited. Additionally, neural networks trained with gradient-based optimizers are known to suffer from spectral bias [3]; they struggle to learn high-frequency and multiscale features of the target function with reasonable computational cost. Furthermore, neural network training is notoriously black-box; it is difficult to monitor training progress and the eventual learned parameters are rarely interpretable. To address these shortcomings, we propose a global-optimization-free training algorithm with error control for random Fourier features residual networks [2]. These neural networks use complex exponential activation functions, their trainable parameters are frequencies and amplitudes, and they have been shown to have theoretical approximation properties similar to those of ReLU networks [1]. The developed training algorithm leverages an adaptive Markov chain Monte Carlo procedure, which is facilitated by the ability to derive a priori optimal frequency distributions at each layer of the network. In addition to avoiding global optimization, we show that using our training algorithm we consistently observe the only available approximation rate for random Fourier features residual networks. We further exhibit the ability to learn target functions with very high and very low frequencies of very different scales with low network complexity. We show that this multiscale learning is highly interpretable, that the network learns different scales of the target function at different layers, and that the learned frequency parameters are the true frequencies of the target function. Finally, we exhibit the ability to approximate discontinuous target functions without Gibbs phenomena, despite the use of a sinusoidal approximation basis.

[1] O. Davis, G. Geraci, and M. Motamed. "Approximation error and complexity bounds for ReLU networks on low regular function spaces." Submitted, 2024.
[2] A. Kammonen et al. "Smaller generalization error derived for a deep residual neural network compared with shallow networks." IMA Journal of Numerical Analysis 43.5 (2023), pp. 2585–2632.
[3] N. Rahaman et al. "On the spectral bias of neural networks." International Conference on Machine Learning, PMLR, 2019, pp. 5301–5310.
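
As a rough illustration of the sampling-plus-linear-solve idea described in the abstract, the Python sketch below builds a single layer of random Fourier features with sampled frequencies and fits the amplitudes by linear least squares rather than gradient descent. The target function, the Gaussian proposal for the frequencies, and all sizes are illustrative assumptions; the adaptive MCMC frequency sampling and the multi-layer residual structure from the talk are not reproduced.

```python
import numpy as np

# Minimal sketch: sample frequencies, then solve a linear least-squares
# problem for the complex amplitudes (no global gradient-based training).

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 400)
y = np.sin(4 * np.pi * x) + 0.3 * np.sin(20 * np.pi * x)   # illustrative multiscale target

K = 200
omega = rng.normal(scale=30.0, size=K)                     # sampled frequencies (fixed proposal here)
Phi = np.exp(1j * np.outer(x, omega))                      # complex exponential features
amps, *_ = np.linalg.lstsq(Phi, y.astype(complex), rcond=None)
approx = (Phi @ amps).real
print("max pointwise error:", np.max(np.abs(approx - y)))
```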

09:40
A Damped Block Gauss-Newton Method for Shallow ReLU Neural Network
PRESENTER: Zhiqiang Cai

ABSTRACT. In this talk, we introduce a damped block Gauss-Newton (dBGN) method for solving least squares problems using a shallow ReLU neural network. By categorizing the weights/bias of the hidden and output layers of the network as nonlinear and linear parameters, the method iterates back and forth between the nonlinear and linear parameters. The nonlinear parameters are updated by a damped Gauss-Newton method, and the linear ones are updated by a linear solver. Moreover, at the Gauss-Newton step, a special form of the Gauss-Newton matrix is derived for the shallow ReLU neural network and is used for efficient computations. It is shown that the corresponding mass and Gauss-Newton matrices in the respective linear and nonlinear steps are symmetric and positive definite under reasonable assumptions.

The dBGN method was tested on several one- and two-dimensional least-squares problems that are difficult for commonly used training algorithms in machine learning such as BFGS and ADAM. The loss curves for all four test problems clearly show that dBGN outperforms those methods by a very large margin. This conclusion is further supported by examining the ability and efficiency of the methods in moving the breaking hyperplanes (points in one dimension and lines in two dimensions). The breaking hyperplanes are determined by the nonlinear parameters (the weights and biases of the hidden layer).
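
A minimal sketch of the block-alternating structure described in this abstract is given below for a 1D least-squares fit with a shallow ReLU network: the linear (output) parameters are recomputed by least squares, and the nonlinear (hidden) parameters are updated by a damped Gauss-Newton step. The damping strategy, the special Gauss-Newton matrix, and all names and sizes are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def fit_dbgn_sketch(x, y, n_neurons=10, n_iter=50, damping=1e-3, seed=0):
    # Fit u(x) = sum_i c_i relu(w_i x + b_i) to data (x, y).
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_neurons)                 # nonlinear parameters (hidden weights)
    b = rng.standard_normal(n_neurons)                 # nonlinear parameters (hidden biases)
    for _ in range(n_iter):
        # Linear step: with w, b fixed, the output weights solve a linear least-squares problem.
        Phi = relu(np.outer(x, w) + b)                 # n_samples x n_neurons
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        # Nonlinear step: damped Gauss-Newton update of (w, b) for r(w, b) = Phi(w, b) c - y.
        r = Phi @ c - y
        active = (np.outer(x, w) + b > 0).astype(float)    # ReLU derivative
        J_w = active * c * x[:, None]                  # d r / d w_i
        J_b = active * c                               # d r / d b_i
        J = np.hstack([J_w, J_b])
        G = J.T @ J + damping * np.eye(2 * n_neurons)  # damped Gauss-Newton matrix
        step = np.linalg.solve(G, J.T @ r)
        w -= step[:n_neurons]
        b -= step[n_neurons:]
    return w, b, c
```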

08:00-10:05 Session 8B: Eigensolvers, Optimization, FE for Fourth-order Operators [Bighorn C]
08:00
Hybrid Quantum-Classical Krylov Methods for Eigenvalue Computation

ABSTRACT. Classical algorithms based on Krylov subspaces are among the most successful tools in the field of numerical linear algebra. More recently, the emergence of quantum subspace methods has resulted in a promising class of hybrid quantum-classical algorithms for eigenvalue approximation, e.g., in the fields of condensed matter physics and electronic structure theory. These quantum Krylov methods harness quantum computers to generate the subspace basis states and retrieve the projected problem through quantum measurement. However, the classical computation of the Ritz values often poses challenges due to the ill-conditioning of the associated generalized eigenvalue problem. In this talk, we present an overview of quantum subspace algorithms from a numerical linear algebra perspective. We discuss strategies to avoid solving an ill-conditioned eigenvalue problem, explain their implementation on quantum computers, and provide numerical and theoretical evidence of their convergence. Furthermore, we present strategies to improve the robustness, especially in the presence of noise.

08:25
Model error estimates for weak constraint 4DVar with representers
PRESENTER: Jodi Mead

ABSTRACT. Four-dimensional variational data assimilation (4D-Var) combines a dynamic model with observational data globally using least squares optimization. Weak constraint 4D-Var assumes there are errors in the model, and this can be computationally prohibitive. In this work we use the representer method (Bennett, 1992) to reduce the search space for the optimal estimates from the state space to the data space. These optimal estimates fit observations to the dynamic model within specified error variances. Rather than use heuristics to estimate prior errors, we view the weak constraint as regularization. This allows us to use Tikhonov regularization parameter selection methods to estimate prior errors. We will present efficient methods to estimate model error using the L-curve, Generalized Cross Validation (GCV), and the Chi-squared method. We will show numerical results using the transport equation with simulated measurements and obtain optimal state estimates.
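
As a hedged illustration of the regularization view in this abstract, the sketch below evaluates the GCV function for a Tikhonov-regularized problem from an SVD, which is one way a model-error weight could be selected. The operator G, data d, and grid of regularization parameters are hypothetical placeholders; the representer-method machinery itself is not shown.

```python
import numpy as np

def gcv_curve(G, d, lams):
    # GCV score for Tikhonov regularization, evaluated from the SVD of G.
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    beta = U.T @ d
    d_perp2 = np.linalg.norm(d - U @ beta) ** 2        # part of d outside range(G)
    scores = []
    for lam in lams:
        f = s**2 / (s**2 + lam)                        # Tikhonov filter factors
        resid = np.linalg.norm((1 - f) * beta) ** 2 + d_perp2
        dof = len(d) - np.sum(f)                       # trace of (I - influence matrix)
        scores.append(resid / dof**2)
    return np.array(scores)

# Usage (illustrative): lam_opt = lams[np.argmin(gcv_curve(G, d, lams))]
```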

08:50
Online Bayesian Optimization of Polynomial-Multigrid Cycles for Flux Reconstruction
PRESENTER: Sambit Mishra

ABSTRACT. In this study, we present a novel strategy for dynamically optimizing polynomial multigrid cycles to accelerate convergence within the dual-time stepping formulation of the artificial compressibility method. To accomplish this, a Gaussian process model is developed and used with Bayesian optimization to efficiently sample candidate cycles and minimize run-time. To allow the use of conventional optimization methods, we developed fractional smoothing steps, moving the optimization from a discrete space to a continuous space. Initially, a static, offline approach was developed and optimal cycles were found for two flow-past-a-cylinder test cases with Re = 200 and Re = 500; however, when exchanging optimal cycles between the different test cases there was significant degradation in speed-up. To address this, a dynamic, online approach was developed in which cycles are optimized during a simulation. The resulting optimal cycles gave similar speed-ups to the offline approach whilst achieving a net reduction in run-time. Testing the online optimization strategy again on the flow past a cylinder yielded candidates with mean speed-ups of about 3.0x and 2.1x, respectively. Finally, testing online optimization on a turbulent flow past a cylinder at Re = 3900 resulted in an overall speed-up of 1.9x.
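
To make the online optimization loop in this abstract concrete, here is a toy sketch (assuming scikit-learn is available) in which a cycle is encoded as a continuous vector, a Gaussian-process surrogate is fit to measured run-times, and the next candidate cycle is proposed with a lower-confidence-bound rule. The encoding, bounds, and acquisition rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def propose_next(X_tried, times, bounds, n_candidates=256, seed=None):
    # X_tried: list of cycle encodings (continuous vectors, e.g. fractional smoothing steps)
    # times:   measured run-times for those cycles
    # bounds:  array of shape (d, 2) with per-dimension lower/upper limits
    rng = np.random.default_rng(seed)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.array(X_tried), np.array(times))
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_candidates, bounds.shape[0]))
    mean, std = gp.predict(cand, return_std=True)
    # Lower-confidence-bound acquisition: prefer cycles predicted to be fast,
    # with a bonus for uncertainty (exploration).
    return cand[np.argmin(mean - std)]
```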

09:15
Computing H2-conforming finite element approximations without having to implement C1-elements
PRESENTER: Charles Parker

ABSTRACT. Fourth-order elliptic problems arise in a variety of applications from thin plates to phase separation to liquid crystals. A conforming Galerkin discretization requires a finite dimensional subspace of H2, which in turn means that conforming finite element subspaces are C1-continuous. In contrast to standard H1-conforming C0-elements, C1-elements, particularly those of high order, are less understood from a theoretical perspective and are not implemented in many existing finite element codes. In this talk, we address the implementation of the elements. In particular, we present algorithms that compute C1-finite element approximations to fourth-order elliptic problems and which only require elements with at most C0-continuity. The algorithms are suitable for use in almost all standard finite element packages. Iterative methods and preconditioners for the subproblems in the algorithm will also be presented.

10:25-12:30 Session 9A: Machine Learning Efficient Iterative Methods [Bighorn B]
10:25
Learning optimal iterative methods
PRESENTER: Nicolas Nytko

ABSTRACT. High-performance iterative solvers require judicious selection of parameters or other auxiliary inputs, such as preconditioners, in order to achieve a low time-to-solution. For example, weighted relaxation, momentum-based methods, and even sparse approximate inverses all require some parameter tuning to achieve convergence. Optimal selection of these may not always be feasible or computationally tractable, especially on complex problems. In this work, we show how existing numerical optimization methods can be used to tune parameters and even algorithms themselves for numerically solving PDEs. Additionally, we present results suggesting that the same task can be done, with some special considerations, by training neural networks on the graph representation of the PDE.

10:50
Graph Neural Network to Predict Coarse Space Functions for Two-level Domain Decomposition Methods

ABSTRACT. For the robustness and scalability of two-level domain decomposition methods, one of the critical components is the coarse space. Various methods, such as spectral coarse spaces, have been proposed to enhance the coarse space functions for heterogeneous problems. Though they are robust, the computational overhead associated with such approaches is often rather high. In this talk, we investigate the potential of graph neural networks to predict effective coarse space functions, using only the algebraic information available through the local subdomain matrices.

11:15
Fast iterative solver for neural network method: 1D diffusion problems

ABSTRACT. In this talk, we introduce a damped block Newton method (dBN) for the Ritz neural network (NN) approximation to 1D diffusion problems. The networks are shallow with ReLU activation functions. The resulting minimization problem is separated into its linear and nonlinear components. The linear parameters are obtained by inverting the coefficient matrix A, and the nonlinear parameters are updated with a Newton method. We show theoretically that A has a condition number of O(n^2). Following this, direct inverse formulas for A and for the Hessian of the nonlinear block are presented and used to construct dBN. The result is a fast iterative solver capable of outperforming BFGS. Adding adaptive network enhancement (ANE), we obtain the adaptive damped block Newton method (AdBN), which appears to succeed in avoiding local minima and achieves an impressive convergence rate. Numerical examples demonstrate the ability of dBN and AdBN not only to move mesh points quickly and efficiently but also, for AdBN, to achieve a nearly optimal convergence rate.

11:40
Fast iterative solver for neural network method: 1D general elliptic problems and data fitting
PRESENTER: Cesar Herrera

ABSTRACT. In this talk, we study the damped block Newton method (dBN) for general 1D elliptic PDEs and data fitting problems. Whereas the finite element method has a well-conditioned mass matrix with condition number O(1), the neural network method has a mass matrix M that is even more ill-conditioned than the coefficient matrix A. Theoretically, we prove that the condition number of M is O(n^4). Further, an algorithm to apply the inverse of M with O(n) operations is provided; this inverse is used to solve for the linear coefficients in both problem types. For the nonlinear coefficients, we use a Newton-type method. Algorithms for approximating and solving with the inverse of the Hessian H will be discussed. Numerical results will be reported.
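
The conditioning claims above can be illustrated numerically with a small experiment: for basis functions relu(x - b_i) on (0, 1) with uniform breakpoints, the sketch below compares the condition numbers of the stiffness-like coefficient matrix A and the mass matrix M as n grows. The basis, breakpoint placement, and closed-form entries are illustrative assumptions, not the exact setting of the talk.

```python
import numpy as np

def mass_entry(bi, bj):
    # M_ij = int_0^1 relu(x - bi) relu(x - bj) dx, integrated in closed form.
    lo = max(bi, bj)
    antiderivative = lambda x: x**3 / 3 - (bi + bj) * x**2 / 2 + bi * bj * x
    return antiderivative(1.0) - antiderivative(lo)

for n in (8, 16, 32):
    b = np.linspace(0.0, 1.0, n, endpoint=False)
    # A_ij = int_0^1 relu'(x - bi) relu'(x - bj) dx = 1 - max(bi, bj)
    A = 1.0 - np.maximum.outer(b, b)
    M = np.array([[mass_entry(bi, bj) for bj in b] for bi in b])
    # cond(A) grows roughly like n^2, cond(M) much faster, in line with the O(n^4) estimate.
    print(n, np.linalg.cond(A), np.linalg.cond(M))
```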

12:05
Physics-guided Full Waveform Inversion using Encoder-Solver Convolutional Neural Networks
PRESENTER: Matan Goren

ABSTRACT. Full Waveform Inversion (FWI) is an inverse problem for estimating the wave velocity distribution in a given domain, based on observed data measurements on the boundaries. It appears in seismic exploration of oil and gas reservoirs and in medical imaging tasks such as ultrasound scans. The inversion process is computationally demanding because we are typically required to solve multiple forward problems, either in the time or frequency domain, to simulate data that is then iteratively fit to the observed data. We consider FWI in the frequency domain, where the Helmholtz equation is used to model the forward problem, and its repeated solution is by far the main computational bottleneck of the inversion process. To ease this cost, we integrate a learning process of an Encoder-Solver preconditioner that is based on convolutional neural networks (CNNs). The Encoder-Solver is trained to effectively precondition a linear system defined by the discretized Helmholtz operator and given velocity medium parameters. Then, by re-training the CNN between the iterations of the FWI optimization process, the Encoder-Solver is adapted to the iteratively evolving velocity medium as part of the inversion. Without retraining, the performance of the solver deteriorates as the medium changes, whereas with our lightweight retraining procedures we obtain effective forward simulations throughout the whole process. We demonstrate our approach on FWI problems using 2D geophysical models with high-frequency data.

10:25-12:30 Session 9B: Mixed Precision Algorithms [Bighorn C]
10:25
Newton's Method in Three Precisions

ABSTRACT. We describe a three-precision variant of Newton's method for nonlinear equations. We evaluate the nonlinear residual in double precision, store the Jacobian matrix in single precision, and solve the equation for the Newton step with iterative refinement using a factorization in half precision. We analyze the method as an inexact Newton method. This analysis shows that, except for very poorly conditioned Jacobians, the number of nonlinear iterations needed is the same as one would get if one stored and factored the Jacobian in double precision. In many ill-conditioned cases one can use the low-precision factorization as a preconditioner for a GMRES iteration. That approach can recover fast convergence of the nonlinear iteration. We present an example to illustrate the results.
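
A hedged sketch of the three-precision structure described above follows: the residual is evaluated in double precision, the Jacobian is stored in single, and the Newton step is obtained by iterative refinement with a factorization of a half-precision copy of the Jacobian (emulated here by rounding to float16, since LAPACK factors in float32). Function names, iteration counts, and the toy example are illustrative, not the speaker's code.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton_three_precision(F, J, x0, n_newton=20, n_refine=3, tol=1e-10):
    x = np.asarray(x0, dtype=np.float64)
    for _ in range(n_newton):
        f = F(x)                                         # residual in double
        if np.linalg.norm(f) < tol:
            break
        J_single = J(x).astype(np.float32)               # Jacobian stored in single
        J_half = J_single.astype(np.float16)             # emulated half-precision copy
        lu, piv = lu_factor(J_half.astype(np.float32))   # factorization of the half-precision matrix
        # Iterative refinement for J s = -f using the low-precision factors.
        s = lu_solve((lu, piv), -f.astype(np.float32)).astype(np.float64)
        for _ in range(n_refine):
            r = -f - J_single.astype(np.float64) @ s     # refinement residual in double
            s += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x = x + s
    return x

# Toy example: solve x**2 = 2 componentwise.
F = lambda x: x**2 - 2.0
J = lambda x: np.diag(2.0 * x)
print(newton_three_precision(F, J, np.array([1.0, 3.0])))
```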

10:50
Mixed Precision Iterative Refinement for Structured Inverse Problems
PRESENTER: James Nagy

ABSTRACT. In recent years, a substantial amount of work has been done on developing mixed-precision algorithms for linear systems, methods that can exploit the capabilities of modern GPU architectures. However, very little work has been done for the ill-conditioned problems that arise from large-scale inverse problems. Special considerations that do not normally arise when solving well-conditioned problems, such as incorporating regularization into the developed methods, need to be taken into account. In this talk we consider a preconditioned GMRES-based iterative refinement method for least squares problems involving ill-conditioned Toeplitz and block Toeplitz matrices. For the preconditioner, Toeplitz structure is exploited to efficiently compute low-precision QR factorizations, where Tikhonov regularization can be incorporated with essentially no additional computational cost. Deconvolution and image deblurring examples are used to illustrate the effectiveness of the approach.

11:15
Effective Approximate Preconditioners for Linear Inverse Problems
PRESENTER: Lucas Onisk

ABSTRACT. Many problems in science and engineering give rise to linear systems of equations that are commonly referred to as large-scale linear discrete ill-posed problems. These problems arise, for instance, from the discretization of Fredholm integral equations of the first kind. The matrices that define these problems are typically severely ill-conditioned and may be rank deficient. Because of this, the solution of linear discrete ill-posed problems may not exist or may be extremely sensitive to perturbations caused by error in the available data. These difficulties can be reduced by applying regularization to iterative refinement-type methods, which may be viewed as a preconditioned Landweber method. Using a filter factor analysis, we demonstrate that low-precision matrix approximants can be useful in the construction of these preconditioners.
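
As a rough sketch of the "iterative refinement as preconditioned Landweber" viewpoint in this abstract, the code below builds a Tikhonov-regularized approximate inverse from a low-precision copy of the matrix and uses it inside a Landweber-type iteration. The choice of low precision (float32 here), the regularization parameter, and the iteration count are illustrative assumptions.

```python
import numpy as np

def preconditioned_landweber(A, b, lam=1e-2, n_iter=20):
    # Low-precision approximant of A and a regularized approximate inverse
    # B = (A32^T A32 + lam I)^(-1) A32^T built from it.
    A32 = A.astype(np.float32)
    n = A.shape[1]
    B = np.linalg.solve(A32.T @ A32 + lam * np.eye(n, dtype=np.float32),
                        A32.T).astype(np.float64)
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + B @ (b - A @ x)          # residual computed in working (double) precision
    return x
```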

11:40
H-LSLU: An Inner Product Free Hybrid Krylov Method for Rectangular Large-Scale Inverse Problems
PRESENTER: Ariana N. Brown

ABSTRACT. In this study we introduce two new Krylov subspace methods for solving rectangular large-scale linear inverse problems. The first approach is a modification of the Hessenberg iterative algorithm that is based on an LU factorization and is therefore referred to as the least squares LU (LSLU) method. The second approach incorporates Tikhonov regularization in an efficient manner; we call this the Hybrid LSLU (H-LSLU) method. Both methods are inner-product free, making them advantageous for high performance computing and mixed precision arithmetic. Theoretical results and extensive numerical results suggest that H-LSLU is effective in solving large-scale inverse problems and has performance comparable to existing iterative projection methods.

12:05
Half precision wave simulation

ABSTRACT. The gap between processor speed and memory speed has become wider and wider since the 1980s. On modern hardware, the speed of memory access is often the limiting factor for execution time for many scientific and industrial applications, particularly those related to PDE discretizations that exploit sparsity. This motivates us to explore the possibility of operating at half precision to reduce the memory footprint and hence utilize memory bandwidth more effectively. The effects of reduced range and accuracy, as well as the performance benefit of half precision, need to be evaluated on a case by case basis. Specifically, we study the viability of half-precision simulations for time-dependent wave equations. Potential pitfalls of naively switching to half precision will be illustrated, including nonphysical oscillations and energy loss. An effective remedy in the form of compensated summation will be presented, which is shown to restore the simulation quality to a satisfactory level. Performance benefits of operating at half precision will also be illustrated. Numerical evidence is collected using both software-emulated half precision and hardware-supported half precision on modern GPUs. The information provided here should be beneficial to practitioners who seek to reduce wall-clock time or energy consumption by switching to half precision, and interesting to researchers in the broad area of numerical analysis. Given that modern GPUs often dedicate much more silicon to low-precision units than to high-precision units, the evaluation presented here is particularly relevant at this time.
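
The compensated-sum remedy mentioned above can be sketched as follows for a half-precision leapfrog update of the 1D wave equation: the running compensation term recovers the low-order bits of each time-step increment that float16 would otherwise discard. Grid size, initial condition, and CFL number are illustrative; this is not the speaker's implementation.

```python
import numpy as np

def step_compensated(u, u_prev, comp, c2dt2_dx2):
    # One leapfrog step u_new = 2u - u_prev + (c dt/dx)^2 * Laplacian(u),
    # applied as a Kahan-style compensated update u_new = u + increment.
    lap = np.empty_like(u)
    lap[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:]
    lap[0] = lap[-1] = 0                         # fixed (Dirichlet-like) boundaries
    incr = (u - u_prev) + c2dt2_dx2 * lap        # leapfrog increment in float16
    y = incr - comp                              # re-inject previously lost low-order bits
    t = u + y
    comp_new = (t - u) - y                       # bits lost in this update
    return t, u, comp_new

n = 256
x = np.linspace(0, 1, n, dtype=np.float16)
u = np.exp(-200 * (x - np.float16(0.5)) ** 2).astype(np.float16)
u_prev = u.copy()
comp = np.zeros_like(u)
for _ in range(1000):
    u, u_prev, comp = step_compensated(u, u_prev, comp, np.float16(0.25))
```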

16:30-18:35 Session 10A: ML / AI Accelerated Numerical Algorithms [Bighorn B]
16:30
Physics-informed augmentations to nonlinear vector autoregressive models for the prediction of dynamical systems
PRESENTER: Samuel Hocking

ABSTRACT. There has been significant recent interest in adapting machine learning techniques to solve ordinary and partial differential equations (ODEs and PDEs). Training these models is classically a data-fitting task, but knowledge of the physics of the problem, e.g., the expression of the differential equation, can be used to supplement the training objective. Such augmented models are generally referred to as physics-informed scientific machine learning. In this paper, we focus on one class of models called nonlinear vector autoregression (NVAR) and propose methods to promote stability by adding local support to the nonlinear activation functions and to improve performance by incorporating ODE information and correcting noisy training data. Then, we evaluate the ability of the augmented NVAR model to predict solutions to various ODE systems, such as the undamped spring, the nonlinear Lotka-Volterra predator-prey model, and the chaotic Lorenz system. Preliminary findings show that for large values of the regularization parameter, the best models place more weight on ODE-informed training than when the regularization is chosen to be small. Since regularization is used to prevent model overfitting, this suggests that incorporating ODE information into a well-regularized model can improve predictive performance while retaining generalization.
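
For readers unfamiliar with the base model, the sketch below shows a plain NVAR construction: features are a short delay embedding plus quadratic monomials, and the one-step map is fit by ridge regression. The physics-informed augmentations from the paper (locally supported nonlinearities, ODE-informed training, noise correction) are not reproduced, and the sizes and helper names are illustrative.

```python
import numpy as np

def nvar_features(X, delay=2):
    # X: (n_steps, dim) trajectory; returns features at times t >= delay - 1:
    # a constant, the delay-embedded state, and its quadratic monomials.
    lin = np.hstack([X[delay - 1 - k : len(X) - k] for k in range(delay)])
    quad = np.stack([np.outer(row, row)[np.triu_indices(lin.shape[1])] for row in lin])
    return np.hstack([np.ones((len(lin), 1)), lin, quad])

def fit_nvar(X, delay=2, ridge=1e-6):
    F = nvar_features(X, delay)[:-1]
    Y = X[delay:]                                    # one-step-ahead targets
    # Ridge regression for the linear readout of the next state.
    W = np.linalg.solve(F.T @ F + ridge * np.eye(F.shape[1]), F.T @ Y)
    return W
```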

16:55
Evolving Algebraic Multigrid Methods Using Grammar-Guided Genetic Programming

ABSTRACT. Multigrid methods, despite being asymptotically optimal algorithms, depend on the careful selection of their individual components for efficiency. They are also mostly restricted to standard cycle types like V-, F-, and W-cycles. We use grammar rules to generate arbitrarily shaped cycles, wherein the smoothers and their relaxation weights are chosen independently at each step within the cycle. We call this a flexible multigrid cycle. These flexible cycles are used in Algebraic Multigrid (AMG) methods with the help of grammar rules and optimized using genetic programming. The flexible AMG methods are implemented in the hypre software library, and the programs are optimized separately for two cases: a standalone AMG solver for a 3D anisotropic problem and an AMG preconditioner with conjugate gradient for a multiphysics code. We observe that the optimized flexible cycles provide higher efficiency and better performance than the standard cycle types.
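
A toy illustration of the flexible-cycle idea is given below: a cycle is a nested sequence of smoothing steps, each with its own smoother and relaxation weight, plus coarse-grid recursions, generated from a few grammar-like rules. The rules, smoother names, and weight range are made-up placeholders; the actual grammar, the genetic-programming optimization, and the hypre integration are not shown.

```python
import random

SMOOTHERS = ["jacobi", "gauss_seidel", "l1_jacobi"]   # illustrative smoother names

def random_cycle(level, max_level, rng, max_steps=3):
    # Pre-smoothing steps, each with its own smoother and relaxation weight.
    cycle = [("smooth", rng.choice(SMOOTHERS), round(rng.uniform(0.3, 1.2), 2))
             for _ in range(rng.randint(1, max_steps))]
    if level < max_level:
        # One recursion gives a V-like shape, two give a W-like shape.
        for _ in range(rng.randint(1, 2)):
            cycle.append(("coarse", random_cycle(level + 1, max_level, rng)))
    # Post-smoothing steps.
    cycle += [("smooth", rng.choice(SMOOTHERS), round(rng.uniform(0.3, 1.2), 2))
              for _ in range(rng.randint(1, max_steps))]
    return cycle

print(random_cycle(0, 2, random.Random(0)))
```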

17:20
Large-scale Scattering using HINTS
PRESENTER: Adar Kahana

ABSTRACT. The use of machine learning for numerical solvers is growing fast. Recently, we proposed the Hybrid Iterative Numerical Transferrable Solver (HINTS), which combines classical numerical solvers with Scientific Machine Learning (SciML) in the form of operator learning, enhancing performance in terms of computational time and number of iterations. In this talk we introduce a new study using HINTS to solve a complex boundary value Helmholtz problem. This problem is challenging from a numerical perspective since the operator is not symmetric positive definite and the physical dynamics are driven by a wave scattered from the top boundary. In addition, the scatterer often introduces singularities. From the SciML perspective, the challenges involve a complex-valued neural network and splitting the dynamics into two neural networks: one for the scattered wave and one for the interior solver. We will show how the proposed mathematical formulation and two-network architecture solve the problem and how HINTS enhances performance, achieving faster convergence.

16:30-18:35 Session 10B: Low-rank Approximations [Bighorn C]
Chair:
16:30
A New Dynamic Low Rank Method for Thermal Radiative Transfer

ABSTRACT. In this talk, we discuss a new Dynamic Low Rank (DLR) method for the Thermal Radiative Transfer (TRT) equations.

The Thermal Radiative Transfer (TRT) equations model the energy exchange between electromagnetic radiation and a background material, and their solution forms the bulk of the runtime and memory requirements for modeling Inertial Confinement Fusion. In fact, they comprise a set of high-dimensional, stiff, nonlinear PDEs that couple the distribution function for the energy density of the photon field---which describes the radiation energy density at a given point in space, passing through a given direction, and with a given frequency---with the material energy, and their computation is therefore exceedingly expensive.

The Dynamic Low Rank (DLR) method offers an attractive methodology for reducing the cost associated with solving the high-dimensional TRT equations. The basic idea of DLR methods is to represent the photon distribution function as an SVD-like product of dynamically evolving, quasi-optimal spatial and angular basis functions. Although there has been promising recent work on leveraging DLR for linear transport problems, we have developed a version of DLR that is tailored for TRT. In particular, the method is implicit-in-time (without the typical DLR time-splitting), preserves the diffusion limit, is formulated with positive-definite operators, and allows efficient AMG solvers. We demonstrate the efficiency and robustness of our method on a standard challenging benchmark problem.

16:55
Low Rank Krylov-based Implicit Time Integrators for Nonlinear Fokker-Planck Kinetic Models
PRESENTER: Hamad El Kahza

ABSTRACT. We propose an efficient implicit time integration algorithm for a low-rank representation of multi-scale kinetic PDEs, motivated by the multi-species Lenard-Bernstein-Fokker-Planck (LBFP) equation. Starting from a solution ansatz expressed as a linear combination of basis vectors, our approach dynamically adapts the basis in time by employing a Krylov-like basis augmentation, thus avoiding the need to solve evolution equations for the basis and resulting in significant efficiency gains over other approaches proposed in the literature [1]. Our algorithm begins by finding a suitable low-rank approximation space generated from extended Krylov subspaces [2], and then solves for the coefficients governing the combination of the basis vectors. An implicit timestep is performed on this reduced basis by solving a reduced Sylvester equation. Through a series of numerical experiments with the LBFP problem, we demonstrate the efficiency of our low-rank algorithm, which produces results comparable to those of the classical full-rank integrator while achieving significant acceleration in performance. We also demonstrate the algorithm's ability to preserve the system's macroscopic quantities of interest up to machine precision, notably the mass, momentum, and energy.

References:

[1] G. Ceruti, J. Kusch, and C. Lubich. "A rank-adaptive robust integrator for dynamical low-rank approximation." BIT Numerical Mathematics 62 (2022), pp. 1149–1174.
[2] V. Simoncini. "A new iterative method for solving large-scale Lyapunov matrix equations." SIAM Journal on Scientific Computing 29.3 (2007), pp. 1268–1288.
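
To make the reduced implicit step concrete, the sketch below shows how an implicit Euler step for a matrix differential equation dX/dt = A X + X B^T, with X approximated as U S V^T in fixed orthonormal bases U and V, reduces to a small Sylvester equation for S. The operators, the basis construction (built from extended Krylov subspaces in the talk), and the single-step form are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def reduced_implicit_euler_step(A, B, U, V, S_old, dt):
    # Project the full operators onto the current bases.
    A_r = U.T @ A @ U
    B_r = V.T @ B @ V
    # Implicit Euler in the reduced space:
    #   S_new = S_old + dt * (A_r S_new + S_new B_r^T)
    # i.e. (I - dt A_r) S_new + S_new (-dt B_r^T) = S_old, a small Sylvester equation.
    I_r = np.eye(A_r.shape[0])
    S_new = solve_sylvester(I_r - dt * A_r, -dt * B_r.T, S_old)
    return S_new
```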

17:20
Higher order and implicit dynamical low-rank algorithms

ABSTRACT. Dynamical low-rank algorithms have developed into an efficient way to solve high-dimensional problems ranging from plasma physics to quantum mechanics. These methods are attractive because they reduce a high-dimensional problem to a set of lower-dimensional equations and thus can overcome the curse of dimensionality. This, for example, enables 6D Vlasov simulation on a desktop computer that would otherwise require a large supercomputer. Mathematically, dynamical low-rank methods project the true dynamics onto a low-rank manifold (the approximation space). To obtain a robust (i.e., well-behaved) time integrator, care has to be taken. Until very recently, the literature has mostly focused on first-order and explicit methods. In this talk, we consider our recent work that allows us to construct higher-order and implicit robust dynamical low-rank schemes. Since each 'stage' of these methods solves a lower-dimensional PDE, they can be readily combined with standard iterative methods. We also provide some numerical examples from the fields of plasma physics and radiative transfer.

17:45
A dynamical low-rank collocation method for nonlinear tensor differential equations

ABSTRACT. We present a new method for computing the solution to a nonlinear tensor differential equation with dynamical low-rank approximation. The idea of dynamical low-rank approximation is to project the differential equation onto the tangent space of a low-rank tensor manifold at each time. Traditional dynamical low-rank methods employ an orthogonal projection onto the tangent space, which is challenging to compute for nonlinear differential equations. We propose an alternative projection onto the tensor manifold tangent space that is easily computed for nonlinear differential equations and satisfies the differential equation at a set of carefully selected indices. To select these indices, we devise a new algorithm based on the discrete empirical interpolation method (DEIM) that parameterizes any tensor train and its tangent space with a tensor cross interpolant. We demonstrate the proposed method with applications to tensor differential equations arising from the discretization of partial differential equations.

18:10
Tensor Networks for Solving the Time-Independent Boltzmann Neutron Transport Equation
PRESENTER: Mario Ortega

ABSTRACT. Tensor network techniques, known for their low-rank approximation ability that breaks the curse of dimensionality, are emerging as a foundation of new mathematical methods for ultra-fast numerical solutions of high-dimensional Partial Differential Equations (PDEs). Here, we present a mixed Tensor Train (TT)/Quantized Tensor Train (QTT) approach for the numerical solution of time-independent eigenvalue Boltzmann Neutron Transport Equations (BNTEs) in Cartesian geometry. The eigenvalue formulations of the BNTE are used to determine the criticality of nuclear systems, defined as the state in which a nuclear chain reaction is self-sustaining. Determining the criticality of a system is necessary in the design of nuclear reactors and critical experiments. Discretizing a realistic three-dimensional (3D) BNTE by (i) diamond differencing, (ii) multigroup-in-energy, and (iii) discrete ordinate collocation leads to large generalized eigenvalue problems that generally require a matrix-free approach and large computer clusters. Starting from this discretization, we construct a TT representation of the PDE fields and discrete operators, followed by a QTT representation of the TT cores, and solve the tensorized generalized eigenvalue problem in a fixed-point scheme with tensor network optimization techniques. We validate our approach by applying it to two examples of 3D eigenvalue neutron transport problems. We compare the QTT eigenvalues and eigenvectors to the eigenpairs obtained using the Los Alamos National Laboratory neutron transport PARallel TIme-dependent SN (PARTISN) solver. We demonstrate that our TT/QTT method, executed on a standard desktop computer, gives full access to the discrete operators with yottabyte compression and achieves a speedup of more than 7500x, with a discrepancy of less than 10^-5 compared to the PARTISN solution.