===title: Uncertainty Quantification for Inverse Problems with L1-type Priors
===firstname: Johnathan M.
===lastname: Bardsley
===email: bardsleyj@mso.umt.edu
===postal: Math Sciences, University of Montana, 32 Campus Drive
===firstname2: Zheng
===lastname2: Wang
===affil2: Massachusetts Institute of Technology
===firstname3: Tiangang
===lastname3: Cui
===affil3: ExxonMobil Upstream Research
===firstname4: Youssef
===lastname4: Marzouk
===affil4: Massachusetts Institute of Technology
===firstname5: Antti
===lastname5: Solonen
===affil5: Eniram, Ltd.
===firstname6:
===lastname6:
===affil6:
===firstname7:
===lastname7:
===affil7:
===otherauths:
===affilother:
===ABSTRACT: We focus on applications arising in inverse problems, where uncertainty quantification typically requires sampling from the Bayesian posterior density function defined by the assumed physical model, the measurement error model, and the prior probability density function. In this talk, our focus is on sampling from the posterior when the prior is of L1-type. Such priors include the total variation prior and the Besov $B_{1,1}^s$ space priors; when they are assumed in Bayesian inverse problems, the posterior density function is non-Gaussian and high-dimensional, making the sampling problem difficult. To address this challenge, we extend the Randomize-then-Optimize (RTO) method, which was recently developed for posterior sampling in nonlinear inverse problems with a Gaussian prior. The extension requires a variable transformation that converts the L1-type prior into a Gaussian prior. In this talk, we will present the RTO method and its extension to the L1-type prior case via this variable transformation. Several numerical experiments will also be presented to illustrate the method.
===keyword1: Inverse problems, regularization
===keyword2: Uncertainty quantification/PDEs with random data
===keyword_other1:
===keyword_other2:
===workshop: no
===competition: no