24 results on '"Lu, Xiliang"'
Search Results
2. Numerical Recovery of the Diffusion Coefficient in Diffusion Equations from Terminal Measurement
- Author
-
Jin, Bangti, Lu, Xiliang, Quan, Qimeng, and Zhou, Zhi
- Subjects
Mathematics - Numerical Analysis - Abstract
In this work, we investigate a numerical procedure for recovering a space-dependent diffusion coefficient in a (sub)diffusion model from given terminal data, and provide a rigorous numerical analysis of the procedure. By exploiting the decay behavior of the observation in time, we establish a novel Hölder-type stability estimate for a large terminal time $T$. This is achieved via new decay estimates of the (fractional) time derivative of the solution. To numerically recover the diffusion coefficient, we employ the standard output least-squares formulation with an $H^1(\Omega)$-seminorm penalty, and discretize the regularized problem by the Galerkin finite element method with continuous piecewise linear finite elements in space and backward Euler convolution quadrature in time. Further, we provide an error analysis of the discrete approximations, and prove a convergence rate that matches the stability estimate. The derived $L^2(\Omega)$ error bound depends explicitly on the noise level, the regularization parameter, and the discretization parameter(s), which provides a useful guideline for the a priori choice of discretization parameters with respect to the noise level in practical implementations. The error analysis relies on a conditional stability argument and discrete maximum-norm resolvent estimates. Several numerical experiments are given to illustrate and complement the theoretical analysis., Comment: 22 pages, 2 figures
- Published
- 2024
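As an illustration of the regularized output least-squares formulation with an $H^1(\Omega)$-seminorm penalty, here is a minimal sketch for a steady-state 1D analogue (not the paper's time-dependent subdiffusion setting); the grid, forward solver, and parameter values are illustrative assumptions.

```python
# Minimal 1D steady-state analogue of the regularized output least-squares
# formulation: recover q(x) in -(q u')' = f on (0,1), u(0)=u(1)=0, from
# noisy observations of u. Discretization and parameters are illustrative.
import numpy as np
from scipy.optimize import minimize

n = 50                                # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.ones(n)                        # source term

def solve_forward(q):
    """Finite-difference solve of -(q u')' = f with homogeneous Dirichlet BCs."""
    qh = np.interp(np.linspace(h / 2, 1 - h / 2, n + 1), x, q)  # q at midpoints
    A = (np.diag(qh[:-1] + qh[1:]) - np.diag(qh[1:-1], 1)
         - np.diag(qh[1:-1], -1)) / h**2
    return np.linalg.solve(A, f)

q_true = 1.0 + 0.5 * np.sin(np.pi * x)
z = solve_forward(q_true)
z += 1e-3 * np.random.default_rng(0).standard_normal(n)   # noisy terminal-type data

gamma = 1e-6                          # regularization parameter

def J(q):
    misfit = 0.5 * h * np.sum((solve_forward(q) - z) ** 2)
    penalty = 0.5 * gamma * np.sum(np.diff(q) ** 2) / h     # H^1-seminorm term
    return misfit + penalty

res = minimize(J, np.ones(n), method="L-BFGS-B",
               bounds=[(0.1, 10.0)] * n)    # box constraint keeps q elliptic
print("relative L2 error:", np.linalg.norm(res.x - q_true) / np.linalg.norm(q_true))
```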
3. On the robustness of double-word addition algorithms
- Author
-
Yang, Yuanyuan, Lyu, XinYu, He, Sida, Lu, Xiliang, Qi, Ji, and Li, Zhihao
- Subjects
Mathematics - Numerical Analysis - Abstract
We demonstrate that, even when there are moderate overlaps in the inputs of the sloppy or accurate double-word addition algorithms in the QD library, these algorithms still guarantee error bounds of $O(u^2(|a|+|b|))$ in faithful rounding. Furthermore, the accurate algorithm achieves a relative error bound of $O(u^2)$ in the presence of moderate overlaps in the inputs when the rounding function is round-to-nearest. The relative error bound also holds in directed rounding, but certain additional conditions are required. Consequently, in double-word multiplication and addition operations, we can safely omit the normalization step of double-word multiplication and replace the accurate addition algorithm with the sloppy one. Numerical experiments confirm that this approach nearly doubles the performance of double-word multiplication and addition operations, with negligible precision cost. Moreover, in directed rounding modes, the signs of the errors of the two algorithms are consistent with the rounding direction, even in the presence of input overlap. This allows us to avoid changing the rounding mode in interval arithmetic. We also prove that the relative error bound of the sloppy addition algorithm exceeds $3u^2$ if and only if the input meets the condition of Sterbenz's lemma when rounding to nearest. These findings suggest that the two addition algorithms are more robust than previously believed.
- Published
- 2024
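The addition algorithms under discussion build on standard error-free transformations; here is a minimal sketch in Python (assuming IEEE-754 doubles and round-to-nearest; not the QD library's C++ code):

```python
# Error-free transformations and the "sloppy" double-word addition discussed
# above (QD-library style). Assumes IEEE-754 doubles with round-to-nearest.
def two_sum(a, b):
    """Knuth's 2Sum: s + e == a + b exactly, with s == fl(a + b)."""
    s = a + b
    t = s - a
    e = (a - t) + (b - (s - t))
    return s, e

def fast_two_sum(a, b):
    """Dekker's Fast2Sum; requires |a| >= |b| (or matching exponents)."""
    s = a + b
    e = b - (s - a)
    return s, e

def dw_add_sloppy(a_hi, a_lo, b_hi, b_lo):
    """Sloppy double-word addition: one 2Sum on the high words, then normalize."""
    s, e = two_sum(a_hi, b_hi)
    e += a_lo + b_lo
    return fast_two_sum(s, e)

# Example: the low words carry information a plain double addition would lose.
hi, lo = dw_add_sloppy(1.0, 1e-30, -1.0, 2e-30)
print(hi, lo)   # ~3e-30 recovered despite massive cancellation in the high words
```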
4. Current density impedance imaging with PINNs
- Author
-
Duan, Chenguang, Jiao, Yuling, Lu, Xiliang, and Yang, Jerry Zhijian
- Subjects
Mathematics - Numerical Analysis ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
In this paper, we introduce CDII-PINNs, a computationally efficient method for solving current density impedance imaging (CDII) using physics-informed neural networks (PINNs) in the framework of Tikhonov regularization. The method constructs a physics-informed loss function by merging the regularized least-squares output functional with the underlying differential equation, which describes the relationship between the conductivity and the voltage. A pair of neural networks representing the conductivity and the voltage, respectively, are coupled by this loss function, and minimizing the loss function yields a reconstruction. A rigorous theoretical guarantee is provided: we give an error analysis for CDII-PINNs and establish a convergence rate, based on an a priori choice of the neural network parameters in terms of the number of samples. Numerical simulations demonstrate that CDII-PINNs are efficient, accurate, and robust to noise levels ranging from $1\%$ to $20\%$.
- Published
- 2023
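A minimal sketch of a CDII-PINNs-style coupled loss, assuming the conductivity equation $\nabla\cdot(\sigma\nabla u)=0$ with interior current-density data $a\approx\sigma|\nabla u|$; the network sizes, weights, and synthetic data below are placeholders, not the authors' implementation:

```python
# Sketch of a CDII-PINNs-style coupled loss (illustrative, not the authors'
# code): two networks for conductivity and voltage, tied together by the PDE
# div(sigma * grad u) = 0 and the current-density data a = sigma * |grad u|.
import torch

def mlp(width=32):
    return torch.nn.Sequential(
        torch.nn.Linear(2, width), torch.nn.Tanh(),
        torch.nn.Linear(width, width), torch.nn.Tanh(),
        torch.nn.Linear(width, 1))

sigma_net, u_net = mlp(), mlp()

def loss(x_in, a_obs, x_bc, u_bc, alpha=1e-4):
    x_in = x_in.requires_grad_(True)
    u = u_net(x_in)
    sigma = torch.nn.functional.softplus(sigma_net(x_in))   # keep sigma > 0
    grad_u = torch.autograd.grad(u.sum(), x_in, create_graph=True)[0]
    flux = sigma * grad_u
    div = sum(torch.autograd.grad(flux[:, i].sum(), x_in, create_graph=True)[0][:, i]
              for i in range(2))
    data_fit = ((sigma.squeeze() * grad_u.norm(dim=1) - a_obs) ** 2).mean()
    pde_res = (div ** 2).mean()                             # PDE residual
    bc_fit = ((u_net(x_bc).squeeze() - u_bc) ** 2).mean()   # boundary data
    reg = (sigma ** 2).mean()                               # Tikhonov-type term
    return data_fit + pde_res + bc_fit + alpha * reg

# One optimizer step over random collocation points (synthetic placeholder data).
x_in = torch.rand(256, 2); a_obs = torch.ones(256)
x_bc = torch.rand(64, 2);  u_bc = x_bc[:, 0]
opt = torch.optim.Adam(list(sigma_net.parameters()) + list(u_net.parameters()), lr=1e-3)
opt.zero_grad(); loss(x_in, a_obs, x_bc, u_bc).backward(); opt.step()
```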
5. Deep Neural Network Approximation of Composition Functions: with application to PINNs
- Author
-
Duan, Chenguang, Jiao, Yuling, Lu, Xiliang, Yang, Jerry Zhijian, and Yuan, Cheng
- Subjects
Mathematics - Numerical Analysis ,Physics - Computational Physics ,68T07, 65N99 - Abstract
In this paper, we focus on approximating a natural class of functions that are compositions of smooth functions. In contrast to the common low-dimensional support assumption on the covariates, we demonstrate that composition functions have an intrinsic sparse structure if each layer in the composition has a small degree of freedom. This fact can alleviate the curse of dimensionality in the approximation error of neural networks. Specifically, using mathematical induction and the multivariate Faà di Bruno formula, we extend the approximation theory of deep neural networks to the case of composition functions. Furthermore, combining recent results on the statistical error of deep learning, we provide a general convergence rate analysis for the PINNs method in solving elliptic equations with compositional solutions. We also present two simple illustrative numerical examples to demonstrate the effect of the intrinsic sparse structure in regression and in solving PDEs., Comment: There are errors in the crucial Lemma 3.1, which is a result from our previous work that has not undergone peer review. During the refinement of this manuscript, one of our colleagues pointed out a potential mistake in the proof of this result, indicating that certain corrections are needed to ensure its correctness. To uphold academic rigor, we decided to withdraw the paper at this time
- Published
- 2023
6. Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- Author
-
Jin, Qinian, Lu, Xiliang, and Zhang, Liuying
- Subjects
Mathematics - Numerical Analysis ,Mathematics - Optimization and Control - Abstract
Consider linear ill-posed problems governed by the system $A_i x = y_i$ for $i = 1, \cdots, p$, where each $A_i$ is a bounded linear operator from a Banach space $X$ to a Hilbert space $Y_i$. When $p$ is huge, solving the problem by an iterative regularization method that uses all the information at each iteration step can be very expensive, due to the huge amount of memory and the excessive computational load per iteration. To solve such large-scale ill-posed systems efficiently, we develop in this paper a stochastic mirror descent method which uses only a small portion of equations, randomly selected at each iteration step, and incorporates convex regularization terms into the algorithm design. Therefore, our method scales very well with the problem size and has the capability of capturing features of the sought solutions. The convergence property of the method depends crucially on the choice of step-sizes. We consider various rules for choosing the step-sizes and obtain convergence results under a priori early stopping rules. In particular, by incorporating the spirit of the discrepancy principle, we propose a choice rule for the step-sizes which can efficiently suppress the oscillations in the iterates and reduce the effect of semi-convergence. Furthermore, we establish an order-optimal convergence rate when the sought solution satisfies a benchmark source condition. Various numerical simulations are reported to test the performance of the method.
- Published
- 2022
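To convey the flavor of the method, here is a hedged sketch with a single randomly selected equation per step and the regularizer $R(x)=\lambda\|x\|_1+\frac{1}{2}\|x\|_2^2$, whose mirror map is soft-thresholding; the paper's adaptive, discrepancy-inspired step-size rules are not reproduced:

```python
# Sketch of a stochastic mirror descent iteration for A_i x = y_i with the
# convex regularizer R(x) = lam*||x||_1 + 0.5*||x||_2^2, whose mirror map is
# soft-thresholding. Constant step size for simplicity (an assumption).
import numpy as np

rng = np.random.default_rng(1)
p, n = 200, 100                       # number of equations, unknowns
A = rng.standard_normal((p, n))
x_true = np.zeros(n); x_true[:5] = 1.0
y = A @ x_true + 1e-3 * rng.standard_normal(p)

lam, step, iters = 0.1, 1e-2, 5000
xi = np.zeros(n)                      # dual (mirror) variable
for k in range(iters):
    i = rng.integers(p)               # pick one equation at random
    x = np.sign(xi) * np.maximum(np.abs(xi) - lam, 0)   # mirror map
    xi -= step * A[i] * (A[i] @ x - y[i])               # stochastic dual step
x = np.sign(xi) * np.maximum(np.abs(xi) - lam, 0)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```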
7. Imaging Conductivity from Current Density Magnitude using Neural Networks
- Author
-
Jin, Bangti, Li, Xiyao, and Lu, Xiliang
- Subjects
Mathematics - Numerical Analysis ,Computer Science - Machine Learning ,Electrical Engineering and Systems Science - Image and Video Processing - Abstract
Conductivity imaging represents one of the most important tasks in medical imaging. In this work we develop a neural network based reconstruction technique for imaging the conductivity from the magnitude of the internal current density. It is achieved by formulating the problem as a relaxed weighted least-gradient problem, and then approximating its minimizer by standard fully connected feedforward neural networks. We derive bounds on two components of the generalization error, i.e., the approximation error and the statistical error, explicitly in terms of properties of the neural networks (e.g., depth, total number of parameters, and the bound on the network parameters). We illustrate the performance and distinct features of the approach in several numerical experiments. Numerically, it is observed that the approach enjoys remarkable robustness with respect to the presence of data noise., Comment: 29 pp, 9 figures (several typos are corrected in the new version), to appear at Inverse Problems
- Published
- 2022
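A loose sketch of the relaxed weighted least-gradient idea with a small feedforward network (synthetic placeholder data and boundary handling; not the authors' code):

```python
# Sketch of the weighted least-gradient objective: minimize the Monte Carlo
# estimate of E(u) = mean(a * |grad u|) plus a boundary-data penalty, then
# read off sigma = a / |grad u|. All data below are synthetic placeholders.
import torch

u_net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)
beta = 100.0                                        # boundary penalty weight

for step in range(200):
    x = torch.rand(256, 2, requires_grad=True)      # interior samples
    a = torch.ones(256)                             # |J| data (placeholder)
    grad_u = torch.autograd.grad(u_net(x).sum(), x, create_graph=True)[0]
    xb = torch.rand(64, 2); xb[:, 1] = 0.0          # one piece of the boundary
    g = xb[:, 0]                                    # Dirichlet data (placeholder)
    loss = ((a * grad_u.norm(dim=1)).mean()
            + beta * ((u_net(xb).squeeze() - g) ** 2).mean())
    opt.zero_grad(); loss.backward(); opt.step()
```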
8. Convergence Rate Analysis of Galerkin Approximation of Inverse Potential Problem
- Author
-
Jin, Bangti, Lu, Xiliang, Quan, Qimeng, and Zhou, Zhi
- Subjects
Mathematics - Numerical Analysis - Abstract
In this work we analyze the inverse problem of recovering the space-dependent potential coefficient in an elliptic/parabolic problem from distributed observation. We establish novel (weighted) conditional stability estimates under very mild conditions on the problem data. Then we provide an error analysis of a standard reconstruction scheme based on the output least-squares formulation with Tikhonov regularization (by an $H^1$-seminorm penalty), discretized by the Galerkin finite element method with continuous piecewise linear finite elements in space (and the backward Euler method in time for parabolic problems). We present a detailed analysis of the discrete scheme, and provide convergence rates in a weighted $L^2(\Omega)$ norm for the discrete approximations with respect to the exact potential. The error bounds depend explicitly on the noise level, the regularization parameter, and the discretization parameter(s). Under suitable conditions, we also derive error estimates in the standard $L^2(\Omega)$ and interior $L^2$ norms. The analysis employs sharp a priori error estimates and nonstandard test functions. Several numerical experiments are given to complement the theoretical analysis., Comment: 23 pages, 4 figures
- Published
- 2022
9. Imaging Anisotropic Conductivities from Current Densities
- Author
-
Liu, Huan, Jin, Bangti, and Lu, Xiliang
- Subjects
Mathematics - Numerical Analysis ,Mathematics - Analysis of PDEs - Abstract
In this paper, we propose and analyze a reconstruction algorithm for imaging an anisotropic conductivity tensor in a second-order elliptic PDE with a nonzero Dirichlet boundary condition from internal current densities. It is based on a regularized output least-squares formulation with the standard $L^2(\Omega)^{d,d}$ penalty, which is then discretized by the standard Galerkin finite element method. We establish the continuity and differentiability of the forward map with respect to the conductivity tensor in the $L^p(\Omega)^{d,d}$ norms, as well as the existence of minimizers and the optimality system of the regularized formulation, using the concept of H-convergence. Further, we provide a detailed analysis of the discretized problem, especially the convergence of the discrete approximations with respect to the mesh size, using the discrete counterpart of H-convergence. In addition, we develop a projected Newton algorithm for solving the first-order optimality system. We present extensive two-dimensional numerical examples to show the efficiency of the proposed method., Comment: 32 pages, 10 figures, to appear at SIAM Journal on Imaging Sciences
- Published
- 2022
10. Sample-Efficient Sparse Phase Retrieval via Stochastic Alternating Minimization
- Author
-
Cai, Jian-Feng, Jiao, Yuling, Lu, Xiliang, and You, Juntao
- Subjects
Mathematics - Numerical Analysis - Abstract
In this work we propose a nonconvex two-stage stochastic alternating minimization (SAM) method for sparse phase retrieval. The proposed algorithm is guaranteed to achieve exact recovery from $O(s\log n)$ samples, provided the initial guess lies in a local neighborhood of the ground truth. The algorithm is thus two-stage: first we estimate a desired initial guess (e.g., via a spectral method), and then we apply a randomized alternating minimization strategy for local refinement. The hard thresholding pursuit algorithm is employed to solve the sparsity-constrained least-squares subproblems. We give theoretical justification that SAM finds the underlying signal exactly in a finite number of iterations (no more than $O(\log m)$ steps) with high probability. Further, numerical experiments illustrate that SAM requires fewer measurements than state-of-the-art algorithms for the sparse phase retrieval problem.
- Published
- 2021
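A hedged sketch of the refinement stage: alternate between re-estimating the measurement signs on a random batch and a hard-thresholding-pursuit-style sparse least-squares update (the spectral initialization is replaced below by a perturbed ground truth):

```python
# Sketch of the stochastic alternating minimization (SAM) refinement stage:
# alternate a sign (phase) update on a random batch with a thresholded
# gradient step and a least-squares solve on the selected support.
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 200, 400, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = np.abs(A @ x_true)                       # phaseless measurements

x = x_true + 0.1 * rng.standard_normal(n)    # stand-in for the spectral init
for it in range(50):
    batch = rng.choice(m, m // 4, replace=False)          # random subset
    z = np.sign(A[batch] @ x) * y[batch]                  # sign update
    g = x + A[batch].T @ (z - A[batch] @ x) / len(batch)  # gradient step
    S = np.argsort(np.abs(g))[-s:]                        # keep s largest entries
    x = np.zeros(n)
    x[S], *_ = np.linalg.lstsq(A[batch][:, S], z, rcond=None)  # LS on support
print("distance to signal:", min(np.linalg.norm(x - x_true),
                                 np.linalg.norm(x + x_true)))
```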
11. Analysis of Deep Ritz Methods for Laplace Equations with Dirichlet Boundary Conditions
- Author
-
Duan, Chenguang, Jiao, Yuling, Lai, Yanming, Lu, Xiliang, Quan, Qimeng, and Yang, Jerry Zhijian
- Subjects
Mathematics - Numerical Analysis - Abstract
Deep Ritz methods (DRM) have been shown numerically to be efficient in solving partial differential equations. In this paper, we present a convergence rate in the $H^{1}$ norm for deep Ritz methods for Laplace equations with Dirichlet boundary condition, where the error depends explicitly on the depth and width of the deep neural networks and the number of samples. Furthermore, the depth and width of the networks can be chosen properly in terms of the number of training samples. The main idea of the proof is to decompose the total error of the DRM into three parts: the approximation error, the statistical error, and the error caused by the boundary penalty. We bound the approximation error in the $H^{1}$ norm with $\mathrm{ReLU}^{2}$ networks and control the statistical error via Rademacher complexity. In particular, we derive a bound on the Rademacher complexity of the non-Lipschitz composition of the gradient norm with a $\mathrm{ReLU}^{2}$ network, which is of independent interest. We also analyze the error induced by the boundary penalty method and give an a priori rule for tuning the penalty parameter., Comment: arXiv admin note: substantial text overlap with arXiv:2103.13330; text overlap with arXiv:2109.01780
- Published
- 2021
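A minimal Deep Ritz sketch matching the setting analyzed above: $\mathrm{ReLU}^2$ activations and a boundary penalty, with Monte Carlo estimates of the integrals (all sizes and the penalty parameter are illustrative assumptions):

```python
# Minimal Deep Ritz sketch for -Laplace(u) = f on (0,1)^2 with u = 0 on the
# boundary: ReLU^2 activations and a boundary penalty, with Monte Carlo
# estimates of the energy and boundary integrals. Illustrative only.
import torch

class ReLU2(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) ** 2

u_net = torch.nn.Sequential(torch.nn.Linear(2, 32), ReLU2(),
                            torch.nn.Linear(32, 32), ReLU2(),
                            torch.nn.Linear(32, 1))
opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)
penalty = 500.0                                  # boundary penalty parameter

def f(x):                                        # source for u = sin(pi x)sin(pi y)
    return 2 * torch.pi**2 * torch.sin(torch.pi * x[:, 0]) * torch.sin(torch.pi * x[:, 1])

for step in range(500):
    x = torch.rand(512, 2, requires_grad=True)           # interior samples
    u = u_net(x).squeeze()
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    energy = (0.5 * (grad_u ** 2).sum(dim=1) - f(x) * u).mean()
    xb = torch.rand(256, 2)                              # boundary samples
    side = torch.randint(0, 4, (256,))
    xb[side == 0, 0] = 0.0; xb[side == 1, 0] = 1.0
    xb[side == 2, 1] = 0.0; xb[side == 3, 1] = 1.0
    loss = energy + penalty * (u_net(xb).squeeze() ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```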
12. A rate of convergence of Physics Informed Neural Networks for the linear second order elliptic PDEs
- Author
-
Jiao, Yuling, Lai, Yanming, Li, Dingwei, Lu, Xiliang, Wang, Fengru, Wang, Yang, and Yang, Jerry Zhijian
- Subjects
Mathematics - Numerical Analysis - Abstract
In recent years, physics-informed neural networks (PINNs) have been shown empirically to be a powerful tool for solving PDEs. However, a numerical analysis of PINNs is still missing. In this paper, we prove a convergence rate for PINNs for second order elliptic equations with Dirichlet boundary condition, by establishing upper bounds on the number of training samples and on the depth and width of the deep neural networks needed to achieve a desired accuracy. The error of PINNs is decomposed into an approximation error and a statistical error, where the approximation error is given in the $C^2$ norm with $\mathrm{ReLU}^{3}$ networks (deep networks with activation function $\max\{0,x\}^3$) and the statistical error is estimated by Rademacher complexity. We derive a bound on the Rademacher complexity of the non-Lipschitz composition of the gradient norm with a $\mathrm{ReLU}^{3}$ network, which is of independent interest., Comment: arXiv admin note: text overlap with arXiv:2103.13330
- Published
- 2021
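A minimal PINN sketch in the spirit of the analyzed setting: $\mathrm{ReLU}^3$ activations (twice differentiable, so the strong-form residual is well defined), shown here for a 1D Poisson problem for brevity:

```python
# Minimal PINN sketch: ReLU^3 activations and a strong-form residual loss for
# -u''(x) = f(x) on (0,1) with u(0) = u(1) = 0. Sizes are illustrative.
import torch

class ReLU3(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) ** 3

net = torch.nn.Sequential(torch.nn.Linear(1, 32), ReLU3(),
                          torch.nn.Linear(32, 32), ReLU3(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):
    x = torch.rand(256, 1, requires_grad=True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.pi**2 * torch.sin(torch.pi * x)            # so u = sin(pi x)
    residual = ((-d2u - f) ** 2).mean()
    xb = torch.tensor([[0.0], [1.0]])
    loss = residual + 100.0 * (net(xb) ** 2).mean()      # boundary penalty
    opt.zero_grad(); loss.backward(); opt.step()
```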
13. Convergence Rate Analysis for Deep Ritz Method
- Author
-
Duan, Chenguang, Jiao, Yuling, Lai, Yanming, Lu, Xiliang, and Yang, Zhijian
- Subjects
Mathematics - Numerical Analysis - Abstract
Using deep neural networks to solve PDEs has attracted a lot of attention recently. However, the understanding of why the deep learning method works falls far behind its empirical success. In this paper, we provide a rigorous numerical analysis of the deep Ritz method (DRM) \cite{wan11} for second order elliptic equations with Neumann boundary conditions. We establish the first nonasymptotic convergence rate in the $H^1$ norm for the DRM using deep networks with $\mathrm{ReLU}^2$ activation functions. In addition to providing a theoretical justification of the DRM, our study also sheds light on how to set the hyper-parameters of depth and width to achieve the desired convergence rate in terms of the number of training samples. Technically, we derive bounds on the approximation error of deep $\mathrm{ReLU}^2$ networks in the $H^1$ norm and on the Rademacher complexity of the non-Lipschitz composition of the gradient norm and a $\mathrm{ReLU}^2$ network, both of which are of independent interest.
- Published
- 2021
14. Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
- Author
-
Jiao, Yuling, Lai, Yanming, Lu, Xiliang, Wang, Fengru, Yang, Jerry Zhijian, and Yang, Yuanyuan
- Subjects
Computer Science - Machine Learning ,Computer Science - Neural and Evolutionary Computing ,Mathematics - Numerical Analysis - Abstract
In this paper, we construct neural networks with ReLU, sine and $2^x$ as activation functions. For a general continuous function $f$ defined on $[0,1]^d$ with modulus of continuity $\omega_f(\cdot)$, we construct ReLU-sine-$2^x$ networks that enjoy an approximation rate $\mathcal{O}(\omega_f(\sqrt{d})\cdot2^{-M}+\omega_{f}\left(\frac{\sqrt{d}}{N}\right))$, where $M,N\in \mathbb{N}^{+}$ denote the hyperparameters related to the widths of the networks. As a consequence, we can construct ReLU-sine-$2^x$ networks with depth $5$ and width $\max\left\{\left\lceil2d^{3/2}\left(\frac{3\mu}{\epsilon}\right)^{1/{\alpha}}\right\rceil,2\left\lceil\log_2\frac{3\mu d^{\alpha/2}}{2\epsilon}\right\rceil+2\right\}$ that approximate $f\in \mathcal{H}_{\mu}^{\alpha}([0,1]^d)$ within a given tolerance $\epsilon >0$ measured in the $L^p$ norm, $p\in[1,\infty)$, where $\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$ denotes the Hölder continuous function class defined on $[0,1]^d$ with order $\alpha \in (0,1]$ and constant $\mu > 0$. Therefore, the ReLU-sine-$2^x$ networks overcome the curse of dimensionality on $\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$. In addition to their super expressive power, functions implemented by ReLU-sine-$2^x$ networks are (generalized) differentiable, enabling us to apply SGD for training.
- Published
- 2021
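A toy forward pass showing the three activations combined in one network; the paper's explicit depth-5 construction with prescribed widths is not reproduced, and all sizes below are arbitrary:

```python
# Toy forward pass of a network mixing the three activations discussed above
# (ReLU, sine, 2^x). This only shows the ingredients, not the paper's
# explicit construction.
import numpy as np

rng = np.random.default_rng(0)
relu = lambda t: np.maximum(t, 0.0)
acts = [relu, np.sin, np.exp2]            # ReLU, sine, 2^x
widths = [4, 16, 16, 16, 1]               # input dim 4, scalar output
Ws = [rng.standard_normal((widths[i + 1], widths[i])) / np.sqrt(widths[i])
      for i in range(len(widths) - 1)]

def forward(x):
    for W, act in zip(Ws[:-1], acts):     # three hidden layers, one activation each
        x = act(W @ x)
    return Ws[-1] @ x                     # linear output layer

print(forward(rng.random(4)))
```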
15. Sparse Signal Recovery from Phaseless Measurements via Hard Thresholding Pursuit
- Author
-
Cai, Jian-Feng, Li, Jingzhi, Lu, Xiliang, and You, Juntao
- Subjects
Mathematics - Numerical Analysis - Abstract
In this paper, we consider the sparse phase retrieval problem: recovering an $s$-sparse signal $\bm{x}^{\natural}\in\mathbb{R}^n$ from $m$ phaseless samples $y_i=|\langle\bm{x}^{\natural},\bm{a}_i\rangle|$ for $i=1,\ldots,m$. Existing sparse phase retrieval algorithms are usually first-order and hence converge at most linearly. Inspired by the hard thresholding pursuit (HTP) algorithm in compressed sensing, we propose an efficient second-order algorithm for sparse phase retrieval. Our proposed algorithm is theoretically guaranteed to give exact sparse signal recovery in finitely many (in particular, at most $O(\log m + \log(\|\bm{x}^{\natural}\|_2/|x_{\min}^{\natural}|))$) steps, when $\{\bm{a}_i\}_{i=1}^{m}$ are i.i.d. standard Gaussian random vectors with $m\sim O(s\log(n/s))$ and the initialization is in a neighborhood of the underlying sparse signal. Together with a spectral initialization, our algorithm is guaranteed to achieve exact recovery from $O(s^2\log n)$ samples. Since the computational cost per iteration of our proposed algorithm is of the same order as that of popular first-order algorithms, our algorithm is extremely efficient. Experimental results show that our algorithm can be several times faster than existing sparse phase retrieval algorithms.
- Published
- 2020
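A hedged sketch of one HTP-style iteration for phaseless data, the full-batch counterpart of the stochastic sketch under entry 10: fix the measurement signs, take a gradient step, select a support, and solve a least-squares problem on it (initialization below is a stand-in for the spectral method):

```python
# Sketch of a hard-thresholding-pursuit step for sparse phase retrieval: a
# thresholded gradient step picks the support, then an exact least-squares
# solve on that support supplies the "second-order" ingredient.
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 100, 300, 4
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[:s] = rng.standard_normal(s)
y = np.abs(A @ x_true)                            # phaseless measurements

x = x_true + 0.1 * rng.standard_normal(n)         # stand-in for spectral init
for it in range(20):
    z = np.sign(A @ x) * y                        # fix the measurement signs
    g = x + A.T @ (z - A @ x) / m                 # gradient step
    S = np.argsort(np.abs(g))[-s:]                # candidate support
    x = np.zeros(n)
    x[S], *_ = np.linalg.lstsq(A[:, S], z, rcond=None)   # LS on the support
print("distance:", min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)))
```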
16. On the Regularizing Property of Stochastic Gradient Descent
- Author
-
Jin, Bangti and Lu, Xiliang
- Subjects
Mathematics - Numerical Analysis - Abstract
Stochastic gradient descent is one of the most successful approaches for solving large-scale problems, especially in machine learning and statistics. At each iteration, it employs an unbiased estimator of the full gradient computed from one single randomly selected data point. Hence, it scales well with the problem size and is very attractive for truly massive datasets, and it holds significant potential for solving large-scale inverse problems. In the recent machine learning literature, it was empirically observed that, when equipped with early stopping, it has a regularizing property. In this work, we rigorously establish this regularizing property (under an a priori early stopping rule), and also prove convergence rates under the canonical sourcewise condition, for minimizing the quadratic functional arising from linear inverse problems. This is achieved by combining tools from classical regularization theory and stochastic analysis. Further, we analyze the preasymptotic weak and strong convergence behavior of the algorithm. The theoretical findings shed insight into the performance of the algorithm, and are complemented with illustrative numerical experiments., Comment: 22 pages, better presentation
- Published
- 2018
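A small numpy sketch of the setting: SGD on $\frac{1}{2}\|Ax-y\|^2$ for an ill-conditioned $A$ with an a priori stopping index; tracking the error along the way typically shows the semi-convergence that early stopping exploits (problem sizes, spectrum, and step size are illustrative assumptions):

```python
# SGD sketch for a linear inverse problem: single-row updates on an
# ill-conditioned system with a smooth (sourcewise-type) solution. The
# printed errors typically decay and then grow again: stop early.
import numpy as np

rng = np.random.default_rng(0)
m, n = 300, 100
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
svals = 1.0 / np.arange(1, n + 1)                    # polynomially ill-posed
A = (U * svals) @ V.T
x_true = V @ (svals ** 2 * rng.standard_normal(n))   # sourcewise-type solution
y = A @ x_true + 1e-4 * rng.standard_normal(m)

x = np.zeros(n)
for k in range(1, 50001):
    i = rng.integers(m)
    x -= 1.0 * A[i] * (A[i] @ x - y[i])              # single-row SGD step
    if k % 10000 == 0:
        print(k, np.linalg.norm(x - x_true))         # watch for semi-convergence
```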
17. Robust Decoding from 1-Bit Compressive Sampling with Least Squares
- Author
-
Huang, Jian, Jiao, Yuling, Lu, Xiliang, and Zhu, Liping
- Subjects
Mathematics - Numerical Analysis ,Statistics - Computation - Abstract
In 1-bit compressive sensing (1-bit CS), where the target signal is coded into a binary measurement, one goal is to recover the signal from noisy and quantized samples. Mathematically, the 1-bit CS model reads $y = \eta \odot\textrm{sign} (\Psi x^* + \epsilon)$, where $x^{*}\in \mathcal{R}^{n}$, $y\in \mathcal{R}^{m}$, $\Psi \in \mathcal{R}^{m\times n}$, $\epsilon$ is the random error before quantization, and $\eta\in \mathcal{R}^{m}$ is a random vector modeling the sign flips. Due to the presence of nonlinearity, noise and sign flips, it is quite challenging to decode from 1-bit CS measurements. In this paper, we consider a least squares approach under the over-determined and under-determined settings. For $m>n$, we show that, up to a constant $c$, with high probability, the least squares solution $x_{\textrm{ls}}$ approximates $x^*$ with precision $\delta$ as long as $m \geq\widetilde{\mathcal{O}}(\frac{n}{\delta^2})$. For $m< n$, we prove that, up to a constant $c$, with high probability, the $\ell_1$-regularized least-squares solution $x_{\ell_1}$ lies in the ball with center $x^*$ and radius $\delta$, provided that $m \geq \mathcal{O}( \frac{s\log n}{\delta^2})$ and $\|x^*\|_0 := s < m$. We introduce a Newton-type method, the so-called primal-dual active set (PDAS) algorithm, to solve the nonsmooth optimization problem. The PDAS possesses the property of one-step convergence: it only requires solving a small least squares problem on the active set. Therefore, the PDAS is extremely efficient for recovering sparse signals through continuation. We propose a novel regularization parameter selection rule which does not introduce any extra computational overhead. Extensive numerical experiments are presented to illustrate the robustness of our proposed model and the efficiency of our algorithm.
- Published
- 2017
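A sketch of the 1-bit CS model and the plain least-squares decoder in the over-determined regime $m>n$ (the PDAS algorithm itself is not reproduced; all parameters below are illustrative):

```python
# Sketch of the 1-bit CS model above and the plain least-squares decoder in
# the over-determined regime m > n. Sign flips and pre-quantization noise are
# included; recovery is only up to a positive constant, so directions compare.
import numpy as np

rng = np.random.default_rng(0)
m, n = 2000, 50
Psi = rng.standard_normal((m, n))
x_star = rng.standard_normal(n); x_star /= np.linalg.norm(x_star)
eps = 0.1 * rng.standard_normal(m)                    # noise before quantization
eta = np.where(rng.random(m) < 0.05, -1.0, 1.0)       # 5% random sign flips
y = eta * np.sign(Psi @ x_star + eps)                 # binary measurements

x_ls, *_ = np.linalg.lstsq(Psi, y, rcond=None)        # least-squares decoder
x_ls /= np.linalg.norm(x_ls)                          # compare directions only
print("direction error:", np.linalg.norm(x_ls - x_star))
```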
18. Preasymptotic Convergence of Randomized Kaczmarz Method
- Author
-
Jiao, Yuling, Jin, Bangti, and Lu, Xiliang
- Subjects
Mathematics - Numerical Analysis ,Mathematics - Optimization and Control - Abstract
The Kaczmarz method is a popular iterative method for solving inverse problems, especially in computed tomography. Recently, it was established that a randomized version of the method enjoys exponential convergence for well-posed problems, with the convergence rate determined by a variant of the condition number. In this work, we analyze the preasymptotic convergence behavior of the randomized Kaczmarz method, and show that the low-frequency error (with respect to the right singular vectors) decays faster during the first iterations than the high-frequency error. Under the assumption that the inverse solution is smooth (e.g., satisfies a sourcewise representation), the result explains the fast empirical convergence behavior, thereby shedding new insights into the excellent performance of the randomized Kaczmarz method in practice. Further, we propose a simple strategy to stabilize the asymptotic convergence of the iteration by means of variance reduction. We provide extensive numerical experiments to confirm the analysis and to elucidate the behavior of the algorithms., Comment: 20 pages
- Published
- 2017
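For reference, a standard randomized Kaczmarz sketch with rows sampled proportionally to $\|a_i\|^2$ (the variance-reduction stabilization of the paper is not reproduced):

```python
# Randomized Kaczmarz sketch: rows sampled with probability proportional to
# ||a_i||^2, each step projecting the iterate onto one hyperplane {a_i x = y_i}.
import numpy as np

rng = np.random.default_rng(0)
m, n = 400, 100
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true

row_norms2 = (A ** 2).sum(axis=1)
prob = row_norms2 / row_norms2.sum()
x = np.zeros(n)
for k in range(5000):
    i = rng.choice(m, p=prob)
    x += (y[i] - A[i] @ x) / row_norms2[i] * A[i]     # project onto hyperplane i
print("error:", np.linalg.norm(x - x_true))
```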
19. Iterative Soft/Hard Thresholding with Homotopy Continuation for Sparse Recovery
- Author
-
Jiao, Yuling, Jin, Bangti, and Lu, Xiliang
- Subjects
Mathematics - Numerical Analysis ,Mathematics - Optimization and Control - Abstract
In this note, we analyze an iterative soft/hard thresholding algorithm with homotopy continuation for recovering a sparse signal $x^\dag$ from noisy data of noise level $\epsilon$. Under suitable regularity and sparsity conditions, we design a path along which the algorithm finds a solution $x^*$ that admits a sharp reconstruction error $\|x^* - x^\dag\|_{\ell^\infty} = O(\epsilon)$ with an iteration complexity $O(\frac{\ln \epsilon}{\ln \gamma} np)$, where $n$ and $p$ are the problem dimensions and $\gamma\in (0,1)$ controls the length of the path. Numerical examples are given to illustrate its performance., Comment: 5 pages, 4 figures
- Published
- 2017
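A minimal sketch of the soft-thresholding variant with homotopy continuation: a few inner iterations per stage, then $\lambda\leftarrow\gamma\lambda$ with warm starts (the step size and schedule below are illustrative assumptions):

```python
# Iterative soft thresholding with homotopy continuation: run a few ISTA
# steps at each lambda, then shrink lambda geometrically along the path,
# warm-starting each stage from the previous one.
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 100, 400, 5                      # samples, dimension, sparsity
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_dag = np.zeros(p); x_dag[:s] = 1.0
y = A @ x_dag + 0.01 * rng.standard_normal(n)

t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size 1/L
gamma, lam = 0.7, np.abs(A.T @ y).max()    # gamma controls the path length
x = np.zeros(p)
while lam > 0.01:
    for _ in range(5):                     # a few inner iterations per stage
        g = x - t * A.T @ (A @ x - y)      # gradient step on the data fit
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold
    lam *= gamma                           # continuation: shrink lambda
print("l_inf error:", np.abs(x - x_dag).max())
```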
20. Group Sparse Recovery via the $\ell^0(\ell^2)$ Penalty: Theory and Algorithm
- Author
-
Jiao, Yuling, Jin, Bangti, and Lu, Xiliang
- Subjects
Computer Science - Information Theory ,Mathematics - Numerical Analysis - Abstract
In this work we propose and analyze a novel approach for group sparse recovery. It is based on regularized least squares with an $\ell^0(\ell^2)$ penalty, which penalizes the number of nonzero groups. One distinct feature of the approach is the built-in decorrelation mechanism within each group, which allows it to handle strong within-group correlation, a challenging setting. We provide a complete analysis of the regularized model, e.g., the existence of a global minimizer, an invariance property, support recovery, and properties of block coordinatewise minimizers. Further, the regularized problem admits an efficient primal-dual active set algorithm with provable finite-step global convergence. At each iteration, it involves solving a least-squares problem on the active set only, and it exhibits fast local convergence, which makes the method extremely efficient for recovering group sparse signals. Extensive numerical experiments are presented to illustrate salient features of the model and the efficiency and accuracy of the algorithm. A comparative study indicates its competitiveness with existing approaches., Comment: 15 pp, to appear at IEEE Transactions on Signal Processing
- Published
- 2016
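The proximal map of the $\ell^0(\ell^2)$ penalty is a group hard-thresholding: a group survives iff its $\ell^2$ norm exceeds $\sqrt{2\lambda}$, with no within-group shrinkage, which reflects the built-in decorrelation. A short sketch:

```python
# The proximal (hard-thresholding) operator of the l0(l2) penalty: a group is
# kept whole iff its l2 norm exceeds sqrt(2*lambda); otherwise it is zeroed.
import numpy as np

def prox_group_l0(x, groups, lam):
    """Minimizes 0.5*||z - x||^2 + lam * #{nonzero groups of z}."""
    z = np.zeros_like(x)
    for g in groups:                       # each g is an index array
        if np.linalg.norm(x[g]) > np.sqrt(2 * lam):
            z[g] = x[g]                    # keep the whole group, unshrunk
    return z

x = np.array([3.0, 0.1, 0.05, 0.1, 2.0, 0.1])
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
print(prox_group_l0(x, groups, lam=1.0))   # the middle group is zeroed out
```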
21. Alternating Direction Method of Multipliers for Linear Inverse Problems
- Author
-
Jiao, Yuling, Jin, Qinian, Lu, Xiliang, and Wang, Weijie
- Subjects
Mathematics - Numerical Analysis - Abstract
In this paper we propose an iterative method based on the alternating direction method of multipliers (ADMM) to solve linear inverse problems in Hilbert spaces with a general convex penalty term. When the data is given exactly, we provide a convergence analysis of our ADMM algorithm without assuming the existence of a Lagrange multiplier. When the data contains noise, we show that our method is a regularization method as long as it is terminated by a suitable stopping rule. Various numerical simulations are performed to test the efficiency of the method.
- Published
- 2016
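A scaled-form ADMM sketch for the common instance $\min \frac{1}{2}\|Ax-y\|^2+\lambda\|x\|_1$ of the general convex-penalty setting (a finite-dimensional stand-in; the Hilbert-space stopping rule is not reproduced):

```python
# Scaled-form ADMM for min 0.5*||Ax - y||^2 + lam*||z||_1 subject to x = z.
# With noisy data one would stop early, as the regularization result suggests.
import numpy as np

rng = np.random.default_rng(0)
m, n, lam, rho = 200, 400, 0.05, 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)

x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
M = A.T @ A + rho * np.eye(n)              # formed once (could cache a factorization)
for k in range(300):
    x = np.linalg.solve(M, A.T @ y + rho * (z - u))       # x-update (quadratic)
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # z-update
    u = u + x - z                                          # dual update
print("error:", np.linalg.norm(z - x_true) / np.linalg.norm(x_true))
```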
22. A simple finite element method for the boundary value problem with a Riemann-Liouville derivative
- Author
-
Jin, Bangti, Lazarov, Raytcho, Lu, Xiliang, and Zhou, Zhi
- Subjects
Mathematics - Numerical Analysis - Abstract
We consider a boundary value problem involving a Riemann-Liouville fractional derivative of order $\alpha\in (3/2,2)$ on the unit interval $(0,1)$. The standard Galerkin finite element approximation converges slowly due to the presence of the singular term $x^{\alpha-1}$ in the solution representation. In this work, we develop a simple technique that transforms the problem into a second-order two-point boundary value problem with nonlocal low-order terms, from whose solution the solution to the original problem can be reconstructed directly. The stability of the variational formulation and the optimal regularity pickup of the solution are analyzed. A novel Galerkin finite element method with piecewise linear or quadratic finite elements is developed, and $L^2(D)$ error estimates are provided. The approach is then applied to the corresponding fractional Sturm-Liouville problem, and error estimates for the eigenvalue approximations are given. Extensive numerical results fully confirm the theoretical study., Comment: 22 pp
- Published
- 2015
23. A fast nonstationary iterative method with convex penalty for inverse problems in Hilbert spaces
- Author
-
Jin, Qinian and Lu, Xiliang
- Subjects
Mathematics - Numerical Analysis - Abstract
In this paper we consider the computation of approximate solutions of inverse problems in Hilbert spaces. In order to capture special features of the solutions, non-smooth convex functions are introduced as penalty terms. By exploiting the Hilbert space structure of the underlying problems, we propose a fast iterative regularization method which reduces to the classical nonstationary iterated Tikhonov regularization when the penalty term is chosen to be the square of the norm. Each iteration of the method consists of two steps: the first step involves only the operator from the problem, while the second step involves only the penalty term. This splitting character has the advantage of making the computation efficient. In case the data is corrupted by noise, a stopping rule is proposed to terminate the method and the corresponding regularization property is established. Finally, we test the performance of the method in various numerical simulations, including image deblurring, the determination of the source term in a Poisson equation, and a de-autoconvolution problem., Comment: To appear in Inverse Problems
- Published
- 2014
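For the quadratic-penalty special case mentioned above, the method reduces to classical nonstationary iterated Tikhonov; here is a small sketch with a geometrically decaying $\alpha_k$ and a discrepancy-type stop (used as a stand-in for the paper's stopping rule; sizes are illustrative):

```python
# Classical nonstationary iterated Tikhonov (the quadratic-penalty special
# case): x_k solves (A*A + alpha_k I) x_k = A*y + alpha_k x_{k-1}, with a
# geometrically decaying alpha_k and a discrepancy-type stopping rule.
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
delta = 1e-3                                       # noise level per entry
y = A @ x_true + delta * rng.standard_normal(m)

x = np.zeros(n)
alpha, q, tau = 1.0, 0.5, 1.1
for k in range(50):
    x = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y + alpha * x)
    if np.linalg.norm(A @ x - y) <= tau * delta * np.sqrt(m):   # discrepancy stop
        break
    alpha *= q                                     # nonstationary: shrink alpha
print("stopped at k =", k, "error:", np.linalg.norm(x - x_true))
```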
24. An Analysis of Finite Element Approximation in Electrical Impedance Tomography
- Author
-
Gehre, Matthias, Jin, Bangti, and Lu, Xiliang
- Subjects
Mathematics - Numerical Analysis - Abstract
We present a finite element analysis of electrical impedance tomography for reconstructing the conductivity distribution from electrode voltage measurements by means of Tikhonov regularization. Two popular choices of the penalty term are considered: the $H^1(\Omega)$-norm smoothness penalty and the total variation seminorm penalty. A piecewise linear finite element method is employed for discretizing the forward model (the complete electrode model), the conductivity, and the penalty functional. The convergence of the finite element approximations of the Tikhonov model on both polyhedral and smooth curved domains is established. This provides a rigorous justification for the ad hoc discretization procedures in the literature., Comment: 20 pages
- Published
- 2013