Your search for "Xu, Zhi-Qin John" returned 194 results.

Search Constraints

Author: "Xu, Zhi-Qin John" · Publication Year Range: Last 10 years

Search Results

1. On understanding and overcoming spectral biases of deep neural network learning methods for solving PDEs

2. Complexity Control Facilitates Reasoning-Based Compositional Generalization in Transformers

3. A rationale from frequency perspective for grokking in training neural network

4. The Buffer Mechanism for Multi-Step Information Reasoning in Language Models

5. Initialization is Critical to Whether Transformers Fit Composite Functions by Reasoning or Memorizing

6. Loss Jump During Loss Switch in Solving PDEs with Neural Networks

7. Efficient and Flexible Method for Reducing Moderate-size Deep Neural Networks with Condensation

8. Input gradient annealing neural network for solving low-temperature Fokker-Planck equations

10. Understanding Time Series Anomaly State Detection through One-Class Classification

11. Solving multiscale dynamical systems by deep learning

12. An Unsupervised Deep Learning Approach for the Wave Equation Inverse Problem

13. Optimistic Estimate Uncovers the Potential of Nonlinear Models

14. Stochastic Modified Equations and Dynamics of Dropout Algorithm

15. Loss Spike in Training Neural Networks

16. Understanding the Initial Condensation of Convolutional Neural Networks

17. Laplace-fPINNs: Laplace-based fractional physics-informed neural networks for solving forward and inverse problems of subdiffusion

18. Phase Diagram of Initial Condensation for Two-layer Neural Networks

19. Linear Stability Hypothesis and Rank Stratification for Nonlinear Models

20. Bayesian Inversion with Neural Operator (BINO) for Modeling Subdiffusion: Forward and Inverse Problems

21. DeepFlame: A deep learning empowered open-source platform for reacting flow simulations

22. Implicit regularization of dropout

23. Embedding Principle in Depth for the Loss Landscape Analysis of Deep Neural Networks

24. An Experimental Comparison Between Temporal Difference and Residual Gradient with Neural Network Approximation

25. Empirical Phase Diagram for Three-layer Neural Networks with Infinite Width

26. Limitation of Characterizing Implicit Regularization by Data-independent Functions

27. Overview frequency principle/spectral bias in deep learning

28. A multi-scale sampling method for accurate and robust deep neural network to predict combustion chemical kinetics

29. A deep learning-based model reduction (DeePMR) method for simplifying chemical kinetics

30. Subspace Decomposition based DNN algorithm for elliptic type multi-scale PDEs

31. Embedding Principle: a hierarchical structure of loss landscape of deep neural networks

32. Dropout in Training Neural Networks: Flatness of Solution and Noise Structure

33. Data-informed Deep Optimization

34. Force-in-domain GAN inversion

35. MOD-Net: A Machine Learning Approach via Model-Operator-Data Network for Solving PDEs

36. Embedding Principle of Loss Landscape of Deep Neural Networks

37. An Upper Limit of Decaying Rate with Respect to Frequency in Deep Neural Network

38. Towards Understanding the Condensation of Neural Networks at Initial Training

40. Linear Frequency Principle Model to Understand the Absence of Overfitting in Neural Networks

41. Frequency Principle in Deep Learning Beyond Gradient-descent-based Training

42. Fourier-domain Variational Formulation and Its Well-posedness for Supervised Learning

43. On the exact computation of linear frequency principle dynamics and its generalization

44. A multi-scale DNN algorithm for nonlinear elliptic equations with multiple scales

45. A regularized deep matrix factorized model of matrix completion for image restoration

46. Deep frequency principle towards understanding why deeper learning is faster

47. Multi-scale Deep Neural Network (MscaleDNN) for Solving Poisson-Boltzmann Equation in Complex Domains

48. Phase diagram for two-layer ReLU neural networks at infinite-width limit

49. Implicit bias with Ritz-Galerkin method in understanding deep learning for solving PDEs
