116 results for "polynomial chaos"
Search Results
2. SeAr PC: Sensitivity enhanced arbitrary Polynomial Chaos.
- Author
- Pepper, Nick, Montomoli, Francesco, and Kantarakias, Kyriakos
- Subjects
- *SENSITIVITY analysis
- Published
- 2024
- Full Text
- View/download PDF
3. Novel method for reliability optimization design based on rough set theory and hybrid surrogate model.
- Author
- Fan, Haoran, Wang, Chong, and Li, Shaohua
- Subjects
- *ROUGH sets, *KNOWLEDGE base, *POLYNOMIAL chaos, *AIRFRAMES, *SYSTEM safety, *KRIGING
- Abstract
• Dual-approximate model is adopted to quantify the bounded‑but-irregular set. • Radical and conservative reliability indexes are defined to evaluate structure safety. • Two types of optimization models are constructed to obtain different design results. • A hybrid surrogate model nested with the kriging model and PCE is developed. Considering the intricate correlation among uncertain parameters from multiple sources in engineering practice, the bounded region describing parameter uncertainty displays irregular boundary features. To facilitate effective reliability analysis and optimization within this context, this paper introduces a novel reliability optimization design methodology grounded in rough set theory and a hybrid surrogate model. Initially, utilizing the concept of equivalent knowledge bases and limited experimental data, upper and lower approximation sets are constructed to quantitatively describe the irregular region of uncertain parameters from both internal and external perspectives. Subsequently, based on the upper and lower bounds of the response derived from the dual-approximate model, both radical and conservative reliability indexes are defined to assess system safety. On this basis, reliability optimization design models for complex structures are established by incorporating these indexes as constraints. Meanwhile, to enhance computational efficiency during each optimization iteration, a hybrid surrogate model integrating Polynomial Chaos Expansion (PCE) and Kriging models is employed, effectively substituting for the original computationally intensive simulations. Finally, the practicality and effectiveness of the proposed method are verified through a numerical example and two optimization problems relevant to aircraft structures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Lagrangian operator inference enhanced with structure-preserving machine learning for nonintrusive model reduction of mechanical systems.
- Author
- Sharma, Harsh, Najera-Flores, David A., Todd, Michael D., and Kramer, Boris
- Subjects
- *MACHINE learning, *POLYNOMIAL chaos, *DIGITAL image correlation, *MECHANICAL models, *REDUCED-order models, *SCIENCE education, *NONLINEAR systems
- Abstract
Complex mechanical systems often exhibit strongly nonlinear behavior due to the presence of nonlinearities in the energy dissipation mechanisms, material constitutive relationships, or geometric/connectivity mechanics. Numerical modeling of these systems leads to nonlinear full-order models that possess an underlying Lagrangian structure. This work proposes a Lagrangian operator inference method enhanced with structure-preserving machine learning to learn nonlinear reduced-order models (ROMs) of nonlinear mechanical systems. This two-step approach first learns the best-fit linear Lagrangian ROM via Lagrangian operator inference and then presents a structure-preserving machine learning method to learn nonlinearities in the reduced space. The proposed approach can learn a structure-preserving nonlinear ROM purely from data, unlike the existing operator inference approaches that require knowledge about the mathematical form of nonlinear terms. From a machine learning perspective, it accelerates the training of the structure-preserving neural network by providing an informed prior (i.e., the linear Lagrangian ROM structure), and it reduces the computational cost of the network training by operating on the reduced space. The method is first demonstrated on two simulated examples: a conservative nonlinear rod model and a two-dimensional nonlinear membrane with nonlinear internal damping. Finally, the method is demonstrated on an experimental dataset consisting of digital image correlation measurements taken from a lap-joint beam structure from which a predictive model is learned that captures amplitude-dependent frequency and damping characteristics accurately. The numerical results demonstrate that the proposed approach yields generalizable nonlinear ROMs that exhibit bounded energy error, capture the nonlinear characteristics reliably, and provide accurate long-time predictions outside the training data regime. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Generative models for the deformation of industrial shapes with linear geometric constraints: Model order and parameter space reductions.
- Author
- Padula, Guglielmo, Romor, Francesco, Stabile, Giovanni, and Rozza, Gianluigi
- Subjects
- *GEOMETRIC shapes, *MACHINE learning, *GEOMETRIC modeling, *COMPUTATIONAL fluid dynamics, *POLYNOMIAL chaos, *NAVIER-Stokes equations
- Abstract
Real-world applications of computational fluid dynamics often involve the evaluation of quantities of interest for several distinct geometries that define the computational domain or are embedded inside it. For example, design optimization studies require the realization of response surfaces from the parameters that determine the geometrical deformations to relevant outputs to be optimized. In this context, a crucial aspect to be addressed is represented by the limited resources at one's disposal to computationally generate different geometries or to physically obtain them from direct measurements. This is the case for patient-specific biomedical applications, for example. When additional linear geometrical constraints need to be imposed, the computational costs increase substantially. Such constraints include total volume conservation and barycenter location. We develop a new paradigm that employs generative models from machine learning to efficiently sample new geometries with linear constraints. A consequence of our approach is the reduction of the parameter space from the original geometrical parametrization to a low-dimensional latent space of the generative models. Crucial is the assessment of the quality of the distribution of the constrained geometries obtained with respect to physical and geometrical quantities of interest. Non-intrusive model order reduction is enhanced since smaller parametric spaces are considered. We test our methodology on two academic test cases: a mixed Poisson problem on the 3D Stanford bunny with fixed barycenter deformations and the multiphase turbulent incompressible Navier–Stokes equations for the Duisburg naval hull test case with fixed volume deformations. • A methodology to enforce geometrical constraints on generative models is developed. • For validation we observe some metrics on physical quantities of interest. • The methodology is tested for model order reduction and reduction in parameter space.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Classifier-based adaptive polynomial chaos expansion for high-dimensional uncertainty quantification.
- Author
- Thapa, Mishal, Mulani, Sameer B., Paudel, Achyut, Gupta, Subham, and Walters, Robert W.
- Subjects
- *POLYNOMIAL chaos, *RANDOM numbers, *LAMINATED materials
- Abstract
A novel approach for the construction of polynomial chaos expansion (PCE) is proposed to facilitate high-dimensional uncertainty quantification (UQ). The current PCE techniques are well-known for UQ; however, they are affected by the curse of dimensionality that leads to over-fitting and tractability issues. Although L1-minimization can be used to deal with over-fitting, it is still ineffective for problems with a large number of independent random inputs or requiring a very high-order PCE. Therefore, a classifier-based PCE (CAPCE) is presented to mitigate the factorial growth of basis terms and prevent over-fitting. The adaptive framework includes four basis selection strategies – enrichment, screening, discovery, and recycling – in the inner loop and the enrichment of the training samples in the outer loop. Mainly, an adaptive classifier is trained on the dominant and discarded basis terms obtained from L1-minimization during discovery. It then allows the judicious selection of new higher-order basis candidates for the L1-solver in the next iteration, thereby alleviating the effect of the curse of dimensionality. The proposed framework has been tested with analytical problems of varying sizes of independent random inputs and an engineering composite laminate problem. The comparison of CAPCE with the available efficient PCE techniques demonstrated improvements in accuracy using fewer samples due to a faster convergence rate. • Classifier-based adaptive polynomial chaos expansion (PCE) for high-dimensional uncertainty quantification. • Four strategies for PCE basis adaptivity—enrichment, screening, discovery, and recycling. • Framework with adaptive basis selection and sequential sampling. • Comparison with available efficient sparse PCE approaches. • Proposed adaptive PCE has a faster convergence rate than available approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
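The record above hinges on L1-minimization for picking out a sparse set of PCE basis terms. As a hedged illustration (this is not the CAPCE algorithm itself, only the underlying sparse-recovery step), the sketch below fits a one-dimensional Hermite PCE by a plain coordinate-descent Lasso; the target function, sample size, and penalty are arbitrary demo choices.

```python
import math
import numpy as np

def hermite_design(x, order):
    """Design matrix of probabilists' Hermite polynomials He_0..He_order,
    normalized to unit variance under the standard Gaussian measure."""
    H = np.zeros((len(x), order + 1))
    H[:, 0] = 1.0
    if order >= 1:
        H[:, 1] = x
    for n in range(1, order):
        H[:, n + 1] = x * H[:, n] - n * H[:, n - 1]  # three-term recurrence
    for n in range(order + 1):
        H[:, n] /= math.sqrt(math.factorial(n))      # E[He_n^2] = n!
    return H

def lasso_cd(A, y, lam, iters=200):
    """Coordinate-descent Lasso: min_c 0.5*||y - A c||^2 + lam*||c||_1."""
    c = np.zeros(A.shape[1])
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(A.shape[1]):
            r = y - A @ c + A[:, j] * c[j]           # residual excluding term j
            rho = A[:, j] @ r
            c[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return c

rng = np.random.default_rng(0)
x = rng.standard_normal(400)
A = hermite_design(x, order=8)
y = 2.0 * A[:, 1] + 0.5 * A[:, 3]      # sparse ground truth: terms 1 and 3
coeff = lasso_cd(A, y, lam=1.0)
```

The soft-thresholding update zeros out basis terms whose correlation with the residual stays below the penalty, which is how the L1-solver discards insignificant terms while keeping the dominant ones.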
7. Incremental Neural Controlled Differential Equations for modeling of path-dependent material behavior.
- Author
- He, Yangzi and Semnani, Shabnam J.
- Subjects
- *DIFFERENTIAL equations, *BOUNDARY value problems, *CONTINUOUS time models, *RECURRENT neural networks, *TIME-varying systems, *MATERIAL point method, *POLYNOMIAL chaos
- Abstract
Data-driven surrogate modeling has emerged as a promising approach for reducing computational expenses of multi-scale simulations. Recurrent Neural Network (RNN) is a common choice for modeling of path-dependent behavior. However, previous studies have shown that RNNs fail to make predictions that are consistent with perturbation in the input strain, leading to potential oscillations and lack of convergence when implemented within finite element simulations. In this work, we leverage neural differential equations which have recently emerged to model time series in a continuous manner and show their robustness in modeling elasto-plastic path-dependent material behavior. We develop a new sequential model called Incremental Neural Controlled Differential Equation (INCDE) for general time-variant dynamical systems, including path-dependent constitutive models. INCDE is formulated and analyzed in terms of stability and convergence. Surrogate models based on INCDE are subsequently trained and tested for J2 and Drucker–Prager plasticity. The surrogate models are implemented for material point simulations and boundary value problems solved using the finite element method with various cyclic and monotonic loading protocols to demonstrate the robustness, consistency, and accuracy of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Quality measures for the evaluation of machine learning architectures on the quantification of epistemic and aleatoric uncertainties in complex dynamical systems.
- Author
- Guth, Stephen, Mojahed, Alireza, and Sapsis, Themistoklis P.
- Subjects
- *EPISTEMIC uncertainty, *MACHINE learning, *COMPUTATIONAL fluid dynamics, *GAUSSIAN processes, *BAYESIAN analysis, *DYNAMICAL systems, *NAVAL architecture, *POLYNOMIAL chaos
- Abstract
Machine learning methods for the construction of data-driven reduced-order models are used in an increasing variety of engineering domains, especially as a supplement to expensive computational fluid dynamics for design problems. An important check on the reliability of surrogate models is Uncertainty Quantification (UQ), a self-assessed estimate of the model error. Accurate UQ allows for cost savings by reducing both the required size of training data sets and the required safety factors, while poor UQ prevents users from confidently relying on model predictions. We introduce two quality measures for the quantification of uncertainty that are suitable for high-dimensional problems: the distribution of normalized residuals on validation data and the distribution of estimated uncertainties. While the distribution of estimated uncertainties evaluates how confident the model is in making predictions, the distribution of normalized residuals evaluates how accurate these uncertainty estimates are. We examine several machine learning techniques, including both Gaussian processes and a family of UQ-augmented neural networks: Ensemble neural networks (ENN), Bayesian neural networks (BNN), Dropout neural networks (D-NN), and Gaussian neural networks (G-NN). We evaluate UQ accuracy (distinct from model accuracy). We apply these metrics to two model data sets representative of complex dynamical systems: an ocean engineering problem in which a ship traverses irregular wave episodes, and a dispersive wave turbulence system with extreme events, the Majda–McLaughlin–Tabak model. We present conclusions concerning model architecture and hyperparameter tuning for use in applications of UQ to dynamical systems, such as active learning schemes. • Uncertainty Quantification (UQ) is the ability of a model to estimate its own errors. • Neural Networks can estimate epistemic uncertainty with prediction ensembles. • Dropout networks performed very well in UQ metrics. • Ensemble networks also did well, and are simple to implement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
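The two quality measures in the record above are cheap to compute for any surrogate that reports a mean and an uncertainty. A minimal sketch under made-up synthetic data: a perfectly calibrated predictor should yield normalized residuals that look standard normal, while the spread of the reported uncertainties is the second measure.

```python
import numpy as np

def normalized_residuals(y_true, mu, sigma):
    """First quality measure: residuals scaled by the model's own
    uncertainty estimate; calibrated UQ yields z approximately N(0, 1)."""
    return (y_true - mu) / sigma

rng = np.random.default_rng(1)
y_true = rng.standard_normal(10_000)
sigma = 0.1 + 0.4 * rng.random(10_000)              # heteroscedastic error bars
mu = y_true + sigma * rng.standard_normal(10_000)   # calibrated predictions
z = normalized_residuals(y_true, mu, sigma)
# second quality measure: the distribution of the estimated uncertainties
uncertainty_dist = sigma
```

An overconfident model would show |z| far exceeding 1 in the tails even though its own `uncertainty_dist` looks narrow; comparing the two distributions is exactly the diagnostic the abstract proposes.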
9. Model reduction of coupled systems based on non-intrusive approximations of the boundary response maps.
- Author
- Discacciati, Niccolò and Hesthaven, Jan S.
- Subjects
- *DOMAIN decomposition methods, *POLYNOMIAL chaos, *ARTIFICIAL neural networks, *PARTIAL differential equations, *NONLINEAR equations
- Abstract
We propose a local, non-intrusive model order reduction technique to accurately approximate the solution of coupled multi-component parametrized systems governed by partial differential equations. Our approach is based on the approximation of the boundary response maps arising from a non-overlapping domain decomposition method. To construct the surrogate, we combine dimensionality reduction techniques with interpolation or regression approaches, such as kernel interpolation methods and artificial neural networks. Two alternative training strategies, making use of the full coupled problem or an artificial parametrization of the boundary conditions, are proposed and discussed. We show the potential of our approach in a series of test cases, ranging from linear diffusion-like models to nonlinear multi-physics problems. High levels of accuracy and computational efficiency are achieved in all cases. • Non-intrusive model order reduction of coupled systems. • Approximation of the boundary response maps from domain decomposition methods. • Kernel interpolation methods and artificial neural networks are used and discussed. • Local and non-local training strategies are proposed and discussed. • Accurate and computationally efficient reduction of multi-physics problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. A novel uncertainty quantification method for determining deformations and reliabilities of stochastic laminated composite plates with geometric nonlinearity.
- Author
- Huo, Hui, Yu, Tianxiao, Zhao, Jian, Chen, Guohai, and Yang, Dixiong
- Subjects
- *COMPOSITE plates, *LAMINATED materials, *MONTE Carlo method, *PREDICATE calculus, *PROBABILITY density function, *STRUCTURAL reliability, *RANDOM fields, *POLYNOMIAL chaos
- Abstract
The stochastic finite element method (SFEM) for uncertainty quantification is widely applied to the analysis of structures with intrinsic randomness. For determining the geometrically nonlinear deformations of laminated composite plates with random fields, the existing intrusive SFEMs have the limitations of low applicability, insufficient accuracy, or low efficiency. To this end, this paper proposes a novel and efficient non-intrusive SFEM incorporating the direct probability integral method to achieve probability density functions (PDFs) of stochastic responses and reliabilities of laminated composite plates with geometric nonlinearity. Firstly, the von Kármán strain-displacement relation based on the third-order shear deformation theory is employed to model the geometric nonlinearity of laminated plates. The random field is discretized via Karhunen–Loève expansion. Secondly, the probability density integral equation (PDIE) is derived from the new perspective of probability conservation. The proposed non-intrusive SFEM decouples the equilibrium equation and PDIE to compute the response PDFs and reliabilities of uncertain laminated composite plates in a unified way. Moreover, a criterion for judging the applicability of geometrically nonlinear theory is suggested for performing uncertainty quantification. Finally, comparisons with Monte Carlo simulation and results from the literature demonstrate the high accuracy and efficiency of the proposed method. For stochastic laminated plates, the response statistical moments vary nonlinearly with a linear increase of load amplitude due to geometric nonlinearity; the deflection variability increases and structural reliability decreases with increasing variability and correlation length of the random field; and the stacking angle significantly affects the stochastic nonlinear deflections and reliabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
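The record above discretizes its random field with a Karhunen–Loève expansion. Below is a minimal sketch of the discrete (matrix) version for a 1D field with exponential covariance; the grid, correlation length, and truncation order are illustrative, and the quadrature weighting of the continuous eigenproblem is deliberately omitted.

```python
import numpy as np

def kl_sample(x, corr_len, sigma, n_terms, rng):
    """Sample a zero-mean Gaussian field via truncated Karhunen-Loeve:
    eigendecompose the covariance matrix, keep the dominant modes, and
    combine them with independent standard normal variables."""
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    w, v = np.linalg.eigh(C)                 # symmetric -> real spectrum
    order = np.argsort(w)[::-1][:n_terms]    # largest eigenvalues first
    lam, phi = w[order], v[:, order]
    xi = rng.standard_normal(n_terms)        # the KL random variables
    return phi @ (np.sqrt(lam) * xi), lam

x = np.linspace(0.0, 1.0, 200)
field, lam = kl_sample(x, corr_len=0.3, sigma=1.0, n_terms=10,
                       rng=np.random.default_rng(2))
```

With a correlation length of 0.3 on the unit interval, the eigenvalues decay quickly and ten modes already capture most of the field variance, which is what makes the truncated expansion an effective input parametrization for SFEM.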
11. Optimal surrogate boundary selection and scalability studies for the shifted boundary method on octree meshes.
- Author
- Yang, Cheng-Hau, Saurabh, Kumar, Scovazzi, Guglielmo, Canuto, Claudio, Krishnamurthy, Adarsh, and Ganapathysubramanian, Baskar
- Subjects
- *POISSON'S equation, *PARTIAL differential equations, *LINEAR equations, *GEOMETRIC modeling, *SCALABILITY, *POLYNOMIAL chaos
- Abstract
The accurate and efficient simulation of Partial Differential Equations (PDEs) in and around arbitrarily defined geometries is critical for many application domains. Immersed boundary methods (IBMs) alleviate the usually laborious and time-consuming process of creating body-fitted meshes around complex geometry models (described by CAD or other representations, e.g., STL, point clouds), especially when high levels of mesh adaptivity are required. In this work, we advance the field of IBM in the context of the recently developed Shifted Boundary Method (SBM). In the SBM, the location where boundary conditions are enforced is shifted from the actual boundary of the immersed object to a nearby surrogate boundary, and boundary conditions are corrected utilizing Taylor expansions. This approach allows choosing surrogate boundaries that conform to a Cartesian mesh without losing accuracy or stability. Our contributions in this work are as follows: (a) we show that the SBM numerical error can be greatly reduced by an optimal choice of the surrogate boundary, (b) we mathematically prove the optimal convergence of the SBM for this optimal choice of the surrogate boundary, (c) we deploy the SBM on massively parallel octree meshes, including algorithmic advances to handle incomplete octrees, and (d) we showcase the applicability of these approaches with a wide variety of simulations involving complex shapes, sharp corners, and different topologies. Specific emphasis is given to Poisson's equation and the linear elasticity equations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. A stochastic LATIN method for stochastic and parameterized elastoplastic analysis.
- Author
- Zheng, Zhibao, Néron, David, and Nackenhorst, Udo
- Subjects
- *RANDOM variables, *POLYNOMIAL chaos, *STOCHASTIC approximation, *STOCHASTIC orders, *NONLINEAR equations, *STOCHASTIC analysis, *PROBABILITY theory
- Abstract
The LATIN method has been developed and successfully applied to a variety of deterministic problems, but little work exists for nonlinear stochastic problems. This paper presents a stochastic LATIN method to solve stochastic and/or parameterized elastoplastic problems. To this end, the stochastic solution is decoupled into spatial, temporal and stochastic spaces, and approximated by the sum of a set of products of triplets of spatial functions, temporal functions and random variables. Each triplet is then calculated in a greedy way using a stochastic LATIN iteration. The high efficiency of the proposed method relies on two aspects: the nonlinearity is efficiently handled by inheriting advantages of the classical LATIN method, and the randomness and/or parameters are effectively treated by a sample-based approximation of stochastic spaces. Further, the proposed method is not sensitive to the stochastic and/or parametric dimensions of inputs due to the sample description of stochastic spaces. It can thus be applied to high-dimensional stochastic and parameterized problems. Five numerical examples demonstrate the promising performance of the proposed stochastic LATIN method. • A stochastic LATIN method is developed for stochastic elastoplastic analysis. • Random and parameterized inputs are handled using a unified probability framework. • Decoupled approximations of solutions are used in both local and global stages. • Stochastic elastoplastic constitutive models are evolved in a non-intrusive way. • High-dimensional stochastic elastoplastic problems can be solved efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. An active sparse polynomial chaos expansion approach based on sequential relevance vector machine.
- Author
- Li, Yangtian, Luo, Yangjun, and Zhong, Zheng
- Subjects
- *POLYNOMIAL chaos, *ACTIVE learning, *MACHINERY, *MACHINE learning
- Abstract
Polynomial chaos expansion (PCE) is a popular surrogate modeling approach employed in uncertainty quantification for a variety of engineering problems. However, the challenges for full PCE lie in the "curse of dimensionality" of the expansion coefficients. In this paper, we propose a new sparse PCE approach by introducing an active learning technique and a sequential relevance vector machine (SRVM). As the active learning technique, an efficient loss function based on expected improvement (EI-based ELF) is introduced to search for the best next sampling point and enrich the training samples. The relevance vector machine (RVM) is a superior machine learning technique due to the sparsity of its adopted model; SRVM maximizes the marginal likelihood by sequentially selecting basis functions. To assess the performance of the proposed method, four examples are investigated, and the results show that the proposed method outperforms the classical sparse PCE in terms of both accuracy and efficiency. • Proposed method outperforms two traditional ones in certain high-dimensional cases. • The combination of proposed method and AL techniques promotes models' accuracy. • The proposed method shows superior scalability in engineering cases of high order. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Surrogate-accelerated Bayesian framework for high-temperature thermal diffusivity characterization.
- Author
- Hu, Yuan, Abuseada, Mostafa, Alghfeli, Abdalla, Holdheim, Saurin, and Fisher, Timothy S.
- Subjects
- *THERMAL diffusivity, *MARKOV chain Monte Carlo, *POLYNOMIAL chaos, *RANDOM walks, *BAYESIAN analysis, *RANDOM sets, *ACCELERATED life testing
- Abstract
Precise determination of thermal diffusivity at high temperatures is crucial for aerospace and energy industries. Periodic heating techniques such as Ångström's method are data-rich and commonly used for measuring thermal diffusivity of a solid material. In previous Ångström's method studies, regression techniques have been used to solve the associated inverse problem and to estimate thermal diffusivity by minimizing the residual between measurements and model predictions. This approach lacks rigorous uncertainty quantification and does not allow incorporation of prior knowledge for parameters. Adopting a Bayesian framework addresses these issues; however, probing the Bayesian posterior distribution is prohibitively expensive using Markov chain Monte Carlo (MCMC) methods, especially when the physical model is computationally expensive, as in the present study for which an analytical solution does not exist. This study employs a parametric surrogate model in the form of polynomial chaos to accelerate the physical model by several orders of magnitude to support Bayesian analysis. Moreover, high-temperature testing environments are difficult to control precisely, and many unknown parameters exist beyond the quantity of interest. Previous studies have employed random walks to set new parameters in the MCMC sampling process. However, random walks are inefficient in exploring high-dimensional parameter spaces and thus suffer high auto-correlation and poor convergence. To improve the efficiency of the MCMC sampler, this study employs a No-U-Turn sampler that explores the parameter space thoroughly and efficiently. We demonstrate the effectiveness of this Bayesian framework by analyzing experimental results on a graphite sample at approximately 1000 K. • Polynomial chaos surrogate model accelerates computation speed 10000-fold. • No-U-Turn sampler enables high sampling efficiency compared to traditional approaches. • Rigorous uncertainty quantification for non-linear heat transfer problem. • Direct optical heating for rapid non-contact high-T thermal diffusivity measurement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
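The core idea in the record above — replace the expensive forward model with a polynomial surrogate inside the sampler — can be sketched in a few lines. Everything here is illustrative: a cubic toy model stands in for the heat-transfer solver, and a plain random-walk Metropolis sampler is used instead of the No-U-Turn sampler the paper employs.

```python
import numpy as np

def forward_model(theta):
    """Stand-in for an expensive physics solve (illustrative cubic)."""
    return theta ** 3

# offline: fit a polynomial surrogate once on a handful of model runs
nodes = np.linspace(0.0, 3.0, 8)
surrogate = np.polynomial.Polynomial.fit(nodes, forward_model(nodes), deg=5)

# Bayesian inversion: flat prior on [0, 3], Gaussian measurement noise
data, noise = forward_model(1.5), 0.1
def log_post(theta):
    if not 0.0 <= theta <= 3.0:
        return -np.inf
    return -0.5 * ((data - surrogate(theta)) / noise) ** 2

# random-walk Metropolis, every likelihood call hitting the cheap surrogate
rng = np.random.default_rng(3)
chain, theta = [], 0.5
lp = log_post(theta)
for _ in range(20_000):
    prop = theta + 0.2 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
posterior = np.array(chain[2_000:])   # discard burn-in
```

Because the surrogate is evaluated instead of the solver at every one of the 20,000 proposals, the speedup of the full analysis is roughly the per-call cost ratio, which is where the orders-of-magnitude acceleration reported in the abstract comes from.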
15. An efficient adaptive forward–backward selection method for sparse polynomial chaos expansion.
- Author
- Zhao, Huan, Gao, Zhenghong, Xu, Fang, Zhang, Yidian, and Huang, Jiangtao
- Subjects
- *POLYNOMIAL chaos, *ACOUSTIC wave propagation, *GAUSSIAN distribution, *GAUSSIAN function, *DISTRIBUTION (Probability theory), *PROCESS optimization
- Abstract
As an efficient uncertainty quantification (UQ) methodology for moment propagation and probability analysis of quantities of interest, polynomial chaos (PC) expansions have received broad and sustained attention. However, the exponentially increasing cost of building PC representations with increasing dimension of uncertainty, i.e., the curse of dimensionality, seriously restricts the practical application of PC at the industrial level. Some efficient strategies applying adaptive basis selection algorithms for sparse optimization (or l1-minimization) of PC show great potential compared to the classical full PC. However, these strategies mainly focus on forward selection algorithms, which are incapable of correcting any error made in earlier steps. Hence, this paper develops a novel adaptive forward–backward selection (AFBS) algorithm for reconstructing sparse PC. The proposed algorithm, by a reasonable combination of forward selection and an adaptive backward elimination technique, can efficiently correct mistakes made by earlier forward selection steps, retaining the most significant PC terms and discarding redundant or insignificant ones. The accuracy of the built PC metamodel is checked by a cross-validation procedure. As a consequence, the most significant PC terms are detected sequentially and the corresponding PC coefficients are accurately recovered. This largely enhances the sparsity of PC and improves the prediction accuracy compared to the popular forward selection algorithms, e.g., least angle regression (LARs). To validate the efficiency of the proposed algorithm, a complex analytical function with Gaussian distribution inputs and two challenging aerodynamic applications, including a sonic boom propagation analysis considering atmospheric uncertainty and a natural-laminar-flow (NLF) airfoil computation under both geometrical and operational uncertainties, are elaborated. With an in-depth comparison against some popular PC reconstruction methodologies, the performance of the devised AFBS method is assessed comprehensively. The results demonstrate that the proposed AFBS method can efficiently identify the significant PC contributions describing the problems, and reproduces sparser PC metamodels and more accurate UQ results than the classical full PC and LARs methods. • Propose a novel and efficient AFBS method for reconstructing sparse PC. • Propose an adaptive regression framework via cross-validation to select the optimal PC. • Make comprehensive comparisons among AFBS, LARs and full PC methods. • AFBS for UQ is efficient and robust and provides more accurate PC for QoIs. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
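As a hedged sketch of the forward–backward idea in the record above (not the authors' AFBS algorithm, which couples the selection with cross-validated sparse PC regression), the snippet below greedily adds the basis column most correlated with the residual and then tries to prune any term whose removal does not worsen the least-squares fit; the matrix sizes and the toy signal are arbitrary.

```python
import numpy as np

def forward_backward_select(A, y, max_terms, eps=1e-8):
    """Greedy forward selection of basis columns with a backward pass
    that drops terms whose removal does not increase the residual."""
    active, resid = [], y.copy()
    for _ in range(max_terms):
        # forward: pick the column most correlated with the residual
        scores = np.abs(A.T @ resid)
        scores[active] = -np.inf
        active.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, active], y, rcond=None)
        resid = y - A[:, active] @ coef
        # backward: undo earlier mistakes that no longer help the fit
        improved = True
        while improved and len(active) > 1:
            improved = False
            base = resid @ resid
            for k in list(active):
                trial = [a for a in active if a != k]
                if not trial:
                    continue
                c2, *_ = np.linalg.lstsq(A[:, trial], y, rcond=None)
                r2 = y - A[:, trial] @ c2
                if r2 @ r2 <= base + eps:
                    active, resid, base = trial, r2, r2 @ r2
                    improved = True
    return active

rng = np.random.default_rng(4)
A = rng.standard_normal((200, 20))
A /= np.linalg.norm(A, axis=0)          # unit-norm candidate basis columns
y = 3.0 * A[:, 4] - 2.0 * A[:, 11]      # only two terms are truly active
active = forward_backward_select(A, y, max_terms=4)
```

Even though four forward steps are allowed, the backward pass discards the two spurious picks because removing them leaves the residual unchanged, which is the error-correcting behavior pure forward selection lacks.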
16. Modeling uncertainties in molecular dynamics simulations using a stochastic reduced-order basis.
- Author
- Wang, Haoran, Guilleminot, Johann, and Soize, Christian
- Subjects
- *MOLECULAR models, *CONFIGURATION space, *POLYNOMIAL chaos, *REDUCTION potential, *MOLECULAR dynamics
- Abstract
A methodology enabling the robust treatment of model-form uncertainties in molecular dynamics simulations is proposed. The approach consists in properly randomizing a reduced-order basis, obtained by the method of snapshots in the configuration space. A multi-step strategy to identify the hyperparameters in the stochastic reduced-order basis is further introduced. The relevance of the framework is finally demonstrated by characterizing various types of modeling errors associated with molecular dynamics simulations on a graphene sheet. In particular, the ability of the framework to represent uncertainties raised by model reduction and interatomic potential selection is assessed. • Novel approach to take into account model-form uncertainties in molecular dynamics. • Approach based on the use of a stochastic reduced-order basis. • Various applications and validation on a graphene sheet. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
17. Shape optimization under uncertainty for rotor blades of horizontal axis wind turbines.
- Author
- Keshavarzzadeh, Vahid, Ghanem, Roger G., and Tortorelli, Daniel A.
- Subjects
- *HORIZONTAL axis wind turbines, *REDUCED-order models, *STRUCTURAL optimization, *POLYNOMIAL chaos, *UNCERTAINTY, *WIND pressure
- Abstract
We present a computational framework for the shape optimization of a Horizontal-Axis Wind Turbine (HAWT) rotor blade under uncertainty. Our framework integrates aerodynamic simulations based on the blade element method, which utilizes reduced-order models of the blade structure and wind load, with design sensitivity analysis and nonlinear programming. The wind velocity is modeled as a stochastic process to account for variations in time and space. An additional stochastic process accounts for uncertainties in the structural material properties. The uncertainty propagation is based on a non-intrusive polynomial chaos expansion that allows accurate estimation of stochastic performance metrics such as generated power and structural compliance. Sensitivities of cost and constraint functions with respect to shape parameters, namely twist angles, are computed with an efficient scheme to enable gradient-based optimization. To demonstrate the effect of uncertainty, designs obtained from optimization under uncertainty are compared to those obtained from deterministic optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
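Non-intrusive PC propagation of the kind used in the record above treats the solver as a black box: run it at quadrature nodes, project onto an orthogonal polynomial basis, and read statistics off the coefficients. A minimal 1D sketch with a Gaussian input and an analytic check; the lognormal response is only a stand-in for the aeroelastic model.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_project(f, order, nquad=40):
    """Non-intrusive PCE of f(xi), xi ~ N(0,1), via Gauss-Hermite projection.
    Returns coefficients in the (unnormalized) probabilists' Hermite basis."""
    x, w = hermegauss(nquad)           # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)       # renormalize to the Gaussian measure
    fx = f(x)
    coeffs = []
    for n in range(order + 1):
        He_n = hermeval(x, [0.0] * n + [1.0])
        # c_n = E[f * He_n] / E[He_n^2], with E[He_n^2] = n!
        coeffs.append(np.sum(w * fx * He_n) / math.factorial(n))
    return np.array(coeffs)

sigma = 0.5
c = pce_project(lambda x: np.exp(sigma * x), order=8)
mean = c[0]                                              # E[f] = c_0
var = sum(c[n] ** 2 * math.factorial(n) for n in range(1, 9))
# analytic lognormal check: E = e^{s^2/2}, Var = (e^{s^2} - 1) e^{s^2}
```

Once the coefficients are in hand, mean and variance (and Sobol'-type sensitivities in the multivariate case) come for free from orthogonality, with no further model evaluations.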
18. An efficient and robust adaptive sampling method for polynomial chaos expansion in sparse Bayesian learning framework.
- Author
-
Zhou, Yicheng, Lu, Zhenzhou, Cheng, Kai, and Ling, Chunyan
- Subjects
- *
POLYNOMIAL chaos , *SAMPLING methods , *EXPERIMENTAL design , *COST functions - Abstract
Sparse polynomial chaos expansion has been widely used to tackle problems of function approximation in the field of uncertainty quantification. The accuracy of PCE depends on how the experimental design is constructed; adaptive sampling methods for the design of experiments have therefore been proposed. Classic designs of experiment for PCE are based on least-squares minimization techniques, where the design space is defined by the inputs alone, without involving the responses of the system. To overcome this limitation, a novel adaptive sampling method is introduced in the sparse Bayesian learning framework. The design points are enriched sequentially by maximizing a generalized expectation of a loss-function criterion, which allows an effective use of all the information available; from this, two adaptive strategies are derived to strike a balance between global exploration and local exploitation via the error information from the previous iteration. The numerical results show that the proposed method is superior to classic designs of experiment in terms of efficiency and robustness. • A novel method is proposed to build sparse PCE in the sparse Bayesian learning framework. • An expected loss function design criterion (ELF) is derived for experimental design. • Two adaptive strategies are exploited to implement the ELF criterion. • The proposed method strikes a balance between global exploration and local exploitation. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
19. Polynomial chaos expansions for dependent random variables.
- Author
-
Jakeman, John D., Franzelin, Fabian, Narayan, Akil, Eldred, Michael, and Pflüger, Dirk
- Subjects
- *
RANDOM variables , *DEPENDENT variables , *POLYNOMIAL chaos , *MAGNITUDE (Mathematics) , *INDEPENDENT variables , *GALERKIN methods - Abstract
Polynomial chaos expansions (PCE) are well-suited to quantifying uncertainty in models parameterized by independent random variables. The assumption of independence leads to simple strategies for building multivariate orthonormal bases and for sampling to evaluate PCE coefficients. In contrast, the application of PCE to models of dependent variables is much more challenging. Three approaches can be used to construct PCE of models of dependent variables. The first approach uses mapping methods, where measure transformations such as the Nataf and Rosenblatt transformations map dependent random variables to independent ones; however, we show that this can significantly degrade performance, since the Jacobian of the map must be approximated. A second strategy is the class of dominating support methods. In these approaches a PCE is built using independent random variables whose distributional support dominates the support of the true dependent joint density; we provide evidence that this approach appears to produce approximations with suboptimal accuracy. A third approach, the novel method proposed here, uses Gram–Schmidt orthogonalization (GSO) to numerically compute orthonormal polynomials for the dependent random variables. This approach has been used successfully when solving differential equations with the intrusive stochastic Galerkin method, and in this paper we use GSO to build PCE with a non-intrusive stochastic collocation method. The stochastic collocation method treats the model as a black box and builds approximations of the input–output map from a set of samples. Building PCE from samples can introduce ill-conditioning which does not plague stochastic Galerkin methods. To mitigate this ill-conditioning we generate weighted Leja sequences, which are nested sample sets, to build accurate polynomial interpolants.
We show that our proposed approach, GSO with weighted Leja sequences, produces PCE which are orders of magnitude more accurate than PCE constructed using mapping or dominating support methods. • Development of approximations of models parameterized by dependent random variables. • A thorough numerical exploration of the pros and cons of existing methods is given. • Algorithm is applied in the contexts of Bayesian inference and dimension reduction. • Method can reduce the number of model evaluations by orders of magnitude. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
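The Gram–Schmidt orthogonalization step this record describes can be sketched numerically: orthonormalize a monomial basis with respect to the empirical inner product induced by samples of a dependent random vector. The correlated Gaussian pair below is an illustrative assumption, not the paper's test case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated Gaussian pair as a stand-in for an arbitrary dependent joint density.
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

# Monomial basis evaluated on the samples: 1, x1, x2 (total degree <= 1).
Phi = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1]])

# Gram-Schmidt w.r.t. the empirical inner product <f, g> = mean(f * g).
Q = np.empty_like(Phi)
for j in range(Phi.shape[1]):
    v = Phi[:, j].copy()
    for k in range(j):
        v -= np.mean(v * Q[:, k]) * Q[:, k]
    Q[:, j] = v / np.sqrt(np.mean(v * v))

# Gram matrix under the sampled dependent measure: identity by construction.
G = (Q.T @ Q) / len(X)
```

The resulting columns of `Q` are orthonormal with respect to the sampled dependent measure, which is exactly the property the PCE basis needs.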
20. A data-driven framework for sparsity-enhanced surrogates with arbitrary mutually dependent randomness.
- Author
-
Lei, Huan, Li, Jing, Gao, Peiyuan, Stinis, Panagiotis, and Baker, Nathan A.
- Subjects
- *
PROBABILITY measures , *POLYNOMIAL chaos , *PARTIAL differential equations , *SURROGATE mothers , *GAUSSIAN distribution , *STOCHASTIC models - Abstract
The challenge of quantifying uncertainty propagation in real-world systems is rooted in the high dimensionality of the stochastic input and the frequent lack of explicit knowledge of its probability distribution. Traditional approaches show limitations for such problems, especially when the size of the training data is limited. To address these difficulties, we have developed a general framework for constructing surrogate models on spaces of stochastic inputs with arbitrary probability measure, irrespective of the mutual dependencies between individual components of the random inputs or the availability of an analytical form. The present Data-driven Sparsity-enhancing Rotation for Arbitrary Randomness (DSRAR) framework includes a data-driven construction of a multivariate polynomial basis for arbitrary mutually dependent probability measures and a sparsity-enhancing rotation procedure. The sparsity-enhancing rotation was initially proposed in our previous work (Lei et al., 2015) for Gaussian density distributions but may not be feasible for non-Gaussian distributions. We therefore developed a new data-driven approach to construct orthonormal polynomials for arbitrary mutually dependent randomness, ensuring that the constructed basis maintains orthogonality/near-orthogonality with respect to the density of the rotated random vector; directly applying regular polynomial chaos, including arbitrary polynomial chaos (aPC) (Oladyshkin and Nowak, 2012), shows limitations due to the assumption of mutual independence between the components of the random inputs. The developed DSRAR framework leads to accurate recovery, with only limited training data, of a sparse representation of the target functions.
The effectiveness of our method is demonstrated in challenging problems such as partial differential equations and realistic molecular systems within high-dimensional (O(10)) conformational spaces where the underlying density is implicitly represented by a large collection of sample data, as well as systems with explicitly given non-Gaussian probabilistic measures. • Surrogate construction for arbitrary mutually dependent probability measures. • A general framework for both explicit non-Gaussian densities and implicit probability measures. • Data-driven orthonormal-basis construction coupled with sparsity-enhancing rotations. • Application to molecular systems with non-Gaussian mutually dependent randomness. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
21. Spectral stochastic isogeometric analysis of free vibration.
- Author
-
Li, Keyan, Wu, Di, Gao, Wei, and Song, Chongmin
- Subjects
- *
ISOGEOMETRIC analysis , *FREE vibration , *STOCHASTIC analysis , *PROBABILITY density function , *POLYNOMIAL chaos , *CUMULATIVE distribution function - Abstract
A novel spectral stochastic isogeometric analysis (SSIGA) is proposed for the free vibration analysis of engineering structures involving uncertainties. The proposed SSIGA framework treats the stochastic free vibration problem as a stochastic generalized eigenvalue problem. The stochastic Young's modulus and material density of the structure are modelled as random fields with Gaussian and non-Gaussian distributions. The basis functions, the non-uniform rational B-spline (NURBS) and T-spline, within the Computer Aided Design (CAD) system are adopted within the SSIGA, which eliminates geometric errors between the design model and the uncertainty analysis model. The arbitrary polynomial chaos (aPC) expansion is implemented to investigate the stochastic responses (i.e. eigenvalues and eigenvectors) of the structure. A Galerkin-based method is newly proposed to solve the stochastic generalized eigenvalue problems. The statistical moments, probability density function (PDF) and cumulative distribution function (CDF) of the eigenvalues can be effectively obtained. Two numerical examples with irregular geometries are investigated to illustrate the applicability, accuracy and efficiency of the proposed SSIGA for free vibration analysis of engineering structures. • A Galerkin-based method within spectral stochastic isogeometric analysis (SSIGA) is newly developed for stochastic free vibration. • Spatially dependent uncertain Young's modulus and material density of plate and shell are investigated. • A stochastic free vibration analysis in Computer-Aided Design (CAD) is presented. • The arbitrary polynomial chaos approach is adopted to achieve a more generalized computational stochastic approach. • Applicability, accuracy and efficiency of the Galerkin-based method are verified against a large-scale sampling method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
22. Multi-level multi-fidelity sparse polynomial chaos expansion based on Gaussian process regression.
- Author
-
Cheng, Kai, Lu, Zhenzhou, and Zhen, Ying
- Subjects
- *
POLYNOMIAL chaos , *KRIGING , *ORTHOGONAL polynomials , *GAUSSIAN processes - Abstract
The polynomial chaos expansion (PCE) approaches have drawn much attention in the field of simulation-based uncertainty quantification (UQ) of stochastic problems. In this paper, we present a multi-level multi-fidelity (MLMF) extension of non-intrusive sparse PCE based on recent work on a recursive Gaussian process regression (GPR) methodology. The proposed method first builds full PCEs of varying degrees of fidelity using the GPR technique with an orthogonal-polynomial covariance function. An autoregressive scheme then exploits the cross-correlation of these PCE models across fidelity levels, yielding a high-fidelity PCE model that encodes the information of all the lower fidelity levels. Furthermore, an iterative scheme is used to detect the important PCE bases in each fidelity level. Three test examples are investigated to validate the performance of the proposed method, and the results show that the present method provides an accurate meta-model for UQ of stochastic problems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
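The autoregressive multifidelity idea in this record (a high-fidelity response modeled as a scaled low-fidelity response plus a polynomial discrepancy) can be sketched in a few lines. The two analytic models and the joint least-squares fit below are illustrative assumptions, not the paper's GPR-based construction.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(2)

def f_lo(x):
    # Cheap low-fidelity model (illustrative function, not from the paper).
    return np.sin(2.0 * x)

def f_hi(x):
    # Expensive high-fidelity model: scaled low-fidelity part plus a trend.
    return 1.2 * np.sin(2.0 * x) + 0.3 * x

# Only a few expensive high-fidelity evaluations are available.
x_hi = rng.uniform(-1.0, 1.0, 15)
y_lo = f_lo(x_hi)
y_hi = f_hi(x_hi)

# Autoregressive correction: y_hi ~ rho * y_lo + delta(x), with the
# discrepancy delta expanded in a low-order Legendre basis and fitted jointly.
V = legvander(x_hi, 2)
A = np.column_stack([y_lo, V])
coef, *_ = np.linalg.lstsq(A, y_hi, rcond=None)
rho, delta_coef = coef[0], coef[1:]

def f_mf(x):
    # Multifidelity surrogate: corrected low-fidelity model.
    return rho * f_lo(x) + legvander(x, 2) @ delta_coef
```

Because the scaling factor and the discrepancy basis are fitted together, the few high-fidelity samples are spent only on what the cheap model cannot capture.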
23. A painless intrusive polynomial chaos method with RANS-based applications.
- Author
-
Chatzimanolakis, M., Kantarakias, K.-D., Asouti, V.G., and Giannakoglou, K.C.
- Subjects
- *
POLYNOMIAL chaos , *NUMERICAL solutions to equations , *COMPUTER software development , *MATHEMATICAL models of turbulence - Abstract
This paper presents a method for the quantification of uncertainty propagation using intrusive Polynomial Chaos Expansion (iPCE). Contrary to commonly implemented intrusive methods built on a problem-specific approach that depends on the governing PDEs, the proposed method, although intrusive in nature, can easily be applied to any system of equations. A proper mathematical framework is developed that performs the derivation and numerical solution of the iPCE equations with little additional effort, avoiding the laborious mathematical and software development commonly associated with intrusive approaches. Computational cost, convergence and stability properties are analyzed in detail and discussed. The proposed uncertainty quantification (UQ) method is applied to the Reynolds-Averaged Navier–Stokes (RANS) equations, coupled with the Spalart–Allmaras turbulence model, and results are compared with those of the non-intrusive PCE (niPCE). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. Development of hp-inverse model by using generalized polynomial chaos.
- Author
-
Yeo, Kyongmin, Hwang, Youngdeok, Liu, Xiao, and Kalagnanam, Jayant
- Subjects
- *
INVERSE problems , *PARAMETER estimation , *GREEN'S functions , *POLYNOMIAL chaos , *RADIAL basis functions - Abstract
We present an hp-inverse model to estimate a smooth, non-negative source function from a limited number of observations for a two-dimensional linear source inversion problem. A standard least-squares inverse model is formulated by using a set of Gaussian radial basis functions (GRBF) on a rectangular mesh system with a uniform grid space. Here, the choice of the mesh system is modeled as a random variable and the generalized polynomial chaos (gPC) expansion is used to represent the random mesh system. It is shown that the convolution of gPC and GRBF provides hierarchical basis functions for the linear source inverse model with the hp-refinement capability. We propose a mixed l1 and l2 regularization to exploit the hierarchical nature of the basis functions to find a sparse solution. The hp-inverse model has an advantage over the standard least-squares inverse model when the number of data is limited. It is shown that the hp-inverse model provides a good estimate of the source function even when the number of unknown parameters (m) is much larger than the number of data (n), e.g., m/n > 40. • An hp-inverse model is developed by using a random mesh system. • The generalized polynomial chaos is employed to model the stochastic inverse problem. • A mixed l1 and l2 regularization is proposed for the hierarchical basis functions. • The hp-inverse model estimate is more robust to changes in the computational mesh system. • The hp-inverse model is shown to be very effective when the data is limited. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
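Two ingredients of this record, a GRBF design matrix and a mixed l1/l2 (elastic-net) penalty for sparse recovery from limited data, can be sketched with plain ISTA iterations. The 1-D setup, basis widths, penalty weights and ground-truth source below are all illustrative assumptions; the paper works in 2-D with a gPC-randomized mesh.

```python
import numpy as np

rng = np.random.default_rng(3)

# Gaussian radial basis functions on a 1-D grid (the paper works in 2-D).
centers = np.linspace(0.0, 1.0, 20)
width = 0.1

def grbf(x):
    # Rows: observation points; columns: basis functions.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width**2))

# Sparse ground-truth source: only two active basis functions.
w_true = np.zeros(20)
w_true[[5, 12]] = [1.0, 0.6]

x_obs = rng.uniform(0.0, 1.0, 12)   # fewer observations than unknowns
A = grbf(x_obs)
y = A @ w_true

# Elastic net: minimize 0.5*||A w - y||^2 + l1*||w||_1 + 0.5*l2*||w||^2,
# solved with ISTA (soft thresholding after a gradient step).
l1, l2 = 1e-4, 1e-4

def obj(v):
    r = A @ v - y
    return 0.5 * (r @ r) + l1 * np.abs(v).sum() + 0.5 * l2 * (v @ v)

step = 1.0 / (np.linalg.norm(A, 2) ** 2 + l2)   # 1/Lipschitz of the smooth part
w = np.zeros(20)
for _ in range(5000):
    z = w - step * (A.T @ (A @ w - y) + l2 * w)
    w = np.sign(z) * np.maximum(np.abs(z) - step * l1, 0.0)
```

The l1 term promotes sparsity in the coefficients while the small l2 term stabilizes the underdetermined fit, which is the role the mixed regularization plays for the hierarchical basis.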
25. A non-intrusive B-splines Bézier elements-based method for uncertainty propagation.
- Author
-
Abdedou, Azzedine and Soulaïmani, Azzeddine
- Subjects
- *
SPLINES , *UNCERTAINTY , *MONTE Carlo method , *CHAOS theory , *SHALLOW-water equations , *APPROXIMATION theory - Abstract
A non-intrusive B-Splines Bézier Elements based Method (BSBEM) is proposed as an efficient tool for uncertainty propagation analysis in physical hyperbolic problems. The model's output response is approximated using a surrogate model whose coefficients are obtained from a set of deterministic calls by means of a regression technique. The accuracy, efficiency and generality of the proposed approach are assessed on benchmark numerical examples by comparing the convergence of the statistical moments with those of the polynomial chaos based point collocation method (Pcol) and the Monte Carlo (MC) method. The generic character of the proposed approach allows it to be implemented for several engineering fields. BSBEM is applied to quantify uncertainty propagation through dam-break flows modelled by shallow water equations. The obtained results, which are depicted in terms of water depth and inundation line confidence intervals, show that with a meticulous exploitation of the multi-element aspect and the smoothness feature of the basis functions, the proposed method provides an accurate and smooth approximation of the stochastic output response. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
26. Sparsity-promoting elastic net method with rotations for high-dimensional nonlinear inverse problem.
- Author
-
Wang, Yuepeng, Ren, Lanlan, Zhang, Zongyuan, Lin, Guang, and Xu, Chao
- Subjects
- *
NONLINEAR analysis , *INVERSE problems , *HIGH-dimensional model representation , *KALMAN filtering , *COMPUTER algorithms - Abstract
An elastic-net (EN) based polynomial chaos (PC) ensemble Kalman filter (PC-EnKF) with iterative PC-basis rotations is developed for high-dimensional nonlinear inverse modeling. To avoid the huge computational cost of estimating the PC expansion coefficients and the Kalman gain matrix in PC-EnKF, this paper focuses mainly on solving the minimization problem of the EN cost function with the fast iterative shrinkage-thresholding algorithm (FISTA). To further enhance sparsity and accuracy, an iterative PC-basis rotation method is employed. When performing the rotation technique, two key issues need to be addressed to accommodate the computation of the inverse problem. One is the derivation of a new multi-dimensional random variable, which can be realized by exploring the construction of the gradient matrix used in a multi-parameter and vector-valued response model. The other is the selection of the number of iterative rotations during each data assimilation step, which can be addressed by resorting to a curve of sparsity versus the number of iterations. The regularization parameters can be tuned by calculating information criteria (IC). Through the numerical examples, we demonstrate that the EN-based PC-EnKF combined with the iterative PC-basis rotation method is well suited to high-dimensional nonlinear inverse modeling and has great potential for real-world complex systems. • An elastic-net-based sparse polynomial chaos-ensemble Kalman filter is designed. • Regularization parameters are selected with the information criterion. • First work on employing the iterative rotations to the inverse problem. • The selection of the optimal number of iterative rotations is studied. • Gradient matrix is constructed in a multi-parameter response model. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
27. A preconditioning approach for improved estimation of sparse polynomial chaos expansions.
- Author
-
Alemazkoor, Negin and Meidani, Hadi
- Subjects
- *
POLYNOMIAL chaos , *STOCHASTIC matrices , *SIGNAL-to-noise ratio , *COMPRESSED sensing , *LINEAR equations - Abstract
Compressive sampling has been widely used for sparse polynomial chaos (PC) approximation of stochastic functions. The recovery accuracy of compressive sampling highly depends on the incoherence properties of the measurement matrix. In this paper, we consider preconditioning the underdetermined system of equations that is to be solved. Premultiplying a linear equation system by a non-singular matrix results in an equivalent equation system, but it can potentially improve the incoherence properties of the resulting preconditioned measurement matrix and lead to a better recovery accuracy. When measurements are noisy, however, preconditioning can also potentially result in a worse signal-to-noise ratio, thereby deteriorating recovery accuracy. In this work, we propose a preconditioning scheme that improves the incoherence properties of measurement matrix and at the same time prevents undesirable deterioration of signal-to-noise ratio. We provide theoretical motivations and numerical examples that demonstrate the promise of the proposed approach in improving the accuracy of estimated polynomial chaos expansions. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
28. Stochastic modeling and statistical calibration with model error and scarce data.
- Author
-
Wang, Zhiheng and Ghanem, Roger
- Subjects
- *
HIERARCHICAL Bayes model , *MONTE Carlo method , *MARKOV chain Monte Carlo , *STATISTICAL models , *POLYNOMIAL chaos , *STOCHASTIC models - Abstract
This paper introduces a procedure to assess the predictive accuracy of stochastic models subject to model error and sparse data. Model error is introduced as uncertainty on the coefficients of appropriate polynomial chaos expansions (PCE). The error associated with finite sample size allows us to conceive of these coefficients as statistics of the data that we describe as random variables whose influence on output quantities of interest is evaluated through the extended polynomial chaos expansion (EPCE). A Bayesian data assimilation scheme is introduced to update these expansions by considering the resulting nested chaos expansion as a hierarchical probabilistic model. Stochastic models of quantities of interest (QoI) are thus constructed and efficiently evaluated. The Metropolis–Hastings Markov chain Monte Carlo procedure is used to sample the posterior. Two illustrative analytical and numerical problems are used to demonstrate the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Stochastic dynamic analysis of composite plates in thermal environments using nonlinear autoregressive model with exogenous input in polynomial chaos expansion surrogate.
- Author
-
Chandra, S., Matsagar, V.A., and Marburg, S.
- Subjects
- *
POLYNOMIAL chaos , *STOCHASTIC analysis , *MONTE Carlo method , *COMPOSITE materials , *COMPOSITE plates , *DYNAMIC loads , *ENGINEERING systems - Abstract
The application of composite materials has increased manifold in the aerospace and high-speed vehicle industries, where structures experience dynamic loads along with temperature variation. To ensure the safety of these structures, stochastic dynamic analysis in varying thermal environments is essential. The generalized polynomial chaos (gPC) expansion method is a well-known metamodel used to quantify uncertainties in engineering systems instead of using direct Monte Carlo simulation. Though the gPC expansion method is used extensively for engineering systems, its classical version has some limitations when implemented in dynamical systems. The accuracy of the gPC expansion method degrades in the delayed time domain and can be improved to some extent by increasing the order of the polynomial, which, though, is computationally intensive. Furthermore, the classical gPC expansion method is suitable for low-dimensional problems having independent random parameters. To overcome these limitations, a system identifier, i.e., a nonlinear autoregressive model with exogenous input (NARX), is coupled with it to avoid response degeneration in the delayed time domain. The temperature-dependent material properties of a composite lamina can be random and correlated; however, the classical gPC expansion method is unable to accommodate correlated random parameters. To address these challenges, the NARX-gPC expansion method is modified to conduct stochastic dynamic analysis of composite plates using correlated random material properties of the composite lamina in varying thermal environments. The study indicates that the correlation has a significant effect on the stochastic dynamic response at low and high temperatures for cross-ply laminates in comparison with angle-ply laminates, which is not captured through uncorrelated random material properties.
The prediction efficiency of the surrogate developed using the adaptive NARX-gPC expansion method is established even though it is trained using data from a shorter time domain, thereby enhancing computational performance. Additionally, this surrogate is implemented effectively for higher-order randomness in the input parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Multifidelity uncertainty quantification with models based on dissimilar parameters.
- Author
-
Zeng, Xiaoshu, Geraci, Gianluca, Eldred, Michael S., Jakeman, John D., Gorodetsky, Alex A., and Ghanem, Roger
- Subjects
- *
POLYNOMIAL chaos , *NUCLEAR fuel elements , *FINITE element method , *ACOUSTIC field , *PREDICATE calculus , *PARAMETERIZATION - Abstract
Multifidelity uncertainty quantification (MF UQ) sampling approaches have been shown to significantly reduce the variance of statistical estimators while preserving the bias of the highest-fidelity model, provided that the low-fidelity models are well correlated. However, maintaining a high level of correlation can be challenging, especially when models depend on different input uncertain parameters, which drastically reduces the correlation. Existing MF UQ approaches do not adequately address this issue. In this work, we propose a new sampling strategy that exploits a shared space to improve the correlation among models with dissimilar parameterization. We achieve this by transforming the original coordinates onto an auxiliary manifold using the adaptive basis (AB) method (Tipireddy and Ghanem, 2014). The AB method has two main benefits: (1) it provides an effective tool to identify the low-dimensional manifold on which each model can be represented, and (2) it enables easy transformation of polynomial chaos representations from high- to low-dimensional spaces. This latter feature is used to identify a shared manifold among models without requiring additional evaluations. We present two algorithmic flavors of the new estimator to cover different analysis scenarios, including those with legacy and non-legacy high-fidelity (HF) data. We provide numerical results for analytical examples, a direct field acoustic test, and a finite element model of a nuclear fuel assembly. For all examples, we compare the proposed strategy against both single-fidelity and MF estimators based on the original model parameterization. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. Evidence theory-based reliability optimization design using polynomial chaos expansion.
- Author
-
Wang, Chong and Matthies, Hermann G.
- Subjects
- *
EPISTEMIC uncertainty , *OPTIMAL designs (Statistics) , *POLYNOMIAL chaos , *ALGORITHMS , *COEFFICIENTS (Statistics) - Abstract
With the development of reliability techniques, the safety assessment of problems with epistemic uncertainty has attracted widespread attention. Evidence theory is a useful tool for dealing with such uncertainty, and this paper aims to develop an efficient approach for evidence theory-based reliability analysis and optimization design. By using mutually exclusive intervals to quantify the focal elements of an evidence variable, a confidence range bounded by the belief measure and the plausibility measure is derived for system reliability assessment, from which relatively conservative and radical optimization models can be respectively established. To decrease the huge computational burden of repetitive limit state function evaluations under a time-consuming implicit computational model, an explicit surrogate model is constructed by Legendre polynomial chaos expansion in the support box. A Clenshaw–Curtis point-based collocation method with the Smolyak algorithm is then developed to predict the unknown coefficients in the surrogate model, where the collocation level can be flexibly selected according to the accuracy requirement. Compared with the traditional deterministic optimization model under the nominal value assumption, the results of two numerical examples verify the effectiveness of the proposed method in mathematical theory and engineering application. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
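The belief/plausibility bounds this record builds from focal elements can be sketched directly: belief collects the mass of focal intervals that map entirely into the safe set, plausibility the mass of those that intersect it. The focal elements, masses and limit-state function below are illustrative assumptions, not values from the paper.

```python
# Focal elements of an evidence variable: (interval, basic probability mass).
# Numbers are illustrative, not taken from the paper.
focal = [((0.0, 1.0), 0.3), ((1.0, 2.0), 0.5), ((2.0, 3.0), 0.2)]

def g(x):
    # Monotonically decreasing limit-state function; the system is safe when g > 0.
    return 2.5 - x

belief = 0.0        # mass of focal elements lying entirely in the safe set
plausibility = 0.0  # mass of focal elements that intersect the safe set
for (lo, hi), m in focal:
    g_min, g_max = g(hi), g(lo)   # g is decreasing, so the bounds swap
    if g_min > 0:                 # the whole interval maps into the safe set
        belief += m
    if g_max > 0:                 # at least part of the interval is safe
        plausibility += m
```

Belief and plausibility then bracket the true (unknown) failure-free probability, which is the confidence range used as the conservative and radical reliability indexes.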
32. Cost reduction of stochastic Galerkin method by adaptive identification of significant polynomial chaos bases for elliptic equations.
- Author
-
Pranesh, Srikara and Ghosh, Debraj
- Subjects
- *
STOCHASTIC analysis , *FINITE element method , *GALERKIN methods , *POLYNOMIAL chaos , *ALGEBRAIC equations - Abstract
One widely used and computationally efficient method for uncertainty quantification using spectral stochastic finite elements is the stochastic Galerkin method. Here the solution is represented in a polynomial chaos expansion, and the residual of the discretized governing equation is projected on the polynomial chaos bases. This results in a system of deterministic algebraic equations with the polynomial chaos coefficients as unknowns. However, one impediment to its large-scale application is the curse of dimensionality, that is, the exponential growth of the number of polynomial chaos bases with the stochastic dimensionality and degree of expansion. Here, for a stochastic elliptic problem, an adaptive selection of polynomial chaos bases is proposed. Accordingly, during the first few iterations of the preconditioned conjugate gradient method for solving the system of linear algebraic equations, the chaos bases with maximal contribution (in an appropriately defined metric) to the solution are first identified. Subsequently, only these bases are retained for further iterations until convergence is achieved. In numerical studies a three-fold cost saving over the existing method is observed. Furthermore, to enhance the computational gain, the stochastic Galerkin method is reformulated as a generalized Sylvester equation. This step allows efficient use of the sparsity of moments of products of polynomial chaos bases. Through numerical studies on problems with large stochastic dimensionality, an additional cost saving of up to one order of magnitude (twenty times) is observed. This amounts to a sixty-fold speedup over the existing method when the adaptive selection and the generalized Sylvester equation formulation are used together. The proposed methodology can be easily incorporated into an existing standard stochastic Galerkin solver for elliptic problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
33. Dual interval-and-fuzzy analysis method for temperature prediction with hybrid epistemic uncertainties via polynomial chaos expansion.
- Author
-
Wang, Chong, Matthies, Hermann G., Xu, Menghui, and Li, Yunlong
- Subjects
- *
EPISTEMIC uncertainty , *TEMPERATURE measurements , *POLYNOMIAL chaos , *INTERVAL analysis , *FUZZY systems - Abstract
In both mathematical theory and engineering application, the uncertainty propagation problem with incomplete knowledge, especially when different types of epistemic uncertainties exist simultaneously, has been recognized as a challenging issue. By using interval variables and fuzzy variables to characterize hybrid uncertainties with only boundary information and a membership function, this paper proposes a new dual interval-and-fuzzy response analysis method for thermal engineering systems. In the presented dual-stage analysis framework, the temperature response ranges with respect to the interval variables are first derived, and then the membership functions of the response bounds with respect to the fuzzy variables are calculated by virtue of a level-cut strategy and fuzziness reconstruction. To avoid the huge computational burden caused by repetitive FEM simulations, the Legendre polynomial chaos expansion is adopted as the surrogate model for the temperature response. Two Clenshaw–Curtis point-based collocation methods are proposed to calculate the polynomial expansion coefficients, where CCP-CM constructs the collocation points via full tensor product grids, and CCP-MCM employs the Smolyak algorithm to reconstruct sparse-grid collocation points. By comparing results with traditional Monte Carlo simulation, a numerical example of a 3D sandwich structure verifies the effectiveness of the proposed methodology in practical engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
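The core surrogate step of this record, a Legendre polynomial chaos expansion fitted at Clenshaw–Curtis collocation points, can be sketched in one dimension. The smooth response function below is a hypothetical stand-in for an FEM temperature response; only the point set and the Legendre fit are the generic ingredients named in the abstract.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, legfit

# Clenshaw-Curtis points on [-1, 1]: extrema of the Chebyshev polynomial T_n.
n = 8
x = np.cos(np.pi * np.arange(n + 1) / n)

def response(x):
    # Hypothetical smooth response standing in for an expensive FEM run.
    return np.exp(0.3 * x)

y = response(x)

# Legendre polynomial chaos surrogate fitted at the collocation points.
coef = legfit(x, y, deg=6)
surrogate = Legendre(coef)

# Bounds of the surrogate response over the support box via dense sampling.
xx = np.linspace(-1.0, 1.0, 1001)
lo, hi = surrogate(xx).min(), surrogate(xx).max()
```

Once the cheap surrogate is in hand, the repetitive interval and level-cut evaluations run against it instead of the FEM model.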
34. Sparse polynomial chaos expansions via compressed sensing and D-optimal design.
- Author
-
Diaz, Paul, Doostan, Alireza, and Hampton, Jerrad
- Subjects
- *
POLYNOMIAL chaos , *COMPRESSED sensing , *OPTIMAL designs (Statistics) , *SPARSE approximations , *COMPUTER simulation - Abstract
In the field of uncertainty quantification, sparse polynomial chaos (PC) expansions are commonly used by researchers for a variety of purposes, such as surrogate modeling. Ideas from compressed sensing may be employed to exploit this sparsity in order to reduce computational costs. A class of greedy compressed sensing algorithms use least squares minimization to approximate PC coefficients. This least squares problem lends itself to the theory of optimal design of experiments (ODE). Our work focuses on selecting an experimental design that improves the accuracy of sparse PC approximations for a fixed computational budget. We propose a novel sequential-design greedy algorithm for sparse PC approximation. At each iteration, the algorithm augments the experimental design according to the set of basis polynomials deemed important by the magnitude of their coefficients. Our algorithm incorporates topics from ODE to estimate the PC coefficients. A variety of numerical simulations are performed on three physical models and manufactured sparse PC expansions to provide a comparative study between our proposed algorithm and other non-adaptive methods. Further, we examine the importance of sampling by comparing different strategies in terms of their ability to generate a candidate pool from which an optimal experimental design is chosen. It is demonstrated that the most accurate PC coefficient approximations, with the least variability, are produced with our design-adaptive greedy algorithm and the use of a studied importance sampling strategy. We provide theoretical and numerical results which show that using an optimal sampling strategy for the candidate pool is key, both in terms of accuracy of the approximation and in terms of constructing an optimal design. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
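The least-squares estimation of PC coefficients and the ranking of basis polynomials by coefficient magnitude described in this abstract can be sketched in a few lines. A minimal one-dimensional illustration (the toy model, Gaussian sampling, and expansion order are assumptions for demonstration, not the paper's sequential-design algorithm):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)

def model(x):
    # Toy response with a sparse Hermite representation: He_0 + 0.5 * He_2(x)
    return 1.0 + 0.5 * (x**2 - 1.0)

# Candidate pool of standard Gaussian input samples
pool = rng.standard_normal(200)
y = model(pool)

# Measurement matrix of probabilists' Hermite polynomials He_0 .. He_5
order = 5
Psi = np.column_stack([hermeval(pool, np.eye(order + 1)[k]) for k in range(order + 1)])

# Least-squares estimate of the PC coefficients
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Rank basis terms by coefficient magnitude, as a greedy selection step would
important = np.argsort(-np.abs(coeffs))[:2]
```

Here the recovered coefficients are exact because the toy model lies in the basis; the paper's contribution is how the design points themselves are chosen, which this sketch does not reproduce.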
35. Scalable domain decomposition solvers for stochastic PDEs in high performance computing.
- Author
-
Desai, Ajit, Khalil, Mohammad, Pettit, Chris, Poirel, Dominique, and Sarkar, Abhijit
- Subjects
- *
DOMAIN decomposition methods , *SCALABILITY , *PARTIAL differential equations , *COMPUTER systems , *FINITE element method , *STOCHASTIC models , *POLYNOMIAL chaos , *HEAT equation - Abstract
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. Parallel sparse matrix–vector operations are used to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
36. Extreme value oriented random field discretization based on an hybrid polynomial chaos expansion — Kriging approach.
- Author
-
Dubreuil, S., Bartoli, N., Lefebvre, T., Colomer, J. Mas, and Gogu, C.
- Subjects
- *
DISCRETIZATION methods , *KRIGING , *POLYNOMIAL chaos , *RANDOM fields , *APPROXIMATION theory - Abstract
This article addresses the characterization of extreme value statistics of a continuous second-order random field. More precisely, it focuses on the parametric study of engineering models under uncertainty. Hence, the quantity of interest of this model is defined on both a parametric space and a stochastic space. Moreover, we consider that the model is computationally expensive to evaluate. For this reason it is assumed that uncertainty propagation, at a single point of the parametric space, is achieved by polynomial chaos expansion. The main contribution of the present study is the development of an adaptive approach for the discretization of the random field modeling the quantity of interest. The objective of this new approach is to focus the computational budget over the areas of the parametric space where the minimum or the maximum of the field is likely to be for any realization of the stochastic parameters. For this purpose two original random field representations, based on polynomial chaos expansion and Kriging interpolation, are introduced. Moreover, an original adaptive enrichment scheme based on Kriging is proposed. Advantages of this approach with respect to accuracy and computational cost are demonstrated on several numerical examples. The proposed method is also illustrated on the parametric study of an aircraft wing under uncertainty. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
37. An arbitrary polynomial chaos expansion approach for response analysis of acoustic systems with epistemic uncertainty.
- Author
-
Yin, Shengwen, Yu, Dejie, Xia, Baizhan, and Luo, Zhen
- Subjects
- *
POLYNOMIAL chaos , *EPISTEMIC uncertainty , *GAUSSIAN quadrature formulas , *UNIFORMITY , *FINITE element method - Abstract
By introducing the arbitrary polynomial chaos theory, the Evidence-Theory-based Arbitrary Polynomial Chaos Expansion Method (ETAPCEM) is proposed to improve the computational accuracy of polynomial chaos expansion methods for the evidence-theory-based analysis of acoustic systems with epistemic uncertainty. In ETAPCEM, the epistemic uncertainty of acoustic systems is treated with evidence theory. The response of acoustic systems in the range of variation of evidence variables is approximated by the arbitrary polynomial chaos expansion, through which the lower and upper bounds of the response over all focal elements can be efficiently calculated by a number of numerical solvers. Inspired by the application of polynomial chaos theory in the interval and random analysis, the weight function of the optimal polynomial basis of ETAPCEM for evidence-theory-based uncertainty analysis is derived from the uniformity approach. Compared with the conventional evidence-theory-based polynomial chaos expansion methods, including the recently proposed evidence-theory-based Jacobi expansion method, the main advantage of ETAPCEM is that the polynomial basis orthogonalized with arbitrary weight functions can be obtained to construct the polynomial chaos expansion. Thereby the optimal polynomial basis of polynomial chaos expansion for arbitrary types of the evidence variable can be established by using ETAPCEM. The effectiveness of the proposed method for acoustic problems has been fully demonstrated by comparing it with the conventional evidence-theory-based polynomial chaos expansion methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
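The defining feature of arbitrary polynomial chaos is a basis orthogonalized with respect to an arbitrary weight or measure. A minimal sketch of one standard construction, Gram-Schmidt on the monomials under an empirical measure (the uniform input and degree are illustrative assumptions; the paper's evidence-theoretic weight derivation is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary input distribution represented only by samples (here uniform on [0, 1])
samples = rng.uniform(0.0, 1.0, 200_000)

def apc_basis(x, degree):
    """Monic polynomials orthogonal w.r.t. the empirical measure of x,
    built by (modified) Gram-Schmidt on the monomials."""
    polys = [np.poly1d([1.0])]
    for d in range(1, degree + 1):
        p = np.poly1d([1.0] + [0.0] * d)  # monomial x^d
        for q in polys:
            p = p - (np.mean(p(x) * q(x)) / np.mean(q(x) * q(x))) * q
        polys.append(p)
    return polys

basis = apc_basis(samples, 3)

# For uniform [0, 1] this should recover shifted Legendre polynomials,
# e.g. the degree-1 member is approximately x - 0.5
inner12 = np.mean(basis[1](samples) * basis[2](samples))
```

The same routine works for any sampled input, which is exactly the appeal of aPC: no closed-form density is needed, only moments of the data.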
38. Least squares polynomial chaos expansion: A review of sampling strategies.
- Author
-
Hadigol, Mohammad and Doostan, Alireza
- Subjects
- *
POLYNOMIAL chaos , *LEAST squares , *OPTIMAL designs (Statistics) , *COHERENCE (Physics) , *PROBABILITY theory - Abstract
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
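Two of the surveyed sampling strategies, plain Monte Carlo and Latin hypercube, are easy to sketch for a one-dimensional least-squares Legendre PCE. The target function, order, and sample size below are illustrative assumptions; the condition number of the measurement matrix is one diagnostic relevant to the design-of-experiments criteria the review discusses:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(2)

def lhs(n, rng):
    # One-dimensional Latin hypercube sample on [-1, 1]: one point per stratum
    u = (np.arange(n) + rng.uniform(size=n)) / n
    return rng.permutation(2.0 * u - 1.0)

def fit_pce(x, y, order):
    # Least-squares PCE in the (unnormalized) Legendre basis
    Psi = legvander(x, order)
    return np.linalg.lstsq(Psi, y, rcond=None)[0]

f = lambda x: 0.3 + 1.2 * x          # target with an exact low-order PCE
order, n = 4, 20                     # oversampling ratio n / (order + 1) = 4

x_mc = rng.uniform(-1.0, 1.0, n)     # Monte Carlo design
x_lh = lhs(n, rng)                   # Latin hypercube design
c_mc = fit_pce(x_mc, f(x_mc), order)
c_lh = fit_pce(x_lh, f(x_lh), order)

# Conditioning of the two designs' measurement matrices
kappa_mc = np.linalg.cond(legvander(x_mc, order))
kappa_lh = np.linalg.cond(legvander(x_lh, order))
```

Both designs recover the exact coefficients here because the target lies in the basis; the review's comparisons concern how the strategies differ in conditioning and coefficient variability as the order grows and the oversampling ratio shrinks.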
39. Risk-averse structural topology optimization under random fields using stochastic expansion methods.
- Author
-
Martínez-Frutos, Jesús, Herrero-Pérez, David, Kessler, Mathieu, and Periago, Francisco
- Subjects
- *
STRUCTURAL dynamics , *MECHANICAL stress analysis , *RANDOM fields , *MONTE Carlo method , *POLYNOMIAL chaos - Abstract
This work proposes a level-set based approach for solving risk-averse structural topology optimization problems considering random field loading and material uncertainty. The use of random fields increases the dimensionality of the stochastic domain, which poses several computational challenges related to the minimization of the Excess Probability as a measure of risk awareness. This problem is addressed both from the theoretical and numerical viewpoints. First, an existence result under a typical geometrical constraint on the set of admissible shapes is proved. Second, a level-set continuous approach to find the numerical solution of the problem is proposed. Since the considered cost functional has a discontinuous integrand, the numerical approximation of the functional and its sensitivity combine an adaptive anisotropic Polynomial Chaos (PC) approach with a Monte-Carlo (MC) sampling method for uncertainty propagation. Furthermore, to address the increment of dimensionality induced by the random field, an anisotropic sparse grid stochastic collocation method is used for the efficient computation of the PC coefficients. A key point is that the non-intrusive nature of such an approach facilitates the use of High Performance Computing (HPC) to alleviate the computational burden of the problem. Several numerical experiments including random field loading and material uncertainty are presented to show the feasibility of the proposal. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
40. A new non-intrusive polynomial chaos using higher order sensitivities.
- Author
-
Thapa, Mishal, Mulani, Sameer B., and Walters, Robert W.
- Subjects
- *
POLYNOMIAL chaos , *DIFFERENTIATION (Mathematics) , *FINITE difference method , *COMPUTATIONAL mechanics , *MONTE Carlo method - Abstract
This paper proposes a new non-intrusive method for uncertainty quantification called Polynomial Chaos Decomposition with Differentiation (PCDD) that uses higher-order sensitivities of the response. In PCDD, the polynomial chaos expansion (PCE) of the response is differentiated with respect to the basis random variables using multi-indices. This differentiation results in a system of linear equations which can then be solved to determine the expansion coefficients. Here, the higher accuracy, Modified Forward Finite Difference (ModFFD) that involves representation of the response using Taylor expansion of order equal to the chaos-order is used in combination with PCE. Therefore, the total number of samples required with this method is equal to the number of terms in the PCE. To verify the validity of this new technique, two analytical problems and two stochastic composite laminate problems were studied. The results of the analytical problems showed that the accuracy of PCDD using ModFFD is similar to that of PCDD using analytical sensitivities, which in addition is comparable to the exact results. For composite laminate problems, the PCDD displayed very high accuracy comparable to 50,000 Latin Hypercube Samples, which underlines the computational efficiency of this proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
41. Efficient probabilistic multi-fidelity calibration of a damage-plastic model for confined concrete.
- Author
-
Kučerová, Anna, Sýkora, Jan, Havlásek, Petr, Jarušková, Daniela, and Jirásek, Milan
- Subjects
- *
POLYNOMIAL chaos , *MARKOV chain Monte Carlo , *MARKOV processes , *SYNTHETIC fibers , *CALIBRATION - Abstract
This detailed study investigates Bayesian inference with material model parameters, exemplified using an advanced damage-plastic model with parameters identified from recently proposed innovative tests of concrete cylinders subjected to confined compression. The study has two main objectives—one specific and the other more general: (i) to evaluate the potential and benefits of the elaborated experimental setup for estimating 15 material parameters in the damage-plastic model for concrete (CDPM2) and (ii) to demonstrate the robustness and efficiency of the numerical tools applied to the problem of simultaneous probabilistic identification for a large number of parameters from limited and noisy data. The paper therefore provides sufficient detail about all steps included in the identification procedure, allowing its application to any other material model calibration problem. The computational burden connected to probabilistic analysis based on Markov chain Monte Carlo method is mitigated here by using a surrogate model based on polynomial chaos expansion. As a benefit, this type of surrogate allows global sensitivity analysis to be performed easily, and it also facilitates analysis of complexity in the relationship between particular material parameters (or pairs of parameters) and simulated material response components. The description also includes strategies for the construction of an efficient anisotropic polynomial expansion with varying degrees in particular parameters. As a problem-specific acceleration of the surrogate construction, two variants of the model simulating the confined compression experiment are described. One is a high-fidelity model that considers detailed specimen geometry, while the other is a computationally faster and more stable low-fidelity variant which simulates the test on the level of a single material point. 
The robustness and efficiency of the elaborated identification strategy are illustrated by calibration from synthetic as well as from experimental data obtained for four distinct sets of specimens made of the same concrete and equipped with aluminum hoops. The consistent results of particular calibrations, sensitivity analysis and also expert expectations based on deep understanding of material laws confirm the robustness of the identification strategy and its ability to simultaneously estimate 11 out of 15 parameters in a complex damage-plastic model from relatively inexpensive failure tests. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
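The computational pattern in this entry, Markov chain Monte Carlo run against a cheap surrogate instead of the expensive forward model, can be illustrated generically. The one-parameter linear "model", flat prior, and noise level below are invented for the sketch and have nothing to do with CDPM2 or the paper's 15-parameter setup:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data from a one-parameter toy response y = theta * x
theta_true, sigma = 2.0, 0.1
x_obs = np.linspace(0.1, 1.0, 10)
y_obs = theta_true * x_obs + sigma * rng.standard_normal(10)

# Cheap surrogate standing in for an expensive forward model (here it is exact)
surrogate = lambda theta: theta * x_obs

def log_post(theta):
    # Gaussian likelihood, flat prior on [0, 5]
    if not 0.0 <= theta <= 5.0:
        return -np.inf
    r = y_obs - surrogate(theta)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis evaluating only the surrogate
chain, theta = [], 1.0
for _ in range(20_000):
    prop = theta + 0.1 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

posterior_mean = np.mean(chain[5000:])  # discard burn-in
```

The point of the surrogate is in the loop: every posterior evaluation calls the cheap approximation, so tens of thousands of MCMC steps become affordable even when a single run of the true model is expensive.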
42. Hybrid uncertainty propagation in structural-acoustic systems based on the polynomial chaos expansion and dimension-wise analysis.
- Author
-
Xu, Menghui, Du, Jianke, Wang, Chong, and Li, Yunlong
- Subjects
- *
STRUCTURAL acoustics , *MATHEMATICAL expansion , *POLYNOMIAL chaos , *COLLOCATION methods , *INTERVAL analysis , *MATHEMATICAL bounds - Abstract
Hybrid uncertainties are ubiquitous in the structural-acoustic analysis and greatly affect the behaviors of the coupled system. However, requirements of small uncertainties and rewriting simulation codes for the numerical analysis are always necessary for current methods. In this context, a non-intrusive dual-layer analysis procedure, i.e. a hybrid method of the polynomial chaos expansion and dimension-wise analysis (PCE-DW), is proposed in this paper. The PCE procedure based on the sparse grid collocation strategy is utilized to handle the random uncertainty while the DW procedure is employed to evaluate the interval bounds of statistical characteristics of the system response. The PCE-DW also applies to the structural-acoustic analysis with only random or interval parameters. The investigation of two structural-acoustic systems with hybrid uncertainties finally demonstrates the accuracy, efficiency and capability of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
43. Divide and conquer: An incremental sparsity promoting compressive sampling approach for polynomial chaos expansions.
- Author
-
Alemazkoor, Negin and Meidani, Hadi
- Subjects
- *
POLYNOMIAL chaos , *DYNAMICAL systems , *STOCHASTIC processes , *HERMITE polynomials , *ALGORITHMS - Abstract
This paper introduces an efficient sparse recovery approach for Polynomial Chaos (PC) expansions, which promotes sparsity by breaking the dimensionality of the problem. The proposed algorithm incrementally explores sub-dimensional expansions for a sparser recovery, and shows success when the removal of uninfluential parameters, which results in a lower coherence for the measurement matrix, allows a higher-order and/or sparser expansion to be recovered. The incremental algorithm effectively searches for the sparsest PC approximation; not only can it decrease the prediction error, but it can also reduce the dimensionality of the PCE model. Four numerical examples are provided to demonstrate the validity of the proposed approach. The results from these examples show that the incremental algorithm substantially outperforms conventional compressive sampling approaches for PCE, in terms of both solution sparsity and prediction error. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
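A minimal instance of greedy sparse recovery for a PC expansion is orthogonal matching pursuit, which is related to, but much simpler than, the incremental dimension-breaking algorithm above. The sparse Hermite ground truth and sample size are assumptions for demonstration:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(3)

# Sparse ground truth: only He_1 and He_3 are active in a degree-4 basis
n_basis = 5
true_c = np.zeros(n_basis)
true_c[1], true_c[3] = 2.0, -0.7

x = rng.standard_normal(5000)
Psi = np.column_stack([hermeval(x, np.eye(n_basis)[k]) for k in range(n_basis)])
y = Psi @ true_c

def omp(Psi, y, n_terms):
    """Orthogonal matching pursuit: greedily add the column most correlated
    with the residual, then refit all selected coefficients by least squares."""
    norms = np.linalg.norm(Psi, axis=0)
    support, residual = [], y.copy()
    for _ in range(n_terms):
        corr = np.abs(Psi.T @ residual) / norms
        corr[support] = 0.0                      # do not re-select columns
        support.append(int(np.argmax(corr)))
        c_s, *_ = np.linalg.lstsq(Psi[:, support], y, rcond=None)
        residual = y - Psi[:, support] @ c_s
    c = np.zeros(Psi.shape[1])
    c[support] = c_s
    return c, sorted(support)

c_hat, support = omp(Psi, y, 2)
```

With noise-free data and a well-conditioned measurement matrix, two greedy steps recover the active terms exactly; the paper's contribution is how to keep such recovery working when the nominal dimensionality, and hence the coherence, is much higher.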
44. Functional approximation and projection of stored energy functions in computational homogenization of hyperelastic materials: A probabilistic perspective.
- Author
-
Staber, B. and Guilleminot, J.
- Subjects
- *
ENERGY function , *PROBABILITY theory , *ASYMPTOTIC homogenization , *ELASTICITY , *POLYNOMIAL chaos , *HILBERT space - Abstract
This work is concerned with the construction of a surrogate model for the homogenized stored energy functions defining the effective behavior of nonlinear elastic microstructures. Here, a probabilistic standpoint is adopted and allows for the definition of a nonlinear mapping between the macroscopic deformations and the homogenized potential. This functional approximation is specifically obtained by means of a polynomial chaos expansion, the coefficients of which are computed through a Gauss–Legendre quadrature rule. By invoking well-known results related to projections in Hilbert spaces, closest approximations of arbitrary potentials to standard (e.g. Ogden-type) models are subsequently defined and characterized by appropriate residuals. Numerical illustrations on various microstructures are finally provided in order to discuss the relevance of the proposed framework. In particular, it is shown that the surrogate model compares very well with reference results at both the macroscale and the structural scale, even for moderate-order expansions, and that the aforementioned closest approximations constitute very accurate approximations provided that the subset onto which the homogenized response is projected is properly chosen. This result readily allows for nonconcurrent coupling using standard constitutive models available in commercial codes and therefore makes the approach very attractive for engineering applications. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
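The projection step named in this abstract, computing PCE coefficients through a Gauss–Legendre quadrature rule, can be shown on a scalar toy potential. The exponential quantity of interest and expansion degree are illustrative assumptions, not a homogenized stored energy function:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Quantity of interest as a function of a uniform variable on [-1, 1]
W = lambda xi: np.exp(0.3 * xi)

nodes, weights = leggauss(8)          # 8-point Gauss-Legendre rule

def pce_coeff(k):
    # c_k = <W, P_k> / <P_k, P_k>, with <f, g> = 0.5 * integral_{-1}^{1} f g dxi
    Pk = legval(nodes, np.eye(k + 1)[k])
    num = 0.5 * np.sum(weights * W(nodes) * Pk)
    return num / (1.0 / (2 * k + 1))  # <P_k, P_k> = 1 / (2k + 1) under this weight

coeffs = np.array([pce_coeff(k) for k in range(4)])
approx = legval(0.5, coeffs)          # surrogate evaluation at xi = 0.5
```

Because the integrand is smooth, a handful of quadrature nodes and a low-degree expansion already reproduce the function to within a fraction of a percent, which is the property that makes this projection attractive for expensive homogenization runs.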
45. Unified reliability-based design optimization with probabilistic, uncertain-but-bounded and fuzzy variables.
- Author
-
Meng, Zeng, Li, Changquan, and Hao, Peng
- Subjects
- *
CONJUGATE gradient methods , *LAGRANGIAN functions , *STRUCTURAL engineering , *POLYNOMIAL chaos , *CHAOS theory , *MECHANICAL engineering - Abstract
Reliability-based design optimization (RBDO) is critical in improving the design objective and guaranteeing the safety level of mechanical and engineering structures. However, a large number of multi-source uncertainties exist in the real-world, and their applications pose significant challenges owing to the lack of unity and generality of the RBDO theory and high performance computational methods. This study proposes a unified reliability-based design optimization (URBDO) method to deal with multi-source uncertainties, in which probabilistic, uncertain-but-bounded, and fuzzy parameters are simultaneously considered. First, a novel URBDO model is proposed by combining the probabilistic, non-probabilistic, and fuzzy theories, and it consists of nested quadruple optimization loops. Second, the single-loop URBDO method is further developed based on the Lagrangian function to ease the unaffordable computational burden, where the probabilistic analysis, non-probabilistic analysis, fuzzy analysis, and deterministic optimization are simultaneously implemented. Third, the directional properties of non-probabilistic and fuzzy computations are disclosed, and the directional chaos single-loop method (DCSLM) and conjugate gradient single-loop method (CGSLM) are constructed based on the chaos control theory and conjugate gradient method. Finally, three numerical examples and three practical engineering examples are tested to validate the effectiveness of the proposed URBDO model and the high performances of the DCSLM and CGSLM. • A unified reliability-based design optimization model is proposed. • The directional property of non-probabilistic and fuzzy reliability analyses is revealed. • A new directional chaos single-loop method is developed. • A novel conjugate gradient single-loop method is established. • The efficiency and robustness of the proposed methods are verified by the numerical and engineering examples. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
46. A spectral surrogate model for stochastic simulators computed from trajectory samples.
- Author
-
Lüthen, Nora, Marelli, Stefano, and Sudret, Bruno
- Subjects
- *
STOCHASTIC models , *POLYNOMIAL chaos , *PROBABILITY density function , *STOCHASTIC analysis , *RANDOM fields , *MARGINAL distributions - Abstract
Stochastic simulators are non-deterministic computer models which provide a different response each time they are run, even when the input parameters are held at fixed values. They arise when additional sources of uncertainty are affecting the computer model, which are not explicitly modeled as input parameters. The uncertainty analysis of stochastic simulators requires their repeated evaluation for different values of the input variables, as well as for different realizations of the underlying latent stochasticity. The computational cost of such analyses can be considerable, which motivates the construction of surrogate models that can approximate the original model and its stochastic response, but can be evaluated at much lower cost. We propose a surrogate model for stochastic simulators based on spectral expansions. Considering a certain class of stochastic simulators that can be repeatedly evaluated for the same underlying random event, we view the simulator as a random field indexed by the input parameter space. For a fixed realization of the latent stochasticity, the response of the simulator is a deterministic function, called trajectory. Based on samples from several such trajectories, we approximate the latter by a sparse polynomial chaos expansion and compute analytically an extended Karhunen–Loève expansion (KLE) to reduce its dimensionality. The uncorrelated but dependent random variables of the KLE are modeled by advanced statistical techniques such as parametric inference, vine copula modeling, and kernel density estimation. The resulting surrogate model approximates the marginals and the covariance function, and allows new realizations to be obtained at low computational cost. We observe that in our numerical examples, the first mode of the KLE is by far the most important, and investigate this phenomenon and its implications. • New surrogate modeling method for stochastic simulators based on random field view. 
• Constructed from trajectories using spectral methods and statistical inference. • Able to emulate moment functions, marginal distributions, and trajectories. • Investigation of observation that first mode often dominates the expansion. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
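The dimension-reduction step described in this entry, a Karhunen–Loève expansion computed from sampled trajectories, is easy to illustrate with a toy two-mode simulator. The sinusoidal field and latent Gaussian variables are assumptions for the sketch; the paper additionally fits sparse PCE and copula models, which are omitted here:

```python
import numpy as np

rng = np.random.default_rng(4)

# Trajectories of a toy stochastic simulator on a 1-D input grid:
# Y(x; A, B) = A*sin(pi x) + 0.1*B*cos(2 pi x), with latent A, B ~ N(0, 1)
x = np.linspace(0.0, 1.0, 50)
n_traj = 2000
A = rng.standard_normal(n_traj)
B = rng.standard_normal(n_traj)
Y = np.outer(A, np.sin(np.pi * x)) + 0.1 * np.outer(B, np.cos(2 * np.pi * x))

# Empirical Karhunen-Loeve expansion: eigendecomposition of the sample covariance
Yc = Y - Y.mean(axis=0)
C = (Yc.T @ Yc) / (n_traj - 1)
eigvals = np.linalg.eigh(C)[0][::-1]   # eigenvalues in descending order

# Fraction of total variance captured by the leading KL mode
ratio = eigvals[0] / eigvals.sum()
```

In this toy field the leading mode carries almost all of the variance, which mirrors the paper's observation that the first KLE mode often dominates for real stochastic simulators.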
47. Projection pursuit adaptation on polynomial chaos expansions.
- Author
-
Zeng, Xiaoshu and Ghanem, Roger
- Subjects
- *
POLYNOMIAL chaos , *CONSTRAINTS (Physics) , *STRUCTURAL dynamics , *STOCHASTIC approximation , *DISTRIBUTION (Probability theory) - Abstract
The present work addresses the issue of accurate stochastic approximations in high-dimensional parametric space using tools from uncertainty quantification (UQ). The basis adaptation method and its accelerated algorithm in polynomial chaos expansions (PCE) were recently proposed to construct low-dimensional approximations adapted to specific quantities of interest (QoI). The present paper addresses one difficulty with these adaptations, namely their reliance on quadrature point sampling, which limits the reusability of potentially expensive samples. Projection pursuit (PP) is a statistical tool to find the "interesting" projections in high-dimensional data and thus bypass the curse-of-dimensionality. In the present work, we combine the fundamental ideas of basis adaptation and projection pursuit regression (PPR) to propose a novel method to simultaneously learn the optimal low-dimensional spaces and PCE representation from given data. While this projection pursuit adaptation (PPA) can be entirely data-driven, the constructed approximation exhibits mean-square convergence to the solution of an underlying governing equation and thus captures the supports and probability distributions associated with the physics constraints. The proposed approach is demonstrated on a borehole problem and a structural dynamics problem, demonstrating the versatility of the method and its ability to discover low-dimensional manifolds with high accuracy with limited data. In addition, the method can learn surrogate models for different quantities of interest while reusing the same data set. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
48. Stochastic analysis of structures under limited observations using kernel density estimation and arbitrary polynomial chaos expansion.
- Author
-
Zhang, Ruijing and Dai, Hongzhe
- Subjects
- *
POLYNOMIAL chaos , *PROBABILITY density function , *STOCHASTIC analysis , *SELF-efficacy , *ENGINEERING systems , *SCIENTIFIC community - Abstract
Over the past decades there has been considerable interest among the scientific community in modeling random system inputs from limited observations and propagating the system responses. In this paper, we develop a novel method for the reasonable modeling of system inputs and efficient propagation of response. Our method firstly constructs the random model of system inputs from observations by developing a novel kernel density estimator (KDE) for Karhunen–Loeve (KL) variables. By further implementing the arbitrary polynomial chaos (aPC) formulation on dependent non-Gaussian KL variables, the associated aPC-based response propagation is then developed. In our method, the developed random model can accurately represent the input parameters from limited observations as the developed KDE of KL variables can incorporate the inherent relation between marginals of input parameters and KL variables, while empowering the input model to preserve the second-order correlations. Furthermore, since the statistical moments of KL variables can be analytically evaluated, the aPC formulation for achieving optimal convergence of system responses can be accurately determined. In addition, the system response can be accurately propagated in an efficient way by developing an aPC-based regression method using space-filling design, in which the conditional distributions in Rosenblatt transformation of aPC variables can be efficiently determined without evaluations of multi-dimensional integrations. In this way, the current work provides an effective framework for the reasonable stochastic modeling and efficient response propagation of real-life engineering systems with limited observations. Two numerical examples are presented to highlight the effectiveness of the developed method. • An effective method for random system analyses with limited observations is proposed. • The developed model can reasonably capture the non-Gaussianity of system parameters. 
• The arbitrary polynomial chaos (aPC) formulation can be accurately constructed. • The developed aPC-based propagation can efficiently approximate the system response. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. Stochastic Galerkin framework with locally reduced bases for nonlinear two-phase transport in heterogeneous formations.
- Author
-
Pettersson, Per and Tchelepi, Hamdi A.
- Subjects
- *
TWO-phase flow , *GALERKIN methods , *POLYNOMIAL chaos , *PARTIAL differential equations , *CONSERVATION laws (Mathematics) , *FINITE volume method - Abstract
The generalized polynomial chaos method with multiwavelet basis functions is applied to the Buckley–Leverett equation. We consider a spatially homogeneous domain modeled as a random field. The problem is projected onto stochastic basis functions which yields an extended system of partial differential equations. Analysis and numerical methods leading to reduced computational cost are presented for the extended system of equations. The accurate representation of the evolution of a discontinuous stochastic solution over time requires a large number of stochastic basis functions. Adaptivity of the stochastic basis to reduce computational cost is challenging in the stochastic Galerkin setting since the change of basis affects the system matrix itself. To achieve adaptivity without adding overhead by rewriting the entire system of equations for every grid cell, we devise a basis reduction method that distinguishes between locally significant and insignificant modes without changing the actual system matrices. Results are presented for problems in one and two spatial dimensions, with varying number of stochastic dimensions. We show how to obtain stochastic velocity fields from realistic permeability fields and demonstrate the performance of the stochastic Galerkin method with local basis reduction. The system of conservation laws is discretized with a finite volume method and we demonstrate numerical convergence to the reference solution obtained through Monte Carlo sampling. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
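The intrusive stochastic Galerkin projection applied above to the much harder Buckley–Leverett problem can be illustrated on a linear stochastic decay ODE, where projecting onto a Legendre PC basis yields a small coupled deterministic system for the expansion coefficients. The decay model, PC order, and time stepping are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Stochastic decay ODE u'(t) = -k(xi) u(t), with k(xi) = 1 + 0.3*xi, xi ~ U(-1, 1)
P = 4                                  # PC order
nodes, weights = leggauss(12)

def Pk(k, x):
    return legval(x, np.eye(k + 1)[k])

# Norms <P_i, P_i> under the uniform weight 1/2, and the random coefficient
norm = np.array([0.5 * np.sum(weights * Pk(i, nodes)**2) for i in range(P + 1)])
k_xi = 1.0 + 0.3 * nodes

# Galerkin system matrix A[i, l] = <k P_l, P_i> / <P_i, P_i>
A = np.array([[0.5 * np.sum(weights * k_xi * Pk(l, nodes) * Pk(i, nodes)) / norm[i]
               for l in range(P + 1)] for i in range(P + 1)])

# Forward Euler on the coupled deterministic system for the PC coefficients
u = np.zeros(P + 1)
u[0] = 1.0                             # deterministic initial condition u(0) = 1
dt, T = 1e-3, 1.0
for _ in range(int(T / dt)):
    u = u - dt * (A @ u)

# The 0th coefficient is the mean of u(T); exact mean = e^{-1} * sinh(0.3)/0.3
exact_mean = np.exp(-1.0) * np.sinh(0.3) / 0.3
```

The coupling through A is what makes the Galerkin approach intrusive: the deterministic solver must be rewritten for the extended system, which is exactly the cost the paper's local basis reduction tries to contain.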
50. Fokker–Planck linearization for non-Gaussian stochastic elastoplastic finite elements.
- Author
-
Karapiperis, Konstantinos, Sett, Kallol, Levent Kavvas, M., and Jeremić, Boris
- Subjects
- *
FOKKER-Planck equation , *GAUSSIAN processes , *STOCHASTIC analysis , *ELASTOPLASTICITY , *FINITE element method , *BOUNDARY value problems - Abstract
Presented here is a finite element framework for the solution of stochastic elastoplastic boundary value problems with non-Gaussian parametric uncertainty. The framework relies upon a stochastic Galerkin formulation, where the stiffness random field is decomposed using a multidimensional polynomial chaos expansion. At the constitutive level, a Fokker–Planck–Kolmogorov (FPK) plasticity framework is utilized, under the assumption of small strain kinematics. A linearization procedure is developed that serves to update the polynomial chaos coefficients of the expanded random stiffness in the elastoplastic regime, leading to a nonlinear least-squares optimization problem. The proposed framework is illustrated in a static shear beam example of elastic-perfectly plastic as well as isotropic hardening material. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF