35 results on '"Raphael Haftka"'
Search Results
2. Characterization of Heterogeneity Distribution of SiC/SiC Composites
- Author
-
James Nance, Hemanth Thandaga Nagaraju, Ghatu Subhash, Bhavani Sankar, and Raphael Haftka
- Published
- 2020
- Full Text
- View/download PDF
3. Stress Analysis and Failure Behavior of SiC/SiC Textile Composite Tubes
- Author
-
Hemanth Thandaga Nagaraju, James Nance, Bhavani Sankar, Ghatu Subhash, and Raphael Haftka
- Published
- 2020
- Full Text
- View/download PDF
4. Micromechanics of Composite Tubes by Curved and Flat RVEs
- Author
-
Hemanth Thandaga Nagaraju, James Nance, Bhavani Sankar, Ghatu Subhash, and Raphael Haftka
- Published
- 2019
- Full Text
- View/download PDF
5. Efficiently Measuring Heterogeneity of SiC/SiC Tubular Composites
- Author
-
James Nance, Hemanth Nagaraju, Ghatu Subhash, Bhavani Sankar, and Raphael Haftka
- Published
- 2019
- Full Text
- View/download PDF
6. Failure Characterization of SiC/SiC Woven Tubular Composites
- Author
-
James Nance, Hemanth Thandaga Nagaraju, Ghatu Subhash, Raphael Haftka, Bhavani Sankar, and Christian Deck
- Published
- 2019
- Full Text
- View/download PDF
7. Forensic uncertainty quantification of explosive dispersal of particles
- Author
-
Kyle Hughes, Angela Diggs, Chanyoung Park, Nam-Ho Kim, Raphael Haftka, and Don Littrell
- Subjects
Explosive material, Computer science, Design of experiments, Empirical measure, Machine learning, Documentation, A priori and a posteriori, Crime scene, Artificial intelligence, Uncertainty quantification, Set (psychology) - Abstract
In addition to the numerical challenges of simulating the explosive dispersal of particles, validation of the simulation is often plagued with poor knowledge of the experimental conditions. The level of experimental detail required for validation is beyond what is usually included in the literature. This presentation proposes the use of forensic uncertainty quantification (UQ) to investigate validation-quality experiments and discover possible sources of uncertainty that may have been missed in the initial design of experiments or under-reported. The authors' experience so far has been that valuable insights may be gained by treating validation experiments by analogy with a crime scene investigation. One examines all the data and documentation provided by the validation experimentalists, corroborates evidence, and quantifies large sources of uncertainty a posteriori with empirical measurements. In addition, it is proposed that forensic UQ may benefit from an independent investigator to help remove possible implicit biases and increase the likelihood of discovering unrecognized uncertainty. Forensic UQ concepts will be discussed and then applied to a set of validation experiments performed at Eglin Air Force Base.
- Published
- 2018
- Full Text
- View/download PDF
8. Approche bayesienne pour gérer les incertitudes dans l'identification à partir de mesures de champ = A Bayesian approach for handling uncertainties in identification from full-field measurements
- Author
-
Christian Gogu, Weiqi Yin, Raphael Haftka, Peter Ifju, Jérôme Molimard, R. Le Riche, and A. Vautrin
- Subjects
[PHYS.MECA] Physics [physics]/Mechanics [physics] - Abstract
Conference with refereed proceedings. International.; International audience; The Bayesian identification method has the advantage of being able to account for the different sources of uncertainty present in the problem and of quantifying the uncertainty with which the properties are identified, both in terms of variances and of correlations. Its application to the identification of orthotropic elastic properties from full-field displacement measurements obtained by Moiré interferometry is presented in this paper.
- Published
- 2011
9. Une approche bayesienne de l'identification des constantes élastiques orthotropes de pli à partir de tests de vibration de flexion sur stratifié = A Bayesian approach to identification of orthotropic elastic constants of the ply from vibration tests on a laminate
- Author
-
Christian Gogu, Rodolphe Le Riche, Jérôme Molimard, Alain Vautrin, and Raphael Haftka
- Subjects
response surface methodology, Bayesian identification, plate vibration, ply elastic constants, [SPI.MAT] Engineering Sciences [physics]/Materials, [SPI.MECA] Engineering Sciences [physics]/Mechanics [physics.med-ph] - Abstract
National audience; Bayesian identification is one approach that makes it possible to take measurement errors as well as modeling errors into account. In addition, this method identifies a probability density, thus providing information on both the variance and the correlation of the identified properties. The procedure can, however, be computationally very expensive. To reduce this computational cost we propose here a Bayesian approach based on response surface methodology. The approach is applied to the identification of orthotropic elastic constants from the natural frequencies of a freely vibrating plate. The procedure developed accounts for measurement errors, for uncertainty in the other input parameters of the model (plate dimensions, density), and for systematic errors. The joint probability density of the four ply elastic constants is identified and characterized by its mean values and its variance-covariance matrix. We find that some parameters, such as the Poisson's ratio, are identified with much greater uncertainty than others, and that significant correlation exists between the different identified properties.
- Published
- 2009
10. High Fidelity Frequency Response Surface Approximations for Vibration Based Elastic Constants Identification
- Author
-
Christian Gogu, Raphael Haftka, Rodolphe Le Riche, and Jérôme Molimard
- Subjects
[INFO.INFO-DS] Computer Science [cs]/Data Structures and Algorithms [cs.DS], identification, response surface approximation, natural frequencies, [SPI.MECA] Engineering Sciences [physics]/Mechanics [physics.med-ph], nondimensionalization, composites, [INFO.INFO-MO] Computer Science [cs]/Modeling and Simulation, [SPI.MAT] Engineering Sciences [physics]/Materials - Abstract
International audience; Some applications such as identification or Monte Carlo based uncertainty quantification often require simple analytical formulas that are fast to evaluate. Approximate closed-form solutions for the natural frequencies of free orthotropic plates have been developed and have a wide range of applicability, but they are not very accurate. For moderate ranges of plate parameters, such as those needed for identifying material properties from vibration tests, good accuracy can be achieved by using response surface methodology combined with dimensional analysis. This paper first demonstrates that such a response surface can be much more accurate than the approximate analytical solutions, even for relatively large ranges of the material and geometric parameters. Second, it compares the accuracy of the elastic constants identified from experiments using the two approximations, and demonstrates the advantage of high-fidelity approximations in vibration-based elastic constants identification. For a least squares identification approach, the approximate analytical solution led to physically implausible properties, while the high-fidelity response surface approximation obtained reasonable estimates.
- Published
- 2009
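As a rough illustration of the kind of polynomial response surface described in the abstract above (our own toy example, not the paper's model or data; frequency_model is a hypothetical stand-in for the plate frequency analysis):

# Minimal sketch: fit a quadratic polynomial response surface to a scalar
# "natural frequency" output over nondimensional inputs, then predict.
import numpy as np

def frequency_model(x):
    # placeholder for the (expensive) frequency computation
    return 1.0 + 0.8 * x[0] + 0.3 * x[1] - 0.2 * x[0] * x[1] + 0.1 * x[1] ** 2

def quadratic_features(X):
    # full quadratic basis in two variables: 1, x1, x2, x1^2, x1*x2, x2^2
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x1 * x2, x2**2])

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(30, 2))          # nondimensional design sites
y_train = np.array([frequency_model(x) for x in X_train])
coef, *_ = np.linalg.lstsq(quadratic_features(X_train), y_train, rcond=None)

x_new = np.array([[0.2, -0.5]])
print("surrogate prediction:", quadratic_features(x_new) @ coef)
print("model value:         ", frequency_model(x_new[0]))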
11. Statistical Characterization of Damage Propagation Properties in Structural Health Monitoring
- Author
-
Alexandra Coppe, Raphael Haftka, Nam-Ho Kim, and F. Yuan
- Subjects
Bayesian statistics, Engineering, Fuselage, Cabin pressurization, Probabilistic logic, Structural health monitoring, Structural engineering, Material properties, Likelihood function, Characterization (materials science) - Abstract
Structural health monitoring provides sensor data that follow fatigue induced damage growth in service. This information may in turn be used to improve the characterization of the material properties that govern damage propagation for the structure being monitored. These properties are often widely distributed between nominally identical structures because of differences in manufacturing and aging. The improved accuracy in damage growth characteristics allows more accurate prediction of the remaining useful life (RUL) of the structural component. In this paper, a probabilistic approach using Bayesian statistics is employed to progressively narrow the uncertainty in damage growth parameters in spite of variability and error in sensor measurements. Starting from an initial, wide distribution of damage parameters that are obtained from coupon tests, the distribution is progressively narrowed using the damage growth between consecutive measurements. Detailed discussions on how to construct the likelihood function under given variability of sensor data and how to update the distribution are presented. The approach is applied to crack growth in fuselage panels due to cycles of pressurization and depressurization. It is shown that the proposed method rapidly converges to accurate damage parameters when the initial damage size is 20 mm and the variability in sensor data is 1 mm. It is observed that the distribution narrows down rapidly when the damage grows fast, and more slowly when the damage grows slowly. This property works in our favor, because more accurate information is obtained when the damage is dangerous. Using the identified damage parameters, the RUL is predicted with 95% confidence in order to obtain a conservative prediction. The proposed approach may have the potential of turning aircraft into flying fatigue laboratories.
- Published
- 2009
- Full Text
- View/download PDF
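The Bayesian updating scheme described in the abstract above can be sketched roughly as follows (an illustrative toy example; the crack-growth model, loading, and noise values are assumptions, not the paper's implementation):

# Minimal sketch: grid-based Bayesian updating of a single Paris-law exponent m
# from noisy crack-size measurements taken at successive inspections.
import numpy as np

def grow_crack(a0, m, C=1.5e-10, dK_factor=20.0, cycles=50000, step=500):
    # crude Euler integration of the Paris law da/dN = C * dK^m
    a = a0
    for _ in range(cycles // step):
        dK = dK_factor * np.sqrt(np.pi * a)    # assumed stress-intensity range
        a += step * C * dK ** m
    return a

rng = np.random.default_rng(1)
m_grid = np.linspace(2.5, 4.5, 201)            # candidate Paris exponents
posterior = np.ones_like(m_grid) / len(m_grid) # wide (uniform) prior to start
sigma_meas = 0.001                             # 1 mm sensor noise, in metres

a_true, m_true = 0.020, 3.5                    # 20 mm initial crack, "true" exponent
a_prev_meas = a_true
for inspection in range(5):
    a_true = grow_crack(a_true, m_true)
    a_meas = a_true + rng.normal(0.0, sigma_meas)
    predicted = np.array([grow_crack(a_prev_meas, m) for m in m_grid])
    likelihood = np.exp(-0.5 * ((a_meas - predicted) / sigma_meas) ** 2)
    posterior *= likelihood
    posterior /= posterior.sum()               # Bayes update; posterior becomes next prior
    a_prev_meas = a_meas

print("posterior mean of the Paris exponent:", np.sum(m_grid * posterior))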
12. Sensitivity of stress constraints using the adjoint method
- Author
-
Mehmet Akgun, Raphael Haftka, and John Garcelon
- Subjects
Stress (mechanics), Constraint (information theory), Mathematical optimization, Small number, Work (physics), Truss, Sensitivity (control systems), Displacement (vector), Mathematics - Abstract
Adjoint sensitivity calculation of stress and displacement functionals may be much less expensive than direct sensitivity calculation when the number of load cases is large. However, efficient implementation is problem dependent and requires strategies for reducing the number of constraints used. The study shows that for truss and plane-stress elements it is easy to implement the adjoint method. An example also demonstrates that even an extreme form of constraint lumping can work well in the optimization of a wing structure, allowing us to keep only a small number of constraints.
- Published
- 1998
- Full Text
- View/download PDF
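A minimal sketch of the adjoint sensitivity idea from the abstract above, on a made-up two-degree-of-freedom spring model (the functional, stiffness model, and numbers are assumptions for illustration, not from the paper):

# Adjoint sensitivity of a stress-type functional g = c^T u for K(x) u = f,
# where the design variables x scale the two spring stiffnesses.
import numpy as np

f = np.array([0.0, 1.0])
c = np.array([1.0, -1.0])                    # "stress" = relative displacement

def stiffness(x):
    k1, k2 = x
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

def dK_dx(x, j):
    # derivative of K with respect to x[j] (constant matrices for this model)
    if j == 0:
        return np.array([[1.0, 0.0], [0.0, 0.0]])
    return np.array([[1.0, -1.0], [-1.0, 1.0]])

x = np.array([2.0, 3.0])
K = stiffness(x)
u = np.linalg.solve(K, f)
lam = np.linalg.solve(K.T, c)                # one adjoint solve per functional

adjoint_grad = np.array([-lam @ dK_dx(x, j) @ u for j in range(2)])

# finite-difference check of the adjoint gradient
h = 1e-6
fd_grad = np.array([
    (c @ np.linalg.solve(stiffness(x + h * e), f) - c @ u) / h
    for e in np.eye(2)
])
print(adjoint_grad, fd_grad)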
13. Structural optimization of a hat stiffened panel by response surface techniques
- Author
-
Roberto Vitali, Oung Park, Raphael Haftka, and Bhavani Sankar
- Subjects
Stress (mechanics), Surface (mathematics), Coupling, Engineering, Buckling, Small number, Microsoft Excel, Shell (structure), Structural engineering, Finite element method - Abstract
This paper describes a design study for the structural optimization of a typical bay of a blended wing body transport. A hat-stiffened laminated composite shell concept is used in the design. The geometry of the design is determined with the PANDA2 program, but due to the presence of varying axial loads, a more accurate analysis procedure is needed. This is obtained by combining the STAGS finite element analysis program with response surface approximations for the stresses and the buckling loads. This design procedure results in weight savings of more than 30 percent, albeit at the expense of a more complex design. The response surface approximations allow easy coupling of the structural analysis program with the optimization program in the easily accessible Microsoft Excel spreadsheet program. The response surface procedure also allows the optimization to be carried out with a reasonable number of analyses. In particular, it allows combining a large number of inexpensive beam-analysis stress calculations with a small number of the more accurate STAGS analyses.
- Published
- 1997
- Full Text
- View/download PDF
14. Optimization of FRP laminated plates under uncertainty by fuzzy-set and genetic algorithm
- Author
-
Yoshiki Ohta and Raphael Haftka
- Subjects
Mathematical optimization, Optimization problem, Fuzzy set, Stiffness, Structural engineering, Continuous design, Fuzzy logic, Buckling, Deflection (engineering), Membership function, Mathematics - Abstract
This paper presents the optimization of FRP laminated rectangular plates under uncertainties in the elastic properties. In the optimization, the thicknesses of the FRP laminae and the fiber angles are taken as design variables, and the maximum deflection of the plate is minimized under buckling and in-plane stiffness constraints. The uncertainties are introduced through fuzzy material constants, and the objective and constraints are then each represented by their own membership function according to the vertex method. Thus the optimization problem is defined by two parameters, the membership function levels for the objective and for the constraints. A genetic algorithm with continuous design variables is employed to solve the optimization problem. The effect of uncertainties in the elastic constants on the optimum design solutions is studied through extensive numerical calculations.
- Published
- 1997
- Full Text
- View/download PDF
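A rough sketch of the vertex method mentioned in the abstract above for propagating fuzzy (interval-valued) elastic constants at one membership level (the response function, triangular fuzzy numbers, and values are assumptions, not the paper's model):

# Vertex method: evaluate the response at all corners of the alpha-cut
# interval box and take min/max (assumes a monotonic response).
from itertools import product

def response(E1, E2, G12):
    # placeholder deflection model; the real analysis would be a laminate solver
    return 1.0 / (0.6 * E1 + 0.3 * E2 + 0.1 * G12)

def alpha_cut(nominal, spread, alpha):
    # triangular fuzzy number: interval shrinks linearly as alpha goes to 1
    half = (1.0 - alpha) * spread
    return (nominal - half, nominal + half)

nominal = {"E1": 140.0, "E2": 10.0, "G12": 5.0}     # GPa, illustrative values
spread = {"E1": 14.0, "E2": 1.0, "G12": 0.5}

alpha = 0.5
intervals = [alpha_cut(nominal[k], spread[k], alpha) for k in ("E1", "E2", "G12")]
values = [response(*vertex) for vertex in product(*intervals)]   # 2^3 vertices
print("deflection interval at alpha = 0.5:", (min(values), max(values)))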
15. Approximations in optimization and damage tolerant design
- Author
-
John Garcelon, Raphael Haftka, and Steve Scotti
- Subjects
Mathematics - Published
- 1997
- Full Text
- View/download PDF
16. Integration of finite element analysis program and panel design program
- Author
-
Satchi Venkataraman and Raphael Haftka
- Subjects
Structure (mathematical logic), Engineering, Mathematical optimization, Panel design, Local analysis, Constraint (computer-aided design), Minimum weight, Finite element method, Variety (cybernetics), Task (project management) - Abstract
Integration of global and local analyses for structural optimization is a challenging task for designers. While computational costs are still a big hurdle for integrating local-global analysis in design optimization, additional difficulties are posed by the need to use a variety of codes or programs to perform the analysis. The paper presents a problem that required the use of two separate programs for the local and global analyses for a laminate composite liquid hydrogen tank structure optimized for minimum weight. The global and local analysis programs are integrated using constraint approximations. Both local linear approximations and more global response surface approximations are used. Optimization results of the two schemes are presented and relative merits discussed.
- Published
- 1997
- Full Text
- View/download PDF
17. Minimum-bias based experimental design for constructing response surfaces in structural optimization
- Author
-
Gerhard Venter and Raphael Haftka
- Subjects
Surface (mathematics), Noise, Polynomial, Mathematical optimization, Minimum-variance unbiased estimator, Genetic algorithm, Point (geometry), Response surface methodology, Type I and type II errors, Mathematics - Abstract
Response surface methodology provides a powerful tool for finding approximations to complex response functions. In many applications the evaluation of each design point required to construct the response surface approximation is expensive and/or time consuming. The cost associated with the evaluation of design points generally motivates the use of statistical design of experiments to select a small number of 'optimal' points at which to evaluate the response quantity. Response surface approximations have two types of error: the first is random error or noise, and the second is bias or modeling error. Traditionally, points are selected so as to minimize the random error using minimum-variance based designs, for example the D-optimal design. However, when using a low order polynomial to approximate a response quantity with unknown behavior, large bias errors may be introduced. In the present work, a genetic algorithm is developed for obtaining a minimum-bias based design. Two example problems are evaluated, and the results obtained from both D-optimal and minimum-bias designs are compared.
- Published
- 1997
- Full Text
- View/download PDF
18. Stacking sequence matching by two-stage genetic algorithm with consanguineous initial population
- Author
-
Akira Todoroki and Raphael Haftka
- Subjects
Sequence, Computer science, Reliability (computer networking), Genetic algorithm, Population, Stacking, Sequence matching, Stage (hydrology), Representation (mathematics), Algorithm, Genealogy - Abstract
Stacking sequence optimization is an important problem in composite laminate design. It is a combinatorial problem, which has been solved by genetic algorithms (GA). However, for thick laminates with many plies, the combinatorial problem is large, and the GA may not be reliable. The paper proposes a strategy for improving the reliability of the GA for thick laminates based on a two-stage approach. In the first stage, the laminate is optimized with a coarse representation of the stacking sequence. In the second stage, instead of using a random initial population, the optimum stacking sequence of the first stage is adapted to create an initial population for the second stage. The approach is called consanguineous because the entire initial population is descended from this single optimum design. Several approaches for creating the consanguineous initial population were tested, and an approach based on expansion and interchange operators proved to be the best. The two-stage approach is shown to be much more reliable than the usual single-stage GA.
- Published
- 1997
- Full Text
- View/download PDF
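A simplified sketch of the consanguineous initial-population idea from the abstract above (the expansion and interchange operators below are simplified stand-ins for the paper's operators, and the ply angles are arbitrary):

# Build the second-stage GA population by perturbing the best stacking
# sequence found by the coarse first stage.
import random

def expand(seq):
    # refine the coarse representation by splitting each ply group in two
    return [ply for ply in seq for _ in range(2)]

def interchange(seq, n_swaps=2):
    seq = list(seq)
    for _ in range(n_swaps):
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

random.seed(0)
stage1_optimum = [0, 45, -45, 90, 45, 0]          # ply angles from the coarse stage
seed = expand(stage1_optimum)
population = [interchange(seed) for _ in range(20)]  # all descended from one design
print(population[0])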
19. Global/local analysis of composite plates with cutouts
- Author
-
Rakesh Kapania, Satish Haryadi, and Raphael Haftka
- Subjects
Isotropy, Mathematical analysis, Boundary (topology), Conformal map, Boundary value problem, Classification of discontinuities, Displacement (vector), Finite element method, Ritz method, Mathematics - Abstract
The study focuses on the development of a simple and accurate global/local method for calculating the static response of stepped, simply supported, isotropic and composite plates with circular and elliptical cutouts. The approach primarily involves two steps. In the first step a global approach, the Ritz method, is used to calculate the response of the structure. Displacement-based Ritz functions for the plate without the cutout are augmented with a perturbation function, which is accurate for uniform-thickness plates only, to account for the cutout. The Ritz solution does not accurately satisfy the natural boundary conditions at the cutout boundary, nor does it accurately model the discontinuities caused by abrupt thickness changes. Therefore, a second step, local in nature, is taken in which a small area in the vicinity of the hole and encompassing other points of singularity is discretized using a fine finite element mesh. The displacement boundary conditions for the local region are obtained from the global Ritz analysis. The chosen perturbation function is reliable for circular cutouts in uniform plates; elliptical cutouts were therefore transformed to circular shapes using conformal mapping. The methodology is then applied to the analysis of composite plates, and its usefulness is demonstrated for such cases. The proposed approach resulted in accurate prediction of stresses, with considerable savings in CPU time and data storage for composite flat panels.
- Published
- 1997
- Full Text
- View/download PDF
20. Response surface approximations for fatigue life prediction
- Author
-
Gerhard Venter, Raphael Haftka, and Mehran Chirehdast
- Subjects
Surface (mathematics), Smoothness, Mathematical optimization, Complex geometry, Optimization problem, Numerical analysis, Applied mathematics, Minimum weight, Response surface methodology, Engineering design process, Mathematics - Abstract
Response surface methodology is used to construct response surface approximations for the fatigue life of structures with complex geometry and/or loading conditions. The smooth nature of the response surface approximations eliminates numerical noise, which is inherent to the numerical analysis procedures used to estimate the fatigue life. Procedures for constructing the required response surface approximations, as well as for implementing the resulting approximations in the design process, are discussed. The durability of a structure is introduced into the design process by including the minimum fatigue life as a constraint in a minimum weight optimization of the structure. The smoothness of the response surface approximations allows the use of a derivative based optimization in the design process. Two example problems, typical of the automotive industry, are evaluated to illustrate the proposed methodology. Both example problems have structures modeled by plate elements and the objective of the optimization problem is to find the optimum thickness distribution for a minimum specified fatigue life. It is shown that the thickness
- Published
- 1997
- Full Text
- View/download PDF
21. Variable-complexity response surface approximations for wing structural weight in HSCT design
- Author
-
Matthew Kaufman, Vladimir Balabanov, Susan Burgee, Anthony Giunta, Bernard Grossman, William Mason, Layne Watson, and Raphael Haftka
- Published
- 1996
- Full Text
- View/download PDF
22. Global/local analysis of square plates with cutouts
- Author
-
Rakesh Kapania, Satish Haryadi, and Raphael Haftka
- Published
- 1994
- Full Text
- View/download PDF
23. Optimization of laminated stacking sequence for buckling load maximization by genetic algorithm
- Author
-
RODOLPHE LE RICHE and RAPHAEL HAFTKA
- Published
- 1992
- Full Text
- View/download PDF
24. Application of bootstrap method in conservative estimation of reliability with limited samples.
- Author
-
Victor Picheny, Nam Kim, and Raphael Haftka
- Subjects
STATISTICAL bootstrapping, ESTIMATION theory, RELIABILITY in engineering, DISTRIBUTION (Probability theory), LOGNORMAL distribution, DEAD loads (Mechanics) - Abstract
Abstract: Accurate estimation of the reliability of a system is a challenging task when only limited samples are available. This paper presents the use of the bootstrap method to safely estimate the reliability, with the objective of obtaining a conservative but not overly conservative estimate. The performance of the bootstrap method is compared with alternative conservative estimation methods, based on biasing the distribution of system response. The relationship between accuracy and conservativeness of the estimates is explored for normal and lognormal distributions. In particular, detailed results are presented for the case where the goal is to have a 95% likelihood of being conservative. The bootstrap approach is found to be more accurate for this level of conservativeness. We explore the influence of sample size and target probability of failure on the quality of estimates, and show that for a given level of conservativeness, small sample sizes and low probabilities of failure can lead to a high likelihood of large overestimation. However, this likelihood can be reduced by increasing the sample size. Finally, the conservative approach is applied to the reliability-based optimization of a composite panel under thermal loading. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
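One way the bootstrap idea in the abstract above might look in code (an illustrative sketch assuming a normally distributed response and using SciPy; not the paper's exact procedure or data):

# Bootstrap percentile used as a conservative estimate of a failure
# probability computed from a small sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
limit = 3.0                                        # capacity / allowable
sample = rng.normal(1.0, 0.8, size=30)             # limited response data

def pf_estimate(x):
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    return stats.norm.sf(limit, loc=mu, scale=sigma)   # P(response > limit)

boot = np.array([
    pf_estimate(rng.choice(sample, size=len(sample), replace=True))
    for _ in range(2000)
])
print("plain estimate:        ", pf_estimate(sample))
print("95% conservative bound:", np.quantile(boot, 0.95))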
25. Assessing the value of another cycle in Gaussian process surrogate-based optimization.
- Author
-
Nestor Queipo, Alexander Verde, Salvador Pintos, and Raphael Haftka
- Subjects
SURROGATE-based optimization, GAUSSIAN processes, ENGINEERING design, ANALYSIS of covariance, MATRICES (Mathematics), MULTIDISCIPLINARY design optimization, MATHEMATICAL models - Abstract
Abstract: Surrogate-based optimization (SBO) for engineering design, popular in the optimization of complex engineering systems (e.g., aerospace, automotive, oil industries), proceeds in design cycles. Each cycle consists of the analysis of a number of designs, the fitting of a surrogate, optimization based on the surrogate, and exact analysis at the design obtained by the optimization. However, due to time and cost constraints, the design optimization is usually limited to a small number of cycles, each with a substantial number of simulations (short-cycle SBO), and is rarely allowed to proceed to convergence. This paper takes a first step towards establishing a statistically rigorous procedure for assessing the merit of investing in another cycle of analysis versus accepting the present best solution. The proposed approach assumes that the set of locations for the next cycle is given, and it relies on: (1) a covariance model obtained from available input/output data, (2) a Gaussian process-based surrogate model, and (3) the fact that the predictions in the next cycle are a realization of a Gaussian process with a covariance matrix and mean specified using (1) and (2). Its effectiveness was established using descriptive and inferential statistics in the context of a well-known test function and the optimization of an alkali-surfactant-polymer flooding of petroleum reservoirs. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
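A rough sketch of the ingredient the abstract above relies on: treating the next cycle's outcomes as a realization of the conditioned Gaussian process and sampling it (kernel, hyperparameters, and data below are assumptions for illustration, not the paper's settings):

# Estimate the probability that one more cycle of analyses at given candidate
# points improves on the current best, by sampling from the conditioned GP.
import numpy as np

def rbf(A, B, length=0.3, var=1.0):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(12, 1))                 # points analysed so far
y = np.sin(6 * X[:, 0]) + 0.1 * rng.normal(size=12) # their (expensive) outputs
X_next = rng.uniform(0, 1, size=(5, 1))             # planned next-cycle points

Kxx = rbf(X, X) + 1e-8 * np.eye(len(X))
Ksx = rbf(X_next, X)
Kss = rbf(X_next, X_next)
solve = np.linalg.solve
mean = Ksx @ solve(Kxx, y)                          # GP conditional mean
cov = Kss - Ksx @ solve(Kxx, Ksx.T) + 1e-8 * np.eye(len(X_next))

best_so_far = y.min()
draws = rng.multivariate_normal(mean, cov, size=5000)
p_improve = np.mean(draws.min(axis=1) < best_so_far)
print("probability the next cycle improves the best value:", p_improve)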
26. Multiple surrogates: how cross-validation errors can help us to obtain the best predictor.
- Author
-
Felipe Viana, Raphael Haftka, and Valder Steffen
- Subjects
SURROGATE-based optimization, ERROR analysis in mathematics, INTEGRAL equations, MATHEMATICAL models, MATHEMATICAL analysis, ENGINEERING mathematics - Abstract
Abstract: Surrogate models are commonly used to replace expensive simulations of engineering problems. Frequently, a single surrogate is chosen based on past experience. This approach has generated a collection of papers comparing the performance of individual surrogates. Previous work has also shown that fitting multiple surrogates and picking one based on cross-validation errors (PRESS in particular) is a good strategy, and that cross-validation errors may also be used to create a weighted surrogate. In this paper, we discussed how PRESS (obtained either from the leave-one-out or from the k-fold strategies) is employed to estimate the RMS error, and whether to use the best PRESS solution or a weighted surrogate when a single surrogate is needed. We also studied the minimization of the integrated square error as a way to compute the weights of the weighted average surrogate. We found that it pays to generate a large set of different surrogates and then use PRESS as a criterion for selection. We found that (1) in general, PRESS is good for filtering out inaccurate surrogates; and (2) with a sufficient number of points, PRESS may identify the best surrogate of the set. Hence the use of cross-validation errors for choosing a surrogate and for calculating the weights of weighted surrogates becomes more attractive in high dimensions (when a large number of points is naturally required). However, it appears that the potential gains from using weighted surrogates diminish substantially in high dimensions. We also examined the utility of using all the surrogates for forming the weighted surrogates versus using a subset of the most accurate ones. This decision is shown to depend on the weighting scheme. Finally, we also found that PRESS as obtained through the k-fold strategy successfully estimates the RMSE. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
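A minimal sketch of using leave-one-out PRESS to pick a surrogate and to form inverse-PRESS weights, as described in the abstract above (simple polynomial surrogates and an arbitrary test function, for illustration only):

# Leave-one-out PRESS for two candidate surrogates; lowest PRESS is selected,
# and inverse-PRESS weights define a weighted-average surrogate.
import numpy as np

def fit_predict(deg, X_train, y_train, X_test):
    coef = np.polyfit(X_train, y_train, deg)
    return np.polyval(coef, X_test)

def press_rms(deg, X, y):
    errors = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        pred = fit_predict(deg, X[mask], y[mask], X[i:i + 1])[0]
        errors.append(y[i] - pred)
    return np.sqrt(np.mean(np.square(errors)))

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 15))
y = np.cos(4 * X) + 0.05 * rng.normal(size=15)

press = {deg: press_rms(deg, X, y) for deg in (1, 3)}       # two candidate surrogates
best = min(press, key=press.get)
weights = {deg: (1 / press[deg]) / sum(1 / p for p in press.values()) for deg in press}
print("PRESS RMS:", press, "best degree:", best, "weights:", weights)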
27. Approximate probabilistic optimization using exact-capacity-approximate-response-distribution (ECARD).
- Author
-
Sunil Kumar, Richard Pippy, Erdem Acar, Nam Kim, and Raphael Haftka
- Subjects
MULTIVARIATE analysis, ARCHITECTURAL design, MATHEMATICAL statistics, RANDOM variables - Abstract
Abstract: Probabilistic structural design deals with uncertainties in response (e.g. stresses) and capacity (e.g. failure stresses). The calculation of the structural response is typically expensive (e.g., finite element simulations), while the capacity is usually available from tests. Furthermore, the random variables that influence response and capacity are often disjoint. In previous work we have shown that this disjoint property can be used to reduce the cost of obtaining the probability of failure via Monte Carlo simulations. In this paper we propose to use this property for an approximate probabilistic optimization based on exact capacity and approximate response distributions (ECARD). In Approximate Probabilistic Optimization Using ECARD, the change in response distribution is approximated as the structure is re-designed while the capacity distribution is kept exact, thus significantly reducing the number of expensive response simulations. ECARD may be viewed as an extension of SORA (Sequential Optimization and Reliability Assessment), which proceeds with deterministic optimization iterations. In contrast, ECARD has probabilistic optimization iterations, but in each iteration, the response distribution is approximated so as not to require additional response calculations. The use of inexpensive probabilistic optimization allows easy incorporation of system reliability constraints and optimal allocation of risk between failure modes. The method is demonstrated using a beam problem and a ten-bar truss problem. The former allocates risk between two different failure modes, while the latter allocates risk between members. It is shown that ECARD provides most of the improvement from risk re-allocation that can be obtained from full probabilistic optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
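The "disjoint response/capacity variables" idea mentioned in the abstract above can be sketched as a separable Monte Carlo estimator (the distributions below are made up, and this only illustrates the exact-capacity side, not the full ECARD procedure):

# With response R and capacity C driven by separate random variables,
# Pf = P(C < R) = E_R[F_C(R)]: average the exact capacity CDF over response samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
response = rng.normal(100.0, 10.0, size=200)        # few expensive response samples
capacity_cdf = stats.norm(150.0, 12.0).cdf          # capacity distribution known from tests

pf_separable = np.mean(capacity_cdf(response))      # conditional-expectation estimator
print("estimated probability of failure:", pf_separable)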
28. Ensemble of surrogates.
- Author
-
Tushar Goel, Raphael Haftka, Wei Shyy, and Nestor Queipo
- Subjects
TECHNICAL specifications, EXPERIMENTAL design, ENGINEERING, INDUSTRIAL arts - Abstract
Abstract: The custom in surrogate-based modeling of complex engineering problems is to fit one or more surrogate models and select the one surrogate model that performs best. In this paper, we extend the utility of an ensemble of surrogates to (1) identify regions of possible high errors at locations where predictions of surrogates widely differ, and (2) provide a more robust approximation approach. We explore the possibility of using the best surrogate or a weighted average surrogate model instead of individual surrogate models. The weights associated with each surrogate model are determined based on the errors in surrogates. We demonstrate the advantages of an ensemble of surrogates using analytical problems and one engineering problem. We show that for a single problem the choice of test surrogate can depend on the design of experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
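A minimal sketch of the first use of the ensemble described above, flagging regions where the surrogates disagree (the surrogate degrees and test function are arbitrary choices for illustration):

# Spread of the ensemble's predictions as an indicator of possible high error.
import numpy as np

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 12))
y = np.sin(5 * X)

grid = np.linspace(0, 1, 101)
preds = np.array([np.polyval(np.polyfit(X, y, deg), grid) for deg in (1, 2, 4)])
spread = preds.std(axis=0)                    # large spread = surrogates disagree
print("largest disagreement near x =", grid[np.argmax(spread)])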
29. Optimization with non-homogeneous failure criteria like Tsai–Wu for composite laminates.
- Author
-
Albert Groenwold and Raphael Haftka
- Published
- 2006
- Full Text
- View/download PDF
30. Decomposition theory for multidisciplinary design optimization problems with mixed integer quasiseparable subsystems.
- Author
-
Raphael Haftka and Layne Watson
- Abstract
Numerous hierarchical and nonhierarchical decomposition strategies for the optimization of large scale systems, comprised of interacting subsystems, have been proposed. With a few exceptions, all of these strategies are essentially heuristic in nature. Recent work considered a class of optimization problems, called quasiseparable, narrow enough for a rigorous decomposition theory, yet general enough to encompass many large scale engineering design problems. The subsystems for these problems involve local design variables and global system variables, but no variables from other subsystems. The objective function is a sum of a global system criterion and the subsystems’ criteria. The essential idea is to give each subsystem a budget and global system variable values, and then ask the subsystems to independently maximize their constraint margins. Using these constraint margins, a system optimization then adjusts the values of the system variables and subsystem budgets. The subsystem margin problems are totally independent, always feasible, and could even be done asynchronously in a parallel computing context. An important detail is that the subsystem tasks, in practice, would be to construct response surface approximations to the constraint margin functions, and the system level optimization would use these margin surrogate functions. The present paper extends the quasiseparable necessary conditions for continuous variables to include discrete subsystem variables, although the continuous necessary and sufficient conditions do not extend to include integer variables. [ABSTRACT FROM AUTHOR]
- Published
- 2006
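In symbols, the two-level structure described in the abstract above might be written as follows (the notation is ours, not the paper's):

\begin{align*}
\text{subsystem } i:\quad & s_i(y,\,b_i) \;=\; \max_{x_i}\ \min_{j}\; g_{ij}(y, x_i)
   \quad\text{subject to}\quad f_i(y, x_i) \le b_i,\\
\text{system level}:\quad & \min_{y,\,b}\; f_0(y) + \sum_i b_i
   \quad\text{subject to}\quad s_i(y,\,b_i) \ge 0 \ \ \text{for all } i,
\end{align*}

where y are the global system variables, x_i the local variables of subsystem i, b_i its budget, f_0 and f_i the system and subsystem criteria, and g_{ij} the constraint margins of subsystem i; in practice the margin functions s_i would be replaced by response surface surrogates at the system level.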
31. A double-distribution statistical algorithm for composite laminate optimization.
- Author
-
Laurent Grosset, Rodolphe LeRiche, and Raphael Haftka
- Published
- 2006
- Full Text
- View/download PDF
32. Integrated nonlinear structural analysis and design
- Author
-
RAPHAEL HAFTKA
- Published
- 1988
- Full Text
- View/download PDF
33. A Bayesian Framework for Orthotropic Elastic Constants Identification Accounting for both Error and Variability
- Author
-
Christian Gogu, Raphael Haftka, Rodolphe Le Riche, Jérôme Molimard, and Alain Vautrin
- Subjects
Elastic constants identification, Plate vibration, Bayesian approach, [SPI.MECA] Engineering Sciences [physics]/Mechanics [physics.med-ph], [INFO.INFO-MO] Computer Science [cs]/Modeling and Simulation, Composites, [SPI.MAT] Engineering Sciences [physics]/Materials - Abstract
http://218.196.244.90/comp_meeting/IACM-ECCOMAS08/pdfs/a1601.pdf; International audience; Identifying parameters of a model using experimental data has been extensively studied in various areas, including for determining elastic material properties from strain measurements or modal vibration data. In order to find the parameters that make the model agree best with the experiments, the most widely used method is based on minimizing the least squares error between the experimental data and the model predictions. The least squares method, however, leads to suboptimal results under a variety of conditions. In a previous study by the same authors [1] it has been shown on two simple test problems that the least squares method can be negatively affected by differing magnitudes of the experimental data, and by differing uncertainty and correlation among the experimental data. Multiple sophistications to the least squares method exist in the literature, addressing these issues. Another natural way to address the previous shortcomings is through statistical frameworks, based on maximum likelihood or on Bayes' rule. The Bayesian framework is more general since it can include prior knowledge. Isenberg proposed the application of the Bayesian framework for parameter estimation in 1979 [2] and several articles have been published since on the application of this approach to frequency or modal identification in particular, i.e. identifying material properties from vibration test data [3],[4]. The statistical nature of the Bayesian approach implies that it can intrinsically handle the shortcomings of the classical least squares method discussed earlier. However, it also implies numerical and computational cost issues since we usually have to carry out large numbers of simulations. Different types of uncertainty can be considered in a Bayesian framework, with error and variability representing two major categories. By error we mean either measurement error, which includes systematic as well as random error (or noise), or modeling error, that is, the error induced by using a certain model to represent the actual experiment. Variability, on the other hand, relates to uncertainty of fixed input parameters of our model, such as dimensions of the plate or plies, exact fiber volume fraction or fiber orientation. The uncertainty in the knowledge of these fixed parameters, unlike the error uncertainty, can be accurately propagated through the model to determine its effects on the identified parameters. However, this propagation process is particularly computationally expensive. Previous studies such as [4] that applied the Bayesian approach for material properties identification from vibration test data of a beam considered error but not the more expensive variability. In our previous study [1] we did consider both error and variability; however, the Bayesian approach was applied to a simple vibration test problem for which we used an analytical model. The aim of this previous study was to illustrate, on an example where computational issues do not cloud the process, how the Bayesian approach can handle variable uncertainties as well as correlation in the measurements. In the current study we are interested in what happens when we apply a Bayesian updating procedure, accounting for both error and variability, to an actual identification problem, where inevitably we have to deal with numerical and computational cost issues.
The chosen application consists in identifying orthotropic elastic properties from frequency response data of a free vibrating composite plate. The focus of the study is on the numerical side, so we chose to use experimental results already available in the literature [5]. The final contribution will discuss the difficulties of accounting for both error and variability in a Bayesian approach and the solutions retained to tackle this problem, applied to the identification of orthotropic material properties from vibration test data.
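In symbols, the updating rule underlying the approach in the abstract above (our notation, not the paper's):

\[
\pi\!\left(E \mid \mathbf{f}^{\mathrm{meas}}\right) \;\propto\;
\pi\!\left(\mathbf{f}^{\mathrm{meas}} \mid E\right)\,\pi_{\mathrm{prior}}(E),
\]

where E collects the orthotropic ply constants and the likelihood of the measured frequencies is obtained by propagating measurement and model error, together with the variability of the other input parameters (plate dimensions, density, etc.), through the vibration model for each candidate E.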
34. Dimensionality reduction of full fields by the principal components analysis
- Author
-
Christian Gogu, Raphael Haftka, Rodolphe Le Riche, Jérôme Molimard, and Alain Vautrin
- Subjects
optical full field methods, principal component analysis, [SPI.MECA] Engineering Sciences [physics]/Mechanics [physics.med-ph], [INFO.INFO-MO] Computer Science [cs]/Modeling and Simulation, dimensionality reduction, [SPI.MAT] Engineering Sciences [physics]/Materials - Abstract
http://www.iccm-central.org/Proceedings/ICCM17proceedings/Themes/Behaviour/FULL%20FIELD%20MEASU%20METH/F14.7%20Gogu.pdf; International audience; Principal component analysis is applied to simulated full-field displacement maps of 4569 measurement points, thus allowing representation of any field within a certain domain as a linear combination of five basis vectors without losing significant information. A dimensionality reduction from 4569 to only 5 is thus achieved.
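A minimal sketch of the reduction described above, using PCA via the SVD and keeping five modes (the snapshots here are random stand-ins for the simulated displacement fields):

# PCA of full-field maps: 4569 values per field reduced to 5 coefficients.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 4569, 200
snapshots = rng.normal(size=(n_snapshots, n_points))       # stand-in for simulated fields

mean_field = snapshots.mean(axis=0)
centered = snapshots - mean_field
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
basis = Vt[:5]                                              # five principal components

new_field = snapshots[0]
coeffs = basis @ (new_field - mean_field)                   # 4569 values -> 5 coefficients
reconstruction = mean_field + coeffs @ basis
print("reduced representation:", coeffs)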
35. IMPROVING ACCURACY AND COMPENSATING FOR UNCERTAINTY IN SURROGATE MODELING
- Author
-
Victor Picheny, Raphael Haftka, Alain Vautrin, and Florent Breuil
- Subjects
[MATH.MATH-GM] Mathematics [math]/General Mathematics [math.GM], Design of Experiments, Metamodels, Reliability, Conservative estimation - Abstract
This dissertation addresses the issue of dealing with uncertainties when surrogate models are used to approximate costly numerical simulators. First, we propose alternatives to compensate for the surrogate model errors in order to obtain safe predictions with minimal impact on accuracy (conservative strategies). The efficiency of the different methods is analyzed with the help of engineering problems, and the methods are applied to the optimization of a composite laminate under reliability constraints. We also propose two contributions to the field of design of experiments (DoE) in order to minimize the uncertainty of surrogate models. Firstly, we developed a sequential method to build DoEs that minimize the error in a target region of the design space. Secondly, we proposed optimal sampling strategies when simulators with noisy responses are considered.
- Published
- 2009
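One simple conservative strategy in the spirit of the abstract above might look as follows (illustrative only; not necessarily the approach taken in the dissertation, and the surrogate and data are arbitrary):

# Bias a surrogate's prediction upward by a margin derived from its
# leave-one-out cross-validation errors, trading a little accuracy for safety.
import numpy as np

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 20))
y = np.exp(X) + 0.05 * rng.normal(size=20)

loo_errors = []
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    coef = np.polyfit(X[mask], y[mask], 2)
    loo_errors.append(abs(y[i] - np.polyval(coef, X[i])))

margin = np.quantile(loo_errors, 0.90)                 # safety margin from CV errors
coef = np.polyfit(X, y, 2)
x_new = 0.35
print("plain prediction:       ", np.polyval(coef, x_new))
print("conservative prediction:", np.polyval(coef, x_new) + margin)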