158 results for "Renaud, John E."
Search Results
2. Reliability Based Design Optimization of Robotic System Dynamic Performance
- Author
- Bowling, Alan, Renaud, John E., Patel, Neal M., Newkirk, Jeremy, and Agarwal, Harish
- Published
- 2006
3. Mechanical behavior of acrylonitrile butadiene styrene fused deposition materials modeling
- Author
- Rodríguez, José F., Thomas, James P., and Renaud, John E.
- Published
- 2003
- Full Text
- View/download PDF
4. Mechanical behavior of acrylonitrile butadiene styrene (ABS) fused deposition materials. Experimental investigation
- Author
- Rodríguez, José F., Thomas, James P., and Renaud, John E.
- Published
- 2001
- Full Text
- View/download PDF
5. Characterization of the mesostructure of fused‐deposition acrylonitrile‐butadiene‐styrene materials
- Author
- Rodriguez, Jose F., Thomas, James P., and Renaud, John E.
- Published
- 2000
- Full Text
- View/download PDF
6. Anatomic variation in the elastic anisotropy of cortical bone tissue in the human femur
- Author
- Espinoza Orías, Alejandro A., Deuerling, Justin M., Landrigan, Matthew D., Renaud, John E., and Roeder, Ryan K.
- Published
- 2009
- Full Text
- View/download PDF
7. A fully anisotropic hierarchical hybrid cellular automaton algorithm to simulate bone remodeling
- Author
- Penninger, Charles L., Patel, Neal M., Niebur, Glen L., Tovar, Andrés, and Renaud, John E.
- Published
- 2008
- Full Text
- View/download PDF
8. Homotopy curve tracking in approximate interior point optimization
- Author
- Pérez, Victor M., Renaud, John E., and Watson, Layne T.
- Published
- 2009
- Full Text
- View/download PDF
9. Crashworthiness design using topology optimization
- Author
- Patel, Neal M., Kang, Byung-Soo, Renaud, John E., and Tovar, Andres
- Subjects
Mathematical optimization -- Usage, Vehicles -- Design and construction, Engineering and manufacturing industries, Science and technology
- Abstract
Crashworthiness design is an evolving discipline that combines vehicle crash simulation and design synthesis. The goal is to increase passenger safety subject to manufacturing cost constraints. The crashworthiness design process requires modeling of the complex interactions involved in a crash event. Current approaches utilize a parametrized optimization approach that requires response surface approximations of the design space. This is due to the expense of numerical crash simulations and the high nonlinearity and noise of the design space. These methodologies usually require a significant effort to determine an initial design concept. In this paper, a heuristic approach to continuum-based topology optimization is developed for crashworthiness design. The methodology utilizes the cellular automata paradigm to generate three-dimensional design concepts. Furthermore, a constraint on maximum displacement is implemented to maintain a desired performance of the structures synthesized. Example design problems are used to demonstrate that the proposed methodology converges to a final topology in an efficient manner. [DOI: 10.1115/1.3116256]
- Published
- 2009
10. Topology synthesis of extrusion-based nonlinear transient designs
- Author
- Patel, Neal M., Penninger, Charles L., and Renaud, John E.
- Subjects
Algorithms -- Usage, Extrusion process -- Research, Algorithm, Engineering and manufacturing industries, Science and technology
- Abstract
Many practical structural designs require that the structure is easily manufactured. Design concepts synthesized using conventional topology optimization methods are typically not easily manufactured, in that multiple finishing processes are required to construct the component. A manufacturing technique that requires only minimal effort is extrusion. Extrusion is a manufacturing process used to create objects of a fixed cross-sectional profile. The result of using this process is lower costs for the manufacture of the final product. In this paper, a hybrid cellular automaton algorithm is developed to synthesize constant cross section structures that are subjected to nonlinear transient loading. The novelty of the proposed method is the ability to generate constant cross section topologies for plastic-dynamic problems since the issue of complex gradients can be avoided. This methodology is applied to extrusions with a curved sweep along the direction of extrusion as well. Three-dimensional examples are presented to demonstrate the efficiency of the proposed methodology in synthesizing these structures. Both static and dynamic loading cases are studied. [DOI: 10.1115/1.3116255]
- Published
- 2009
11. A variable fidelity model management framework for designing multiphase materials
- Author
- Mejia-Rodriguez, Gilberto, Renaud, John E., and Tomar, Vikas
- Subjects
Finite element method -- Usage, Fracture mechanics -- Research, Strength of materials -- Measurement, Strains and stresses -- Measurement, Stress relaxation (Materials) -- Measurement, Stress relieving (Materials) -- Measurement, Engineering design -- Research, Engineering and manufacturing industries, Science and technology
- Abstract
Research applications involving design tool development for multiphase material design are at an early stage of development. The computational requirements of advanced numerical tools for simulating material behavior such as the finite element method (FEM) and the molecular dynamics (MD) method can prohibit direct integration of these tools in a design optimization procedure where multiple iterations are required. One, therefore, requires a design approach that can incorporate multiple simulations (multiphysics) of varying fidelity such as FEM and MD in an iterative model management framework that can significantly reduce design cycle times. In this research a material design tool based on a variable fidelity model management framework is presented. In the variable fidelity material design tool, complex 'high-fidelity' FEM analyses are performed only to guide the analytic 'low-fidelity' model toward the optimal material design. The tool is applied to obtain the optimal distribution of a second phase, consisting of silicon carbide (SiC) fibers, in a silicon nitride (Si3N4) matrix to obtain continuous fiber SiC-Si3N4 ceramic composites with optimal fracture toughness. Using the variable fidelity material design tool in application to two test problems, a reduction in design cycle times of between 40% and 80% is achieved as compared to using a conventional design optimization approach that exclusively calls the high-fidelity FEM. The optimal design obtained using the variable fidelity approach is the same as that obtained using the conventional procedure. The variable fidelity material design tool is extensible to multiscale multiphase material design by using MD based material performance analyses as the high-fidelity analyses in order to guide low-fidelity continuum level numerical tools such as the FEM or finite-difference method with significant savings in the computational time.
[DOI: 10.1115/1.2965361] Keywords: CFCC, FEM, fidelity, fracture toughness, optimal, scaling, stress intensity
- Published
- 2008
12. Comparative study of topology optimization techniques
- Author
- Patel, Neal M., Tillotson, Donald, Renaud, John E., Tovar, Andres, and Izui, Kazuhiro
- Subjects
Algorithms -- Properties, Algebraic topology -- Research, Topology -- Research, Mathematical optimization -- Research, Control systems -- Research, Algorithm, Aerospace and defense industries, Business
- Abstract
This paper presents a comparison between three continuum-based topology optimization methods: the hybrid cellular automaton method, the optimality criteria method, and the method of moving asymptotes. The purpose of the study is to highlight the differences between the three. The optimality criteria method and the method of moving asymptotes are well established in topology optimization. The hybrid cellular automaton method is a recently developed gradient-free technique that combines local design rules based on the cellular automaton paradigm with finite element analysis. The closed-loop controllers used in the hybrid cellular automaton method modify the mass distribution in the design domain to find an optimum material layout. The hybrid cellular automaton and optimality criteria methods and the method of moving asymptotes are described and applied in a comparative study to three sample problems. The influence of different algorithm control parameters is shown in this work. The paper demonstrates that, for the sample problems presented, the hybrid cellular automaton method generally required the fewest iterations to converge to a solution compared with the optimality criteria method and the method of moving asymptotes. The final topologies generated using the hybrid cellular automaton method typically had the lowest compliance and exhibited the fewest intermediate densities at the solution.
- Published
- 2008
13. An inverse-measure-based unilevel architecture for reliability-based design optimization
- Author
- Agarwal, Harish, Mozumder, Chandan K., Renaud, John E., and Watson, Layne T.
- Published
- 2007
- Full Text
- View/download PDF
14. Update strategies for kriging models used in variable fidelity optimization
- Author
- Gano, Shawn E., Renaud, John E., Martin, Jay D., and Simpson, Timothy W.
- Published
- 2006
- Full Text
- View/download PDF
15. A study using Monte Carlo Simulation for failure probability calculation in Reliability-Based Optimization
- Author
- Padmanabhan, Dhanesh, Agarwal, Harish, Renaud, John E., and Batill, Stephen M.
- Published
- 2006
- Full Text
- View/download PDF
16. Reliability-based design optimization of robotic system dynamic performance
- Author
- Bowling, Alan P., Renaud, John E., Newkirk, Jeremy T., Patel, Neal M., and Agarwal, Harish
- Subjects
Dynamic programming -- Design and construction, Robotics -- Mechanical properties, Engineering and manufacturing industries, Science and technology
- Abstract
In this investigation a robotic system's dynamic performance is optimized for high reliability under uncertainty. The dynamic capability equations (DCE) allow designers to predict the dynamic performance of a robotic system for a particular configuration and reference point on the end effector (i.e., point design). Here the DCE are used in conjunction with a reliability-based design optimization (RBDO) strategy in order to obtain designs with robust dynamic performance with respect to the end-effector reference point. In this work a unilevel performance measure approach is used to perform RBDO. This is important for the reliable design of robotic systems in which a solution to the DCE is required for each constraint call. The method is illustrated on a robot design problem. [DOI: 10.1115/1.2437804]
- Published
- 2007
17. Optimality conditions of the hybrid cellular automata for structural optimization
- Author
- Tovar, Andres, Patel, Neal M., Kaushik, Amit K., and Renaud, John E.
- Subjects
Algorithms -- Usage, Algebraic topology -- Methods, Topology -- Methods, Algorithm, Aerospace and defense industries, Business
- Abstract
The hybrid cellular automaton method has been successfully applied to topology optimization using a uniform strain energy density distribution approach. In this work, a new set of design rules is derived from the first-order optimality conditions of a multi-objective problem. In this new formulation, the final topology is derived to minimize both mass and strain energy. In the hybrid cellular automaton algorithm, local design rules based on the cellular automaton paradigm are used to efficiently drive the design to optimality. In addition to the control-based techniques previously introduced, a new ratio technique is derived in this investigation. This work also compares the performance of the control strategies and the ratio technique.
- Published
- 2007
18. Topology optimization using a hybrid cellular automaton method with local control rules
- Author
- Tovar, Andres, Patel, Neal M., Niebur, Glen L., Sen, Mihir, and Renaud, John E.
- Subjects
Algorithms -- Usage, Cellular automata -- Research, Structural optimization -- Research, Algorithm, Engineering and manufacturing industries, Science and technology
- Abstract
The hybrid cellular automaton (HCA) algorithm is a methodology developed to simulate the process of structural adaptation in bones. This methodology incorporates a distributed control loop within a structure in which ideally localized sensor cells activate local processes of the formation and resorption of material. With a proper control strategy, this process drives the overall structure to an optimal configuration. The controllers developed in this investigation include two-position, proportional, integral and derivative strategies. The HCA algorithm combines elements of the cellular automaton (CA) paradigm with finite element analysis (FEA). This methodology has proved to be computationally efficient to solve topology optimization problems. The resulting optimal structures are free of numerical instabilities such as the checkerboarding effect. This investigation presents the main features of the HCA algorithm and the influence of different parameters applied during the iterative optimization process. [DOI: 10.1115/1.2336251]
- Published
- 2006
19. New decoupled framework for reliability-based design optimization
- Author
- Agarwal, Harish and Renaud, John E.
- Subjects
Aeronautics -- Research, Control systems -- Research, Aerospace and defense industries, Business
- Abstract
Traditionally, reliability-based design optimization (RBDO) has been formulated as a nested optimization problem. The inner loop generally involves the solution of optimization problems for computing the probabilities of failure of the critical failure modes, and the outer loop performs optimization by varying the decision variables. Such formulations are by nature computationally intensive, requiring numerous function and constraint evaluations. To alleviate this problem, researchers have developed iterative decoupled RBDO approaches. These methods perform deterministic optimization and reliability assessment in a sequential manner until a consistent reliability-based design is obtained. The sequential methods are attractive because a consistent reliable design can be obtained at considerably lower computational cost. However, the designs obtained using these decoupled approaches are not guaranteed to be the true optimum. A new decoupled method for RBDO is developed in this investigation. Postoptimal sensitivities of the most probable point (MPP) of failure with respect to the decision variables are introduced to update the MPPs during the deterministic optimization phase of the proposed approach. A damped Broyden-Fletcher-Goldfarb-Shanno method is used to significantly reduce the cost of obtaining these sensitivities. It is the use of postoptimal sensitivities that differentiates this new decoupled RBDO approach from previous efforts.
- Published
- 2006
20. Implicit uncertainty propagation for robust collaborative optimization
- Author
- Gu, Xiaoyu, Renaud, John E., and Penninger, Charles L.
- Subjects
Engineering design -- Research, Engineering and manufacturing industries, Science and technology
- Abstract
In this research we develop a mathematical construct for estimating uncertainties within the bilevel optimization framework of collaborative optimization. The collaborative optimization strategy employs decomposition techniques that decouple analysis tools in order to facilitate disciplinary autonomy and parallel execution. To ensure consistency of the physical artifact being designed, interdisciplinary consistency constraints are introduced at the system level. These constraints implicitly enforce multidisciplinary consistency when satisfied. The decomposition employed in collaborative optimization prevents the use of explicit propagation techniques for estimating uncertainties of system performance. In this investigation, we develop and evaluate an implicit method for estimating system performance uncertainties within the collaborative optimization framework. The methodology accounts for both the uncertainty associated with design inputs and the uncertainty of performance predictions from other disciplinary simulation tools. These implicit uncertainty estimates are used as the basis for a new robust collaborative optimization (RCO) framework. The bilevel robust optimization strategy developed in this research provides for disciplinary autonomy in system design, while simultaneously accounting for performance uncertainties to ensure feasible robustness of the resulting system. The method is effective in locating a feasible robust optimum in application studies involving a multidisciplinary aircraft concept sizing problem. The system-level consistency constraint formulation used in this investigation avoids the computational difficulties normally associated with convergence in collaborative optimization. The consistency constraints are formulated to have the inherent properties necessary for convergence of general nonconvex problems when performing collaborative optimization. 
[DOI: 10.1115/1.2205869] Keywords: collaborative optimization, robust optimization, uncertainty estimation
- Published
- 2006
21. Hybrid variable fidelity optimization by using a kriging-based scaling function
- Author
- Gano, Shawn E., Renaud, John E., and Sanders, Brian
- Subjects
Aerospace and defense industries, Business
- Abstract
Solving design problems that rely on very complex and computationally expensive calculations using standard optimization methods might not be feasible given design cycle time constraints. Variable fidelity methods address this issue by using lower-fidelity models and a scaling function to approximate the higher-fidelity models in a provably convergent framework. In the past, scaling functions have mainly been either first-order multiplicative or additive corrections. These are being extended to second order. In this investigation variable metric approaches for calculating second-order scaling information are developed. A kriging-based scaling function is introduced to better approximate the high-fidelity response on a more global level. An adaptive hybrid method is also developed in this investigation. The adaptive hybrid method combines the additive and multiplicative approaches so that the designer does not have to determine which is more suitable prior to optimization. The methodologies developed in this research are compared to existing methods using two demonstration problems. The first problem is analytic, whereas the second involves the design of a supercritical high-lift airfoil. The results demonstrate that the kriging-based scaling methods improve computational expense by lowering the number of high-fidelity function calls required for convergence. The results also indicate the hybrid method is both robust and effective.
- Published
- 2005
22. Design of fused-deposition ABS components for stiffness and strength
- Author
- Rodriguez, Jose F., Thomas, James P., and Renaud, John E.
- Subjects
Machine parts -- Design and construction, Machine parts -- Research, Engineering and manufacturing industries, Science and technology
- Abstract
The high degree of automation of Solid Freeform Fabrication (SFF) processing and its ability to create geometrically complex parts to precise dimensions provide it with a unique potential for low volume production of rapid tooling and functional components. A factor of significant importance in the above applications is the capability of producing components with adequate mechanical performance (e.g., stiffness and strength). This paper introduces a strategy for optimizing the design of Fused-Deposition Acrylonitrile-Butadiene-Styrene (FD-ABS; P400) components for stiffness and strength under a given set of loading conditions. In this strategy, a mathematical model of the structural system is linked to an approximate minimization algorithm to find the settings of select manufacturing parameters, which optimize the mechanical performance of the component. The methodology is demonstrated by maximizing the load carrying capacity of a two-section cantilevered FD-ABS beam. [DOI 10.1115/1.1582499]
- Published
- 2003
23. Adaptive experimental design for construction of response surface approximations
- Author
- Perez, Victor M., Renaud, John E., and Watson, Layne T.
- Subjects
Structural engineering -- Research, Sequential analysis -- Usage, Aerospace and defense industries, Business
- Abstract
Sequential approximate optimization (SAO) is a class of methods available for the multidisciplinary design optimization of complex systems that are composed of several disciplines coupled together. One of the approaches used for SAO is based on a quadratic response surface approximation, where zero- and first-order information are required at the current design. In this method, designers must generate and query a database of order O(n²) to compute the second-order terms of the quadratic response surface approximation. As the number of design variables grows, the computational cost of generating the required database becomes a concern. An adaptive experimental design (AED) approach that requires just O(n) parameters for constructing a second-order approximation is presented. This is accomplished by transforming the matrix of second-order terms into the canonical form. The AED method periodically requires an order O(n²) update of the second-order approximation to maintain accuracy. Results show that the proposed approach dramatically reduces the total number of calls to the simulation tools during the SAO process.
- Published
- 2002
24. Decision-based Collaborative Optimization
- Author
- Gu, Xiaoyu, Renaud, John E., Ashe, Leah M., Batill, Stephen M., Budhiraja, Amrjit S., and Krajewski, Lee J.
- Subjects
Industrial design -- Models, Engineering and manufacturing industries, Science and technology
- Abstract
In this research a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision based design framework for non-deterministic optimization. To date CO strategies have been developed for use in application to deterministic systems design problems. In this research the decision based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework as originally proposed provides a single level optimization strategy that combines engineering decisions with business decisions in a single level optimization. By transforming this framework for use in collaborative optimization one can decompose the business and engineering decision making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO) the business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design. [DOI: 10.1115/1.1432991] Keywords: Collaborative Optimization (CO), Decision Based Design (DBD), Uncertainty, Optimization
- Published
- 2002
25. Interactive multiobjective optimization procedure
- Author
- Tappeta, Ravindra V. and Renaud, John E.
- Subjects
Structural optimization -- Methods, Structural design -- Methods, Multiple criteria decision making -- Methods, Aerospace and defense industries, Business
- Abstract
A multiobjective system design and optimization procedure that takes into account the designer's preferences during the design process has been developed. It uses an aspiration-level approach to generate Pareto points and provides the designer with a means of investigating design parameters around a given Pareto point. The designer also has access to Pareto sensitivity information and the Pareto surface approximation. The proposed technique is demonstrated on a problem with simple analytical objective and constraint functions and on a high-performance, low-cost 10-bar structure with multiple objectives.
- Published
- 1999
26. Optimum design of an interbody implant for lumbar spine fixation
- Author
- Tovar, Andrés, Gano, Shawn E., Mason, James J., and Renaud, John E.
- Published
- 2005
- Full Text
- View/download PDF
27. Uncertainty quantification using evidence theory in multidisciplinary design optimization
- Author
- Agarwal, Harish, Renaud, John E., Preston, Evan L., and Padmanabhan, Dhanesh
- Published
- 2004
- Full Text
- View/download PDF
28. Advanced Information Technology in Simulation Based Life Cycle Design
- Author
- Renaud, John E
- Subjects
Computer Programming And Software
- Abstract
In this research a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision based design framework for non-deterministic optimization. To date CO strategies have been developed for use in application to deterministic systems design problems. In this research the decision based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework as originally proposed provides a single level optimization strategy that combines engineering decisions with business decisions in a single level optimization. By transforming this framework for use in collaborative optimization one can decompose the business and engineering decision making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO) the business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design.
- Published
- 2003
29. Multidisciplinary Design Technology Development: A Comparative Investigation of Integrated Aerospace Vehicle Design Tools
- Author
- Renaud, John E, Batill, Stephen M, and Brockman, Jay B
- Subjects
Computer Programming And Software - Abstract
This research effort is a joint program between the Departments of Aerospace and Mechanical Engineering and the Computer Science and Engineering Department at the University of Notre Dame. The purpose of the project was to develop a framework and systematic methodology to facilitate the application of Multidisciplinary Design Optimization (MDO) to a diverse class of system design problems. For all practical aerospace systems, the design of a system is a complex sequence of events which integrates the activities of a variety of discipline "experts" and their associated "tools". The development, archiving and exchange of information between these individual experts is central to the design task, and it is this information which provides the basis for these experts to make coordinated design decisions (i.e., compromises and trade-offs) resulting in the final product design. Grant efforts focused on developing and evaluating frameworks for effective design coordination within a MDO environment. Central to these research efforts was the concept that the individual discipline "expert", using the most appropriate "tools" available and the most complete description of the system, should be empowered to have the greatest impact on the design decisions and final design. This means that the overall process must be highly interactive and efficiently conducted if the resulting design is to be developed in a manner consistent with cost and time requirements. The methods developed as part of this research effort include: extensions to a sensitivity based Concurrent Subspace Optimization (CSSO) MDO algorithm; the development of a neural network response surface based CSSO-MDO algorithm; and the integration of distributed computing and process scheduling into the MDO environment. This report overviews research efforts in each of these focus areas. A complete bibliography of research produced with support of this grant is attached.
- Published
- 1999
30. Multidisciplinary Design Technology Development: A Comparative Investigation of Integrated Aerospace Vehicle Design Tools
- Author
- Renaud, John E, Batill, Stephen M, and Brockman, Jay B
- Subjects
Aircraft Design, Testing And Performance
- Abstract
This research effort is a joint program between the Departments of Aerospace and Mechanical Engineering and the Computer Science and Engineering Department at the University of Notre Dame. Three Principal Investigators, Drs. Renaud, Brockman and Batill, directed this effort. During the four and a half year grant period, six Aerospace and Mechanical Engineering Ph.D. students and one Masters student received full or partial support, while four Computer Science and Engineering Ph.D. students and one Masters student were supported. During each of the summers up to four undergraduate students were involved in related research activities. The purpose of the project was to develop a framework and systematic methodology to facilitate the application of Multidisciplinary Design Optimization (MDO) to a diverse class of system design problems. For all practical aerospace systems, the design of a system is a complex sequence of events which integrates the activities of a variety of discipline "experts" and their associated "tools". The development, archiving and exchange of information between these individual experts is central to the design task, and it is this information which provides the basis for these experts to make coordinated design decisions (i.e., compromises and trade-offs) resulting in the final product design. Grant efforts focused on developing and evaluating frameworks for effective design coordination within a MDO environment. Central to these research efforts was the concept that the individual discipline "expert", using the most appropriate "tools" available and the most complete description of the system, should be empowered to have the greatest impact on the design decisions and final design. This means that the overall process must be highly interactive and efficiently conducted if the resulting design is to be developed in a manner consistent with cost and time requirements.
The methods developed as part of this research effort include: extensions to a sensitivity based Concurrent Subspace Optimization (CSSO) MDO algorithm; the development of a neural network response surface based CSSO-MDO algorithm; and the integration of distributed computing and process scheduling into the MDO environment. This report overviews research efforts in each of these focus areas. A complete bibliography of research produced with support of this grant is attached.
- Published
- 1998
31. Direct handling of equality constraints in multilevel optimization
- Author
- Renaud, John E and Gabriele, Gary A
- Subjects
Structural Mechanics
- Abstract
In recent years there have been several hierarchic multilevel optimization algorithms proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly. This indirect handling has been accomplished using the coupling equality constraints' explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties which prevented their multilevel optimization from reaching the same minimum found in conventional single level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. The coupling equality constraints are handled directly, by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optimums achieved are comparable to those achieved in single level solutions and in multilevel studies where the equality constraints have been handled indirectly.
- Published
- 1990
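The direct-versus-indirect handling contrasted in this abstract can be sketched on a toy problem (an illustrative example, not taken from the paper): indirect handling substitutes the coupling equality to optimize in a reduced space, while direct handling retains the constraint and solves the stationarity system.

```python
import numpy as np

# Toy problem: min x1^2 + x2^2  subject to the coupling equality
# x1 + x2 = 2 (illustrative only; not from the paper).

# Indirect handling: use the equality to eliminate x2 = 2 - x1 and
# optimize in the reduced (1-D) design space.
x1_grid = np.linspace(-1.0, 3.0, 4001)
reduced = x1_grid**2 + (2.0 - x1_grid) ** 2
x1_indirect = x1_grid[int(np.argmin(reduced))]

# Direct handling: keep the constraint and solve the stationarity
# (KKT) system  2*x = lambda*[1, 1],  x1 + x2 = 2  in one shot.
K = np.array([[2.0, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0, 0.0]])
x_direct = np.linalg.solve(K, np.array([0.0, 0.0, 2.0]))[:2]

# Both routes recover the same optimum x = (1, 1).
print(x1_indirect, x_direct)
```

On this well-behaved problem both routes agree, consistent with the paper's finding that direct handling causes no difficulty when a suitable internal optimizer is used.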
32. Interactive Multiobjective Optimization Design Strategy for Decision Based Design
- Author
-
Tappeta, Ravindra V. and Renaud, John E.
- Subjects
Multiple criteria decision making -- Research ,Industrial design -- Methods ,Mathematical optimization -- Methods ,Engineering and manufacturing industries ,Science and technology - Abstract
This research focuses on multi-objective system design and optimization. The primary goal is to develop and test a mathematically rigorous and efficient interactive multiobjective optimization algorithm that takes into account the Decision Maker's (DM's) preferences during the design process. An interactive MultiObjective Optimization Design Strategy (iMOODS) has been developed in this research to include the Pareto sensitivity analysis, Pareto surface approximation and local preference functions to capture the DM's preferences in an Iterative Decision Making Strategy (IDMS). This new multiobjective optimization procedure provides the DM with a formal means for efficient design exploration around a given Pareto point. The use of local preference functions allows the iMOODS to construct the second order Pareto surface approximation more accurately in the preferred region of the Pareto surface. The iMOODS has been successfully applied to two test problems. The first problem consists of a set of simple analytical expressions for the objective and constraints. The second problem is the design and sizing of a high-performance and low-cost ten bar structure that has multiple objectives. The results indicate that the class functions are effective in capturing the local preferences of the DM. The Pareto designs that reflect the DM's preferences can be efficiently generated within IDMS. [DOI: 10.1115/1.1358302]
- Published
- 2001
33. Ability of Objective Functions to Generate Points on Nonconvex Pareto Frontiers
- Author
-
Messac, Achille, Sundararaj, Glynn J., Tappeta, Ravindra V., and Renaud, John E.
- Subjects
Functional equations -- Numerical solutions ,Aeronautics -- Models ,Astronautics -- Models ,Aerospace and defense industries ,Business - Abstract
New ground is broken in our understanding of objective functions' ability to capture Pareto solutions for multi-objective design optimization problems. It is explained why widely used objective functions fail to capture Pareto solutions when the Pareto frontier is not convex in objective space, and the means to avoid this limitation, when possible, is provided. These conditions are developed and presented in the general context of n-dimensional objective space, and numerical examples are provided. An important point is that most objective function structures can be made to generate nonconvex Pareto frontier solutions if the curvature of the objective function can be varied by setting one or more parameters. Because the occurrence of nonconvex efficient frontiers is common in practice, the results are of direct practical usefulness.
- Published
- 2000
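The failure mode described here is easy to reproduce numerically. The sketch below (an illustrative construction, not from the paper) samples a nonconvex Pareto frontier and shows that weighted-sum scalarization only ever returns its endpoints, no matter how the weights are swept.

```python
import numpy as np

# Nonconvex Pareto frontier for a two-objective minimization:
# f2 = 1 - f1**2 on f1 in [0, 1] (an illustrative construction).
f1 = np.linspace(0.0, 1.0, 201)
f2 = 1.0 - f1**2

selected = set()
for w in np.linspace(0.01, 0.99, 99):
    # weighted-sum scalarization with weights (w, 1 - w)
    selected.add(int(np.argmin(w * f1 + (1.0 - w) * f2)))

# Only the frontier's endpoints are ever returned; the weighted sum
# cannot reach any interior point of a nonconvex frontier.
print(sorted(selected))  # -> [0, 200]
```

The weighted sum is concave along this frontier, so its minimum always sits at an endpoint; this is the geometric limitation the paper shows can sometimes be avoided by varying the curvature of the objective function.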
34. Convergence analysis of hybrid cellular automata for topology optimization
- Author
-
Penninger, Charles L., Watson, Layne T., Tovar, Andres, Renaud, John E., and Computer Science
- Subjects
Data structures ,Algorithms - Abstract
The hybrid cellular automaton (HCA) algorithm was inspired by the structural adaptation of bones to their ever changing mechanical environment. This methodology has been shown to be an effective topology synthesis tool. In previous work, it has been observed that the convergence of the HCA methodology is affected by parameters of the algorithm. As a result, questions have been raised regarding the conditions by which HCA converges to an optimal design. The objective of this investigation is to examine the conditions that guarantee convergence to a Karush-Kuhn-Tucker (KKT) point. In this paper, it is shown that the HCA algorithm is a fixed point iterative scheme and the previously reported KKT optimality conditions are corrected. To demonstrate the convergence properties of the HCA algorithm, a simple cantilevered beam example is utilized. Plots of the spectral radius for projections of the design space are used to show regions of guaranteed convergence.
- Published
- 2009
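The convergence criterion discussed in this abstract (a fixed point iteration converging where the spectral radius is below one) can be illustrated on a small linear example, a generic sketch rather than the HCA update itself:

```python
import numpy as np

# Linear fixed point iteration x <- A x + b: convergence from any
# starting point is guaranteed exactly when the spectral radius of A
# is below one (values here are arbitrary illustrations).
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
b = np.array([1.0, 2.0])

rho = max(abs(np.linalg.eigvals(A)))   # spectral radius = 0.6 < 1
x = np.zeros(2)
for _ in range(200):
    x = A @ x + b

x_star = np.linalg.solve(np.eye(2) - A, b)  # the exact fixed point
print(np.allclose(x, x_star))  # -> True
```

Plotting this spectral radius over regions of the design space is exactly how the paper maps out where convergence of the HCA iteration is guaranteed.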
35. KKT conditions satisfied using adaptive neighboring in hybrid cellular automata for topology optimization
- Author
-
Penninger, Charles L., Tovar, Andres, Watson, Layne T., Renaud, John E., and Computer Science
- Subjects
Data structures ,Algorithms - Abstract
The hybrid cellular automaton (HCA) method is a biologically inspired algorithm capable of topology synthesis that was developed to simulate the behavior of the bone functional adaptation process. In this algorithm, the design domain is divided into cells with some communication property among neighbors. Local evolutionary rules, obtained from classical control theory, iteratively establish the value of the design variables in order to minimize the local error between a field variable and a corresponding target value. Karush-Kuhn-Tucker (KKT) optimality conditions have been derived to determine the expression for the field variable and its target. While averaging techniques mimicking intercellular communication have been used to mitigate numerical instabilities such as checkerboard patterns and mesh dependency, some questions have been raised whether KKT conditions are fully satisfied in the final topologies. Furthermore, the averaging procedure might result in cancellation or attenuation of the error between the field variable and its target. Several examples are presented showing that HCA converges to different final designs for different neighborhood configurations or averaging schemes. Although it has been claimed that these final designs are optimal, this might not be true in a precise mathematical sense—the use of the averaging procedure induces a mathematical incorrectness that has to be addressed. In this work, a new adaptive neighboring scheme will be employed that utilizes a weighting function for the influence of a cell’s neighbors that decreases to zero over time. When the weighting function reaches zero, the algorithm satisfies the aforementioned optimality criterion. Thus, the HCA algorithm will retain the benefits that result from utilizing neighborhood information, as well as obtain an optimal solution.
- Published
- 2009
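The adaptive neighboring idea can be caricatured in a few lines (an illustrative sketch with made-up targets and decay rate, not the authors' update rule): neighbor influence is weighted by a factor that decays to zero, so the converged design satisfies the purely local stationarity condition.

```python
import numpy as np

# Each cell blends its own state with a neighbor average; the neighbor
# weight w decays to zero, after which the update is purely local and
# drives each cell's field value to its target (targets are made up).
target = np.array([0.2, 0.5, 0.9, 0.5, 0.2])
x = np.full(5, 0.5)

for k in range(300):
    w = 0.9**k                                   # neighbor influence -> 0
    neighbor_avg = np.convolve(x, [0.5, 0.0, 0.5], mode="same")
    blended = (1.0 - w) * x + w * neighbor_avg   # intercellular averaging
    x = blended + 0.5 * (target - blended)       # local proportional rule

print(np.allclose(x, target, atol=1e-6))  # -> True
```

Early iterations enjoy the regularizing effect of neighborhood communication, while the vanishing weight removes the averaging-induced error between field variable and target at convergence.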
36. Homotopy methods for constraint relaxation in unilevel reliability based design optimization
- Author
-
Agarwal, Harish, Gano, Shawn E., Renaud, John E., Perez, Victor M., Watson, Layne T., and Computer Science
- Subjects
Data structures ,Algorithms - Abstract
Reliability based design optimization is a methodology for finding optimized designs that are characterized with a low probability of failure. The main objective in reliability based design optimization is to minimize a merit function while satisfying the reliability constraints. The reliability constraints are constraints on the probability of failure corresponding to each of the failure modes of the system or a single constraint on the system probability of failure. The probability of failure is usually estimated by performing a reliability analysis. During the last few years, a variety of different techniques have been developed for reliability based design optimization. Traditionally, these have been formulated as a double-loop (nested) optimization problem. The upper level optimization loop generally involves optimizing a merit function subject to reliability constraints and the lower level optimization loop(s) compute the probabilities of failure corresponding to the failure mode(s) that govern the system failure. This formulation is, by nature, computationally intensive. A new efficient unilevel formulation for reliability based design optimization was developed by the authors in earlier studies. In this formulation, the lower level optimization (evaluation of reliability constraints in the double loop formulation) was replaced by its corresponding first order Karush-Kuhn-Tucker (KKT) necessary optimality conditions at the upper level optimization. It was shown that the unilevel formulation is computationally equivalent to solving the original nested optimization if the lower level optimization is solved by numerically satisfying the KKT conditions (which is typically the case), and the two formulations are mathematically equivalent under constraint qualification and generalized convexity assumptions. In the unilevel formulation, the KKT conditions of the inner optimization for each probabilistic constraint evaluation are imposed at the system level as equality constraints. Most commercial optimizers are usually numerically unreliable when applied to problems accompanied by many equality constraints. In this investigation an optimization framework for reliability based design using the unilevel formulation is developed. Homotopy methods are used for constraint relaxation and to obtain a relaxed feasible design. A series of optimization problems are solved as the relaxed optimization problem is transformed via a homotopy to the original problem. A heuristic scheme is employed in this paper to update the homotopy parameter. The proposed algorithm is illustrated with example problems.
- Published
- 2007
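The homotopy relaxation strategy can be sketched on a toy equality-constrained problem (illustrative only; the subproblem here is solved in closed form rather than by a commercial optimizer):

```python
import numpy as np

# Toy relaxation: minimize ||x||^2 subject to a @ x = b. The right-hand
# side is relaxed so the starting point is feasible at t = 0, and a
# sequence of subproblems is solved as t is marched to 1 (all values
# illustrative; each subproblem has the closed form x = a*b_t/(a@a)).
a = np.array([1.0, 1.0])
b = 2.0
x0 = np.array([3.0, 0.0])        # start violates a @ x0 = 2 (it gives 3)

for t in np.linspace(0.0, 1.0, 11):
    b_t = (1.0 - t) * (a @ x0) + t * b   # relaxed constraint level
    x = a * b_t / (a @ a)                # subproblem minimizer

print(x)  # -> [1. 1.]
```

Each subproblem starts from a nearly feasible point supplied by the previous one, which is the numerical benefit the paper seeks when handing many equality constraints to an optimizer.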
37. Understanding Nanoscale Thermal Conduction and Mechanical Strength Correlation in High Temperature Ceramics With Improved Thermal Shock Resistance for Aerospace Applications
- Author
-
NOTRE DAME UNIV IN, Tomar, Vikas, and Renaud, John E.
- Abstract
Ceramics and semiconductors are an integral part of today's energy devices. This research addresses conductive heat transfer issues in such materials using a combination of classical and quantum mechanical atomistic simulations. For any kind of thermal system, thermal stress and thermal conduction cannot be decoupled; such analyses have to be performed together. This research also focuses on understanding how mechanical strength is affected by thermal conduction and vice versa. We have highlighted the important role played by electronic thermal conductivity in overall thermal conduction across interfaces. This is the first time that quantum simulations of electronic and phononic thermal conductivity of any material system have been reported. We have performed the first measurements of nanoscale and microscale high temperature creep in a ceramic. Such measurements could lead to significant advances in tunable thermal protection systems operating at temperatures ranging from very low to ultra-high. We have proven for the first time that materials with biomimetic phase morphology have thermal conductivity values independent of strain. This finding has strong implications for developing materials with thermal properties independent of applied stress. Prepared in cooperation with Purdue University, West Lafayette, IN.
- Published
- 2010
38. Convergence analysis of hybrid cellular automata for topology optimization
- Author
-
Penninger, Charles L., Watson, Layne T., Tovar, Andres, Renaud, John E., and Computer Science
- Abstract
The hybrid cellular automaton (HCA) algorithm was inspired by the structural adaptation of bones to their ever changing mechanical environment. This methodology has been shown to be an effective topology synthesis tool. In previous work, it has been observed that the convergence of the HCA methodology is affected by parameters of the algorithm. As a result, questions have been raised regarding the conditions by which HCA converges to an optimal design. The objective of this investigation is to examine the conditions that guarantee convergence to a Karush-Kuhn-Tucker (KKT) point. In this paper, it is shown that the HCA algorithm is a fixed point iterative scheme and the previously reported KKT optimality conditions are corrected. To demonstrate the convergence properties of the HCA algorithm, a simple cantilevered beam example is utilized. Plots of the spectral radius for projections of the design space are used to show regions of guaranteed convergence.
- Published
- 2009
39. KKT conditions satisfied using adaptive neighboring in hybrid cellular automata for topology optimization
- Author
-
Penninger, Charles L., Tovar, Andres, Watson, Layne T., Renaud, John E., and Computer Science
- Abstract
The hybrid cellular automaton (HCA) method is a biologically inspired algorithm capable of topology synthesis that was developed to simulate the behavior of the bone functional adaptation process. In this algorithm, the design domain is divided into cells with some communication property among neighbors. Local evolutionary rules, obtained from classical control theory, iteratively establish the value of the design variables in order to minimize the local error between a field variable and a corresponding target value. Karush-Kuhn-Tucker (KKT) optimality conditions have been derived to determine the expression for the field variable and its target. While averaging techniques mimicking intercellular communication have been used to mitigate numerical instabilities such as checkerboard patterns and mesh dependency, some questions have been raised whether KKT conditions are fully satisfied in the final topologies. Furthermore, the averaging procedure might result in cancellation or attenuation of the error between the field variable and its target. Several examples are presented showing that HCA converges to different final designs for different neighborhood configurations or averaging schemes. Although it has been claimed that these final designs are optimal, this might not be true in a precise mathematical sense—the use of the averaging procedure induces a mathematical incorrectness that has to be addressed. In this work, a new adaptive neighboring scheme will be employed that utilizes a weighting function for the influence of a cell’s neighbors that decreases to zero over time. When the weighting function reaches zero, the algorithm satisfies the aforementioned optimality criterion. Thus, the HCA algorithm will retain the benefits that result from utilizing neighborhood information, as well as obtain an optimal solution.
- Published
- 2009
40. Enhanced multiobjective particle swarm optimization in combination with adaptive weighted gradient-based searching
- Author
-
Izui, Kazuhiro, Nishiwaki, Shinji, Yoshimura, Masataka, Nakamura, Masahiko, and Renaud, John E.
- Published
- 2008
41. Comparative study of topology optimization techniques
- Author
-
Patel, Neal M., Tillotson, Donald, Renaud, John E., Tovar, Andres, and Izui, Kazuhiro
- Published
- 2008
42. Enhanced multiobjective particle swarm optimization in combination with adaptive weighted gradient-based searching
- Author
-
Izui, Kazuhiro, Nishiwaki, Shinji, Yoshimura, Masataka, Nakamura, Masahiko, and Renaud, John E.
- Published
- 2008
43. Sequential approximate optimization-based robust design of SiC–Si3N4 nanocomposite microstructures
- Author
-
Mejía-Rodríguez, Gilberto, primary, Renaud, John E., additional, Kim, Han Sung, additional, and Tomar, Vikas, additional
- Published
- 2013
- Full Text
- View/download PDF
44. A High Fidelity Hybrid Cellular Automaton Model for Bone Adaptation with Cellular Rules for Bone Resorption
- Author
-
Penninger, Charles L., primary, Tovar, Andrés, additional, Tomar, Vikas, additional, and Renaud, John E., additional
- Published
- 2013
- Full Text
- View/download PDF
45. Homotopy methods for constraint relaxation in unilevel reliability based design optimization
- Author
-
Agarwal, Harish, Gano, Shawn E., Renaud, John E., Perez, Victor M., Watson, Layne T., and Computer Science
- Abstract
Reliability based design optimization is a methodology for finding optimized designs that are characterized with a low probability of failure. The main objective in reliability based design optimization is to minimize a merit function while satisfying the reliability constraints. The reliability constraints are constraints on the probability of failure corresponding to each of the failure modes of the system or a single constraint on the system probability of failure. The probability of failure is usually estimated by performing a reliability analysis. During the last few years, a variety of different techniques have been developed for reliability based design optimization. Traditionally, these have been formulated as a double-loop (nested) optimization problem. The upper level optimization loop generally involves optimizing a merit function subject to reliability constraints and the lower level optimization loop(s) compute the probabilities of failure corresponding to the failure mode(s) that govern the system failure. This formulation is, by nature, computationally intensive. A new efficient unilevel formulation for reliability based design optimization was developed by the authors in earlier studies. In this formulation, the lower level optimization (evaluation of reliability constraints in the double loop formulation) was replaced by its corresponding first order Karush-Kuhn-Tucker (KKT) necessary optimality conditions at the upper level optimization. It was shown that the unilevel formulation is computationally equivalent to solving the original nested optimization if the lower level optimization is solved by numerically satisfying the KKT conditions (which is typically the case), and the two formulations are mathematically equivalent under constraint qualification and generalized convexity assumptions. In the unilevel formulation, the KKT conditions of the inner optimization for each probabilistic constraint evaluation are imposed at the system level as equality constraints.
- Published
- 2007
46. Reduced Sampling for Construction of Quadratic Response Surface Approximations Using Adaptive Experimental Design
- Author
-
Perez, Victor M., Renaud, John E., Watson, Layne T., and Computer Science
- Abstract
The purpose of this paper is to reduce the computational complexity per step from O(n^2) to O(n) for optimization based on quadratic surrogates, where n is the number of design variables. Applying nonlinear optimization strategies directly to complex multidisciplinary systems can be prohibitively expensive when the complexity of the simulation codes is large. Increasingly, response surface approximations, and specifically quadratic approximations, are being integrated with nonlinear optimizers in order to reduce the CPU time required for the optimization of complex multidisciplinary systems. For evaluation by the optimizer, response surface approximations provide a computationally inexpensive lower fidelity representation of the system performance. The curse of dimensionality is a major drawback in the implementation of these approximations as the amount of required data grows quadratically with the number n of design variables in the problem. In this paper a novel technique to reduce the magnitude of the sampling from O(n^2) to O(n) is presented. The technique uses prior information to approximate the eigenvectors of the Hessian matrix of the response surface approximation and only requires the eigenvalues to be computed by response surface techniques. The technique is implemented in a sequential approximate optimization algorithm and applied to engineering problems of variable size and characteristics. Results demonstrate that a reduction in the data required per step from O(n^2) to O(n) points can be accomplished without significantly compromising the performance of the optimization algorithm. A reduction in the time (number of system analyses) required per step from O(n^2) to O(n) is significant, even more so as n increases. The novelty lies in how only O(n) system analyses can be used to approximate a Hessian matrix whose estimation normally requires O(n^2) system analyses.
- Published
- 2007
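The O(n) sampling idea, reusing prior eigenvector estimates and spending new analyses only on eigenvalues, can be sketched as follows (a toy quadratic stand-in for the expensive analysis; the names and values are illustrative):

```python
import numpy as np

# Stand-in for an expensive analysis: a quadratic with known Hessian
# diag(4, 1, 9) (illustrative). With eigenvector estimates V carried
# over from earlier iterations, only n directional second differences
# (2n extra analyses) are needed to refresh the eigenvalues, instead
# of the O(n^2) samples a full quadratic response surface requires.
def f(x):
    H = np.diag([4.0, 1.0, 9.0])
    return 0.5 * x @ H @ x

n, h = 3, 1e-4
x0 = np.zeros(n)
V = np.eye(n)                    # "prior" eigenvector estimates
lam = np.array([(f(x0 + h * v) - 2.0 * f(x0) + f(x0 - h * v)) / h**2
                for v in V.T])
H_approx = V @ np.diag(lam) @ V.T

print(np.round(lam, 3))  # -> [4. 1. 9.]
```

Here the prior eigenvectors happen to be exact, so the rebuilt Hessian V diag(lam) V^T matches the true one; in the paper the eigenvector estimates are approximate and are refined across optimization steps.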
47. Convergence of Trust Region Augmented Lagrangian Methods Using Variable Fidelity Approximation Data
- Author
-
Rodriguez, Jose F., Renaud, John E., Watson, Layne T., and Computer Science
- Abstract
To date the primary focus of most constrained approximate optimization strategies is that application of the method should lead to improved designs. Few researchers have focused on the development of constrained approximate optimization strategies that are assured of converging to a Karush-Kuhn-Tucker (KKT) point for the problem. Recent work by the authors based on a trust region model management strategy has shown promise in managing the convergence of constrained approximate optimization in application to a suite of single level optimization test problems. Using a trust-region model management strategy, coupled with an augmented Lagrangian approach for constrained approximate optimization, the authors have shown in application studies that the approximate optimization process converges to a KKT point for the problem. The approximate optimization strategy sequentially builds a cumulative response surface approximation of the augmented Lagrangian which is then optimized subject to a trust region constraint. In this research the authors develop a formal proof of convergence for the response surface approximation based optimization algorithm. Previous application studies were conducted on single level optimization problems for which response surface approximations were developed using conventional statistical response sampling techniques such as central composite design to query a high fidelity model over the design space. In this research the authors extend the scope of application studies to include the class of multidisciplinary design optimization (MDO) test problems. More importantly the authors show that response surface approximations constructed from variable fidelity data generated during concurrent subspace optimizations (CSSOs) can be effectively managed by the trust region model management strategy. Results for two multidisciplinary test problems are presented in which convergence to a KKT point is observed. 
The formal proof of convergence and the successful MDO application of the algorithm using variable fidelity data generated by CSSO are original contributions to the growing body of research in MDO.
- Published
- 1997
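A generic trust region model management loop of the kind analyzed here can be sketched as follows (a first-principles illustration, not the authors' augmented Lagrangian algorithm):

```python
import numpy as np

# Generic trust region loop on a 2-D quadratic (not the authors'
# augmented Lagrangian method): minimize the surrogate inside the
# trust radius, then accept/reject the step by the ratio of actual to
# predicted reduction and resize the radius accordingly.
def f(x):                        # stand-in for an expensive analysis
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def grad(x):
    return np.array([2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)])

x, radius = np.zeros(2), 0.25
B = 2.0 * np.eye(2)              # surrogate Hessian (exact for this f)
for _ in range(100):
    g = grad(x)
    p = np.linalg.solve(B, -g)               # surrogate minimizer
    if np.linalg.norm(p) > radius:           # clip to the trust region
        p *= radius / np.linalg.norm(p)
    predicted = -(g @ p + 0.5 * p @ B @ p)   # model reduction
    actual = f(x) - f(x + p)
    rho = actual / predicted if predicted > 0 else 0.0
    if rho > 0.1:                            # accept the step
        x = x + p
    radius = 2.0 * radius if rho > 0.75 else 0.5 * radius if rho < 0.25 else radius

print(np.round(x, 6))  # converges to the optimum (3, -1)
```

The same accept/reject-and-resize mechanism is what manages the cumulative response surface approximations of the augmented Lagrangian in the paper's convergence proof.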
48. Topometry optimisation for crashworthiness design using hybrid cellular automata
- Author
-
Mozumder, Chandan, primary, Renaud, John E., additional, and Tovar, Andrés, additional
- Published
- 2012
- Full Text
- View/download PDF
49. Multidomain Topology Optimization for Crashworthiness Based on Hybrid Cellular Automata
- Author
-
Guo, Lian Shui, primary, Huang, Jun, additional, Tovar, Andres, additional, and Renaud, John E., additional
- Published
- 2011
- Full Text
- View/download PDF
50. Strain-based topology optimisation for crashworthiness using hybrid cellular automata
- Author
-
Guo, Lianshui, primary, Tovar, Andrés, additional, Penninger, Charles L., additional, and Renaud, John E., additional
- Published
- 2011
- Full Text
- View/download PDF