15,877 results
Search Results
2. A Short Note on the Paper "New Characterizations of the Pareto Distribution"
- Author
-
Gholamhossein G. Hamedani and Mahdi Rasekhi
- Subjects
Statistics and Probability, Mathematical optimization, Pareto interpolation, Characterizations, Management Science and Operations Research, Conditional expectation, Generalized Pareto distribution, Pareto distribution, Lomax distribution, 60E05, 60E15, Probability and Statistics, Order statistic, Power (physics), Modeling and Simulation, Statistics, Probability and Uncertainty, Random variable, Mathematics - Abstract
Nofal and El Gebaly (2017) presented certain characterizations of the Pareto distribution based on conditional expectations of powers of the order statistics. In this short note we show that the same results can easily be obtained in terms of powers of the random variable.
- Published
- 2017
3. Invited discussion paper: Small-sample distributional properties of nonlinear regression estimators (a geometric approach)
- Author
-
Andrej Pázman
- Subjects
Statistics and Probability, Mathematical optimization, Heuristic, Gaussian, Estimator, Probability density function, Conditional probability distribution, Edgeworth series, Approximation error, Applied mathematics, Statistics, Probability and Uncertainty, Nonlinear regression, Mathematics - Abstract
The paper is mainly a survey of how to approximate the probability density of the parameter estimator in a nonlinear regression model. A short presentation of the geometry of the model and a heuristic discussion of the "irregularities" of the estimates are given. In the model with Gaussian errors we present the asymptotic normal approximation, the approximation by the second-order Edgeworth expansion, a conditional density of Barndorff-Nielsen, and mainly the approximation called the "flat" or "saddlepoint" approximation, which will be shown to have several interesting properties. Further, we present the possibility of improving the approximation in some models and the extension of the approximation to some cases of non-Gaussian errors, and besides the maximum likelihood estimator we also consider the weighted least-squares estimator, with the weights not depending on the error covariance matrix.
- Published
- 1990
4. Minimax Optimization by Algorithms Employing Modified Lagrangians (Short Papers)
- Author
-
O. Einarsson
- Subjects
Mathematical optimization, Radiation, Ripple, Condensed Matter Physics, Minimax, Nonlinear programming, Electric power transmission, Optimization methods, Electrical and Electronic Engineering, Transformer, Algorithm, Lagrangian, Mathematics - Abstract
Two general modified Lagrangian algorithms related to recent developments in nonlinear programming are presented. The methods give accurate results and are easy to program. An N-section transmission-line transformer is used as a test problem for minimax (equal-ripple) optimization, and the methods are compared to existing algorithms for network optimization.
- Published
- 1975
5. An Iterative Moment Method for Analyzing the Electromagnetic Field Distribution Inside Inhomogeneous Lossy Dielectric Objects (Short Papers)
- Author
-
M.F. Sultan and R. Mittra
- Subjects
Electromagnetic field, Mathematical optimization, Radiation, Iterative method, Basis function, Transmission-line matrix method, Optical field, Condensed Matter Physics, System of linear equations, Gaussian elimination, Applied mathematics, Computational electromagnetics, Electrical and Electronic Engineering, Mathematics - Abstract
An iterative method is proposed for computing the electromagnetic field distribution inside lossy inhomogeneous dielectric bodies. The technique uses the conventional method of moments to formulate the problem in matrix form. The resulting system of linear equations is solved iteratively by the method of conjugate gradients. The main advantage of the method is that the iterative procedure does not require the storage of any matrix, thus offering the possibility of solving larger problems than conventional inversion or Gaussian elimination schemes allow. Another important advantage is that monotonic convergence to a solution is ensured and accomplished within a fixed number of iterations, not exceeding the total number of basis functions, independently of the initial guess for the solution. Preliminary examples involving two-dimensional cylinders of fat and muscle are illustrated. The iterative method is extendable and applicable to the three-dimensional case.
- Published
- 1985
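The matrix-free conjugate-gradient iteration described in the abstract above can be sketched in a few lines. This is an illustrative Python sketch for a symmetric positive-definite system, not the paper's electromagnetic formulation; the 2x2 system `A`, `b` is a made-up stand-in for the moment-method matrix.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=None):
    """Matrix-free conjugate gradients for A x = b with A symmetric
    positive definite: only products A @ v are needed, so the matrix
    itself never has to be stored."""
    n = len(b)
    max_iter = max_iter or n      # at most n steps in exact arithmetic
    x = np.zeros(n)
    r = b - matvec(x)             # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Made-up 2x2 SPD system standing in for the moment-method matrix.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(lambda v: A @ v, b)
```

Passing a `matvec` callable instead of the matrix itself is what makes the memory advantage cited in the abstract possible.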
6. SUPPLEMENTARY OPTIMALITY PROPERTIES OF THE FAMILY OF GRADIENT-RESTORATION ALGORITHMS FOR OPTIMAL CONTROL PROBLEMS (This research was supported in part by the National Science Foundation, Grant No. ENG-79-18667. This paper is a condensation of the investigations reported in Refs. 1-3.)
- Author
-
A. Miele and T. Wang
- Subjects
Mathematical optimization, Quadratic equation, Differential equation, Lagrange multiplier, State (functional analysis), Boundary value problem, Differential (infinitesimal), Optimal control, Algorithm, Finite set, Mathematics - Abstract
The problem of minimizing a functional subject to differential constraints, nondifferential constraints, initial constraints, and final constraints is considered within the frame of the family of gradient-restoration algorithms for optimal control problems. This family includes sequential gradient-restoration algorithms (SGRA) and combined gradient-restoration algorithms (CGRA). The system of Lagrange multipliers associated with (i) the gradient phase of SGRA, (ii) the restoration phase of SGRA, and (iii) the combined gradient-restoration phase of CGRA is examined. It is shown that, in each case, the Lagrange multipliers are endowed with a supplementary optimality property: they minimize a special functional, quadratic in the multipliers, subject to the multiplier differential equations and boundary conditions, for given state, control, and parameter. These supplementary optimality properties have considerable computational implications: they allow one to reduce the study of an iteration of (i), (ii), (iii) to a mathematical programming problem involving a finite number of parameters as unknowns.
- Published
- 1984
7. A Computational Study of Exact Subgraph Based SDP Bounds for Max-Cut, Stable Set and Coloring
- Author
-
Elisabeth Gaar and Franz Rendl
- Subjects
90C22, 90C27, Mathematical optimization, Relaxation hierarchy, General Mathematics, Maximum cut, Stable set, Coloring, Semidefinite programming, Optimization and Control (math.OC), Numerical analysis, Dual (category theory), Independent set, Max-Cut, Graph optimization, Software, Lagrangian, Mathematics - Abstract
The "exact subgraph" approach was recently introduced as a hierarchical scheme to get increasingly tight semidefinite programming relaxations of several NP-hard graph optimization problems. Solving these relaxations is a computational challenge because of the potentially large number of violated subgraph constraints. We introduce a computational framework for these relaxations designed to cope with these difficulties. We suggest a partial Lagrangian dual, and exploit the fact that its evaluation decomposes into several independent subproblems. This opens the way to use the bundle method from non-smooth optimization to minimize the dual function. Finally, computational experiments on the Max-Cut, stable set and coloring problems show the excellent quality of the bounds obtained with this approach. (arXiv admin note: substantial text overlap with arXiv:1902.05345.)
- Published
- 2020
8. Zoeppritz-based AVO inversion using an improved Markov chain Monte Carlo method
- Author
-
Guangzhi Zhang, Xinpeng Pan, Jiajia Zhang, and Xingyao Yin
- Subjects
Mathematical optimization, Bayesian probability, Inverse transform sampling, Energy Engineering and Power Technology, Delayed rejection (DR) algorithm, Robustness (computer science), Geochemistry and Petrology, Adaptive Metropolis (AM) algorithm, Estimator, Markov chain Monte Carlo, Geology, Geotechnical Engineering and Engineering Geology, Zoeppritz equations, Fuel Technology, Geophysics, Exact Zoeppritz, Bayesian AVO inversion, Nonlinear inversion, Seismic inversion, Economic Geology, Algorithm, Amplitude versus offset - Abstract
The conventional Markov chain Monte Carlo (MCMC) method is limited by the selected shape and size of the proposal distribution and struggles to start when the initial proposal distribution is far away from the target distribution. To overcome these drawbacks, two improvements to the MCMC method, the adaptive Metropolis (AM) algorithm and the delayed rejection (DR) algorithm, are combined. The AM algorithm adapts the proposal distribution using the generated estimators, and the DR algorithm enhances the efficiency of the improved MCMC method. Based on the improved MCMC method, a Bayesian amplitude versus offset (AVO) inversion method built on the exact Zoeppritz equation has been developed, with which the P- and S-wave velocities and the density can be obtained directly, and the uncertainty of the AVO inversion results has been estimated as well. A study based on logging data and seismic data demonstrates the feasibility and robustness of the method and shows that all three parameters are well retrieved. The exact Zoeppritz-based nonlinear inversion method using the improved MCMC is therefore not only suitable for reservoirs with strong-contrast interfaces and long-offset ranges but is also more stable, accurate and noise-tolerant.
- Published
- 2016
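The adaptive Metropolis idea mentioned in the abstract above can be sketched in one dimension. This is a minimal illustration that assumes a standard normal stand-in target; the actual method adapts a multivariate proposal covariance (in the style of Haario et al.) on the Bayesian AVO posterior and adds delayed rejection, both omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Hypothetical stand-in target: standard normal log-density
    # (in the paper the target is the Bayesian AVO inversion posterior).
    return -0.5 * x**2

def adaptive_metropolis(n_samples=20000, adapt_start=500):
    """1-D sketch of the adaptive Metropolis (AM) idea: periodically
    rescale the random-walk proposal from the empirical variance of the
    chain so far (Haario-style 2.4^2 * var + eps). Delayed rejection
    is omitted for brevity."""
    x = 0.0
    scale = 1.0
    samples = []
    for i in range(n_samples):
        if i > adapt_start and i % 100 == 0:
            scale = np.sqrt(2.4**2 * np.var(samples) + 1e-6)
        prop = x + scale * rng.normal()
        # Standard Metropolis accept/reject step
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return np.array(samples)

chain = adaptive_metropolis()
```

Because the proposal scale tracks the chain's own spread, the sampler recovers even if the initial scale is badly mismatched to the target, which is the drawback of plain MCMC that the abstract highlights.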
9. Fast Gaussian Mixture Probability Hypothesis Density Filter
- Author
-
Xiong Hua Fan, Wei Hua Wu, Jing Jiang, and Chong Yang Liu
- Subjects
Reduction (complexity), Probability hypothesis density filter, Mathematical optimization, Filter (video), Gaussian, General Medicine, Particle filter, Time cost, Recursive Bayesian estimation, Mathematics - Abstract
Although the Gaussian mixture probability hypothesis density (GMPHD) filter is a multi-target tracker that alleviates the computational intractability of the optimal multi-target Bayes filter, and its computational complexity is lower than that of the sequential Monte Carlo probability hypothesis density (SMCPHD) filter, its computational burden can be reduced further. In the standard GMPHD filter, each observation must be matched with each component when the PHD is updated. In practice, the time spent evaluating many unlikely measurement-to-component pairings is wasted, because their contribution is very limited. As a result, a substantial reduction in complexity can be obtained by directly setting the values associated with these pairings to zero. A fast GMPHD algorithm based on a gating strategy is proposed in this paper. Simulation results show that the fast GMPHD filter can save 60%-70% of the computational time without any degradation in performance compared with the standard GMPHD filter.
- Published
- 2014
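The gating strategy in the abstract above amounts to skipping measurement-to-component pairings whose squared Mahalanobis distance exceeds a chi-square threshold. A minimal sketch follows, with hypothetical numbers; in the GMPHD filter itself the innovation covariance S would come from the Kalman update of each Gaussian component.

```python
import numpy as np

def gate(z, mean, S, gamma=9.21):
    """True if measurement z lies inside the validation gate of a Gaussian
    component with predicted measurement `mean` and innovation covariance S.
    gamma = 9.21 is roughly the 99% chi-square threshold for 2-D measurements;
    pairings failing the gate contribute negligibly and are skipped."""
    v = z - mean
    d2 = v @ np.linalg.solve(S, v)   # squared Mahalanobis distance
    return d2 <= gamma

# Hypothetical 2-D example: one component, one near and one far measurement.
mean = np.array([0.0, 0.0])
S = np.eye(2)
z_near = np.array([1.0, 1.0])    # d^2 = 2  -> inside the gate, update kept
z_far = np.array([10.0, 10.0])   # d^2 = 200 -> outside, update skipped
```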
10. The Green's Function for Poisson's Equation in a Two-Dielectric Region (Short Papers)
- Author
-
A.F. dos Santos and Viertel Vieira
- Subjects
Mathematical optimization, Radiation, Reciprocity (electromagnetism), Mathematical analysis, Boundary value problem, Dielectric, Electrical and Electronic Engineering, Poisson's equation, Condensed Matter Physics, Poisson distribution, Mathematics - Abstract
The validity of the reciprocity relation satisfied by the Green's function for Poisson's equation in a two-dielectric region is briefly discussed.
- Published
- 1972
11. Refined elasticity sampling for Monte Carlo-based identification of stabilizing network patterns
- Author
-
Sergio Grimbs, Dorothee Childs, and Joachim Selbig
- Subjects
Statistics and Probability, Mathematical optimization, Citric Acid Cycle, Monte Carlo method, Perturbation (astronomy), Kinetic energy, Models, Biological, Biochemistry, Instability, Applied mathematics, Elasticity (economics), Molecular Biology, Rate equation, Computer Science Applications, Kinetics, Computational Mathematics, Computational Theory and Mathematics, Jacobian matrix and determinant, Monte Carlo Method, Metabolic Networks and Pathways - Abstract
Motivation: Structural kinetic modelling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a representation of the system’s Jacobian matrix that depends solely on the network structure, steady state measurements, and the elasticities at the steady state. For a measured steady state, stability criteria can be derived by generating a large number of SKMs with randomly sampled elasticities and evaluating the resulting Jacobian matrices. The elasticity space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Here, we extend this approach by examining the kinetic feasibility of the elasticity combinations created during Monte Carlo sampling. Results: Using a set of small example systems, we show that the majority of sampled SKMs would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion is formulated that screens out such infeasible models. After evaluating the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle and the intrinsic mechanisms responsible for their stability or instability. The findings of the statistical elasticity analysis confirm that several elasticities are jointly coordinated to control stability and that the main sources of potential instability are mutations in the enzyme alpha-ketoglutarate dehydrogenase. Contact: dorothee.childs@embl.de Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2015
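The stability test at the core of the sampling procedure above is an eigenvalue check on the Jacobian. A toy Monte Carlo sketch is shown below; the 3x3 random matrices with a stabilizing diagonal are purely illustrative, since SKM assembles the Jacobian from the stoichiometry, steady-state data and the sampled elasticities.

```python
import numpy as np

rng = np.random.default_rng(1)

def is_stable(J):
    """Local stability of a steady state: all eigenvalues of the Jacobian
    must have negative real part."""
    return np.max(np.linalg.eigvals(J).real) < 0

# Toy Monte Carlo over random "elasticity-like" entries plus a
# stabilizing diagonal shift (illustrative stand-in for SKM sampling).
trials = 1000
n_stable = sum(is_stable(rng.uniform(-1, 1, size=(3, 3)) - 2.0 * np.eye(3))
               for _ in range(trials))
fraction_stable = n_stable / trials
```

The fraction of stable draws, and which entries correlate with stability, is the kind of statistic the elasticity analysis in the abstract evaluates.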
12. On bandlimitedness and lag-limitedness of fractional Gaussian noise
- Author
-
Wei Zhao and Ming Li
- Subjects
Statistics and Probability, Bandlimiting, Mathematical optimization, Gaussian noise, Computation, Lag, Applied mathematics, Condensed Matter Physics, Mathematics - Abstract
The contributions of this short paper are two-fold. We shall show two interesting properties of fractional Gaussian noise (fGn), namely, its bandlimitedness and lag-limitedness. The computation formulas for the maximum frequency of bandlimited fGn and the maximum lag of lag-limited fGn are proposed. In addition, we will give a new explanation of the statistical dependences of fGn based on the present bandlimitedness and lag-limitedness of fGn.
- Published
- 2013
13. Sum Power Iterative Water-Filling for Multi-Antenna Gaussian Broadcast Channels
- Author
-
Andrea Goldsmith, Nihar Jindal, Syed A. Jafar, Sriram Vishwanath, and Wonjong Rhee
- Subjects
Mathematical optimization, Iterative method, Gaussian, MIMO, Duality (mathematics), Library and Information Sciences, Covariance, Computer Science Applications, Channel capacity, Telecommunications link, Dirty paper coding, Information Systems, Mathematics - Abstract
In this correspondence, we consider the problem of maximizing sum rate of a multiple-antenna Gaussian broadcast channel (BC). It was recently found that dirty-paper coding is capacity achieving for this channel. In order to achieve capacity, the optimal transmission policy (i.e., the optimal transmit covariance structure) given the channel conditions and power constraint must be found. However, obtaining the optimal transmission policy when employing dirty-paper coding is a computationally complex nonconvex problem. We use duality to transform this problem into a well-structured convex multiple-access channel (MAC) problem. We exploit the structure of this problem and derive simple and fast iterative algorithms that provide the optimum transmission policies for the MAC, which can easily be mapped to the optimal BC policies.
- Published
- 2005
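The building block behind the iterative algorithms in the abstract above is the classic water-filling power allocation. The sketch below is an illustrative single-user scalar version with made-up channel gains, not the paper's full sum-power iterative algorithm on the dual MAC.

```python
import numpy as np

def water_filling(gains, total_power):
    """Classic water-filling over parallel Gaussian channels with
    noise-normalized gains g_i: p_i = max(mu - 1/g_i, 0), with the water
    level mu chosen by bisection so that sum(p_i) = total_power."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power
    for _ in range(100):                 # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return np.maximum(mu - inv, 0.0)

# Hypothetical three-channel example: strong channels get more power,
# the weakest channel gets none.
p = water_filling([1.0, 0.5, 0.1], total_power=2.0)
```

In the iterative sum-power algorithm, a step of this form is applied repeatedly per user on the effective channels of the dual multiple-access problem.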
14. The effect of Fisher information matrix approximation methods in population optimal design calculations
- Author
-
Eric A. Strömberg, Andrew C. Hooker, and Joakim Nyberg
- Subjects
Optimal design, Mathematical optimization, Population, Fisher information matrix, FOCE, Sample (statistics), Full FIM, Models, Biological, Pharmaceutical Sciences, Block-diagonal FIM, Drug Discovery, FO, Applied mathematics, Humans, Computer Simulation, Fisher information, Cluster analysis, Pharmacology, Clinical Trials as Topic, Models, Statistical, Sampling (statistics), Block matrix, Confidence interval, Research Design, Warfarin, Mathematics - Abstract
With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimations was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the Full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block diagonal FIM optimal designs when assuming true parameter values. However, the FO approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO Full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
- Published
- 2016
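The D-criterion comparison underlying the design evaluation above can be illustrated on a toy linear model. The sampling times are hypothetical; the paper's models are nonlinear mixed-effects models, for which the FIM requires the FO or FOCE approximations rather than the closed form used here.

```python
import numpy as np

def fim_linear(X, sigma2=1.0):
    """Fisher information matrix of a linear model y = X b + e with
    e ~ N(0, sigma2 * I): M = X^T X / sigma2. A D-optimal design
    maximizes det(M)."""
    return X.T @ X / sigma2

# Hypothetical 4-point designs for fitting a straight line: spread
# sampling times versus clustered ones.
t_spread = np.array([0.0, 1.0, 2.0, 3.0])
t_clustered = np.array([1.4, 1.5, 1.6, 1.7])
X1 = np.column_stack([np.ones_like(t_spread), t_spread])
X2 = np.column_stack([np.ones_like(t_clustered), t_clustered])
d1 = np.linalg.det(fim_linear(X1))   # D-criterion of the spread design
d2 = np.linalg.det(fim_linear(X2))   # much smaller for the clustered design
```

This mirrors the abstract's finding that designs with more spread support points (here `d1 > d2`) carry more information than clustered ones.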
15. Power Spectrum of Generalized Fractional Gaussian Noise
- Author
-
Ming Li
- Subjects
Mathematical optimization, Physics, Applied Mathematics, Autocorrelation, General Physics and Astronomy, Spectral density, Probability density function, Gaussian noise, Statistical physics, Mathematics - Abstract
Recently, we introduced a type of autocorrelation function (ACF) to describe a long-range dependent (LRD) process indexed with two parameters, which takes standard fractional Gaussian noise (fGn for short) as a special case. For simplicity, we call it the generalized fGn (GfGn). This short paper gives the power spectrum density function (PSD) of GfGn.
- Published
- 2013
16. The Capacity Region of the Gaussian Multiple-Input Multiple-Output Broadcast Channel
- Author
-
Yossef Steinberg, H. Weingarten, and Shlomo Shamai
- Subjects
Mathematical optimization, Gaussian, MIMO, Library and Information Sciences, Topology, Computer Science Applications, Entropy power inequality, Antenna array, Superposition principle, Channel capacity, Entropy (information theory), Dirty paper coding, Information Systems, Mathematics - Abstract
The Gaussian multiple-input multiple-output (MIMO) broadcast channel (BC) is considered. The dirty-paper coding (DPC) rate region is shown to coincide with the capacity region. To that end, a new notion of an enhanced broadcast channel is introduced and is used jointly with the entropy power inequality, to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is optimal for the nondegraded case. Furthermore, the capacity region is characterized under a wide range of input constraints, accounting, as special cases, for the total power and the per-antenna power constraints
- Published
- 2006
17. A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels
- Author
-
John M. Cioffi and Mehdi Mohseni
- Subjects
Mathematical optimization, Gaussian, MIMO, Topology, Entropy power inequality, Channel capacity, Gaussian noise, Convex optimization, Converse, Dirty paper coding, Mathematics - Abstract
The paper provides a proof of the converse for the capacity region of the Gaussian MIMO broadcast channel under total average transmit power constraint. The proof uses several ideas from earlier works on the problem including the recent converse proof by Weingarten, Steinberg and Shamai. First the duality between Gaussian multiple access and broadcast channels is employed to show that every point on the boundary of the dirty paper coding region can be represented as the optimal solution to a convex optimization problem. Using the optimality conditions for this convex problem, a degraded broadcast channel is constructed for each point. It is then shown that the capacity region for this degraded broadcast channel contains the capacity region of the original channel. Moreover, the same point lies on the boundary of the dirty paper coding region for this degraded channel. Finally, the standard entropy power inequality is used to show that this point lies on the boundary of the capacity region of the degraded channel as well and consequently it is on the boundary of the capacity region of the original channel.
- Published
- 2006
18. The Augmented Complex Kernel LMS
- Author
-
Pantelis Bouboulis, Sergios Theodoridis, and Michael E. Mavroforakis
- Subjects
Complex data type, Signal processing, Mathematical optimization, Hilbert space, Machine Learning (cs.LG), Adaptive filter, Kernel embedding of distributions, Kernel (statistics), Signal Processing, Electrical and Electronic Engineering, Algorithm, Linear filter, Mathematics - Abstract
Recently, a unified framework for adaptive kernel-based signal processing of complex data was presented by the authors, which, besides offering techniques to map the input data to complex Reproducing Kernel Hilbert Spaces, developed a suitable Wirtinger-like calculus for general Hilbert spaces. In this short paper, the extended Wirtinger's calculus is adopted to derive complex kernel-based widely linear estimation filters. Furthermore, we illuminate several important characteristics of the widely linear filters. We show that, although in many cases the gains from adopting widely linear estimation filters as alternatives to ordinary linear ones are rudimentary, for the case of kernel-based widely linear filters significant performance improvements can be obtained. (Manuscript submitted to IEEE Transactions on Signal Processing.)
- Published
- 2011
19. On the rate-limited Gelfand-Pinsker problem
- Author
-
Ravi Tandon and Sennur Ulukus
- Subjects
Discrete mathematics, Mathematical optimization, Transmitter, Markov process, Random parameters, Upper and lower bounds, Entropy (information theory), Dirty paper coding, Random variable, Communication channel, Mathematics - Abstract
We study a rate-limited version of the well-known problem of coding for channels with random parameters, which was studied by Gelfand and Pinsker [1]. In particular, we consider a state-dependent channel in which the transmitter is supplied with the state information at a rate R_e. We obtain a new upper bound on the capacity, C(R_e), for this channel. We explicitly evaluate this upper bound for the rate-limited dirty paper coding (DPC) problem and show that it strictly improves upon the DPC capacity for certain values of R_e.
- Published
- 2009
20. A unified coding scheme for hybrid transmission of Gaussian source over Gaussian channel
- Author
-
Shlomo Shamai and Chao Tian
- Subjects
Mathematical optimization, Minimum mean square error, Gaussian, Bandwidth (signal processing), Bandwidth compression, Channel capacity, Bandwidth expansion, Uncountable set, Dirty paper coding, Algorithm, Mathematics - Abstract
We show that when transmitting a Gaussian source over an average-power-constrained Gaussian channel with mismatched bandwidth, there exists an uncountable set of hybrid digital-analog schemes which are optimal under the minimum mean squared error criterion. This generalizes the recent result by Bross et al. that for the bandwidth-matched case, there exists an uncountable set of optimal schemes, with uncoded transmission and the separation approach being the two extremes. The proposed schemes for bandwidth expansion and compression both require a proper combination of various components such as power splitting, bandwidth splitting, rate splitting, Wyner-Ziv coding and dirty-paper coding. This set of schemes includes all the existing known optimal schemes as special cases. We show that this set of schemes can be applied to the broadcast scenario with three receivers, when the receiver with the median channel SNR achieves optimal distortion, and it offers a distortion tradeoff between the good receiver and the bad receiver. Interestingly, though continuous, this tradeoff curve is in fact concave, implying that its performance is worse than that of a direct time-sharing approach in this three-user scenario. We further show that even in a more general broadcast setting with a continuum of receivers the time-sharing scheme is better than any given scheme in this set; somewhat surprisingly, there exists a unique time-sharing ratio for this to hold.
- Published
- 2008
- Full Text
- View/download PDF
21. A combination forecasting model to chaotic time series
- Author
-
Qi-Shu Pan and Bin-Sheng Liu
- Subjects
Mathematical optimization, Fourier transform, Ensemble forecasting, Chaotic, Embedding, Lyapunov exponent, Volatility (finance), Forecast verification, Mathematics - Abstract
Chaotic time series arise in many natural and economic phenomena. Commonly used forecast methods include the adding-weight one-rank local-region method and the forecast method based on the maximum Lyapunov exponent. The first method can cause the forecast to show a smooth trend, while the forecast result of the second method may show a drastic change in trend, so the scopes of application of the two methods differ. In the prediction of daily road traffic time series, the time series is smooth overall but also contains rich volatility. For this characteristic, a combination forecasting model based on an organic combination of the two methods is proposed in this paper. It resolves the determination of the embedding dimension in chaos forecasting, and the evidence shows that the prediction is effective.
- Published
- 2007
22. New Lagrangian function for nonconvex primal-dual decomposition
- Author
-
H. Mukai and Akio Tanikawa
- Subjects
Mathematical optimization, Optimization problem, Structure (category theory), Separable space, Decomposition (computer science), Primal dual, Computational Mathematics, Computational Theory and Mathematics, Lagrangian relaxation, Modeling and Simulation, Lagrangian, Mathematics - Abstract
In this paper, a new Lagrangian function is reported which is particularly suited for large-scale nonconvex optimization problems with separable structure. Our modification convexifies the standard Lagrangian function without destroying its separable structure so that the primal-dual decomposition technique can be applied even to nonconvex optimization problems. Furthermore, the proposed Lagrangian results in two levels of iterative optimization as compared with the three levels needed for techniques recently proposed for nonconvex primal-dual decomposition.
- Full Text
- View/download PDF
23. Sum power iterative water-filling for multi-antenna Gaussian broadcast channels
- Author
-
Sriram Vishwanath, Nihar Jindal, Syed A. Jafar, and Andrea Goldsmith
- Subjects
Mathematical optimization, Iterative method, Gaussian, Regular polygon, Duality (optimization), Base station, Broadcast channels, Telecommunications link, Dirty paper coding, Mathematics - Abstract
We consider the problem of maximizing sum rate on a multiple-antenna downlink in which the base station and receivers have multiple-antennas. The optimum scheme for this system was recently found to be "dirty paper coding". Obtaining the optimal transmission policies of the users when employing this dirty paper coding scheme is a computationally complex nonconvex problem. We use a "duality" to transform this problem into a convex multiple access problem, and then obtain a simple and fast iterative algorithm that gives us the optimum transmission policies.
24. A $(2/3)n^3$ fast-pivoting algorithm for the Gittins index and optimal stopping of a Markov chain
- Author
-
José Niño-Mora
- Subjects
Mathematical optimization, Gittins index, Computational complexity theory, Markov chain, Probability (math.PR), General Engineering, 60G40 (Primary), 90C40, 90C39 (Secondary), Dynamic programming, Simplex algorithm, Gaussian elimination, Optimization and Control (math.OC), Optimal stopping, Algorithm, Analysis of algorithms, Mathematics - Abstract
This paper presents a new fast-pivoting algorithm that computes the n Gittins index values of an n-state bandit—in the discounted and undiscounted cases—by performing (2/3)n^3 + O(n^2) arithmetic operations, thus attaining better complexity than previous algorithms and matching that of solving a corresponding linear-equation system by Gaussian elimination. The algorithm further applies to the problem of optimal stopping of a Markov chain, for which a novel Gittins-index solution approach is introduced. The algorithm draws on Gittins and Jones' (1974) index definition via calibration, on Kallenberg's (1986) proposal of using parametric linear programming, on Dantzig's simplex method, on the Varaiya et al. (1985) algorithm, and on the author's earlier work. This paper elucidates the structure of parametric simplex tableaux. Special structure is exploited to reduce the computational effort of pivot steps, decreasing the operation count by a factor of three relative to conventional pivoting, and by a factor of 3/2 relative to recent state-elimination algorithms. A computational study demonstrates significant time savings against alternative algorithms.
- Published
- 2023
- Full Text
- View/download PDF
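The fast-pivoting algorithm itself is involved, but the calibration definition of the Gittins index that the paper cites is easy to sketch: the index of a state is the retirement reward M at which retiring and continuing are indifferent, found here by binary search with value iteration inside. This is a naive O(much worse) illustration, not the paper's method; the two-state chain and discount factor are illustrative.

```python
# Hedged sketch of the calibration definition of the Gittins index:
#   V_M(s) = max(M, r(s) + beta * sum_j P[s][j] * V_M(j))
# index(s) = smallest M with V_M(s) = M.  Suitable only for small beta/n;
# the paper's fast-pivoting algorithm is far more efficient.

def gittins_index(P, r, beta, state, sweeps=500, tol=1e-9):
    n = len(r)
    lo, hi = min(r) / (1.0 - beta), max(r) / (1.0 - beta)
    while hi - lo > tol:
        M = 0.5 * (lo + hi)
        V = [M] * n
        for _ in range(sweeps):                      # value iteration
            V = [max(M, r[s] + beta * sum(P[s][j] * V[j] for j in range(n)))
                 for s in range(n)]
        if V[state] > M + 1e-12:
            lo = M        # continuing beats retiring: M is below the index
        else:
            hi = M
    return 0.5 * (lo + hi)

P = [[1.0, 0.0], [0.0, 1.0]]   # two absorbing states, rewards 1 and 0
r = [1.0, 0.0]
print(gittins_index(P, r, 0.5, 0))   # indifference solves M = 1 + 0.5*M, i.e. 2.0
```

The index in rate form is (1 - beta) times the returned M.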
25. A Numerical Approach for Evaluating the Time-Dependent Distribution of a Quasi Birth-Death Process
- Author
-
Birgit Sollie and Michel Mandjes
- Subjects
Statistics and Probability ,Mathematical optimization ,General Mathematics ,0211 other engineering and technologies ,Markov process ,Context (language use) ,02 engineering and technology ,01 natural sciences ,010104 statistics & probability ,symbols.namesake ,SDG 3 - Good Health and Well-being ,0101 mathematics ,Mathematics ,021103 operations research ,Series (mathematics) ,Markov chain ,Model selection ,Quasi birth-death processes ,Maximum likelihood estimation ,Uniformization (probability theory) ,Quasi-birth–death process ,symbols ,Matrix exponential ,Time-dependent probabilities ,Erlang distribution - Abstract
This paper considers a continuous-time quasi birth-death (qbd) process, which informally can be seen as a birth-death process of which the parameters are modulated by an external continuous-time Markov chain. The aim is to numerically approximate the time-dependent distribution of the resulting bivariate Markov process in an accurate and efficient way. An approach based on the Erlangization principle is proposed and formally justified. Its performance is investigated and compared with two existing approaches: one based on numerical evaluation of the matrix exponential underlying the qbd process, and one based on the uniformization technique. It is shown that in many settings the approach based on Erlangization is faster than the other approaches, while still being highly accurate. In the last part of the paper, we demonstrate the use of the developed technique in the context of the evaluation of the likelihood pertaining to a time series, which can then be optimized over its parameters to obtain the maximum likelihood estimator. More specifically, through a series of examples with simulated and real-life data, we show how it can be deployed in model selection problems that involve the choice between a qbd and its non-modulated counterpart.
- Published
- 2022
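One of the benchmarks in the abstract above, the uniformization technique, is simple enough to sketch: for a CTMC with generator Q, pick a rate Lambda at least as large as every exit rate, set P = I + Q/Lambda, and weight the powers of P by Poisson(Lambda*t) probabilities. The two-state generator below is illustrative, not from the paper.

```python
# Hedged sketch of uniformization for the transient distribution of a CTMC.
import math

def transient_dist(Q, p0, t, terms=200):
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n))          # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    dist = [0.0] * n
    v = list(p0)                  # holds p0 * P^k, built up incrementally
    w = math.exp(-lam * t)        # Poisson(lam*t) weight, k = 0
    for k in range(terms):
        for i in range(n):
            dist[i] += w * v[i]
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
        w *= lam * t / (k + 1)    # advance the Poisson weight to k + 1
    return dist

Q = [[-2.0, 2.0], [1.0, -1.0]]    # rates 0->1 and 1->0
print(transient_dist(Q, [1.0, 0.0], 10.0))  # near stationary [1/3, 2/3]
```

By t = 10 the chain has essentially mixed, so the result is close to the stationary distribution [1/3, 2/3].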
26. Combined Parametric and Worst Case Circuit Analysis via Taylor Models
- Author
-
Riccardo Trinchero, Paolo Manfredi, Igor Simone Stievano, and Tongyu Ding
- Subjects
Mathematical optimization ,Technology and Engineering ,02 engineering and technology ,DC ,Upper and lower bounds ,parametric simulation ,Interval arithmetic ,symbols.namesake ,Circuit simulation ,SYSTEMS ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,0202 electrical engineering, electronic engineering, information engineering ,Taylor series ,VARIABILITY ANALYSIS ,Applied mathematics ,POLYNOMIAL CHAOS ,Taylor models ,Electrical and Electronic Engineering ,Remainder ,Circuit simulation, Taylor models, interval analysis, parameterized modeling, parametric simulation, tolerance analysis, uncertainty, worst case analysis ,parameterized modeling ,uncertainty ,Parametric statistics ,Mathematics ,INTERCONNECTS ,tolerance analysis ,020206 networking & telecommunications ,worst case analysis ,Matrix multiplication ,020202 computer hardware & architecture ,MACROMODELS ,SIMULATION ,symbols ,IBCN ,CASE TOLERANCE ANALYSIS ,interval analysis ,Linear circuit ,Network analysis - Abstract
This paper proposes a novel paradigm to generate a parameterized model of the response of linear circuits with the inclusion of worst case bounds. The methodology leverages the so-called Taylor models and represents parameter-dependent responses in terms of a multivariate Taylor polynomial, in conjunction with an interval remainder accounting for the approximation error. The Taylor model representation is propagated from input parameters to circuit responses through a suitable redefinition of the basic operations, such as addition, multiplication or matrix inversion, that are involved in the circuit solution. Specifically, the remainder is propagated in a conservative way based on the theory of interval analysis. While the polynomial part provides an accurate, analytical and parametric representation of the response as a function of the selected design parameters, the complementary information on the remainder error yields a conservative, yet tight, estimation of the worst case bounds. Specific and novel solutions are proposed to implement complex-valued matrix operations and to overcome well-known issues in the state-of-the-art Taylor model theory, like the determination of the upper and lower bound of the multivariate polynomial part. The proposed framework is applied to the frequency-domain analysis of linear circuits. An in-depth discussion of the fundamental theory is complemented by a selection of relevant examples aimed at illustrating the technique and demonstrating its feasibility and strength.
- Published
- 2016
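The interval-remainder propagation underlying Taylor models can be illustrated with plain interval arithmetic: every operation returns an interval guaranteed to contain all possible results, at the cost of conservatism (the dependency problem) that Taylor models mitigate by carrying a polynomial part. The class below is a generic textbook sketch, not the paper's implementation.

```python
# Hedged sketch of conservative interval arithmetic.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

x = Interval(1.0, 2.0)
# Evaluating x*(x - x): the true range is {0}, but naive interval
# arithmetic forgets that both factors are the same variable and
# returns a wider, still conservative, enclosure.
y = x * (x - x)
print(y.lo, y.hi)   # -2.0 2.0
```

The overestimation shown here is exactly what the polynomial part of a Taylor model is designed to reduce, leaving only a small interval remainder.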
27. A Proximal/Gradient Approach for Computing the Nash Equilibrium in Controllable Markov Games
- Author
-
Julio B. Clempner
- Subjects
TheoryofComputation_MISCELLANEOUS ,Computer Science::Computer Science and Game Theory ,Mathematical optimization ,021103 operations research ,Control and Optimization ,Optimization problem ,Markov chain ,Applied Mathematics ,0211 other engineering and technologies ,TheoryofComputation_GENERAL ,010103 numerical & computational mathematics ,02 engineering and technology ,Management Science and Operations Research ,01 natural sciences ,Nonlinear programming ,symbols.namesake ,Rate of convergence ,Nash equilibrium ,symbols ,0101 mathematics ,Algorithmic game theory ,Game theory ,Gradient method ,Mathematics - Abstract
This paper proposes a new algorithm for computing the Nash equilibrium based on an iterative approach combining the proximal and the gradient method for homogeneous, finite, ergodic and controllable Markov chains. We conceptualize the problem as a poly-linear programming problem. Then, we regularize the poly-linear functional, employing a regularization approach over the Lagrange functional, to ensure that the method converges to some of the Nash equilibria of the game. This paper presents two main contributions: The first theoretical result is the proposed iterative approach, which employs both the proximal and the gradient method for computing the Nash equilibria in Markov games. The method transforms the game-theoretic problem into a system of equations, in which each equation is itself an independent optimization problem whose necessary condition for a minimum is computed with a nonlinear programming solver. The iterative approach provides a fast rate of convergence to the Nash equilibrium point. The second, computational, contribution focuses on the analysis of the convergence of the proposed method and computes the rate of convergence of the step-size parameter. These results are interesting within the context of computational and algorithmic game theory. A numerical example illustrates the proposed approach.
- Published
- 2021
28. Consensus Control of a Class of Lipschitz Nonlinear Systems With Input Delay
- Author
-
Chunyan Wang, Zhengtao Ding, Zongyu Zuo, and Zongli Lin
- Subjects
Lyapunov function ,Mathematical optimization ,Multi-agent system ,Consensus control, Chua Circuit, Multi-agent systems, Input delay, Lipschitz nonlinearity ,Stability (learning theory) ,Lipschitz continuity ,Reduction (complexity) ,Nonlinear system ,symbols.namesake ,Control theory ,symbols ,Time domain ,Electrical and Electronic Engineering ,Laplacian matrix ,Mathematics - Abstract
This paper deals with the consensus control design for Lipschitz nonlinear multi-agent systems with input delay. The Artstein-Kwon-Pearson reduction method is employed to deal with the input delay, and the integral term that remains in the transformed system is analyzed by using a Krasovskii functional. Upon exploring certain features of the Laplacian matrix, sufficient conditions for global stability of the consensus control are identified using the Lyapunov method in the time domain. The proposed control only uses relative state information of the agents. The effectiveness of the proposed control design is demonstrated through a simulation study.
- Published
- 2015
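The delay-free core of consensus control using only relative state information, as in the paper, can be sketched with a Euler-discretized simulation of x_i' = sum over neighbors of (x_j - x_i). The Lipschitz nonlinearity and input delay treated in the paper are omitted; the path graph and initial states are illustrative.

```python
# Hedged sketch: delay-free linear consensus on an undirected graph.
# Each agent moves toward its neighbors using relative states only.

def simulate_consensus(x0, edges, dt=0.01, steps=2000):
    x = list(x0)
    for _ in range(steps):
        dx = [0.0] * len(x)
        for i, j in edges:          # each edge contributes to both ends
            dx[i] += x[j] - x[i]
            dx[j] += x[i] - x[j]
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return x

edges = [(0, 1), (1, 2), (2, 3)]    # path graph on four agents
x = simulate_consensus([4.0, 0.0, -2.0, 2.0], edges)
print([round(v, 3) for v in x])     # all agents near the average, 1.0
```

Because the graph is undirected, the state sum is conserved and all agents converge to the average of the initial states.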
29. Demand Functions in Dynamic Games
- Author
-
Alexander M. Tarasyev and Nikolay A. Krasovskii
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,Computer Science::Computer Science and Game Theory ,EQUILIBRIUM TRAJECTORIES ,Differential equation ,Time horizon ,02 engineering and technology ,01 natural sciences ,Constructive ,symbols.namesake ,020901 industrial engineering & automation ,Maximum principle ,OPTIMAL CONTROL ,DEMAND FUNCTION ,EQUILIBRIUM STRATEGY ,DEMAND FUNCTIONS ,0101 mathematics ,Differential (infinitesimal) ,Mathematics ,INFINITE TIME HORIZON ,DIFFERENTIAL GAMES ,HAMILTON - JACOBI EQUATIONS ,010102 general mathematics ,ComputingMilieux_PERSONALCOMPUTING ,CONSTRUCTIVE CONTROL ,Minimax ,GAME THEORY ,DIFFERENTIAL EQUATIONS ,Control and Systems Engineering ,Nash equilibrium ,GUARANTEED STRATEGIES ,symbols ,FUNCTIONS ,OPTIMAL CONTROLS ,Game theory - Abstract
The paper is devoted to the construction of solutions in dynamic bimatrix games. In the model, the payoffs are presented by discounted integrals on the infinite time horizon. The dynamics of the game is subject to a system of A.N. Kolmogorov-type differential equations. The problem of constructing equilibrium trajectories is analyzed in the framework of the minimax approach proposed by N.N. Krasovskii and A.I. Subbotin in differential games theory. The concept of dynamic Nash equilibrium developed by A.F. Kleimenov is applied to design the structure of the game solution. For obtaining constructive control strategies of the players, the maximum principle of L.S. Pontryagin is used in conjunction with the generalized method of characteristics for Hamilton-Jacobi equations. The impact of the discount index on the equilibrium strategies of the game is indicated, and demand functions in the dynamic bimatrix game are constructed. © 2018 The paper is supported by the Russian Foundation for Basic Research (Project No. 18-01-0264a).
- Published
- 2018
30. Stochastic Integration Filter with Improved State Estimate Mean-Square Error Computation
- Author
-
Ondřej Straka, Jindřich Duník, Jindřich Havlík, and Jiří Ajgl
- Subjects
Mathematical optimization ,Mean squared error ,Covariance matrix ,Computation ,Gaussian ,Bayesian probability ,MathematicsofComputing_NUMERICALANALYSIS ,Gaussian filter ,Nonlinear system ,symbols.namesake ,Filter (video) ,symbols ,Applied mathematics ,Mathematics - Abstract
The paper deals with the Bayesian state estimation of nonlinear stochastic dynamic systems. The focus is on the stochastic integration filter, which represents the Gaussian filters with the state and measurement prediction moments calculated by the stochastic integration rule. Besides the value of the integral, the rule also provides the covariance matrix of the integral value error. In the paper, an improved mean-square error of the state estimate is proposed, based on utilization of the integral error covariance matrix. The improved calculation is illustrated using two numerical examples for stochastic integration filters of the third and fifth degrees.
- Published
- 2017
31. Modification of Harris hawks optimization algorithm with random distribution functions for optimum power flow problem
- Author
-
Celaleddin Yeroglu, Ozan Akdag, and Abdullah Ates
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,Exponential distribution ,Rayleigh distribution ,02 engineering and technology ,Function (mathematics) ,AC power ,F-distribution ,Normal distribution ,Electric power system ,symbols.namesake ,020901 industrial engineering & automation ,Artificial Intelligence ,Log-normal distribution ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Software ,Mathematics - Abstract
Harris hawks optimization (HHO) algorithm, which is inspired by the Harris hawks hunting strategy, uses uniform random numbers in the optimization process. This paper proposes modifying HHO with seven types of random distribution functions, namely the chi-square distribution, normal distribution, exponential distribution, Rayleigh distribution, Student's distribution, F distribution, and lognormal distribution, to show their effects on the performance of a stochastic search-based optimization algorithm. The modified HHO algorithm is tested on several benchmark test functions. Results are compared with each other and with classical HHO solutions. Then, the HHO and its modified versions are applied to optimum power flow (OPF), which has been an important problem in power system engineering for decades. The algorithms are applied to the IEEE 30-bus test system to minimize the total fuel cost of the power system, active/reactive power losses, and emission, with comparisons against recent OPF research. Considering the applicability of the proposed approach and the results achieved, one can confirm that it is a viable alternative method for solving OPF problems. One of the important results of the paper on the IEEE 30-bus test system is that the fuel cost is 798.9105 $/h with classical HHO, while it is 798.66 $/h with the HHO modified with the SD function.
- Published
- 2020
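Swapping the uniform draws of a stochastic search step for other distributions, the kind of modification the paper studies, is a one-line change per distribution in Python's stdlib; Rayleigh, which the stdlib lacks, is obtained by inverse-transform sampling. Which draw feeds which HHO update is a paper-specific detail not reproduced here.

```python
# Hedged sketch: alternative random draws for a stochastic search step.
import math
import random

def rayleigh(sigma):
    # Inverse transform: F^{-1}(u) = sigma * sqrt(-2 ln(1 - u))
    return sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))

draws = {
    "uniform":     lambda: random.random(),
    "normal":      lambda: random.gauss(0.0, 1.0),
    "exponential": lambda: random.expovariate(1.0),
    "lognormal":   lambda: random.lognormvariate(0.0, 1.0),
    "chi-square":  lambda: random.gammavariate(1.5, 2.0),  # chi2, 3 dof
    "rayleigh":    lambda: rayleigh(1.0),
}

random.seed(0)
for name, draw in draws.items():
    mean = sum(draw() for _ in range(20000)) / 20000
    print(f"{name:11s} sample mean ~ {mean:.2f}")
```

The sample means land near the theoretical values (e.g. Rayleigh with sigma = 1 has mean sqrt(pi/2) ≈ 1.25), which is a quick sanity check on each generator.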
32. On q-Newton’s method for unconstrained multiobjective optimization problems
- Author
-
Shashi Kant Mishra, Geetanjali Panda, Bhagwat Ram, and Abu Talhamainuddin Ansary
- Subjects
Mathematical optimization ,021103 operations research ,Applied Mathematics ,0211 other engineering and technologies ,Value (computer science) ,02 engineering and technology ,Type (model theory) ,01 natural sciences ,Computational Mathematics ,symbols.namesake ,Multiobjective optimization problem ,0103 physical sciences ,Theory of computation ,symbols ,Benchmark (computing) ,Descent direction ,010301 acoustics ,Newton's method ,Quantum ,Mathematics - Abstract
In this paper, we present a so-called q-Newton-type descent direction method for solving unconstrained multiobjective optimization problems. The algorithm presented in this paper is implemented by applying an independent parameter q (quantum) in an Armijo-like rule to compute the step length, which guarantees that the value of the objective function decreases at every iteration. The search gradually shifts from global in the beginning to local as the algorithm converges, due to the q-gradient. The algorithm is tested on 41 benchmark functions, unimodal and multi-modal, with 1, 2, 3, 4, 5, 10 and 50 dimensions. The performance of the proposed method is confirmed by comparison with three existing schemes.
- Published
- 2020
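The classical Armijo backtracking rule that the paper adapts (its variant uses a q-gradient and an independent parameter q) guarantees a sufficient decrease at every iteration. A minimal single-objective sketch with an illustrative quadratic, not the paper's q-Newton method:

```python
# Hedged sketch: Armijo backtracking line search for gradient descent.

def armijo_step(f, grad, x, c=1e-4, shrink=0.5, t0=1.0):
    """Return t with f(x - t*g) <= f(x) - c*t*||g||^2 (sufficient decrease)."""
    g = grad(x)
    gg = sum(gi * gi for gi in g)
    t = t0
    while f([xi - t * gi for xi, gi in zip(x, g)]) > f(x) - c * t * gg:
        t *= shrink       # backtrack until the Armijo condition holds
    return t

def descend(f, grad, x, iters=100):
    for _ in range(iters):
        t = armijo_step(f, grad, x)
        g = grad(x)
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * x[1] ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 20.0 * x[1]]
x = descend(f, grad, [5.0, 5.0])
print([round(v, 4) for v in x])   # near the minimizer [1, 0]
```

Because the accepted step always satisfies the sufficient-decrease condition, the objective value is monotonically nonincreasing, which is the property the abstract emphasizes.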
33. Sequential characterizations of approximate solutions in convex vector optimization problems with set-valued maps
- Author
-
Tamaki Tanaka, Rabian Wangkeeree, and Nithirat Sisarat
- Subjects
Mathematical optimization ,021103 operations research ,Control and Optimization ,Applied Mathematics ,Mathematics::Optimization and Control ,0211 other engineering and technologies ,Regular polygon ,02 engineering and technology ,Management Science and Operations Research ,Marginal function ,Computer Science Applications ,Set (abstract data type) ,Constraint (information theory) ,symbols.namesake ,Pareto optimal ,Vector optimization ,Lagrange multiplier ,symbols ,Mathematics - Abstract
This paper deals with a convex vector optimization problem with set-valued maps. In the absence of constraint qualifications, it provides, by the scalarization theorem, sequential Lagrange multiplier conditions characterizing approximate weak Pareto optimal solutions for the problem in terms of the approximate subdifferentials of the marginal function associated with the corresponding set-valued maps. The paper also shows that this result yields the approximate Lagrange multiplier condition for the problem under a new constraint qualification which is weaker than the Slater-type constraint qualification. Illustrative examples are provided to discuss the significance of the sequential conditions.
- Published
- 2019
34. A bi-objective optimization approach to reducing uncertainty in pipeline erosion predictions
- Author
-
Haijing Gao, Hariprasad J. Subramani, Selen Cremaschi, and Wei Dai
- Subjects
Hyperparameter ,Mathematical optimization ,Mean squared error ,020209 energy ,General Chemical Engineering ,Pipeline (computing) ,02 engineering and technology ,Confidence interval ,Marginal likelihood ,Computer Science Applications ,symbols.namesake ,020401 chemical engineering ,Kernel (statistics) ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,Minification ,0204 chemical engineering ,Gaussian process ,Mathematics - Abstract
Confidence in erosion model predictions is crucial for their effective use in design and operation of pipelines in upstream oil and gas industry. Accurate and precise estimates of the model discrepancy would increase the confidence in these predictions. We developed a Gaussian process (GP) model based framework to estimate erosion model discrepancy and its confidence interval. GP modeling, as a kernel-based approach, relies on the proper selection of hyperparameters. They are generally determined using the maximum marginal likelihood. Here, we present a bi-objective optimization approach, which uses minimization of mean squared error (MSE) and prediction variance (VAR) for training GP models. For this application, GP models trained using bi-objective optimization yielded lower MSE and VAR values than the ones trained using the maximum marginal likelihood. This paper is an extended version of a conference paper (Wei et al., 2018) presented at the 13th International Symposium on Process Systems Engineering.
- Published
- 2019
35. Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
- Author
-
Puya Latafat, Panagiotis Patrinos, and Andreas Themelis
- Subjects
Lyapunov function ,Mathematical optimization ,General Mathematics ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,90C06, 90C25, 90C26, 49J52, 49J53 ,01 natural sciences ,Convexity ,Separable space ,symbols.namesake ,Convergence (routing) ,FOS: Mathematics ,Nonsmooth nonconvex optimization ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics ,021103 operations research ,Block-coordinate updates ,Function (mathematics) ,Rate of convergence ,Optimization and Control (math.OC) ,KL inequality ,symbols ,Proximal Gradient Methods ,Minification ,Forward-backward envelope ,Software - Abstract
This paper analyzes block-coordinate proximal gradient methods for minimizing the sum of a separable smooth function and a (nonseparable) nonsmooth function, both of which are allowed to be nonconvex. The main tool in our analysis is the forward-backward envelope (FBE), which serves as a particularly suitable continuous and real-valued Lyapunov function. Global and linear convergence results are established when the cost function satisfies the Kurdyka-\L ojasiewicz property without imposing convexity requirements on the smooth function. Two prominent special cases of the investigated setting are regularized finite sum minimization and the sharing problem; in particular, an immediate byproduct of our analysis leads to novel convergence results and rates for the popular Finito/MISO algorithm in the nonsmooth and nonconvex setting with very general sampling strategies.
- Published
- 2021
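The basic forward-backward (proximal gradient) iteration that the block-coordinate analysis above generalizes is easy to sketch on a tiny separable lasso-type problem, where the prox of the l1 term is coordinate-wise soft-thresholding. Problem data below are illustrative.

```python
# Hedged sketch: forward-backward splitting for
#   min_x  sum_i 0.5*(x_i - a_i)^2 + lam*|x_i|

def soft_threshold(v, t):
    """prox of t*|.|: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def prox_grad(a, lam, step=0.5, iters=200):
    x = [0.0] * len(a)
    for _ in range(iters):
        # forward (gradient) step on the smooth part, then backward (prox) step
        x = [soft_threshold(xi - step * (xi - ai), step * lam)
             for xi, ai in zip(x, a)]
    return x

x = prox_grad([3.0, 0.2, -1.0], lam=0.5)
print(x)   # converges to soft_threshold(a_i, lam): [2.5, 0.0, -0.5]
```

For this separable problem the fixed point is the closed-form soft-thresholding of a, which makes the sketch easy to verify.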
36. Selection of Tightened-Normal-Tightened sampling scheme under the implications of intervened Poisson distribution
- Author
-
Pradeepa Veerakumari and Azarudheen Shahabudheen
- Subjects
Statistics and Probability ,Scheme (programming language) ,Sampling scheme ,Mathematical optimization ,Sampling (statistics) ,Management Science and Operations Research ,Poisson distribution ,symbols.namesake ,Modeling and Simulation ,symbols ,Table (database) ,Statistics, Probability and Uncertainty ,computer ,Selection (genetic algorithm) ,computer.programming_language ,Mathematics - Abstract
The tightened-normal-tightened (TNT) sampling scheme is one of the most frequently used sampling schemes for making decisions about finished product lots by examining certain samples from the lots. A TNT sampling scheme includes two attribute sampling plans, one for tightened inspection and the other for normal inspection, along with switching rules. This paper introduces a TNT procedure incorporating two single sampling plans (SSP) under the conditions of the intervened Poisson distribution (IPD), for lots which may be subject to some intervention during the production process. The paper also assesses the performance of the proposed scheme through its operating characteristic curves. In addition, a unity value table is provided for certain parameters of specified producer's risk and consumer's risk under shop floor conditions. Further, the efficiency of the proposed TNT scheme over the individual SSP under the conditions of IPD is demonstrated with illustrations.
- Published
- 2019
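The operating characteristic computation that underlies plans like those combined in a TNT scheme can be sketched with the ordinary Poisson model: a lot is accepted if a sample of size n yields at most c defectives. The intervened-Poisson refinement of the paper would replace the Poisson pmf below; the n, c and p values are illustrative.

```python
# Hedged sketch: OC curve of a single sampling plan under the Poisson model.
import math

def accept_prob(n, c, p):
    """P(accept) = P(D <= c) for D ~ Poisson(n*p)."""
    m = n * p
    return sum(math.exp(-m) * m ** d / math.factorial(d) for d in range(c + 1))

# Sketch of an OC curve for n = 50, c = 2:
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"p = {p:.2f}  Pa = {accept_prob(50, 2, p):.3f}")
```

As the incoming quality p worsens, the acceptance probability falls off, which is exactly the trade-off the producer's and consumer's risks parameterize.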
37. Two-phase semi-Lagrangian relaxation for solving the uncapacitated distribution centers location problem for B2C E-commerce
- Author
-
Ziying Zhang, Cesar Beltran-Royo, Huizhen Zhang, and Bo Wang
- Subjects
Mathematical optimization ,021103 operations research ,Control and Optimization ,business.industry ,Property (programming) ,Applied Mathematics ,0211 other engineering and technologies ,Phase (waves) ,010103 numerical & computational mathematics ,02 engineering and technology ,E-commerce ,Solver ,01 natural sciences ,Computational Mathematics ,symbols.namesake ,Distribution (mathematics) ,Lagrangian relaxation ,symbols ,Relaxation (approximation) ,0101 mathematics ,business ,Integer programming ,Mathematics - Abstract
This paper develops a mixed integer programming model for determining uncapacitated distribution centers location (UDCL) for B2C E-commerce. Based on the characteristics of the distribution system of B2C E-commerce firms, the impact of the supply cost of multi-commodity is considered in this model. Combining the properties of the semi-Lagrangian relaxation (SLR) dual problem in the UDCL case, a two-phase SLR algorithm with good convergence properties is then developed for solving the UDCL problem. For contrastive analysis, computational experiments on 15 UDCL instances are performed with the mixed integer programming solver CPLEX and with the approach obtained by combining the two-phase SLR algorithm with CPLEX. The numerical results show that the two-phase SLR algorithm not only improves the solving speed of the CPLEX solver, but also provides better results in reasonable time for most instances.
- Published
- 2019
38. Estimation and inference with a (nearly) singular Jacobian
- Author
-
Adam McCloskey and Sukjin Han
- Subjects
Mathematical optimization ,Economics and Econometrics ,Rank (linear algebra) ,deficient rank Jacobian ,Inference ,Parameter space ,Wald test ,weak identification ,symbols.namesake ,Extremum estimator ,0502 economics and business ,ddc:330 ,ECON Econometrics ,Applied mathematics ,C15 ,underidentification ,Reparameterization ,050207 economics ,C12 ,050205 econometrics ,Mathematics ,Statistical hypothesis testing ,extremum estimators ,05 social sciences ,asymptotic size ,subvector inference ,deficient rank Jacobian ,Identification (information) ,identification ,underidentification ,weak identification ,Jacobian matrix and determinant ,uniform inference ,symbols ,identification ,nonlinear models - Abstract
This paper develops extremum estimation and inference results for nonlinear models with very general forms of potential identification failure when the source of this identification failure is known. We examine models that may have a general deficient rank Jacobian in certain parts of the parameter space. When identification fails in one of these models, it becomes underidentified and the identification status of individual parameters is not generally straightforward to characterize. We provide a systematic reparameterization procedure that leads to a reparametrized model with straightforward identification status. Using this reparameterization, we determine the asymptotic behavior of standard extremum estimators and Wald statistics under a comprehensive class of parameter sequences characterizing the strength of identification of the model parameters, ranging from nonidentification to strong identification. Using the asymptotic results, we propose hypothesis testing methods that make use of a standard Wald statistic and data‐dependent critical values, leading to tests with correct asymptotic size regardless of identification strength and good power properties. Importantly, this allows one to directly conduct uniform inference on low‐dimensional functions of the model parameters, including one‐dimensional subvectors. The paper illustrates these results in three examples: a sample selection model, a triangular threshold crossing model, and a collective model for household expenditures.
- Published
- 2019
39. Distributed constrained optimization via continuous-time mirror design
- Author
-
Wei Ni and Rui Sheng
- Subjects
Lyapunov function ,0209 industrial biotechnology ,Mathematical optimization ,Stability (learning theory) ,MathematicsofComputing_NUMERICALANALYSIS ,02 engineering and technology ,Multiagent ,symbols.namesake ,020901 industrial engineering & automation ,0202 electrical engineering, electronic engineering, information engineering ,Distributed convex optimization ,Projection (set theory) ,Constrained optimization ,Mathematics ,Algebra and Number Theory ,Partial differential equation ,Applied Mathematics ,lcsh:Mathematics ,020206 networking & telecommunications ,lcsh:QA1-939 ,Nonlinear system ,Convex optimization ,symbols ,Mirror descent ,Gradient method ,Analysis - Abstract
Recently, distributed convex optimization using a multiagent system has received much attention from researchers. This problem is frequently approached by combining the consensus algorithms in the multiagent literature with the gradient algorithms in the convex optimization literature. Compared with unconstrained distributed optimization, the constrained case is more challenging and is usually tackled by the projected gradient method. However, the projected gradient algorithm involves projection nonlinearity and is thus hard to analyze. To avoid gradient projection, in this paper we present a novel distributed convex optimization algorithm in continuous time by using mirror design. The resulting optimization dynamics is smooth, without gradient projection, and is designed in a primal-dual framework, where the primal and dual dynamics are aided by the mirror descent and the mirror ascent, respectively. The merit of the mirror design in our paper is that it avoids gradient projection in the design of the optimization dynamics and removes the difficulty of analyzing projection nonlinearity. Furthermore, the mirror-based primal-dual optimization dynamics facilitates more convenient construction of Lyapunov functions in the stability analysis.
- Published
- 2018
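How mirror design sidesteps projection can be seen in the classical discrete-time special case: entropic mirror descent on the probability simplex (exponentiated gradient), where the iterate stays feasible by construction. This is a textbook sketch, not the paper's continuous-time primal-dual dynamics; the quadratic objective is illustrative.

```python
# Hedged sketch: entropic mirror descent (exponentiated gradient) on the
# simplex.  The multiplicative update plus normalization replaces an
# explicit projection step.
import math

def mirror_descent(grad, x0, step=0.1, iters=500):
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        w = [xi * math.exp(-step * gi) for xi, gi in zip(x, g)]
        z = sum(w)
        x = [wi / z for wi in w]       # normalization is the mirror map back
    return x

# min sum_i c_i * x_i^2 over the simplex; grad_i = 2 * c_i * x_i
c = [1.0, 2.0, 4.0]
x = mirror_descent(lambda x: [2.0 * ci * xi for ci, xi in zip(c, x)],
                   [1 / 3, 1 / 3, 1 / 3])
print([round(v, 3) for v in x])   # optimum has x_i proportional to 1/c_i
```

At the optimum the weighted gradients 2*c_i*x_i equalize, giving x proportional to (1, 1/2, 1/4), i.e. approximately (0.571, 0.286, 0.143); the iterate never leaves the simplex.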
40. Beyond Relaxation and Newton–Raphson: Solving AC OPF for Multi-Phase Systems With Renewables
- Author
-
Ahmed S. Zamzam, Nicholas D. Sidiropoulos, and Emiliano Dall'Anese
- Subjects
Quadratic growth ,Mathematical optimization ,General Computer Science ,020209 energy ,Approximation algorithm ,02 engineering and technology ,Transmission system ,AC power ,symbols.namesake ,Quadratic equation ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,Voltage regulation ,Relaxation (approximation) ,Newton's method ,Mathematics - Abstract
This paper focuses on the AC Optimal Power Flow (OPF) problem for multi-phase systems. Particular emphasis is given to systems with high integration of renewables, where adjustments of the real and reactive output powers from renewable sources of energy are necessary in order to enforce voltage regulation. The AC OPF problem is known to be nonconvex (and, in fact, NP-hard). Convex relaxation techniques have been recently explored to solve the OPF task with reduced computational burden; however, sufficient conditions for tightness of these relaxations are only available for restricted classes of system topologies and problem setups. Identifying feasible power-flow solutions remains hard in more general problem formulations, especially in unbalanced multi-phase systems with renewables. To identify feasible and optimal AC OPF solutions in challenging scenarios where existing methods may fail, this paper leverages the Feasible Point Pursuit - Successive Convex Approximation algorithm—a powerful approach for general nonconvex quadratically constrained quadratic programs. The merits of the approach are illustrated using single- and multi-phase distribution networks with renewables, as well as several transmission systems.
- Published
- 2018
41. On inexact ADMMs with relative error criteria
- Author
-
Jiaxin Xie
- Subjects
Mathematical optimization ,021103 operations research ,Control and Optimization ,Karush–Kuhn–Tucker conditions ,Applied Mathematics ,0211 other engineering and technologies ,Solution set ,010103 numerical & computational mathematics ,02 engineering and technology ,Variation (game tree) ,01 natural sciences ,Image (mathematics) ,Computational Mathematics ,symbols.namesake ,Approximation error ,Error tolerance ,Lagrange multiplier ,Convergence (routing) ,symbols ,0101 mathematics ,Mathematics - Abstract
In this paper, we develop two inexact alternating direction methods of multipliers (ADMMs) with relative error criteria, for which only a few parameters are needed to control the error tolerance. In many practical applications, numerical performance improves when a larger step-length is used. Hence, in this paper we also seek a larger step-length for updating the Lagrangian multiplier to gain numerical efficiency. Specifically, if only one subproblem in the classic ADMM is allowed to be solved inexactly under a certain relative error criterion, then a larger step-length can be used to update the Lagrangian multiplier. Convergence analysis of the proposed algorithms is established under the assumption that the solution set of the KKT system of the problem is nonempty. Numerical experiments on total variation (TV)-based image denoising and analysis sparse recovery problems demonstrate the effectiveness of the proposed methods and the advantage of taking a larger step-length.
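To illustrate updating the multiplier with an enlarged step-length, here is a sketch of exact ADMM on a lasso problem with synthetic data (not the paper's inexact TV-denoising setup); the classical convergence range γ ∈ (0, (1+√5)/2) for the dual step is assumed.

```python
import numpy as np

# ADMM for lasso: min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z,
# with the multiplier updated using an enlarged step-length gamma > 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true                               # noise-free synthetic data

lam, rho, gamma = 0.1, 1.0, 1.6              # gamma < (1 + sqrt(5)) / 2
x = np.zeros(10); z = np.zeros(10); u = np.zeros(10)
AtA_rho = A.T @ A + rho * np.eye(10)
Atb = A.T @ b
for _ in range(500):
    x = np.linalg.solve(AtA_rho, Atb + rho * (z - u))        # x-subproblem
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
    u = u + gamma * (x - z)                                  # enlarged dual step
```

With γ = 1.6 the dual variable takes larger steps than the standard γ = 1 update, which is the kind of numerical-efficiency gain the abstract refers to.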
- Published
- 2018
42. On multilevel RBF collocation to solve nonlinear PDEs arising from endogenous stochastic volatility models
- Author
-
Maryam Vahid Dastgerdi, Ali Foroush Bastani, and Abolfazl Mighani
- Subjects
Numerical Analysis ,Mathematical optimization ,Stochastic volatility ,Applied Mathematics ,Mathematical finance ,010103 numerical & computational mathematics ,01 natural sciences ,010101 applied mathematics ,symbols.namesake ,Nonlinear system ,Rate of convergence ,Elliptic partial differential equation ,Modeling and Simulation ,symbols ,Radial basis function ,Boundary value problem ,0101 mathematics ,Newton's method ,Mathematics - Abstract
The main aim of this paper is the analytical and numerical study of a time-dependent second-order nonlinear partial differential equation (PDE) arising from the endogenous stochastic volatility model introduced in [Bensoussan, A., Crouhy, M. and Galai, D., Stochastic equity volatility related to the leverage effect (I): equity volatility behavior. Applied Mathematical Finance, 1, 63–85, 1994]. As a first step, we derive a consistent set of initial and boundary conditions to complement the PDE when the firm is financed by equity and debt. We then propose a Newton-based iteration scheme for nonlinear parabolic PDEs, extending a method for solving elliptic partial differential equations introduced in [Fasshauer, G. E., Newton iteration with multiquadrics for the solution of nonlinear PDEs. Computers and Mathematics with Applications, 43, 423–438, 2002]. The scheme is based on multilevel collocation using radial basis functions (RBFs) to solve the locally linearized elliptic PDEs obtained at each level of the Newton iteration. We show the effectiveness of the resulting framework by solving a prototypical example from the field and compare the results with those obtained from three different techniques: (1) a finite difference discretization; (2) a naive RBF collocation; and (3) a benchmark approximation, introduced for the first time in this paper. The numerical results confirm the robustness, higher convergence rate and good stability properties of the proposed scheme compared to the alternatives. We also comment on some possible research directions in this field.
- Published
- 2018
43. Moment matrices in conditional heteroskedastic models under elliptical distributions with applications in AR-ARCH models
- Author
-
Shuangzhe Liu, Chris C. Heyde, and Wing-Keung Wong
- Subjects
Statistics and Probability ,Hessian matrix ,Heteroscedasticity ,Mathematical optimization ,Maximum likelihood ,Arch models ,Moment (mathematics) ,symbols.namesake ,symbols ,Applied mathematics ,Statistics, Probability and Uncertainty ,Elliptical distribution ,Newton's method ,Mathematics - Abstract
It is well known that moment matrices play a very important role in econometrics and statistics. Liu and Heyde (Stat Pap 49:455–469, 2008) give exact expressions for two moment matrices, including the Hessian, for ARCH models under elliptical distributions. In this paper, we extend the theory by establishing two additional moment matrices for conditional heteroskedastic models under elliptical distributions. The moment matrices established in this paper facilitate maximum likelihood estimation via algorithms such as the scoring method. We illustrate their applicability by using them to estimate an AR-ARCH model under an elliptical distribution.
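To illustrate how an expected information (moment) matrix drives the scoring method, here is a generic Fisher-scoring sketch for logistic regression with simulated data (not the paper's AR-ARCH setting, where the moment matrices are those derived under elliptical distributions).

```python
import numpy as np

# Simulate a logistic regression and fit it by Fisher scoring:
# beta <- beta + I(beta)^{-1} * score(beta).
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
beta_true = np.array([0.5, -1.0])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = rng.binomial(1, p)

beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    score = X.T @ (y - mu)                 # gradient of the log-likelihood
    W = mu * (1.0 - mu)
    info = X.T @ (W[:, None] * X)          # expected information matrix
    beta = beta + np.linalg.solve(info, score)
```

The scoring step replaces the observed Hessian with the expected information matrix; exact expressions for such matrices, as in the paper, make this iteration implementable for more complex models.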
- Published
- 2009
44. Wavelet-Based Multiscale Anisotropic Diffusion With Adaptive Statistical Analysis for Image Restoration
- Author
-
Junmei Zhong and Huifang Sun
- Subjects
Mathematical optimization ,Anisotropic diffusion ,Noise reduction ,Gaussian blur ,Wavelet transform ,Scale space ,symbols.namesake ,Noise ,Computer Science::Computer Vision and Pattern Recognition ,symbols ,Electrical and Electronic Engineering ,Algorithm ,Smoothing ,Image restoration ,Mathematics - Abstract
Anisotropic diffusion techniques are generally effective at preserving image edges while reducing noise. However, they are much less effective at denoising images corrupted by a high level of noise, mainly because the partial differential equation (PDE) lacks a reliable edge-stopping criterion. In this paper, a new algorithm is developed to tackle this problem. The main contribution of this paper is the construction of a new regularization method for the PDE using the over-complete dyadic wavelet transform (DWT). It proposes to perform anisotropic diffusion in the more stationary DWT domain rather than directly in the raw noisy image domain. Since noise in the DWT domain tends to decrease as the scale increases, at each scale it has less influence on the PDE than in the raw noisy image domain. As a result, the edge-stopping criterion and the other partial-derivative measurements in the PDE become more reliable, and there is no need for Gaussian smoothing or any other pre-smoothing operation. Experimental results show that the proposed algorithm can significantly reduce noise while preserving image edges.
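For reference, a minimal Perona–Malik-style anisotropic diffusion step in the raw image domain, i.e. the baseline that the paper improves on by moving the diffusion into the wavelet domain (the conductance parameter, time step, and iteration count are illustrative choices).

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Isotropic-noise smoothing with an edge-stopping conductance g(|∇u|)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        # finite differences toward the four nearest neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # diffuse weakly across large gradients (edges), strongly elsewhere
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Large gradients make g(·) nearly zero, so a sharp edge is essentially untouched while low-amplitude noise is diffused away; the abstract's point is that this criterion becomes unreliable at high noise levels, motivating the wavelet-domain reformulation.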
- Published
- 2008
45. Reduction in Sampling-Point Numbers for 2-D Discrete Fourier Transform Used in Harmonic Balance Method
- Author
-
T. Hori
- Subjects
Mathematical optimization ,Discrete-time Fourier transform ,Non-uniform discrete Fourier transform ,Short-time Fourier transform ,Fractional Fourier transform ,symbols.namesake ,Discrete Fourier transform (general) ,Fourier transform ,Discrete sine transform ,symbols ,Electrical and Electronic Engineering ,Harmonic wavelet transform ,Algorithm ,Mathematics - Abstract
This paper discusses how to reduce the number of sampling points needed to obtain the 2-D discrete Fourier transform used in the harmonic balance method. A method of embedding a 2-D Fourier transform into a 1-D one has already been proposed. This paper proposes a method of reducing the number of sampling points in a 1-D Fourier transform by using bandpass sampling. By combining the previously proposed method with the present one, a 2-D Fourier transform for the harmonic balance method can be embedded into a 1-D transform with a reduced number of sampling points. The extension of these methods to higher-dimensional cases is also briefly discussed.
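One classical way to embed a 2-D DFT into a single 1-D DFT, applicable when the two transform sizes are coprime, uses the Good–Thomas (prime-factor) index maps; this is a generic sketch and not necessarily the embedding of the previously proposed method the abstract refers to.

```python
import numpy as np
from math import gcd

def dft2_via_1d(x):
    """2-D DFT of x (shape N1 x N2, gcd(N1, N2) = 1) via one 1-D DFT."""
    N1, N2 = x.shape
    assert gcd(N1, N2) == 1
    N = N1 * N2
    n1, n2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
    # Input permutation (Ruritanian map): n = (N2*n1 + N1*n2) mod N
    y = np.empty(N, dtype=complex)
    y[(N2 * n1 + N1 * n2) % N] = x
    Y = np.fft.fft(y)                      # a single 1-D DFT of length N1*N2
    # Output map (CRT): k = k1 (mod N1) and k = k2 (mod N2)
    t1 = pow(N2, -1, N1)                   # modular inverses (Python 3.8+)
    t2 = pow(N1, -1, N2)
    k = (n1 * N2 * t1 + n2 * N1 * t2) % N
    return Y[k]
```

The permutations cost only bookkeeping, so the 2-D spectrum is obtained from one 1-D transform; a bandpass-sampling reduction, as in the paper, would then shrink the length of that 1-D transform.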
- Published
- 2008
46. On generalized semi-Pareto and semi-Burr distributions and random coefficient minification processes
- Author
-
K. Jayakumar, R. P. Gupta, and D. Michele Cifarelli
- Subjects
TheoryofComputation_MISCELLANEOUS ,Statistics and Probability ,Computer Science::Computer Science and Game Theory ,Mathematical optimization ,Pareto interpolation ,Burr distribution ,Pareto principle ,Characterizations ,Pareto distribution ,symbols.namesake ,Heavy-tailed distribution ,symbols ,Generalized integer gamma distribution ,Applied mathematics ,Lomax distribution ,ComputingMethodologies_GENERAL ,Burr processes ,Statistics, Probability and Uncertainty ,Inverse distribution ,Mathematics - Abstract
A characterization of Pareto type III distribution is obtained. As a generalization of the Pareto distribution, a new class of distributions called the generalized Pareto distributions is introduced and a characterization of this class is obtained. We introduce a new class of autoregressive models with semi-Pareto marginals and study its properties. As a generalization of semi-Pareto distribution, we introduce semi-Burr distribution and develop a random coefficient autoregressive model with semi-Burr marginal distributions.
- Published
- 2008
47. A new design approach for solving linear quadratic nash games of multiparameter singularly perturbed systems
- Author
-
Hiroaki Mukaidani
- Subjects
Mathematical optimization ,Generalized cross-coupled multiparameter algebraic Riccati equations (GCMARE) ,Linear quadratic Nash games ,Optimal control ,symbols.namesake ,Newton's method ,Rate of convergence ,Control theory ,Nash equilibrium ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,Convergence (routing) ,symbols ,Electrical and Electronic Engineering ,Algebraic number ,Multiparameter singularly perturbed systems (MSPS) ,Mathematics ,Nash games - Abstract
In this paper, the linear quadratic Nash games for infinite-horizon nonstandard multiparameter singularly perturbed systems (MSPS) are discussed without the nonsingularity assumption needed for the existing results. The new strategies are obtained by solving the generalized cross-coupled multiparameter algebraic Riccati equations (GCMARE). First, the asymptotic expansions for the GCMARE are newly established. The main result of this paper is that the proposed algorithm, which is based on Newton's method for solving the GCMARE, guarantees quadratic convergence. In fact, the simulation results show that the proposed algorithm succeeds in improving the convergence rate dramatically compared with the previous results. It is also shown that the resulting controller achieves an O(||μ||^(2n)) approximation of the optimal cost.
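The flavor of a Newton iteration for algebraic Riccati equations can be seen in the Kleinman iteration for a standard (uncoupled) continuous-time Riccati equation; the paper's GCMARE are cross-coupled and multiparameter, so this is only a simplified sketch with made-up system matrices.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Newton (Kleinman) iteration for A'P + PA - P B R^{-1} B' P + Q = 0:
# each step solves a linear Lyapunov equation for the next P.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # stable A, so K0 = 0 is stabilizing
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = np.zeros((2, 2))
for _ in range(20):
    K = np.linalg.solve(R, B.T @ P)        # current feedback gain
    Acl = A - B @ K
    # (A - BK)' P+ + P+ (A - BK) = -(Q + K' R K)
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))

# residual of the Riccati equation at the final iterate
res = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
```

From a stabilizing initial gain the iteration converges quadratically, mirroring the quadratic convergence the abstract establishes for the cross-coupled GCMARE case.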
- Published
- 2005
48. Optimal Allocation for Extreme Value Regression Under Time Censoring
- Author
-
Wei Gao, Hon Yiu So, Hon Keung Tony Ng, and Ping Shing Chan
- Subjects
Mathematical optimization ,symbols.namesake ,Censoring (clinical trials) ,Optimal allocation ,symbols ,Estimator ,Regression analysis ,Extreme value theory ,Fisher information ,Regression ,Mathematics ,Accelerated life testing - Abstract
In this paper, we discuss the optimal allocation problem in a multi-level accelerated life testing experiment under time (Type-I) censoring when an extreme-value regression model is used for statistical analysis. We derive the expected Fisher information and the asymptotic variance-covariance matrix of the maximum likelihood estimators. Three optimality criteria are considered and the corresponding optimal allocations are determined. Because the optimal allocations under Type-I censoring depend on the model parameters, the sensitivity of the optimal allocations to mis-specification of the model parameters is studied. A numerical example illustrates the methodologies developed in this paper.
- Published
- 2020
49. Interpreting models of infectious diseases in terms of integral input-to-state stability
- Author
-
Hiroshi Ito
- Subjects
Lyapunov function ,0209 industrial biotechnology ,Mathematical optimization ,Control and Optimization ,Stability (learning theory) ,02 engineering and technology ,Dynamical Systems (math.DS) ,01 natural sciences ,Domain (software engineering) ,symbols.namesake ,020901 industrial engineering & automation ,Development (topology) ,Component (UML) ,FOS: Mathematics ,Mathematics - Dynamical Systems ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics ,Applied Mathematics ,010102 general mathematics ,Control and Systems Engineering ,Optimization and Control (math.OC) ,Control system ,Ordinary differential equation ,Signal Processing ,symbols ,State (computer science) - Abstract
This paper aims to develop a system-theoretic approach to ordinary differential equations that deterministically describe the dynamics of epidemic prevalence. The equations are treated as interconnections in which component systems are connected by signals. The notions of integral input-to-state stability (iISS) and input-to-state stability (ISS) have been effective in addressing nonlinearities globally, without domain restrictions, in the analysis and design of control systems. They provide useful tools for module-based methods that integrate characteristics of component systems. This paper expresses fundamental properties of models of infectious diseases and vaccination in the language of iISS and ISS of components and whole systems. The systematic treatment is expected to facilitate the development of effective schemes for controlling disease spread via non-conventional Lyapunov functions.
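A classical SIR model is the simplest instance of the kind of epidemic ODE discussed here, with the infection and recovery terms acting as the signals interconnecting the compartments (the rates below are assumed values; the paper's iISS/ISS analysis is not reproduced).

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1            # transmission and recovery rates (assumed)

def sir(t, y):
    """Susceptible-Infected-Recovered dynamics on normalized populations."""
    s, i, r = y
    infection = beta * s * i       # signal from S-component to I-component
    recovery = gamma * i           # signal from I-component to R-component
    return [-infection, infection - recovery, recovery]

sol = solve_ivp(sir, (0.0, 400.0), [0.99, 0.01, 0.0], rtol=1e-8, atol=1e-10)
s_inf, i_inf, r_inf = sol.y[:, -1]
```

With these rates the basic reproduction number is β/γ = 3 > 1, so an outbreak occurs before the infected compartment decays to zero; an ISS-style analysis would bound each compartment's response to its incoming interconnection signals.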
- Published
- 2020
50. Research on a Cournot–Bertrand Game Model with Relative Profit Maximization
- Author
-
Yu-Hao Zhang, Yimin Huang, Qiuxiang Li, and Yan-yan Guo
- Subjects
Mathematical optimization ,Computer Science::Computer Science and Game Theory ,021103 operations research ,Multidisciplinary ,General Computer Science ,Article Subject ,Profit maximization ,0211 other engineering and technologies ,Chaotic ,02 engineering and technology ,Product differentiation ,QA75.5-76.95 ,Cournot competition ,01 natural sciences ,Stability (probability) ,symbols.namesake ,Nash equilibrium ,Product (mathematics) ,Bounded function ,Electronic computers. Computer science ,0103 physical sciences ,symbols ,010301 acoustics ,Mathematics - Abstract
This paper considers a Cournot–Bertrand game model based on relative profit maximization with boundedly rational players. The existence and stability of the Nash equilibrium of the dynamic model are investigated. The influence of the degree of product differentiation and of the adjustment speed on the stability of the dynamic system is discussed. Furthermore, some complex properties and the global stability of the dynamic system are explored. The results show that a higher degree of product differentiation enlarges the stable range of the dynamic system, while a higher unit product cost decreases the stable range of price adjustment and increases that of output adjustment; period cycles and aperiodic oscillations (quasi-periodicity and chaos) occur via period-doubling or Neimark–Sacker bifurcations, and the attraction domain shrinks as the adjustment speeds increase. By selecting appropriate control parameters, the chaotic system can be returned to the stable state. This research is of great significance for decision-makers' price and quantity decisions.
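A minimal bounded-rationality (gradient-adjustment) duopoly map illustrates the stability mechanics the abstract discusses; for simplicity this sketch uses a symmetric Cournot quantity game with assumed demand and cost parameters rather than the paper's Cournot–Bertrand model with relative profits. Linearizing at the Nash point gives multipliers 1 - αq* and 1 - 3αq*, so the equilibrium is locally stable for adjustment speeds α < 2/(a - c) and loses stability via period doubling beyond that threshold.

```python
# Linear demand p = a - q1 - q2, unit cost c; profit pi_i = (p - c) * q_i.
# Boundedly rational players adjust output along the marginal-profit gradient:
#   q_i(t+1) = q_i(t) + alpha * q_i(t) * d(pi_i)/d(q_i).
a, c = 10.0, 2.0                  # assumed demand intercept and unit cost
q_star = (a - c) / 3.0            # Cournot-Nash equilibrium quantity

def iterate(alpha, n=2000, q0=(1.0, 1.5)):
    q1, q2 = q0
    for _ in range(n):
        g1 = a - c - 2.0 * q1 - q2          # marginal profit of firm 1
        g2 = a - c - q1 - 2.0 * q2          # marginal profit of firm 2
        q1, q2 = q1 + alpha * q1 * g1, q2 + alpha * q2 * g2  # simultaneous
    return q1, q2

q1, q2 = iterate(alpha=0.1)       # 0.1 < 2 / (a - c) = 0.25: stable regime
```

Raising α past 2/(a - c) makes the multiplier 1 - 3αq* exit the unit circle, producing the period-doubling route to chaos described in the abstract.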
- Published
- 2020