Search Results (13 results)
2. Limit theorems for reflected Ornstein-Uhlenbeck processes
- Abstract
This paper studies one-dimensional Ornstein-Uhlenbeck (OU) processes, with the distinguishing feature that they are reflected at a single boundary (at level 0) or at two boundaries (at levels 0 and d > 0). In the literature they are referred to as reflected OU (ROU) and doubly reflected OU (DROU), respectively. For both cases, we explicitly determine the decay rates of the (transient) probability of reaching a given extreme level. The methodology relies on sample-path large deviations, so that we also identify the associated most likely paths. For DROU, we also consider the 'idleness process' Lt and the 'loss process' Ut, which are the minimal non-decreasing processes that make the OU process remain ≥ 0 and ≤ d, respectively. We derive central limit theorems (CLTs) for Ut and Lt, using techniques from stochastic integration and the martingale CLT. Keywords: Central limit theorems; Large deviations; Ornstein-Uhlenbeck processes; Reflection
- Published
- 2014
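The regulator processes Lt and Ut in the abstract above can be made concrete with a small simulation. This is only an illustrative sketch (Euler discretization with arbitrary parameters), not the paper's method:

```python
import numpy as np

def simulate_drou(gamma=1.0, sigma=1.0, d=1.0, x0=0.5,
                  T=10.0, n=10_000, seed=0):
    """Euler scheme for a doubly reflected OU process on [0, d].

    Returns the path X and the regulator processes L (idleness,
    pushes X up at 0) and U (loss, pushes X down at d)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.empty(n + 1)
    L = np.zeros(n + 1)
    U = np.zeros(n + 1)
    X[0] = x0
    for k in range(n):
        x = X[k] - gamma * X[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        dL = max(0.0, -x)          # minimal push needed to stay >= 0
        dU = max(0.0, x + dL - d)  # minimal push needed to stay <= d
        X[k + 1] = x + dL - dU
        L[k + 1] = L[k] + dL
        U[k + 1] = U[k] + dU
    return X, L, U

X, L, U = simulate_drou()
```

By construction L and U are non-decreasing and increase only when the path sits at the respective boundary, matching the minimality property stated in the abstract.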
4. A survey on performance analysis of warehouse carousel systems
- Abstract
This paper gives an overview of recent research on the performance evaluation and design of carousel systems. We discuss picking strategies for problems involving one carousel, consider the throughput of the system for problems involving two carousels, give an overview of related problems in this area, and present an extensive literature review. Emphasis is given to future research directions in this area.
- Published
- 2010
5. Recent sojourn time results for multilevel processor-sharing scheduling disciplines
- Abstract
Multilevel Processor-Sharing (MLPS) disciplines refer to a family of age-based scheduling disciplines introduced decades ago. A time-discretized version of an MLPS discipline is applied in the scheduler of the traditional UNIX operating system. In recent years, MLPS disciplines have been used to study how packet-level scheduling mechanisms impact the performance perceived at the flow level in the Internet. Inspired by this latter application, many new sojourn time results have been discovered for these disciplines in the context of the M/G/1 queue. The aim of this paper is to give a consistent overview of these new results. In addition, it points out some intriguing open problems for further research.
- Published
- 2008
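As background to the sojourn-time results surveyed above, the two classical M/G/1 baselines that MLPS disciplines are usually compared against can be computed directly. This sketch uses the standard Pollaczek-Khinchine formula for FCFS and the E[S]/(1-ρ) formula for egalitarian processor sharing; the parameter values are arbitrary choices for illustration:

```python
def mg1_fcfs_mean_sojourn(lam, ES, ES2):
    """Pollaczek-Khinchine: mean sojourn time in M/G/1 under FCFS."""
    rho = lam * ES
    assert rho < 1
    return ES + lam * ES2 / (2 * (1 - rho))

def mg1_ps_mean_sojourn(lam, ES):
    """Egalitarian processor sharing: mean sojourn E[S]/(1 - rho),
    insensitive to the service-time distribution beyond its mean."""
    rho = lam * ES
    assert rho < 1
    return ES / (1 - rho)

lam = 0.5
# Exponential service with mean 1: E[S^2] = 2
t_exp = mg1_fcfs_mean_sojourn(lam, 1.0, 2.0)
# More variable service with the same mean 1: E[S^2] = 8
t_var = mg1_fcfs_mean_sojourn(lam, 1.0, 8.0)
t_ps = mg1_ps_mean_sojourn(lam, 1.0)
```

The comparison illustrates why scheduling matters at the flow level: FCFS degrades as service-time variability grows, while PS-type disciplines are insensitive to it.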
6. The Candy model: properties and inference
- Author
-
Marie-Colette van Lieshout and Radu S. Stoica
- Subjects
Statistics and Probability, Markov chain, Stability (learning theory), Inference, Markov process, Markov chain Monte Carlo, Model parameters, Markov model, Econometrics, Applied mathematics, Marked point process, Statistics, Probability and Uncertainty, Mathematics - Abstract
In this paper we study the Candy model, a marked point process introduced by STOICA et al. (2000). We prove Ruelle and local stability, investigate its Markov properties, and discuss how the model may be sampled. Finally, we consider estimation of the model parameters and present a simulation study.
- Published
- 2003
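The abstract above mentions how the model may be sampled. The Candy model itself is a marked point process of line segments, but the same birth-death Metropolis-Hastings machinery can be shown on the simpler, unmarked Strauss process. A minimal sketch with arbitrary parameters, as a stand-in rather than the authors' sampler:

```python
import random

def strauss_bdmh(beta=50.0, gamma=0.5, r=0.1, n_steps=20_000, seed=1):
    """Birth-death Metropolis-Hastings for a Strauss point process on
    the unit square (area 1): density proportional to beta^n * gamma^s(x),
    where s(x) is the number of point pairs closer than r."""
    rng = random.Random(seed)
    pts = []

    def neighbours(p, excluding=None):
        # number of points of the pattern within distance r of p
        return sum(1 for q in pts
                   if q is not excluding
                   and (p[0] - q[0])**2 + (p[1] - q[1])**2 < r * r)

    for _ in range(n_steps):
        if rng.random() < 0.5:                 # propose a birth
            u = (rng.random(), rng.random())
            t = neighbours(u)
            ratio = beta * gamma**t / (len(pts) + 1)
            if rng.random() < min(1.0, ratio):
                pts.append(u)
        elif pts:                              # propose a death
            i = rng.randrange(len(pts))
            p = pts[i]
            t = neighbours(p, excluding=p)
            ratio = len(pts) / (beta * gamma**t)
            if rng.random() < min(1.0, ratio):
                pts.pop(i)
    return pts

pts = strauss_bdmh()
```

The acceptance ratios are the Papangelou conditional intensity times the usual birth/death proposal correction; with gamma < 1 the chain targets an inhibitory pattern, the same qualitative regime in which Ruelle stability holds.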
7. On the relation between cost and service models for general inventory systems
- Author
-
W.H.M. (Henk) Zijm, G.J.J.A.N. (Geert-Jan) van Houtum, Faculty of Behavioural, Management and Social Sciences, and Operations Planning Acc. & Control
- Subjects
Statistics and Probability, Service (business), Generality, IR-58423, Relation (database), Operations research, Management science, Computer science, METIS-206013, Discount points, Service model, Implicit cost, Type of service, Relevant cost, Statistics, Probability and Uncertainty - Abstract
In this paper, we present a systematic overview of possible relations between cost and service models for fairly general single- and multi-stage inventory systems. In particular, we relate various types of penalty costs in pure cost models to equivalent types of service measures in service models. We show how an optimal policy for a service model may be obtained from cost-optimal policies in a related pure cost model. Pure cost models have been studied extensively in the literature; our results suggest that many of their known optimal solutions can be transformed into solutions for service models, which are more appropriate from a practical point of view. A number of examples are discussed to show the generality and the potentially far-reaching consequences of the results.
- Published
- 2000
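The cost-service equivalence described above is easiest to see in the single-stage newsvendor/base-stock setting: a penalty cost p in the cost model corresponds to a critical fractile p/(p+h) in the service model, and conversely a service target α implies an implicit penalty p = hα/(1-α). A sketch under assumed normal demand with arbitrary parameters, not the paper's general construction:

```python
from statistics import NormalDist

def cost_optimal_base_stock(mu, sigma, h, p):
    """Newsvendor/base-stock: a cost model with holding cost h and
    penalty cost p is optimized at the critical fractile p / (p + h)."""
    alpha = p / (p + h)
    return NormalDist(mu, sigma).inv_cdf(alpha), alpha

def implied_penalty(h, alpha):
    """A service model with non-stockout probability target alpha
    corresponds to an implicit penalty cost p = h * alpha / (1 - alpha)."""
    return h * alpha / (1 - alpha)

# cost model -> equivalent service level, then back to the implicit cost
S, alpha = cost_optimal_base_stock(mu=100, sigma=20, h=1.0, p=9.0)
p_back = implied_penalty(h=1.0, alpha=alpha)
```

Going from cost parameters to a service level and back recovers the original penalty, which is the two-way translation the paper generalizes to multi-stage systems.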
8. Bridging the gap between a stationary point process and its Palm distribution
- Subjects
inversion formula, ergodicity, local characterization, Radon-Nikodym derivative - Abstract
In the context of stationary point processes, measurements are usually made from a time point chosen at random or from an occurrence chosen at random. That is, either the stationary distribution P or its Palm distribution P° is the ruling probability measure. In this paper an approach is presented to bridge the gap between these distributions. We consider probability measures which assign exactly the same events zero probability as P° and which have simple relations with P. Relations between P and P° are derived with these intermediate measures as bridges. With the resulting Radon-Nikodym densities, several well-known results can be proved easily, and new results are derived. As a corollary of cross-ergodic theorems, a conditional version of the well-known inversion formula is proved. Several approximations of P° are considered, for instance the local characterization of P° as a limit of conditional probability measures P°_N. The total variation distance between P°_N and P° can be expressed in terms of the P-distribution function of the forward recurrence time.
- Published
- 1994
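The gap between P and P° that the paper bridges is already visible for a renewal process: the interval distribution seen from a random occurrence (Palm) differs from the one seen from a random time instant (stationary, length-biased), and the forward recurrence time governs the difference. An illustrative simulation; the mixture distribution and parameters are arbitrary choices:

```python
import random

def interval_averages(n=200_000, seed=2):
    """I.i.d. renewal intervals drawn from a mixture of Exp(1) and
    Exp(0.25).  The Palm (event-average) mean interval is E[X],
    while the time-average (length-biased) mean seen from a random
    time instant is E[X^2]/E[X]."""
    rng = random.Random(seed)
    xs = [rng.expovariate(1.0) if rng.random() < 0.5 else rng.expovariate(0.25)
          for _ in range(n)]
    palm_mean = sum(xs) / n                       # average over occurrences
    time_mean = sum(x * x for x in xs) / sum(xs)  # length-biased average
    return palm_mean, time_mean

palm_mean, time_mean = interval_averages()
```

Here E[X] = 2.5 but E[X^2]/E[X] = 6.8: sampling at a random time favours long intervals, which is exactly the discrepancy between P and P° in the simplest stationary setting.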
9. An analytical theory of multi-echelon production/distribution systems
- Author
-
L.J.G. (Ludo) Langenhoff, W.H.M. (Henk) Zijm, and Mathematics and Computer Science
- Subjects
Statistics and Probability, Inventory control, Mathematical optimization, Stationary process, Stochastic process, Process (engineering), Order (exchange), Production (economics), Statistics, Probability and Uncertainty, Mathematical economics, Average cost, Mathematics, Supply and demand - Abstract
In this paper, we study inventory control problems arising in multi-echelon production/distribution chains. In these chains, material is delivered by outside suppliers, proceeds through a number of manufacturing stages, and is finally distributed among a number of local warehouses in order to meet market demand. Each stage requires a fixed leadtime; furthermore, we assume a stochastic, stationary end-item demand process. The problem of balancing inventory levels and service degrees can be modelled and analyzed by defining appropriate cost functions. Under an average cost criterion, we study the three most important structures arising in multi-echelon systems: assembly systems, serial systems and distribution systems. For all three systems, it is possible to prove exact decomposition results which reduce complex multi-dimensional control problems to simple one-dimensional problems. In addition, we establish the optimality of base-stock control policies.
- Published
- 1990
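A toy discrete-time simulation conveys the serial-system setting analyzed above: each stage follows a base-stock policy, and shipments to the downstream stage are constrained by upstream on-hand stock. This is a simplified illustration (local installation-stock policies, uniform demand, arbitrary parameters), not the paper's echelon decomposition:

```python
import random
from collections import deque

def serial_base_stock_sim(S1=15, S2=25, L1=1, L2=2,
                          h1=1.0, h2=0.5, b=9.0, T=50_000, seed=3):
    """Two-stage serial system under local base-stock policies.
    Stage 1 serves end-customer demand and reorders from stage 2,
    which reorders from an ample outside supplier; shipments to
    stage 1 wait whenever stage 2 is out of stock.  Returns the
    long-run average holding-plus-backorder cost per period."""
    rng = random.Random(seed)
    inv1 = S1                 # net inventory at stage 1 (negative = backorders)
    on_hand2 = S2
    backlog2 = 0              # stage-1 orders not yet shipped by stage 2
    pipe1 = deque([0] * L1)   # in transit to stage 1
    pipe2 = deque([0] * L2)   # in transit to stage 2
    cost = 0.0
    for _ in range(T):
        inv1 += pipe1.popleft()          # receive arriving shipments
        on_hand2 += pipe2.popleft()
        d = rng.randrange(11)            # demand, uniform on 0..10
        inv1 -= d
        backlog2 += d                    # stage 1 reorders what it sold
        pipe2.append(d)                  # stage 2 reorders likewise
        ship = min(backlog2, on_hand2)   # ship what stage 2 can supply
        on_hand2 -= ship
        backlog2 -= ship
        pipe1.append(ship)
        cost += h1 * max(inv1, 0) + b * max(-inv1, 0) + h2 * on_hand2
    return cost / T

avg_cost = serial_base_stock_sim()
```

The decomposition results in the paper replace exactly this kind of coupled multi-dimensional dynamics by one-dimensional single-stage problems.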
10. Optimal designs for linear mixture models
- Author
-
E.J. Mendieta, R. Doornbos, H.N. Linssen, and Mathematics and Computer Science
- Subjects
Statistics and Probability, Combinatorics, Optimal design, Convex hull, Polyhedron, Variables, Applied mathematics, Subject (documents), Statistics, Probability and Uncertainty, Mixture model, Mathematics - Abstract
In a recent paper, Snee and Marquardt [8] considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large, their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of this paper is to describe a numerical procedure resulting in a design of fixed size N which is approximately D-optimal, and in which the components may be subject to linear constraints (e.g. upper or lower bounds). The proposed method applies more generally to models that are linear in the independent variables and the parameters, whenever the convex hull of the experimental region is a polyhedron with known vertices.
- Published
- 1975
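Fixed-size approximately D-optimal designs of the kind discussed above are typically found by exchange algorithms. The following is a naive point-exchange sketch (not the authors' procedure) on a first-order Scheffé mixture model over a simplex grid:

```python
import numpy as np

def d_optimal_exchange(F, N, n_passes=200, seed=4):
    """Greedy point-exchange search for an approximately D-optimal
    exact design of size N.  F is the candidate model matrix (one row
    per candidate point); design rows may repeat."""
    rng = np.random.default_rng(seed)
    idx = list(rng.choice(len(F), size=N, replace=True))
    best = np.linalg.det(F[idx].T @ F[idx])
    for _ in range(n_passes):
        improved = False
        for i in range(N):                 # try swapping out each design point
            for j in range(len(F)):        # ...for each candidate point
                trial = idx.copy()
                trial[i] = j
                d = np.linalg.det(F[trial].T @ F[trial])
                if d > best + 1e-12:
                    idx, best = trial, d
                    improved = True
        if not improved:
            break
    return idx, best

# Candidates: a grid on the 3-component mixture simplex (x1+x2+x3 = 1);
# first-order Scheffe model E[y] = b1*x1 + b2*x2 + b3*x3.
grid = [(i / 4, j / 4, 1 - i / 4 - j / 4)
        for i in range(5) for j in range(5 - i)]
F = np.array(grid)
idx, det_best = d_optimal_exchange(F, N=6)
```

For this model the search concentrates the six runs at the three simplex vertices (two replicates each), the known D-optimal support for a first-order mixture model.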
11. Algorithmic methods for single server systems with repeated attempts
- Author
-
A.G. (Ton) de Kok and Operations Planning Acc. & Control
- Subjects
Statistics and Probability, Service (business), Computer science, Recursion (computer science), Single server, Queueing system, Single server queue, Exponential function, Join (sigma algebra), Statistics, Probability and Uncertainty, Queue, Computer network - Abstract
In many practical situations customers applying for service and finding the server busy will not join a queue, but make a new attempt to enter service after some time. In this paper we study single server systems with repeated attempts both for infinite-source input and finite-source input where the service times are general and the reattempt times are exponential. Numerically stable recursion schemes are derived by which the time-average and customer-average steady-state probabilities can be effectively computed.
- Published
- 1984
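The infinite-source model with exponential reattempt times described above can be sanity-checked by simulation: since every customer is eventually served, the server's long-run busy fraction must equal λ/μ regardless of the retrial rate. An illustrative continuous-time Markov chain simulation with arbitrary parameters (exponential service, for tractability, rather than the paper's general service times):

```python
import random

def mm1_retrial_busy_fraction(lam=1.0, mu=2.0, theta=1.5,
                              T=100_000.0, seed=5):
    """CTMC simulation of an M/M/1 retrial queue: arrivals finding the
    server busy join an orbit and re-attempt after Exp(theta) times.
    Returns the long-run fraction of time the server is busy."""
    rng = random.Random(seed)
    t, busy, orbit, busy_time = 0.0, False, 0, 0.0
    while t < T:
        rate = lam + (mu if busy else orbit * theta)
        dt = rng.expovariate(rate)
        if busy:
            busy_time += dt
        t += dt
        u = rng.random() * rate
        if u < lam:                  # primary arrival
            if busy:
                orbit += 1           # blocked: join the orbit
            else:
                busy = True
        elif busy:                   # service completion
            busy = False
        else:                        # successful retrial from the orbit
            orbit -= 1
            busy = True
    return busy_time / t

frac = mm1_retrial_busy_fraction()
```

With λ = 1 and μ = 2 the estimate settles near 0.5; the recursion schemes in the paper deliver the full steady-state distribution, not just this utilization check.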
12. Forty years of statistics and operations research in The Netherlands
- Author
-
R. Doornbos and Mathematics and Computer Science
- Subjects
Statistics and Probability, History, Operations research, Statistics, Statistics, Probability and Uncertainty, Period (music) - Abstract
In 1985 we commemorate the fortieth anniversary of the VVS, the Netherlands Society for Statistics and Operations Research. In this paper the contributions made in our country during the post-war period to the theory of quantitative methods and to their applications are reviewed briefly.
- Published
- 1985
13. Principal components, analysis of variance and data structure
- Author
-
John Mandel
- Subjects
Statistics and Probability, One-way analysis of variance, Extension (metaphysics), Multiple correspondence analysis, Principal component analysis, Econometrics, Variance (accounting), Statistics, Probability and Uncertainty, Field (geography), Correspondence analysis, Terminology, Mathematics - Abstract
The relation between principal components and analysis of variance is examined. It is shown that the model underlying the extended analysis of variance developed by GOLLOB and MANDEL is useful also as a model for principal component analysis. The elucidation of the structure of two-factor data using the new analysis of variance model is illustrated by an example taken from thermodynamics.

It has been my good fortune to have spent a full year in close association with Professor HAMAKER at the Technological University of Eindhoven. That year was among the most pleasant and most rewarding of my career. I feel honored to be able to join with Professor HAMAKER's many friends and colleagues in dedicating this issue of Statistica Neerlandica to him.

The method of principal components goes back to ideas proposed by PEARSON as early as 1901 [12] and developed systematically by HOTELLING in 1933 [4]. Since then the method has been applied to numerous sets of data, more particularly in the field of psychology, but also in numerous other areas of research, including the physical sciences [e.g. 1, 2, 5, 7, 8, 10, 13, 14, 15, 16]. In 1968 and 1969 respectively, GOLLOB [3] and MANDEL [8] proposed, independently of each other, an extension of the analysis of variance approach, which GOLLOB called "Fanova", because it combined features of analysis of variance and of factor analysis. This method, too, had been anticipated by some earlier authors [13, 17]. It is immediately apparent that this extension of the analysis of variance involves the same matrix calculations as the method of principal components. The question then arises whether a deeper conceptual relationship exists between the two methods. In this paper this question is examined. The result is not only a positive answer to this question but also a clarification of the method of principal components.
The author believes, as a result of this work, that interpretations of principal component analyses found in the literature are sometimes incorrect. We will attempt to show that such misinterpretations are due, in no small measure, to a particular terminology that has acquired common usage in inferences drawn from principal component analysis.
- Published
- 1972
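The connection drawn above between analysis of variance and principal components can be sketched numerically: fit the additive two-way ANOVA model, then apply a singular value decomposition (principal components) to the residual table, as in the Gollob-Mandel extension. The data below are synthetic and purely illustrative:

```python
import numpy as np

def fanova(Y, k=1):
    """Additive two-way ANOVA fit plus a rank-k SVD of the residual
    (interaction) table -- the shared matrix computation behind the
    extended analysis of variance and principal components."""
    grand = Y.mean()
    row = Y.mean(axis=1, keepdims=True) - grand   # row main effects
    col = Y.mean(axis=0, keepdims=True) - grand   # column main effects
    additive = grand + row + col
    U, s, Vt = np.linalg.svd(Y - additive, full_matrices=False)
    interaction = (U[:, :k] * s[:k]) @ Vt[:k]     # rank-k interaction term
    return additive, interaction, s

# synthetic two-factor table with an exactly multiplicative interaction
a = np.array([1.0, -0.5, 0.0, 2.0])
b = np.array([0.5, 1.5, -1.0])
Y = 10 + a[:, None] + b[None, :] + 0.8 * np.outer(a - a.mean(), b - b.mean())
additive, interaction, s = fanova(Y)
```

Because the interaction here is exactly multiplicative, the first singular component captures it completely and the remaining singular values vanish, which is the structure the Fanova model posits.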
Discovery Service for Jio Institute Digital Library