222 results for "62C99"
Search Results
2. Current philosophical perspectives on drug approval in the real world
- Author
-
Landes Jürgen and Auker-Howlett Daniel J.
- Subjects
drug approval ,causal inference ,randomised controlled trials ,causation ,evidence synthesis ,62d20 ,62r07 ,62a99 ,60a99 ,62c99 ,62f15 ,Mathematics ,QA1-939 ,Probabilities. Mathematical statistics ,QA273-280 - Abstract
The evidence-based medicine approach to causal medical inference is the dominant account among medical methodologists. Competing approaches originating in the philosophy of medicine seek to challenge this account. In order to see how successful these challenges are, we need to assess the performance of all approaches in real world medical inference. One important real world problem all approaches could be applied to is the assessment of drugs for approval by drug regulation agencies. This study assesses the success of the status quo against an empirical, non-systematically obtained body of evidence, and we scrutinise the alternative approaches from the armchair, contemplating how they would fare in the real world. We tentatively conclude that the status quo is regularly not successful at its primary task, as it regularly fails to correctly assess effectiveness and safety, and we suggest that this is due to inherent factors of the “messy real world.” However, while all alternatives hold promise, they are at least as susceptible to the real world issues that beset the status quo. We also make recommendations for changes to current drug approval procedures, identify lacunae to fill in the alternatives, and finally call for continued development both of alternative approaches to causal medical inference and of recommendations for changes to current drug approval procedures.
- Published
- 2024
- Full Text
- View/download PDF
3. Personalized decision making – A conceptual introduction
- Author
-
Mueller Scott and Pearl Judea
- Subjects
causality ,individual treatment effect ,conditional average treatment effect ,pns ,monotonicity ,62c99 ,62d20 ,Mathematics ,QA1-939 ,Probabilities. Mathematical statistics ,QA273-280 - Abstract
Personalized decision making targets the behavior of a specific individual, while population-based decision making concerns a subpopulation resembling that individual. This article clarifies the distinction between the two and explains why the former leads to more informed decisions. We further show that by combining experimental and observational studies, we can obtain valuable information about individual behavior and, consequently, improve decisions over those obtained from experimental studies alone. In particular, we show examples where such a combination discriminates between individuals who can benefit from a treatment and those who cannot – information that would not be revealed by experimental studies alone. We outline areas where this method could be of benefit to both policy makers and individuals involved.
- Published
- 2023
- Full Text
- View/download PDF
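The combination of experimental and observational data described in the abstract above can be made concrete with the classical bounds on the probability of benefit (PNS) for a binary treatment and outcome. The sketch below is only an illustration of that idea, not code from the article; the probabilities and variable names are invented.

```python
# Minimal sketch: bounds on the probability of benefit, PNS = P(Y would be 1 under
# treatment and 0 without it), obtained by combining experimental quantities
# P(y | do(X=x)) with an observational joint distribution of (X, Y). This follows
# the standard Tian-Pearl-style bounds; all numbers below are invented.

def pns_bounds(p_y_do_x, p_y_do_notx, p_xy, p_notx_noty, p_x_noty, p_notx_y):
    """p_y_do_x, p_y_do_notx: experimental P(Y=1 | do(X=1)) and P(Y=1 | do(X=0)).
    p_xy, p_notx_noty, p_x_noty, p_notx_y: observational joint of (X, Y)."""
    p_y = p_xy + p_notx_y                      # observational P(Y=1)
    lower = max(0.0,
                p_y_do_x - p_y_do_notx,
                p_y - p_y_do_notx,
                p_y_do_x - p_y)
    upper = min(p_y_do_x,
                1.0 - p_y_do_notx,
                p_xy + p_notx_noty,
                p_y_do_x - p_y_do_notx + p_x_noty + p_notx_y)
    return lower, upper

if __name__ == "__main__":
    lo, hi = pns_bounds(0.6, 0.3, p_xy=0.25, p_notx_noty=0.35,
                        p_x_noty=0.15, p_notx_y=0.25)
    print(f"P(benefit) lies between {lo:.2f} and {hi:.2f}")   # 0.30 and 0.60
```

When the observational data tighten such bounds, one can distinguish individuals who can benefit from treatment from those who cannot, which is the kind of discrimination the abstract describes.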
4. A metaheuristic for inferring a ranking model based on multiple reference profiles
- Author
-
Khannoussi, Arwa, Olteanu, Alexandru-Liviu, Meyer, Patrick, and Pasdeloup, Bastien
- Published
- 2024
- Full Text
- View/download PDF
5. Decision-theoretic foundations for statistical causality: Response to Pearl
- Author
-
Dawid Philip
- Subjects
augmented dag ,causal inference ,extended conditional independence ,pearlian dag ,62a01 ,62c99 ,Mathematics ,QA1-939 ,Probabilities. Mathematical statistics ,QA273-280 - Abstract
I thank Judea Pearl for his discussion of my paper and respond to the points he raises. In particular, his attachment to unaugmented directed acyclic graphs has led to a misapprehension of my own proposals. I also discuss the possibilities for developing a non-manipulative understanding of causality.
- Published
- 2022
- Full Text
- View/download PDF
6. Causation and decision: On Dawid’s 'Decision theoretic foundation of statistical causality'
- Author
-
Pearl Judea
- Subjects
directed acyclic graphs ,conditional independence ,potential outcome ,ladder of causation ,causal bayesian network ,decision theory ,structural causal models ,do-calculus ,62a01 ,62c99 ,Mathematics ,QA1-939 ,Probabilities. Mathematical statistics ,QA273-280 - Abstract
In a recent issue of this journal, Philip Dawid (2021) proposes a framework for causal inference that is based on statistical decision theory and that is, in many aspects, compatible with the familiar framework of causal graphs (e.g., Directed Acyclic Graphs (DAGs)). This editorial compares the methodological features of the two frameworks as well as their epistemological basis.
- Published
- 2022
- Full Text
- View/download PDF
7. Decision-theoretic foundations for statistical causality: Response to Shpitser
- Author
-
Dawid Philip
- Subjects
causal inference ,extended conditional independence ,front-door formula ,graphical models ,intention to treat ,partially intervenable model ,62a01 ,62c99 ,Mathematics ,QA1-939 ,Probabilities. Mathematical statistics ,QA273-280 - Abstract
I thank Ilya Shpitser for his comments on my article, and discuss the use of models with restricted interventions.
- Published
- 2022
- Full Text
- View/download PDF
8. Comment on: 'Decision-theoretic foundations for statistical causality'
- Author
-
Shpitser Ilya
- Subjects
causal inference ,decision theory ,graphical models ,62a01 ,62c99 ,Mathematics ,QA1-939 ,Probabilities. Mathematical statistics ,QA273-280 - Published
- 2022
- Full Text
- View/download PDF
9. Shrinkage Estimation of a Location Parameter for a Multivariate Skew Elliptic Distribution.
- Author
-
Fourdrinier, Dominique, Kubokawa, Tatsuya, and Strawderman, William E.
- Abstract
The multivariate skew elliptic distributions include the multivariate skew-t distribution, which is represented as a mean- and scale-mixture distribution and is useful for analyzing skewed data with heavy tails. In the estimation of location parameters in the multivariate skew elliptic distributions, we derive minimax shrinkage estimators improving on the minimum risk location equivariant estimator relative to the quadratic loss function. Especially in the skew-t distribution, we suggest specific improved estimators where the conditions for their minimaxity do not depend on the degrees of freedom. We also study the case of a general elliptically symmetrical distribution when the covariance matrix is known up to an unknown multiple, but a residual vector is available to estimate the scale. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. An axiomatic approach to Markov decision processes.
- Author
-
Jonsson, Adam
- Subjects
MARKOV processes ,DYNAMIC programming ,DISCOUNT prices ,AUTOMATIC control systems ,ECONOMIC development - Abstract
This paper presents an axiomatic approach to finite Markov decision processes where the discount rate is zero. One of the principal difficulties in the no-discounting case is that, even if attention is restricted to stationary policies, a strong overtaking optimal policy need not exist. We provide preference foundations for two criteria that do admit optimal policies: 0-discount optimality and average overtaking optimality. As a corollary of our results, we obtain conditions on a decision maker's preferences which ensure that an optimal policy exists. These results have implications for disciplines where dynamic programming problems arise, including automatic control, dynamic games, and economic development. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Employing Game theory and Multilevel Analysis to Predict the Factors that affect Collaborative Learning Outcomes: An Empirical Study
- Author
-
Taraman, Sara, Hassan, Yasmin, Shawky, Doaa, and Badawi, Ashraf H.
- Subjects
Computer Science - Computers and Society ,Computer Science - Computer Science and Game Theory ,62C99 - Abstract
The purpose of this study is to propose a model that predicts the social and psychological factors that affect an individual's collaborative learning outcome in group projects. The model is established on the basis of two theories, namely multilevel analysis and cooperative game theory (CGT). In CGT, a group of players form a coalition and a set of payoffs for each member in the coalition. The Shapley value is one of the most important solution concepts in CGT; it represents a fair and efficient distribution of payoffs among the members of a coalition. The proposed approach was applied to a sample of 78 freshman students, in their first semester, who were studying a philosophical thinking course and were instructed by the same professor. Tools for the data collection included self-assessments, peer assessments, quizzes and observations. The research concluded that learning outcome and contribution are best predicted by the extent of engagement the content provides, whereas personality traits and learning styles have the least impact on contribution. In addition, results show that Shapley values can be used as good predictors of individuals' learning outcomes. These results indicate that CGT can be used as a good engine for analyzing interactions that recur in collaborative learning., Comment: 17 pages
- Published
- 2016
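Since the abstract above leans on the Shapley value as its solution concept, a small self-contained illustration may help. This is a generic exact computation for a three-player cooperative game with an invented characteristic function, not anything taken from the study's data.

```python
# Sketch: exact Shapley values for a small cooperative game. The characteristic
# function v maps each coalition (a frozenset of players) to its worth.
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Return the Shapley value of each player for characteristic function v."""
    n = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        phi = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (v[s | {i}] - v[s])   # weighted marginal contribution
        values[i] = phi
    return values

if __name__ == "__main__":
    players = ["A", "B", "C"]
    v = {frozenset(): 0, frozenset({"A"}): 1, frozenset({"B"}): 2, frozenset({"C"}): 2,
         frozenset({"A", "B"}): 4, frozenset({"A", "C"}): 4, frozenset({"B", "C"}): 5,
         frozenset({"A", "B", "C"}): 8}
    print(shapley_values(players, v))   # payoffs {A: 2.0, B: 3.0, C: 3.0} sum to v(ABC) = 8
```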
12. A new example for a proper scoring rule.
- Author
-
Barczy, Mátyás
- Subjects
-
*DISTRIBUTION (Probability theory) , *MOTIVATION (Psychology) - Abstract
We give a new example of a proper scoring rule motivated by the form of the Anderson–Darling distance between distribution functions and by an example of Brehmer and Gneiting. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
13. Decadal climate predictions using sequential learning algorithms
- Author
-
Strobach, Ehud and Bel, Golan
- Subjects
Physics - Atmospheric and Oceanic Physics ,Physics - Data Analysis, Statistics and Probability ,Statistics - Machine Learning ,62C99 - Abstract
Ensembles of climate models are commonly used to improve climate predictions and assess the uncertainties associated with them. Weighting the models according to their performances holds the promise of further improving their predictions. Here, we use an ensemble of decadal climate predictions to demonstrate the ability of sequential learning algorithms (SLAs) to reduce forecast errors and uncertainties. Three different SLAs are considered, and their performances are compared with those of an equally weighted ensemble, a linear regression and the climatology. Predictions of four different variables--the surface temperature, the zonal and meridional wind, and pressure--are considered. The spatial distributions of the performances are presented, and the statistical significance of the improvements achieved by the SLAs is tested. Based on their performances, we identify one SLA as highly suitable for improving decadal climate predictions.
- Published
- 2015
- Full Text
- View/download PDF
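For readers unfamiliar with sequential learning algorithms, the following is a minimal sketch of one generic member of that family (exponentially weighted averaging over ensemble members). It is not one of the specific SLAs compared in the paper, and all data are synthetic.

```python
# Sketch: exponentially weighted averaging of ensemble members, re-weighted online
# by past squared error. Illustrative only; not the paper's SLAs or data.
import numpy as np

def ewa_forecast(member_preds, observations, eta=0.5):
    """member_preds: array (T, M) of M members' predictions over T steps.
    observations: array (T,). Returns combined forecasts and final weights."""
    T, M = member_preds.shape
    weights = np.full(M, 1.0 / M)
    combined = np.empty(T)
    for t in range(T):
        combined[t] = weights @ member_preds[t]        # forecast before seeing the obs
        losses = (member_preds[t] - observations[t]) ** 2
        weights *= np.exp(-eta * losses)               # down-weight members with large errors
        weights /= weights.sum()
    return combined, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.sin(np.linspace(0, 6, 120))
    members = np.stack([truth + rng.normal(0, s, truth.size) for s in (0.1, 0.5, 1.0)], axis=1)
    preds, w = ewa_forecast(members, truth)
    print("final weights:", np.round(w, 3))   # most weight ends up on the low-noise member
```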
14. Combining Probability Forecasts and Understanding Probability Extremizing through Information Diversity
- Author
-
Satopää, Ville, Pemantle, Robin, and Ungar, Lyle
- Subjects
Statistics - Methodology ,Mathematics - Statistics Theory ,62C99 - Abstract
Randomness in scientific estimation is generally assumed to arise from unmeasured or uncontrolled factors. However, when combining subjective probability estimates, heterogeneity stemming from people's cognitive or information diversity is often more important than measurement noise. This paper presents a novel framework that uses partially overlapping information sources. A specific model is proposed within that framework and applied to the task of aggregating the probabilities given by a group of forecasters who predict whether an event will occur or not. Our model describes the distribution of information across forecasters in terms of easily interpretable parameters and shows how the optimal amount of extremizing of the average probability forecast (shifting it closer to its nearest extreme) varies as a function of the forecasters' information overlap. Our model thus gives a more principled understanding of the historically ad hoc practice of extremizing average forecasts., Comment: This paper has been withdrawn because it was meant to be a revision to arXiv:1406.2148, not an independent submission. This was discovered on 27 May, 2015, when preparing a replacement version to arXiv:1406.2148. This replacement will supersede both that version and the one withdrawn here
- Published
- 2015
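Extremizing, as used in the abstract above, can be illustrated with a simple log-odds rescaling of the average forecast. The exponent value here is arbitrary; the paper's contribution is precisely to characterize how the optimal amount of extremizing depends on the forecasters' information overlap.

```python
# Sketch: extremizing an average probability forecast by scaling it in logit space.
# The factor a = 2.0 is purely illustrative.
import math

def extremize(probs, a=2.0):
    """Average the forecasts, then push the mean away from 1/2 by factor a in logit space."""
    p_bar = sum(probs) / len(probs)
    logit = math.log(p_bar / (1.0 - p_bar))
    return 1.0 / (1.0 + math.exp(-a * logit))

if __name__ == "__main__":
    forecasts = [0.6, 0.7, 0.65]
    print(round(sum(forecasts) / len(forecasts), 3))  # plain average: 0.65
    print(round(extremize(forecasts, a=2.0), 3))      # extremized: about 0.775
```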
15. Decision-theoretic foundations for statistical causality
- Author
-
Dawid Philip
- Subjects
directed acyclic graph ,exchangeability ,extended conditional independence ,ignorability ,potential outcome ,single-world intervention graph ,62a01 ,62c99 ,Mathematics ,QA1-939 ,Probabilities. Mathematical statistics ,QA273-280 - Abstract
We develop a mathematical and interpretative foundation for the enterprise of decision-theoretic (DT) statistical causality, which is a straightforward way of representing and addressing causal questions. DT reframes causal inference as “assisted decision-making” and aims to understand when, and how, I can make use of external data, typically observational, to help me solve a decision problem by taking advantage of assumed relationships between the data and my problem. The relationships embodied in any representation of a causal problem require deeper justification, which is necessarily context-dependent. Here we clarify the considerations needed to support applications of the DT methodology. Exchangeability considerations are used to structure the required relationships, and a distinction drawn between intention to treat and intervention to treat forms the basis for the enabling condition of “ignorability.” We also show how the DT perspective unifies and sheds light on other popular formalisations of statistical causality, including potential responses and directed acyclic graphs.
- Published
- 2021
- Full Text
- View/download PDF
16. Sequential Wald Test Employing a Constrained Filter Bank: Application to Spacecraft Conjunctions.
- Author
-
Carpenter, J. Russell and Markley, F. Landis
- Subjects
-
*FILTER banks , *KALMAN filtering , *TEST systems , *DECISION theory , *AIR filters - Abstract
A binary Wald sequential probability ratio test that uses the residuals of two norm-inequality-constrained Kalman filters for its likelihood ratio is employed for a class of compound hypothesis tests on non-stationary systems. The hypotheses concern an inequality constraint on the norm of some elements of the system state. Each of the two constrained Kalman filters minimizes the summed squares of its estimation errors subject to one or the other direction of the inequality constraint. This test is applied to the problem of spacecraft conjunction assessment, wherein the constraint concerns the close approach distance between a spacecraft and another space object. The outcome of the test can inform decisions concerning risk mitigation maneuvers by an active spacecraft. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
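The record above combines a Wald sequential probability ratio test with a bank of constrained Kalman filters. The sketch below shows only the generic binary SPRT skeleton, with simple Gaussian likelihoods standing in for the filter-bank residual likelihoods; the thresholds follow Wald's usual approximations, and everything else is invented.

```python
# Sketch: a plain binary Wald sequential probability ratio test with illustrative
# Gaussian likelihoods. Not the constrained-filter-bank construction of the paper.
import math
import random

def wald_sprt(samples, lik_h0, lik_h1, alpha=0.01, beta=0.01):
    """Accumulate the log-likelihood ratio and stop at Wald's thresholds."""
    upper = math.log((1 - beta) / alpha)     # accept H1 above this
    lower = math.log(beta / (1 - alpha))     # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(lik_h1(x)) - math.log(lik_h0(x))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

if __name__ == "__main__":
    random.seed(1)
    gauss = lambda mu: (lambda x: math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi))
    data = [random.gauss(1.0, 1.0) for _ in range(200)]   # data actually drawn under H1
    print(wald_sprt(data, lik_h0=gauss(0.0), lik_h1=gauss(1.0)))
```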
17. Improved estimators for functions of scale parameters in mixture models.
- Author
-
Patra, Lakshmi Kanta, Kumar, Somesh, and Petropoulos, Constantinos
- Abstract
Estimation of the scale parameter of the scale mixture of a location–scale family under the scale-invariant loss function is considered. The technique of Strawderman (Ann Stat 2(1):190–198, 1974) is used to obtain a class of estimators improving upon the best affine equivariant estimator of the scale parameter under certain conditions. Further, the integral expression of risk difference (IERD) approach of Kubokawa (Ann Stat 22(1):290–299, 1994) is used to derive similar improvements for the reciprocal of the scale parameter. Using the improved estimator of the scale parameter and the improved estimator of its reciprocal, classes of improved estimators for the ratio of the scale parameters of two populations are derived. In particular, Stein-type and Brewster–Zidek-type estimators are provided for the ratio of the scale parameters of two mixture models. These results are applied to the scale mixture of exponential distributions, which includes the multivariate Lomax and the modified Lomax distributions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
18. Heuristic Rating Estimation Approach to The Pairwise Comparisons Method
- Author
-
Kułakowski, Konrad
- Subjects
Computer Science - Discrete Mathematics ,62C99 ,H.4.2 ,G.1.3 - Abstract
The Heuristic Rating Estimation (HRE) approach proposes a new way of using the pairwise comparisons matrix. It allows the assumption that the weights of some alternatives (herein referred to as concepts) are known and fixed, so the weight vector needs to be estimated only for the remaining unknown values. The main purpose of this paper is to extend the previously proposed iterative HRE algorithm and present all the heuristics that create a generalized approach. Theoretical considerations are accompanied by a few numerical examples demonstrating how the selected heuristics can be used in practice., Comment: 15 pages, 2 figures
- Published
- 2013
- Full Text
- View/download PDF
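A rough idea of the iterative HRE heuristic can be conveyed with a fixed-point sketch: weights of the known concepts stay fixed and each unknown weight is repeatedly re-estimated from the pairwise comparisons. This is one plausible reading under stated assumptions, not the algorithm as specified in the paper, and the matrix, averaging rule and known weight below are invented.

```python
# Sketch, under assumptions: known concept weights are held fixed, and each unknown
# weight is repeatedly set to the average of m_ij * w_j over the other concepts,
# where m_ij expresses how strongly concept i is preferred to concept j (ideally w_i / w_j).
import numpy as np

def hre_iterative(M, known, n_iter=200):
    """M: n x n reciprocal pairwise-comparison matrix.
    known: dict {index: fixed weight}. Returns the full weight vector."""
    n = M.shape[0]
    w = np.full(n, 1.0 / n)
    for i, val in known.items():
        w[i] = val
    for _ in range(n_iter):
        for i in range(n):
            if i in known:
                continue
            w[i] = sum(M[i, j] * w[j] for j in range(n) if j != i) / (n - 1)
    return w

if __name__ == "__main__":
    M = np.array([[1.0, 2.0, 4.0],
                  [0.5, 1.0, 2.0],
                  [0.25, 0.5, 1.0]])
    # concept 0 has a fixed weight of 0.5; the fixed point recovers 0.25 and 0.125
    print(hre_iterative(M, known={0: 0.5}))
```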
19. Combining Predictive Distributions
- Author
-
Gneiting, Tilmann and Ranjan, Roopesh
- Subjects
Mathematics - Statistics Theory ,62C99 - Abstract
Predictive distributions need to be aggregated when probabilistic forecasts are merged, or when expert opinions expressed in terms of probability distributions are fused. We take a prediction space approach that applies to discrete, mixed discrete-continuous and continuous predictive distributions alike, and study combination formulas for cumulative distribution functions from the perspectives of coherence, probabilistic and conditional calibration, and dispersion. Both linear and non-linear aggregation methods are investigated, including generalized, spread-adjusted and beta-transformed linear pools. The effects and techniques are demonstrated theoretically, in simulation examples, and in case studies on density forecasts for S&P 500 returns and daily maximum temperature at Seattle-Tacoma Airport.
- Published
- 2011
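One of the non-linear aggregation methods studied above, the beta-transformed linear pool, is easy to sketch: pool the component CDFs linearly, then pass the result through a beta CDF. The component distributions and beta parameters below are arbitrary illustrations, not the paper's case studies.

```python
# Sketch: beta-transformed linear pool G(x) = B_{a,b}( sum_i w_i F_i(x) ).
import numpy as np
from scipy.stats import norm, beta

def beta_transformed_linear_pool(cdfs, weights, a, b):
    """Return the combined CDF as a callable."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    def combined(x):
        pooled = sum(w * F(x) for w, F in zip(weights, cdfs))  # linear pool of CDFs
        return beta.cdf(pooled, a, b)                          # beta transform
    return combined

if __name__ == "__main__":
    components = [norm(loc=-1, scale=1).cdf, norm(loc=1, scale=1).cdf]
    G = beta_transformed_linear_pool(components, weights=[0.5, 0.5], a=2.0, b=2.0)
    for x in (-2.0, 0.0, 2.0):
        print(x, round(float(G(x)), 3))
```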
20. The risk function of the goodness-of-fit tests for tail models.
- Author
-
Hoffmann, Ingo and Börner, Christoph J.
- Subjects
GOODNESS-of-fit tests ,EXTREME value theory ,PARETO distribution ,ERROR functions ,RISK assessment - Abstract
This paper contributes to answering a question that is of crucial importance in risk management and extreme value theory: how to select the threshold above which one assumes that the tail of a distribution follows a generalized Pareto distribution. This question has gained increasing attention, particularly in financial institutions, as recent regulatory norms require the assessment of risk at high quantiles. Recent methods answer this question by multiple uses of standard goodness-of-fit tests. These tests are based on a particular choice of symmetric weighting of the mean square error between the empirical and the fitted tail distributions. Assuming an asymmetric weighting, which rates high quantiles more heavily than small ones, we propose new goodness-of-fit tests and automated threshold selection procedures. We consider a parameterized family of asymmetric weight functions and calculate the corresponding mean square error as a loss function. Then we explicitly determine the risk function as the expected value of the loss function for finite samples. Finally, the risk function can be used to discuss whether a symmetric or an asymmetric weight function should be chosen. With this, the goodness-of-fit test to be used in a new method for determining the threshold value is specified. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
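A hedged sketch of the general idea above: for each candidate threshold, fit a generalized Pareto distribution to the exceedances, score the fit with a mean square error that weights high quantiles more heavily, and pick the threshold with the best score. The specific weight function and all data here are stand-ins, not the parameterized family or risk function analysed in the paper.

```python
# Sketch: automated GPD threshold selection via an asymmetrically weighted
# goodness-of-fit score. Weight function and data are illustrative assumptions.
import numpy as np
from scipy.stats import genpareto

def weighted_gof(data, threshold):
    exceed = np.sort(data[data > threshold]) - threshold
    if exceed.size < 30:
        return np.inf
    c, loc, scale = genpareto.fit(exceed, floc=0)
    emp = (np.arange(1, exceed.size + 1) - 0.5) / exceed.size   # empirical exceedance CDF
    fit = genpareto.cdf(exceed, c, loc=0, scale=scale)
    weights = 1.0 / (1.0 - emp)                                  # emphasize high quantiles
    return np.average((emp - fit) ** 2, weights=weights)

def select_threshold(data, candidates):
    scores = {u: weighted_gof(data, u) for u in candidates}
    return min(scores, key=scores.get), scores

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # bulk from a lognormal, genuine GPD tail above 3.0
    data = np.concatenate([rng.lognormal(0, 0.5, 5000),
                           3.0 + genpareto.rvs(0.2, scale=1.0, size=500, random_state=rng)])
    best, _ = select_threshold(data, candidates=np.linspace(1.0, 4.0, 13))
    print("selected threshold:", round(float(best), 2))
```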
21. Componentwise estimation of ordered scale parameters of two exponential distributions under a general class of loss function.
- Author
-
Patra, Lakshmi Kanta, Kumar, Somesh, and Petropoulos, Constantinos
- Subjects
-
*DISTRIBUTION (Probability theory) , *BAYES' estimation , *DECISION theory - Abstract
In many real life situations, prior information about the parameters is available, such as the ordering of the parameters. Incorporating this prior information about the order restrictions on parameters leads to more efficient estimators. In the present communication, we investigate estimation of the ordered scale parameters of two shifted exponential distributions with unknown location parameters under a class of bowl-shaped loss functions. We have proved that the best affine equivariant estimator (BAEE) is inadmissible. Various non-smooth and smooth estimators have been obtained which improve upon the BAEE. In particular, we have derived improved estimators for some well-known loss functions. Finally, a numerical comparison is carried out to compare the risk performance of the proposed estimators. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
22. Fast rate of convergence in high dimensional linear discriminant analysis
- Author
-
Girard, Robin
- Subjects
Mathematics - Statistics Theory ,62C99 - Abstract
This paper gives a theoretical analysis of high dimensional linear discrimination of Gaussian data. We study the excess risk of linear discriminant rules. We emphasize the poor performance of standard procedures in the case when the dimension p is larger than the sample size n. The corresponding theoretical results are non-asymptotic lower bounds. On the other hand, we propose two discrimination procedures based on dimensionality reduction and provide associated rates of convergence which can be O(log(p)/n) under sparsity assumptions. Finally, all our results rely on a theorem that provides simple sharp relations between the excess risk and an estimation error associated with the geometric parameters defining the discrimination rule used.
- Published
- 2009
23. Improved estimation of the MSEs and the MSE matrices for shrinkage estimators of multivariate normal means and their applications
- Author
-
Hara, Hisayuki
- Subjects
Mathematics - Statistics Theory ,62H12 ,62C99 - Abstract
In this article we provide some nonnegative and positive estimators of the mean squared errors (MSEs) for shrinkage estimators of multivariate normal means. The proposed estimators are shown to improve on the uniformly minimum variance unbiased estimator (UMVUE) under a quadratic loss criterion. A similar improvement is also obtained for the estimators of the MSE matrices for shrinkage estimators. We also apply the proposed estimators of the MSE matrix to form confidence sets centered at shrinkage estimators and show their usefulness through numerical experiments., Comment: 29 pages
- Published
- 2007
24. Sparse Estimators and the Oracle Property, or the Return of Hodges' Estimator
- Author
-
Leeb, Hannes and Poetscher, Benedikt M.
- Subjects
Mathematics - Statistics Theory ,Statistics - Methodology ,62J07 ,62C99 ,62E20 ,62F10 ,62F12 - Abstract
We point out some pitfalls related to the concept of an oracle property as used in Fan and Li (2001, 2002, 2004) which are reminiscent of the well-known pitfalls related to Hodges' estimator. The oracle property is often a consequence of sparsity of an estimator. We show that any estimator satisfying a sparsity property has maximal risk that converges to the supremum of the loss function; in particular, the maximal risk diverges to infinity whenever the loss function is unbounded. For ease of presentation the result is set in the framework of a linear regression model, but generalizes far beyond that setting. In a Monte Carlo study we also assess the extent of the problem in finite samples for the smoothly clipped absolute deviation (SCAD) estimator introduced in Fan and Li (2001). We find that this estimator can perform rather poorly in finite samples and that its worst-case performance relative to maximum likelihood deteriorates with increasing sample size when the estimator is tuned to sparsity., Comment: 18 pages, 5 figures
- Published
- 2007
- Full Text
- View/download PDF
25. On the elicitability of range value at risk.
- Author
-
Fissler, Tobias and Ziegel, Johanna F.
- Subjects
VALUE at risk ,INTERPOLATION - Abstract
The debate of which quantitative risk measure to choose in practice has mainly focused on the dichotomy between value at risk (VaR) and expected shortfall (ES). Range value at risk (RVaR) is a natural interpolation between VaR and ES, constituting a tradeoff between the sensitivity of ES and the robustness of VaR, turning it into a practically relevant risk measure on its own. Hence, there is a need to statistically assess, compare and rank the predictive performance of different RVaR models, tasks subsumed under the term "comparative backtesting" in finance. This is best done in terms of strictly consistent loss or scoring functions, i.e., functions which are minimized in expectation by the correct risk measure forecast. Much like ES, RVaR does not admit strictly consistent scoring functions, i.e., it is not elicitable. Mitigating this negative result, we show that a triplet of RVaR with two VaR-components is elicitable. We characterize all strictly consistent scoring functions for this triplet. Additional properties of these scoring functions are examined, including the diagnostic tool of Murphy diagrams. The results are illustrated with a simulation study, and we put our approach in perspective with respect to the classical approach of trimmed least squares regression. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
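For orientation, range value at risk interpolates between VaR and ES by averaging the quantiles between two levels. The sample versions below use one common sign convention for losses; conventions differ across papers, and nothing here reproduces the scoring-function results of the article.

```python
# Sketch: sample VaR, RVaR and ES for a vector of losses, with VaR_a taken as the
# a-quantile and RVaR_{a,b} as the average of the quantiles between levels a and b
# (so RVaR tends to ES as b tends to 1). One convention among several.
import numpy as np

def var(losses, a):
    return np.quantile(losses, a)

def rvar(losses, a, b, grid=1000):
    """Average of VaR_u over u in (a, b): a truncated/trimmed tail mean."""
    u = np.linspace(a, b, grid, endpoint=False) + (b - a) / (2 * grid)  # midpoint rule
    return np.quantile(losses, u).mean()

def es(losses, a):
    return rvar(losses, a, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    losses = rng.standard_t(df=4, size=100_000)
    print("VaR 0.95      :", round(float(var(losses, 0.95)), 3))
    print("RVaR 0.95-0.99:", round(float(rvar(losses, 0.95, 0.99)), 3))
    print("ES 0.95       :", round(float(es(losses, 0.95)), 3))
```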
26. Quantile estimation for a progressively censored exponential distribution.
- Author
-
Mani Tripathi, Yogesh, Petropoulos, Constantinos, and Sen, Tanmay
- Subjects
-
*CENSORING (Statistics) , *BAYES' estimation , *ABSOLUTE value , *COST functions , *CONVEX functions , *RISK assessment - Abstract
In this paper, we consider the problem of estimating the quantile of a two-parameter exponential distribution with respect to an arbitrary strictly convex loss function under progressive type II censoring. Inadmissibility of the best affine equivariant (BAE) estimator is established through a conditional risk analysis. In particular, we provide dominance results for quadratic, linex and absolute value loss functions. Further, a class of dominating estimators is derived using the IERD (integral expression of risk difference) approach of Kubokawa (1994). In the sequel, the generalized Bayes estimator is shown to improve on the BAE estimator. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
27. Benchmark policies for utility-carrying queues with impatience.
- Author
-
Deutsch, Yael and David, Israel
- Subjects
-
*POISSON processes , *PATIENCE , *EXPECTED returns , *TARDINESS , *DECISION making , *DILEMMA - Abstract
Men and jobs alike are characterized by a single trait, which may take on categorical values according to given population frequencies. Men arrive at the system following a Poisson process and wait till jobs are assigned to them. Jobs arrive at the system following another, independent, Poisson process. An arriving job must be assigned to a waiting man immediately, or be discarded, ensuing no gain. An assignment of a job to a man yields a higher gain if they match in trait, and a lower one if not. Each man waits a limited time for a job and leaves the system if unassigned by that time limit. It is stipulated that a man who arrives first has priority to either accept the pending job, or to pass it to the next man, who makes a similar decision. The last man in the line takes the job, or it is discarded. The individually optimal policy for each man is defined by some critical time for accepting a mismatched job. We solve for the critical times, depending on the men's place in the queue, and obtain expressions for the ensuing optimal value functions of this system, for expected gain. The model originates from the utility-equity dilemma in assigning live organs to patients on the national waiting list. The paper reports a numerical comparison of the above policy with alternative ones, for several performance measures. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
28. A PARAMETRIC, RESOURCE-BOUNDED GENERALIZATION OF LÖB'S THEOREM, AND A ROBUST COOPERATION CRITERION FOR OPEN-SOURCE GAME THEORY.
- Author
-
CRITCH, ANDREW
- Subjects
GAME theory ,NASH equilibrium ,GENERALIZATION ,COOPERATION ,SOURCE code ,SOFTWARE verification - Abstract
This article presents two theorems: (1) a generalization of Löb's Theorem that applies to formal proof systems operating with bounded computational resources, such as formal verification software or theorem provers, and (2) a theorem on the robust cooperation of agents that employ proofs about one another's source code as unexploitable criteria for cooperation. The latter illustrates a capacity for outperforming classical Nash equilibria and correlated equilibria, attaining mutually cooperative program equilibrium in the Prisoner's Dilemma while remaining unexploitable, i.e., sometimes achieving the outcome (Cooperate, Cooperate), and never receiving the outcome (Cooperate, Defect) as player 1. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
29. PMSE performance of two different types of preliminary test estimators under a multivariate t error term.
- Author
-
Xu, Haifeng and Ohtani, Kazuhiro
- Subjects
-
*CONFORMANCE testing , *MULTIVARIATE analysis , *DEGREES of freedom , *KEYWORDS - Abstract
In this paper, assuming that the error terms follow a multivariate t distribution, we derive the exact formula for the predictive mean squared error (PMSE) of two different types of pretest estimators. It is shown analytically that one of the pretest estimators dominates the SR estimator if a critical value of the pretest is chosen appropriately. Also, we compare the PMSE of the pretest estimators with the MMSE, AMMSE, SR and PSR estimators by numerical evaluations. Our results show that the pretest estimators dominate the OLS estimator for all combinations when the degrees of freedom are not more than 5. Key Words: Predictive mean squared error; Homogeneous preliminary test estimator; Heterogeneous preliminary test estimator; Multivariate t error term. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
30. Nonparametric product partition models for multiple change-points analysis.
- Author
-
García, Eunice Campirán and Gutiérrez-Peña, Eduardo
- Subjects
-
*MARGINAL distributions , *RANDOM measures , *SKEWNESS (Probability theory) , *PARTITION functions , *DISTRIBUTION (Probability theory) , *PARTITIONS (Mathematics) , *MISSING data (Statistics) - Abstract
We propose an extension of parametric product partition models. We name our proposal nonparametric product partition models because we associate a random measure instead of a parametric kernel to each set within a random partition. Our methodology does not impose any specific form on the marginal distribution of the observations, allowing us to detect shifts of behaviour even when dealing with heavy-tailed or skewed distributions. We propose a suitable loss function and find the partition of the data having minimum expected loss. We then apply our nonparametric procedure to multiple change-point analysis and compare it with PPMs and with other methodologies that have recently appeared in the literature. Also, in the context of missing data, we exploit the product partition structure in order to estimate the distribution function of each missing value, allowing us to detect change points using the loss function mentioned above. Finally, we present applications to financial as well as genetic data. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
31. On the impossibility of unambiguously selecting the best model for fitting data.
- Author
-
Miranda-Quintana, Ramón Alain, Kim, Taewon David, Heidar-Zadeh, Farnaz, and Ayers, Paul W.
- Subjects
-
*DATA modeling , *DEFINITIONS - Abstract
We analyze the problem of selecting the model that best describes a given dataset. We focus on the case where the best model is the one with the smallest error with respect to the reference data. To select the best model, we consider two components: (a) an error measure to compare individual data points, and (b) a function that combines the individual errors for all the points. We show that, working with the most general definition of consistency, it is impossible to extend individual error measures in a way that provides a unanimous consensus about which is the best model. We also prove that, in the best case, modifying the notion of consistency leads to expressions that are too ill-behaved to be of any practical utility. These results show that selecting the model that best describes a dataset depends heavily on the way one measures the individual errors, even if these measures are consistent. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
32. Estimation of the smallest scale parameter of two-parameter exponential distributions.
- Author
-
Bobotas, Panayiotis
- Subjects
-
*ADAPTIVE sampling (Statistics) , *FIX-point estimation , *DECISION theory , *PARAMETER estimation , *CENSORSHIP , *ENTROPY (Information theory) - Abstract
Improved point and interval estimation of the smallest scale parameter of n independent populations following two-parameter exponential distributions is studied. The model is formulated in such a way that the estimation of the smallest scale parameter can be treated as a problem of estimating an unrestricted scale parameter in the presence of a nuisance parameter. The classes of improved point estimators are enriched with Stein-type, Brewster and Zidek-type, Maruyama-type and Strawderman-type improved estimators under both quadratic and entropy losses, whereas, using coverage probability as the criterion, improved intervals of Stein type, Brewster and Zidek type, and Maruyama type are obtained. The sampling framework considered incorporates important life-testing schemes such as i.i.d. sampling, type-II censoring, progressive type-II censoring, adaptive progressive type-II censoring, and record values. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
33. New types of shrinkage estimators of Poisson means under the normalized squared error loss.
- Author
-
Chang, Yuan-Tsung and Shinozaki, Nobuo
- Subjects
-
*ISOTONIC regression , *ORDER statistics , *ERROR - Abstract
In estimating p(⩾ 2) independent Poisson means, Clevenson and Zidek (1975) have proposed a class of estimators that shrink the unbiased estimator to the origin and dominate the unbiased one under the normalized squared error loss. This class of estimators was subsequently enlarged in several directions. This article deals with the problem and proposes new classes of dominating estimators using prior information pertinently. Dominance is shown by partitioning the sample space into disjoint subsets and averaging the loss difference over each subset. Estimation of several Poisson mean vectors is also discussed. Further, simultaneous estimation of Poisson means under order restriction is treated and estimators which dominate the isotonic regression estimator are proposed for some types of order restrictions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
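The baseline that the abstract above builds on, the Clevenson-Zidek estimator that shrinks the unbiased estimator of several Poisson means toward the origin, can be checked quickly by simulation under the normalized squared error loss. The mean vector and replication count are arbitrary, and this does not implement the new dominating classes proposed in the article.

```python
# Sketch: the classical Clevenson-Zidek shrinkage estimator for p Poisson means,
# compared with the unbiased estimator under normalized squared error loss by
# Monte Carlo. The lambda vector below is invented for illustration.
import numpy as np

def clevenson_zidek(x):
    p = x.size
    return (1.0 - (p - 1) / (x.sum() + p - 1)) * x   # shrink toward the origin

def normalized_sq_loss(est, lam):
    return np.sum((est - lam) ** 2 / lam)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lam = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
    reps = 20_000
    risks = np.zeros(2)
    for _ in range(reps):
        x = rng.poisson(lam).astype(float)
        risks += [normalized_sq_loss(x, lam),
                  normalized_sq_loss(clevenson_zidek(x), lam)]
    print("risk of unbiased estimator :", round(risks[0] / reps, 3))   # about p = 5
    print("risk of Clevenson-Zidek    :", round(risks[1] / reps, 3))   # smaller
```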
34. A Behavioral Interpretation of Belief Functions.
- Author
-
Kerkvliet, Timber and Meester, Ronald
- Abstract
Shafer's belief functions were introduced in the seventies of the previous century as a mathematical tool in order to model epistemic probability. One of the reasons that they were not picked up by mainstream probability was the lack of a behavioral interpretation. In this paper, we provide such a behavioral interpretation and re-derive Shafer's belief functions via a betting interpretation reminiscent of the classical Dutch Book Theorem for probability distributions. We relate our betting interpretation of belief functions to the existing literature. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
35. On Rereading Stein’s Lemma: Its Intrinsic Connection with Cramér-Rao Identity and Some New Identities
- Author
-
Mukhopadhyay, Nitis
- Published
- 2021
- Full Text
- View/download PDF
36. On the Linear and Nonlinear Generalized Bayesian Disorder Problem (Discrete Time Case)
- Author
-
Shiryaev, Albert N. and Zryumov, Pavel Y.
- Published
- 2010
- Full Text
- View/download PDF
37. Estimating a linear parametric function of a doubly censored exponential distribution.
- Author
-
Tripathi, Yogesh Mani, Petropoulos, Constantinos, Sultana, Farha, and Rastogi, Manoj Kumar
- Subjects
-
*CONVEX functions , *PARAMETRIC equations , *REAL variables , *SUBDIFFERENTIALS , *PARAMETRIC modeling - Abstract
For an arbitrary strictly convex loss function, we study the problem of estimating a linear parametric function μ + kσ, where k is a known constant, when a doubly censored sample is available from a two-parameter exponential E(μ, σ) population. We establish the inadmissibility of the best affine equivariant (BAE) estimator by deriving an improved estimator. We provide various implications for quadratic and linex loss functions in detail. Improvements are obtained for the absolute value loss function as well. Further, a new class of estimators improving upon the BAE estimator is derived using the Kubokawa method. This class is shown to include some benchmark estimators from the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
38. Properties of adaptive clinical trial signature design in the presence of gene and gene-treatment interaction.
- Author
-
Cambon, A. C., Baumgartner, K. B., Brock, G. N., Cooper, N. G. F., Wu, D., and Rai, S. N.
- Subjects
-
*RANDOMIZED controlled trials , *MACHINE learning , *IMMUNOTHERAPY , *TOXICITY testing , *CANCER chemotherapy , *BREAST cancer diagnosis , *MATHEMATICAL models - Abstract
Traditional phase III clinical trials are powered to detect an overall treatment effect. However, it has increasingly been shown that many treatments are effective only for a subset of a population. The adaptive signature design uses genomic/proteomic information to prospectively predict a subset of patients more sensitive to treatment. Tests for overall treatment effect and for treatment effect in the predicted subset are conducted. In this work properties of the adaptive signature design are investigated through simulation. It was found that models which excluded expression main effect terms had higher empirical power than models which included them. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
39. Decision-theoretic foundations for statistical causality
- Author
-
A. Philip Dawid
- Subjects
Statistics and Probability ,62a01 ,Computer science ,ignorability ,extended conditional independence ,potential outcome ,QA273-280 ,QA1-939 ,Representation (mathematics) ,Structure (mathematical logic) ,62c99 ,Management science ,single-world intervention graph ,Decision problem ,exchangeability ,Directed acyclic graph ,Causality ,Ignorability ,Embodied cognition ,Causal inference ,directed acyclic graph ,Statistics, Probability and Uncertainty ,Probabilities. Mathematical statistics ,Mathematics - Abstract
We develop a mathematical and interpretative foundation for the enterprise of decision-theoretic (DT) statistical causality, which is a straightforward way of representing and addressing causal questions. DT reframes causal inference as “assisted decision-making” and aims to understand when, and how, I can make use of external data, typically observational, to help me solve a decision problem by taking advantage of assumed relationships between the data and my problem. The relationships embodied in any representation of a causal problem require deeper justification, which is necessarily context-dependent. Here we clarify the considerations needed to support applications of the DT methodology. Exchangeability considerations are used to structure the required relationships, and a distinction drawn between intention to treat and intervention to treat forms the basis for the enabling condition of “ignorability.” We also show how the DT perspective unifies and sheds light on other popular formalisations of statistical causality, including potential responses and directed acyclic graphs.
- Published
- 2021
40. An affective decision-making model with applications to social robotics
- Author
-
Liu, Si and Ríos Insua, David
- Published
- 2020
- Full Text
- View/download PDF
41. On the elicitability of range value at risk
- Author
-
Tobias Fissler and Johanna F. Ziegel
- Subjects
Statistics and Probability ,101018 Statistics ,Risk measure ,Rank (computer programming) ,Truncated mean ,401117 Viticulture ,101029 Mathematical statistics ,101007 Financial mathematics ,62C99 ,62G35 ,62P05 ,91G70 [MSC 2010] ,Term (time) ,Expected shortfall ,Range (mathematics) ,510 Mathematics ,Modeling and Simulation ,Backtesting ,consistency ,expected shortfall ,point forecasts ,scoring functions ,trimmed mean ,Econometrics ,Statistics, Probability and Uncertainty ,Robustness (economics) ,Value at risk ,Mathematics - Abstract
The debate of which quantitative risk measure to choose in practice has mainly focused on the dichotomy between value at risk (VaR) and expected shortfall (ES). Range value at risk (RVaR) is a natural interpolation between VaR and ES, constituting a tradeoff between the sensitivity of ES and the robustness of VaR, turning it into a practically relevant risk measure on its own. Hence, there is a need to statistically assess, compare and rank the predictive performance of different RVaR models, tasks subsumed under the term “comparative backtesting” in finance. This is best done in terms of strictly consistent loss or scoring functions, i.e., functions which are minimized in expectation by the correct risk measure forecast. Much like ES, RVaR does not admit strictly consistent scoring functions, i.e., it is not elicitable. Mitigating this negative result, we show that a triplet of RVaR with two VaR-components is elicitable. We characterize all strictly consistent scoring functions for this triplet. Additional properties of these scoring functions are examined, including the diagnostic tool of Murphy diagrams. The results are illustrated with a simulation study, and we put our approach in perspective with respect to the classical approach of trimmed least squares regression.
- Published
- 2021
42. Estimation of two ordered normal means when a covariance matrix is known.
- Author
-
Chang, Yuan-Tsung, Fukuda, Kazufumi, and Shinozaki, Nobuo
- Subjects
-
*COVARIANCE matrices , *RESTRICTED maximum likelihood (Statistics) , *PITMAN'S measure of closeness - Abstract
Estimation of two normal means with an order restriction is considered when a covariance matrix is known. It is shown that the restricted maximum likelihood estimator (MLE) stochastically dominates both estimators proposed by Hwang and Peddada [Confidence interval estimation subject to order restrictions. Ann Statist. 1994;22(1):67–93] and Peddada et al. [Estimation of order-restricted means from correlated data. Biometrika. 2005;92:703–715]. The estimators are also compared under the Pitman nearness criterion, and it is shown that the MLE is closer to the ordered means than the other two estimators. Estimation of linear functions of ordered means is also considered, and a necessary and sufficient condition on the coefficients is given for the MLE to dominate the other estimators in terms of mean squared error. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
43. Unbiased risk estimates for matrix estimation in the elliptical case.
- Author
-
Canu, Stéphane and Fourdrinier, Dominique
- Subjects
-
*RANDOM noise theory , *UNBIASED estimation (Statistics) , *COVARIANCE matrices , *GAUSSIAN distribution , *ROBUST control - Abstract
This paper is concerned with additive models of the form Y = M + E , where Y is an observed n × m matrix with m < n , M is an unknown n × m matrix of interest with low rank, and E is a random noise whose distribution is elliptically symmetric. For general estimators M ̂ of M , we develop unbiased risk estimates, including in the special case where E is Gaussian with covariance matrix proportional to the identity matrix. To this end, we develop a new Stein–Haff type identity. We apply the theory to a model selection framework with estimators defined through a soft-thresholding function. We establish the robustness of our approach within a large subclass of elliptical distributions. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
44. Estimation of the order restricted scale parameters for two populations from the Lomax distribution.
- Author
-
Petropoulos, Constantinos
- Subjects
-
*NONPARAMETRIC estimation , *PARETO distribution , *MULTIVARIATE analysis , *SAMPLING (Process) , *ACQUISITION of data - Abstract
The usual methods of estimating the unknown parameters of a distribution use only the information given by the sample data. In many cases there is also other important information for estimating the unknown parameters of our model, such as the order of these parameters, and this additional information improves the quality of estimation. In this paper, we deal with the problem of estimating the ordered scale parameters of two populations from the multivariate Lomax distribution, with unknown location parameters. It is proved that the best equivariant estimators of the scale parameters (in the unrestricted case) are not admissible, and we construct estimators that improve upon the usual ones (when these parameters are known to be ordered). [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
45. How to reduce the number of rating scale items without predictability loss?
- Author
-
Koczkodaj, W., Kakiashvili, T., Szymańska, A., Montero-Marin, J., Araya, R., Garcia-Campayo, J., Rutkowski, K., and Strzałka, D.
- Abstract
Rating scales are used to elicit data about qualitative entities (e.g., research collaboration). This study presents an innovative method for reducing the number of rating scale items without predictability loss. The area under the receiver operating characteristic curve (AUC ROC) method is used. The presented method reduced the number of rating scale items (variables) to 28.57% (from 21 to 6), making over 70% of the collected data unnecessary. Results have been verified by two methods of analysis: the Graded Response Model (GRM) and Confirmatory Factor Analysis (CFA). GRM revealed that the new method differentiates observations with high and middle scores. CFA proved that the reliability of the rating scale has not deteriorated due to the scale item reduction. Both statistical analyses evidenced the usefulness of the AUC ROC reduction method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
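The item-reduction idea above can be approximated with a simple per-item AUC ranking: score each rating-scale item by how well it alone separates a binary outcome, then keep the top items. This is a loose sketch with simulated data, not the study's exact AUC ROC procedure or questionnaire.

```python
# Sketch: rank rating-scale items by individual AUC against a binary outcome and
# keep the most predictive ones. Data, dimensions and the cutoff are invented.
import numpy as np
from sklearn.metrics import roc_auc_score

def reduce_items(item_scores, outcome, keep=6):
    """item_scores: array (n_respondents, n_items); outcome: binary array (n_respondents,).
    Returns indices of the `keep` items with the highest individual AUC, plus all AUCs."""
    aucs = np.array([roc_auc_score(outcome, item_scores[:, j])
                     for j in range(item_scores.shape[1])])
    ranked = np.argsort(aucs)[::-1]
    return ranked[:keep], aucs

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, m = 300, 21
    outcome = rng.integers(0, 2, n)
    # later items are made progressively more informative about the outcome
    items = rng.normal(size=(n, m)) + outcome[:, None] * np.linspace(0, 1.5, m)
    kept, aucs = reduce_items(items, outcome, keep=6)
    print("kept items:", sorted(kept.tolist()))
```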
46. Estimation of the smallest normal variance with applications to variance components models.
- Author
-
Bobotas, Panayiotis and Kourouklis, Stavros
- Subjects
-
*ANALYSIS of variance , *ESTIMATION theory , *INTERVAL analysis , *DECISION theory , *ERROR analysis in mathematics - Abstract
It is proved that improved point and interval estimators for the smallest normal variance can be directly obtained from the unrestricted normal variance estimation counterparts. Applications to estimating the error variance in general random or mixed effects models are given. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
47. Estimating an Exponential Scale Parameter Under Double Censoring
- Author
-
Tripathi, Yogesh Mani, Petropoulos, Constantinos, and Sultana, Farha
- Published
- 2019
- Full Text
- View/download PDF
48. Estimation of the inverse scatter matrix of an elliptically symmetric distribution.
- Author
-
Fourdrinier, Dominique, Mezoued, Fatiha, and Wells, Martin T.
- Subjects
-
*SCATTERING (Mathematics) , *MATRICES (Mathematics) , *INVERSE scattering transform , *VECTOR algebra , *COVARIANCE matrices - Abstract
We consider estimation of the inverse scatter matrix Σ^{-1} for high-dimensional elliptically symmetric distributions. In high-dimensional settings the sample covariance matrix S may be singular. Depending on the singularity of S, natural estimators of Σ^{-1} are of the form aS^{-1} or aS^{+}, where a is a positive constant and S^{-1} and S^{+} are, respectively, the inverse and the Moore–Penrose inverse of S. We propose a unified estimation approach for these two cases and provide improved estimators under the quadratic loss tr[(Σ̂^{-1} − Σ^{-1})^2]. To this end, a new and general Stein–Haff identity is derived for the high-dimensional elliptically symmetric distribution setting. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
49. A note on Bayesian model selection for discrete data using proper scoring rules.
- Author
-
Dawid, A. Philip, Musio, Monica, and Columbu, Silvia
- Subjects
-
*BAYESIAN analysis , *DISCRETE systems , *COMPUTER simulation , *HOMOGENEOUS spaces , *MATHEMATICAL statistics - Abstract
We consider homogeneous scoring rules for selecting between Bayesian models for discrete data with possibly improper priors. Simulations indicate that, applied prequentially, the method will consistently select the true model. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
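Prequential model selection, as mentioned in the note above, scores each observation with a model's one-step-ahead predictive distribution and then updates the model. The sketch below uses Beta-Bernoulli models and the log score purely as a stand-in; the note's point is to use homogeneous proper scoring rules instead, which remain usable with improper priors, so this is an illustration of the prequential loop only.

```python
# Sketch: prequential scoring of two Bayesian Bernoulli models on a discrete data
# sequence. Each observation is scored by the current posterior predictive, then
# the conjugate posterior is updated. Priors and data are invented.
import math

def prequential_log_score(data, prior_a, prior_b):
    """Beta(prior_a, prior_b)-Bernoulli model; returns the cumulative negative log score."""
    a, b = prior_a, prior_b
    total = 0.0
    for x in data:
        p1 = a / (a + b)                      # posterior predictive P(next = 1)
        total -= math.log(p1 if x == 1 else 1.0 - p1)
        a, b = a + x, b + (1 - x)             # conjugate update
    return total

if __name__ == "__main__":
    data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]     # toy sequence, mostly ones
    for name, (a0, b0) in {"uniform prior": (1, 1), "prior favouring 0": (1, 5)}.items():
        print(name, round(prequential_log_score(data, a0, b0), 3))
    # the model with the smaller cumulative score is preferred
```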
50. Dichotomic lattices and local discretization for Galois lattices.
- Author
-
Girard, Nathalie, Bertet, Karell, and Visani, Muriel
- Abstract
The present paper deals with supervised classification methods based on Galois lattices and decision trees. Such ordered structures require attribute discretization, and it is known that, for decision trees, local discretization improves the classification performance compared with global discretization. While most literature on discretization for Galois lattices relies on global discretization, the presented work introduces a new local discretization algorithm for Galois lattices which hinges on a property of some specific lattices that we introduce as dichotomic lattices. Their properties, co-atomicity and ∨-complementarity, are proved along with their links with decision trees. Finally, some quantitative and qualitative evaluations of the local discretization are proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF