263 results for "Goodness of fit"
Search Results
2. Sensitivity of Goodness of Fit Indices to Lack of Measurement Invariance with Categorical Indicators and Many Groups
- Author
-
Boris Sokolov
- Subjects
Goodness of fit ,Statistics ,Monte Carlo method ,Estimator ,Measurement invariance ,Survey research ,Sensitivity (control systems) ,Categorical variable ,Structural equation modeling ,Mathematics - Abstract
Using Monte Carlo simulation experiments, this paper examines the performance of popular SEM goodness-of-fit indices, namely CFI, TLI, RMSEA, and SRMR, with respect to a specific task of measurement invariance testing with categorical data and many groups (10-50 groups). Study factors include the number of groups, the level of non-invariance in the data, and the absence/presence of model misspecifications other than non-invariance. In sum, the study design yields a total of 81 conditions. All simulated data sets are analyzed using two popular SEM estimators, MLR and WLSMV. The main contribution of this paper to the methodological literature on cross-cultural survey research is that it produces revised guidelines for evaluating the goodness of fit of invariance MGCFA models with many groups.
- Published
- 2019
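As a concrete reference point for the indices this abstract studies, CFI and RMSEA can be computed directly from model and baseline chi-square statistics using their standard closed-form definitions. The sketch below uses hypothetical chi-square values, not results from the paper; the paper's contribution concerns how such indices behave under many-group invariance testing, not the formulas themselves.

```python
from math import sqrt

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index from model and baseline chi-square values."""
    d_m = max(chi2_m - df_m, 0.0)          # model noncentrality
    d_b = max(chi2_b - df_b, d_m, 0.0)     # baseline noncentrality
    return 1.0 - d_m / d_b if d_b > 0 else 1.0

def rmsea(chi2_m, df_m, n):
    """Root Mean Square Error of Approximation for sample size n."""
    return sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

# Hypothetical fit: model chi2 = 120 on 100 df, baseline chi2 = 2000
# on 120 df, n = 500 (values invented for illustration)
print(round(cfi(120, 100, 2000, 120), 4))   # 0.9894
print(round(rmsea(120, 100, 500), 4))       # 0.02
```

TLI and SRMR have analogous closed forms; cutoff guidelines for all four are what the simulation study revises.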
3. Goodness-of-Fit for Regime-Switching Copula Models with Application to Option Pricing
- Author
-
Mamadou Yamar Thioub, Bruno Rémillard, and Bouchra R. Nasri
- Subjects
R package ,Mathematical optimization ,Goodness of fit ,Series (mathematics) ,Computer science ,Valuation of options ,Copula (linguistics) ,Parametric model ,Expectation–maximization algorithm ,Regime switching - Abstract
We consider several time series and, for each of them, fit an appropriate dynamic parametric model. This produces serially independent error terms for each time series. The dependence between these error terms is then modeled by a regime-switching copula. The EM algorithm is used to estimate the parameters, and a sequential goodness-of-fit procedure based on Cramér-von Mises statistics is proposed to select the appropriate number of regimes. Numerical experiments are performed to assess the validity of the proposed methodology. As an example of application, we evaluate a European put-on-max option on the returns of two assets. To facilitate the use of our methodology, we have built an R package, HMMcopula, available on CRAN.
- Published
- 2019
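The Cramér-von Mises statistic underlying the sequential procedure described above has a simple closed form for a fully specified null distribution. A minimal sketch of that statistic only, not the authors' sequential regime-selection test:

```python
import numpy as np

def cramer_von_mises(sample, cdf):
    """W^2 = 1/(12n) + sum_i (cdf(x_(i)) - (2i-1)/(2n))^2."""
    x = np.sort(sample)
    n = len(x)
    u = (2 * np.arange(1, n + 1) - 1) / (2 * n)   # plotting positions
    return 1 / (12 * n) + ((cdf(x) - u) ** 2).sum()

# A sample whose order statistics match the Uniform(0,1) plotting
# positions exactly, so W^2 attains its minimum value 1/(12n)
w2 = cramer_von_mises(np.array([0.1, 0.3, 0.5, 0.7, 0.9]), lambda x: x)
print(w2)   # 1/60 ≈ 0.0167
```

For a ready-made version with p-values, `scipy.stats.cramervonmises` implements the same one-sample statistic.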
4. Some New Goodness of Fit Tests Based on Sample Spacings
- Author
-
Rahul Singh and Neeraj Misra
- Published
- 2022
5. Goodness-of-Fit Two-Phase Sampling Designs for Time-to-Event Outcomes
- Author
-
Mengling Liu, Myeonggyun Lee, Jinbo Chen, and Anne Zeleniuch-Jacquotte
- Subjects
History ,Polymers and Plastics ,Business and International Management ,Industrial and Manufacturing Engineering - Published
- 2022
6. Performance Analysis of Metric Multi Dimensional Scaling with Respect to Different Goodness-of-Fit Criteria in the Context of Dimensionality Reduction
- Author
-
Dr. T. Sudha and P. Nagendra Kumar
- Subjects
Goodness of fit ,Spacetime ,Computer science ,Dimensionality reduction ,Metric (mathematics) ,Context (language use) ,Multidimensional scaling ,Algorithm ,Curse of dimensionality - Abstract
Data Mining has become one of the most prominent areas of research. Most Data Mining tasks are affected by the curse of dimensionality, and dimensionality reduction is one way to cope with it. The present work analyzes the performance of the Metric Multidimensional Scaling technique with respect to different goodness-of-fit criteria, namely Stress, Squared Stress, Sammon, and Strain, in the context of dimensionality reduction. Time and space are taken as the parameters for analyzing performance. Images of different sizes were considered, and Metric Multidimensional Scaling was applied to them for dimensionality reduction. The results show that the time taken for dimensionality reduction with the Strain criterion is higher than with the Stress, Squared Stress, and Sammon criteria, while the size of the images obtained after applying Metric Multidimensional Scaling remains the same for all goodness-of-fit criteria.
- Published
- 2018
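The Stress and Sammon criteria compared in this abstract are straightforward to compute from pairwise distances. A minimal NumPy sketch on synthetic data rather than the images the study uses; the embedding here is constructed to be exact, so both criteria come out zero:

```python
import numpy as np

def pairwise(X):
    """Euclidean distance matrix for a point configuration X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def stress_measures(X_high, X_low):
    """Raw stress and Sammon stress between two configurations."""
    i, j = np.triu_indices(len(X_high), k=1)      # each pair once
    d, dhat = pairwise(X_high)[i, j], pairwise(X_low)[i, j]
    raw = ((d - dhat) ** 2).sum()
    sammon = ((d - dhat) ** 2 / d).sum() / d.sum()
    return raw, sammon

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
X3 = np.hstack([X, np.zeros((20, 1))])   # 3-D data that is intrinsically 2-D
raw, sam = stress_measures(X3, X)        # a perfect 2-D embedding
print(raw, sam)                          # both 0.0
```

Squared Stress and Strain follow the same pattern with different residual weightings.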
7. A Note on Karl Pearson’s 1900 Chi-Squared Test: Two Derivations of the Asymptotic Distribution, and Uses in Goodness of Fit and Contingency Tests of Independence, and a Comparison with the Exact Sample Variance Chi-Square Result
- Author
-
Timothy Falcon Crack
- Subjects
Contingency table ,Goodness of fit ,Statistics ,Chi-square test ,Test statistic ,Asymptotic distribution ,Estimator ,Sample variance ,Mathematics ,Central limit theorem - Abstract
Karl Pearson’s chi-squared test is widely known and used, both as a goodness-of-fit test for hypothesized distributions or frequencies, and in tests of independence in contingency tables. The test was introduced in Pearson (1900), but the derivation in that paper is almost incomprehensible. Two derivations of the asymptotic distribution are given here. The first uses joint characteristic functions, and the second uses a multivariate central limit theorem. Goodness-of-fit tests and contingency table tests of independence are discussed, and the asymptotic chi-square distribution result for Pearson’s test statistic is compared and contrasted with the exact chi-square result for the sample variance estimator.
- Published
- 2018
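Pearson's statistic discussed above is simple to reproduce. A short sketch of a goodness-of-fit test of a die against a uniform null, with made-up counts for illustration:

```python
import numpy as np
from scipy.stats import chi2

observed = np.array([8, 9, 19, 5, 8, 11])    # 60 rolls of a die (invented)
expected = np.full(6, observed.sum() / 6)    # uniform null: 10 per face
stat = ((observed - expected) ** 2 / expected).sum()   # Pearson's X^2
pval = chi2.sf(stat, df=len(observed) - 1)   # asymptotic chi-square, 5 df
print(stat)            # 11.6
print(round(pval, 3))
```

The same statistic with `df = (rows - 1) * (cols - 1)` gives the contingency-table test of independence the note also covers.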
8. Estimation and Goodness-of-Fit for Regime-Switching Copula Models with Application
- Author
-
Mamadou Yamar Thioub, Bouchra R. Nasri, and Bruno Rémillard
- Subjects
R package ,Mathematical optimization ,Goodness of fit ,Computer science ,Valuation of options ,Parametric model ,Copula (linguistics) ,Expectation–maximization algorithm ,Regime switching - Abstract
We consider several time series and, for each of them, fit an appropriate dynamic parametric model. This produces serially independent error terms for each time series. The dependence between these error terms is then modeled by a regime-switching copula. The EM algorithm is used to estimate the parameters, and a sequential goodness-of-fit procedure based on Cramér-von Mises statistics is proposed to select the appropriate number of regimes. Numerical experiments are performed to assess the validity of the proposed methodology. As an example of application, we evaluate a European put-on-max option on the returns of two assets. To facilitate the use of our methodology, we have built an R package, HMMcopula, available on CRAN.
- Published
- 2018
9. Corrected Goodness-of-Fit Test in Covariance Structure Analysis
- Author
-
Kazuhiko Hayakawa
- Subjects
Analysis of covariance ,Models, Statistical ,Monte Carlo method ,Context (language use) ,Variance (accounting) ,Biostatistics ,Covariance ,Structural equation modeling ,Goodness of fit ,Autoregressive model ,Sample size determination ,Data Interpretation, Statistical ,Statistics ,Econometrics ,Chi-square test ,Humans ,Psychology ,Psychology (miscellaneous) ,Mathematics - Abstract
Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases.
- Published
- 2017
10. Goodness-of-Fit Tests and Selection Methods for Operational Risk
- Author
-
Vincent Lehérissé and Sophie Lavaud
- Subjects
Economics and Econometrics ,Computer science ,Operational risk ,Goodness of fit ,Bayesian information criterion ,Sample size determination ,Log-normal distribution ,Statistics ,Econometrics ,Selection method ,Sensitivity (control systems) ,Business and International Management ,Frequency distribution ,Annual loss ,Finance ,Mathematics - Abstract
Within the Loss Distribution Approach (LDA) framework, the required capital is the 99.9% Value-at-Risk of the annual loss distribution, which is based on the fit of severity and frequency distributions using internal data. Supervisory guidelines for the Advanced Measurement Approaches address the sensitivity of goodness-of-fit (GOF) tests to the sample size, the number of parameters estimated, and the tail of the distributions. They suggest that a bank should consider selection methods that use the relative performance of the distributions at different confidence levels. In this paper, a study is conducted to investigate selection methods such as the Bayesian Information Criterion and the violation ratio as alternatives to the GOF tests. Attention is also given to the main properties of the usual GOF tests performed in operational risk, in order to identify the cases where the sensitivity raised by the guidelines is encountered and to assess whether those tests can nonetheless be considered reliable.
- Published
- 2014
11. Spectral Goodness of Fit for Network Models
- Author
-
Benjamin Lubin and Jesse Shore
- Subjects
Sociology and Political Science ,Computer science ,Applications of spectral graph theory ,Model selection ,Measure (mathematics) ,Goodness of fit ,Linear regression ,Econometrics ,Models of network structure ,General Psychology ,Statistic ,Mathematics ,Network model ,Spectral graph theory ,General Social Sciences ,Probability and statistics ,Statistical model ,Anthropology ,Laplacian matrix ,Algorithm - Abstract
We introduce a new statistic, ‘spectral goodness of fit’ (SGOF) to measure how well a network model explains the structure of the pattern of ties in an observed network. SGOF provides a measure of fit analogous to the standard R² in linear regression. Additionally, as it takes advantage of the properties of the spectrum of the graph Laplacian, it is suitable for comparing network models of diverse functional forms, including both fitted statistical models and algorithmic generative models of networks. After introducing, defining, and providing guidance for interpreting SGOF, we illustrate the properties of the statistic with a number of examples and comparisons to existing techniques. We show that such a spectral approach to assessing model fit fills gaps left by earlier methods and can be widely applied.
- Published
- 2014
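The core ingredient of SGOF, comparing an observed network's Laplacian spectrum to that of a model-generated network, can be sketched as follows. This computes only a raw spectral distance, not the authors' R²-style SGOF normalization, and the Erdős-Rényi null model and example graph are illustrative choices:

```python
import networkx as nx
import numpy as np

def spectral_distance(G, H):
    """Euclidean distance between sorted graph Laplacian spectra."""
    a = np.sort(nx.laplacian_spectrum(G))
    b = np.sort(nx.laplacian_spectrum(H))
    return np.linalg.norm(a - b)

G = nx.karate_club_graph()
# A density-matched Erdos-Renyi graph as a simple null model
null = nx.erdos_renyi_graph(G.number_of_nodes(), nx.density(G), seed=1)
print(spectral_distance(G, G))            # 0.0
print(spectral_distance(G, null) > 0)     # True
```

An SGOF-style score would compare this residual distance against the distance achieved by a reference null model, analogous to explained variance.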
12. Goodness-of-Fit Tests for the Frailty Distribution in Proportional Hazards Models with Shared Frailty
- Author
-
Candida Geerdens, Paul Janssen, and Gerda Claeskens
- Subjects
Statistics and Probability ,Hazard (logic) ,Polynomial ,Time Factors ,Computer science ,Biostatistics ,Frailty model ,Gamma distribution ,Goodness-of-fit ,Order selection test ,Orthogonal polynomials ,Goodness of fit ,Pregnancy ,Statistics ,Econometrics ,Animals ,Cluster Analysis ,Cluster analysis ,Mastitis, Bovine ,Insemination, Artificial ,Proportional Hazards Models ,Mathematics ,Event (probability theory) ,Likelihood Functions ,Models, Statistical ,Series (mathematics) ,Proportional hazards model ,General Medicine ,Marginal likelihood ,Cattle ,Female ,Statistics, Probability and Uncertainty - Abstract
Frailty models account for the clustering present in grouped event time data. A proportional hazards model with shared frailties expresses the hazard for each subject. Often a one-parameter gamma distribution is assumed for the frailties. In this paper we construct formal goodness-of-fit tests to test for gamma frailties. We construct a new class of frailty models that extend the gamma frailty model by using certain polynomial expansions that are orthogonal with respect to the gamma density. For this extended family we obtain an explicit expression for the marginal likelihood of the data. The order selection test is based on finding the best fitting model in such a series of expanded models. A bootstrap is used to obtain p-values for the tests. Simulations and data examples illustrate the test's performance.
- Published
- 2012
13. Patching vs Packaging: Complementary Effects, Goodness of Fit, Degrees of Freedom and Intentionality in Policy Portfolio Design
- Author
-
Michael Howlett and Jeremy Rayner
- Subjects
Consistency (negotiation) ,Goodness of fit ,Work (electrical) ,Management science ,Order (exchange) ,Political science ,Portfolio ,Coherence (philosophical gambling strategy) ,Positive economics ,Policy analysis ,Policy Sciences - Abstract
Thinking about policy mixes is at the forefront of current research work in the policy sciences and raises many significant questions with respect to policy tools and instruments, processes of policy formulation, and the evolution of tool choices over time. Not least among these is the potential for multiple policy tools to achieve policy goals in an efficient and effective way. Previous work on policy mixes has highlighted evaluative criteria such as "consistency" (the ability of multiple policy tools to reinforce rather than undermine each other in the pursuit of individual policy goals), "coherence" (the ability of multiple policy goals to co-exist with each other in a logical fashion), and "congruence" (the ability of multiple goals and instruments to work together in a uni-directional or mutually supportive fashion) as important design principles and measures of optimality in policy mixes. Previous work on the evolution of policy mixes has also highlighted how these three criteria are often lacking in mixes that have evolved over time, as well as in those that have been consciously designed. This paper revisits work in this latter tradition in order to assess more clearly why many policy mixes are sub-optimal and what consequences this has for thinking about, and practices of, policy design. Adding the dimensions of "intentionality", "context", "goodness of fit", and "degrees of freedom" to earlier thinking about processes such as policy layering, conversion, and drift, it is argued, helps to make sense of these different processes and how they relate to design. More precise specification of the nature of policy change reveals the need to distinguish design processes such as "policy patching" from the usual assumptions of wholesale policy replacement, the conditions under which they are likely to emerge, and the practical activities required to enhance policy consistency, coherence, and congruence.
- Published
- 2013
14. On the Robustness of Goodness-of-Fit Tests for Copulas
- Author
-
Gregor N. F. Weiss
- Subjects
Goodness of fit ,Outlier ,Copula (linguistics) ,Statistics ,Econometrics ,Robust statistics ,Tail dependence ,Anomaly detection ,Mathematics ,Parametric statistics ,Nominal level - Abstract
This paper proposes the use of outlier detection methods from robust statistics and copula goodness-of-fit tests to identify components of mixture copulas. We first consider simulated data samples in which the true dependence structure is given by a mixture of two parametric copulas: one copula that is presumed to represent the true dependence structure and one disturbing copula. The Monte Carlo simulations show that the goodness-of-fit tests we consider lose significantly in power when applied to mixtures of copulas with different tail dependence. Several goodness-of-fit tests are shown to hold their nominal level when multivariate outliers are excluded, although this improvement comes at the price of a further loss in the tests' power. The usefulness of excluding outliers in copula goodness-of-fit testing is exemplified in an empirical risk management application.
- Published
- 2011
15. A Comparison of Alternative Approaches to Supremum-Norm Goodness of Fit Tests with Estimated Parameters
- Author
-
Thomas Parker
- Subjects
Economics and Econometrics ,Random field ,Score ,Khmaladze transformation ,Transformation (function) ,Goodness of fit ,Statistics ,Parametric model ,Applied mathematics ,Null hypothesis ,Gaussian process ,Gauss–Markov process ,Social Sciences (miscellaneous) ,Empirical process ,Mathematics - Abstract
Goodness-of-fit tests based on parametric empirical processes have nonstandard limiting distributions when the null hypothesis is composite — that is, when parameters of the null model are estimated. Several analytic solutions to this problem have been suggested, including the calculation of adjusted critical values for these nonstandard distributions and the transformation of the empirical process such that statistics based on the transformed process are asymptotically distribution-free. The approximation methods proposed by Durbin (1985, Journal of Applied Probability 22(1), 99–122) can be applied to conduct inference for tests based on supremum-norm statistics. The resulting tests have quite accurate size, a fact that has gone unrecognized in the econometrics literature. Some justification for this accuracy lies in the similar features that Durbin’s approximation methods share with the theory of extrema for Gaussian random fields and for Gauss-Markov processes. These adjustment techniques are also related to the transformation methodology proposed by Khmaladze (1981, Theory of Probability and Its Applications 26(2), 240–257) through the score function of the parametric model. Simulation experiments suggest that in small samples, Durbin-style adjustments result in tests that have higher power than tests based on transformed processes, and in some cases they have higher power than parametric bootstrap procedures.
- Published
- 2012
16. A New Goodness-of-Fit Test for Event Forecasting and its Application to Credit Default Models
- Author
-
Markus Leippold and Andreas Bloechlinger
- Subjects
Goodness of fit ,Computer science ,Component (UML) ,Econometrics ,Test statistic ,Independence (probability theory) ,Statistic ,Statistical hypothesis testing ,Credit risk ,Test (assessment) - Abstract
We develop a new goodness-of-fit test for validating the performance of probability forecasts. Our test statistic is particularly powerful under sparseness and dependence in the observed data. To build our test statistic, we start from a formal definition of calibrated forecasts, which we operationalize by introducing two components. The first component tests the level of the estimated probabilities. The second component validates the shape, measuring the differentiation between high and low probability events. After constructing test statistics for both level and shape, we provide a global goodness-of-fit statistic, which is asymptotically chi-square distributed. In a simulation exercise, we find that our approach is correctly sized and more powerful than alternative statistics. In particular, our shape statistic is significantly more powerful than the Kolmogorov-Smirnov test. Under independence, our global test has significantly greater power than the popular Hosmer-Lemeshow chi-square test. Moreover, even under dependence our global test remains correctly sized and consistent. As a timely and important empirical application of our method, we study the validation of a forecasting model for credit default events.
- Published
- 2009
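The Hosmer-Lemeshow chi-square test that the paper's global statistic is benchmarked against can be sketched in a few lines. This is the classic benchmark, not the authors' new level/shape test; the toy forecasts below are constructed to be perfectly calibrated, so the statistic is exactly zero:

```python
import numpy as np

def hosmer_lemeshow(p, y, n_groups=5):
    """Hosmer-Lemeshow chi-square over probability-ranked groups."""
    order = np.argsort(p)
    H = 0.0
    for grp in np.array_split(order, n_groups):
        n_g = len(grp)
        obs = y[grp].sum()    # observed events in the group
        exp = p[grp].sum()    # expected events in the group
        # assumes 0 < exp < n_g in every group
        H += (obs - exp) ** 2 / (exp * (1 - exp / n_g))
    return H

# Each group's event count equals the sum of its predicted
# probabilities, so every group contributes zero
p = np.array([0.2] * 10 + [0.8] * 10)
y = np.array([1] * 2 + [0] * 8 + [1] * 8 + [0] * 2)
print(hosmer_lemeshow(p, y, n_groups=2))   # 0.0
```

Under the null, H is approximately chi-square with `n_groups - 2` degrees of freedom, which is where the test's known weaknesses under sparse or dependent data arise.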
17. Evaluating the Goodness of Fit of Stochastic Mortality Models
- Author
-
Andrew J. G. Cairns, Guy Coughlan, David Epstein, David Blake, Marwa Khalaf-Allah, and Kevin Dowd
- Subjects
Statistics and Probability ,Economics and Econometrics ,Series (mathematics) ,Extension (predicate logic) ,Residual ,Data set ,Cohort effect ,Goodness of fit ,Statistics ,Econometrics ,Statistics, Probability and Uncertainty ,Standard normal table ,Null hypothesis ,Normality ,Mathematics - Abstract
This study sets out a framework to evaluate the goodness of fit of stochastic mortality models and applies it to six different models estimated using English & Welsh male mortality data over ages 64–89 and years 1961–2007. The methodology exploits the structure of each model to obtain various residual series that are predicted to be iid standard normal under the null hypothesis of model adequacy. Goodness of fit can then be assessed using conventional tests of the predictions of iid standard normality. The models considered are: Lee and Carter's (1992) one-factor model, a version of Renshaw and Haberman's (2006) extension of the Lee–Carter model to allow for a cohort effect, the age-period-cohort model, which is a simplified version of the Renshaw–Haberman model, the 2006 Cairns–Blake–Dowd two-factor model, and two generalized versions of the latter that allow for a cohort effect. For the data set considered, there are some notable differences amongst the models, but none of the models performs well in all tests and no model clearly dominates the others.
- Published
- 2009
18. Validity of the Parametric Bootstrap for Goodness-of-Fit Testing in Dynamic Models
- Author
-
Bruno Rémillard
- Subjects
Multivariate statistics ,Series (mathematics) ,Goodness of fit ,Autoregressive conditional heteroskedasticity ,Parametric model ,Econometrics ,Projection (set theory) ,Parametric statistics ,Semiparametric model ,Mathematics - Abstract
It is shown that the parametric bootstrap can be used for computing p-values of goodness-of-fit tests of multivariate time series parametric models. These models include Markovian models, GARCH models with non-Gaussian innovations, regime-switching models, as well as semiparametric models involving copulas of multivariate time series. The methodology is intuitive, easy to implement, and provides an interesting alternative to Khmaladze's transform or other projection methods.
- Published
- 2011
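The general recipe the abstract validates, simulate from the fitted model, re-estimate, and recompute the test statistic, looks like this for a simple iid normal null with a Kolmogorov-Smirnov statistic. This is a far simpler setting than the multivariate time series models the paper covers, and the sample sizes and seeds are illustrative:

```python
import numpy as np
from scipy import stats

def bootstrap_ks_pvalue(data, n_boot=200, seed=0):
    """Parametric-bootstrap p-value for a KS test of normality
    when the null parameters are estimated (composite null)."""
    rng = np.random.default_rng(seed)
    mu, sigma = stats.norm.fit(data)
    t_obs = stats.kstest(data, 'norm', args=(mu, sigma)).statistic
    t_boot = []
    for _ in range(n_boot):
        sim = rng.normal(mu, sigma, size=len(data))
        m, s = stats.norm.fit(sim)   # re-estimate on each bootstrap sample
        t_boot.append(stats.kstest(sim, 'norm', args=(m, s)).statistic)
    return np.mean([t >= t_obs for t in t_boot])

data = np.random.default_rng(42).normal(5, 2, size=100)
p = bootstrap_ks_pvalue(data)
print(0.0 <= p <= 1.0)   # True
```

Re-estimating the parameters inside the loop is the step that makes the bootstrap distribution mimic the composite-null distribution of the statistic.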
19. A Goodness-of-Fit Test with Focus on Conditional Value at Risk
- Author
-
José Renato Haas Ornelas, José Fajardo, and Aquiles Farias
- Subjects
goodness-of-fit ,Monte Carlo method ,Monte Carlo Simulation ,Empirical distribution function ,Inverse Gaussian distribution ,Expected shortfall ,conditional value at risk ,Goodness of fit ,Sample size determination ,Statistics ,Econometrics ,Focus (optics) ,Statistical hypothesis testing ,Mathematics - Abstract
To verify whether an empirical distribution has a specific theoretical distribution, several tests have been used, such as the Kolmogorov-Smirnov and Kuiper tests. These tests try to analyze whether all parts of the empirical distribution have a specific theoretical shape. But in a risk management framework, the focus of analysis should be on the tails of the distributions, since we are interested in the extreme returns of financial assets. This paper proposes a new goodness-of-fit hypothesis test that focuses on the tails of the distribution. The new test is based on the Conditional Value at Risk measure. We use Monte Carlo simulations to assess the power of the new test with different sample sizes, and then compare it with the Crnkovic and Drachman, Kolmogorov-Smirnov, and Kuiper tests. Results show that the new distance performs better than the other distances in small samples. We also performed hypothesis tests using financial data, testing the hypothesis that the empirical distribution is Normal, Scaled Student-t, Generalized Hyperbolic, Normal Inverse Gaussian, or Hyperbolic, based on the new distance proposed in this paper.
- Published
- 2008
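A tail-focused distance of the kind proposed here can be illustrated by comparing an empirical expected shortfall (CVaR) with its value under a hypothesized distribution. The sketch below uses a standard normal null at the 5% level; the paper's actual test statistic and critical values differ:

```python
import numpy as np
from scipy import stats

def empirical_cvar(x, alpha=0.05):
    """Expected shortfall: mean of returns at or below the alpha-quantile."""
    var = np.quantile(x, alpha)
    return x[x <= var].mean()

def normal_cvar(alpha=0.05):
    """Left-tail expected shortfall of the standard normal: -phi(z)/alpha."""
    z = stats.norm.ppf(alpha)
    return -stats.norm.pdf(z) / alpha

print(round(normal_cvar(), 4))   # -2.0627
x = np.random.default_rng(0).standard_normal(100_000)
# The empirical tail mean converges to the theoretical value
print(abs(empirical_cvar(x) - normal_cvar()) < 0.05)   # True
```

A CVaR-based distance between empirical and theoretical tails weights exactly the region a risk manager cares about, which is the motivation the abstract gives.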
20. Goodness-of-Fit Statistics of the Logistic Regression in Unbalanced Samples (Gütemaße der logistischen Regression bei unbalancierten Stichproben)
- Author
-
Björn Christensen, Dennis Proppe, Dominik Papies, and Michel Clement
- Subjects
Variables ,Goodness of fit ,Statistics ,Sensitivity (control systems) ,Logistic regression ,Base (exponentiation) ,Regression ,Mathematics - Abstract
Using a simulation, we investigate established goodness-of-fit statistics used to evaluate logistic regression outcomes with regard to their sensitivity to the base rate (the proportion of events in the dichotomous dependent variable). Our results indicate that the pseudo-R² statistics in particular cannot be properly interpreted when the base rate is low. We find that the area under the ROC curve is a stable indicator of goodness-of-fit. (Article in German.)
- Published
- 2008
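The area under the ROC curve that the study recommends can be computed directly as the probability that a randomly chosen event outranks a randomly chosen non-event (the Mann-Whitney formulation), which is why it is stable under a low base rate. A minimal sketch with made-up scores:

```python
def auc(scores, labels):
    """Area under the ROC curve via pairwise comparisons (Mann-Whitney)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # count event/non-event pairs ranked correctly; ties count half
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One misranked pair out of four, so AUC = 3/4
print(auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))   # 0.75
```

Because the statistic depends only on the relative ranking of events versus non-events, duplicating either class leaves it unchanged, unlike pseudo-R² measures.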
21. Gaussian Slug - Simple Nonlinearity Enhancement to the 1-Factor and Gaussian Copula Models in Finance, with Parametric Estimation and Goodness-of-Fit Tests on US and Thai Equity Data
- Author
-
Poomjai Nacaskul, PhD, DIC, CFA, and Worawut Sabborriboon
- Subjects
Finance ,Estimation theory ,Gaussian ,Probability density function ,Multivariate normal distribution ,Bivariate analysis ,Copula (probability theory) ,Goodness of fit ,Econometrics ,Statistical hypothesis testing ,Mathematics - Abstract
A bivariate normal distribution, with its attendant non-analytically-integrable p.d.f., lies at the heart of many financial theories. Its derived Gaussian copula ostensibly does away with the normality assumptions, only to retain the linear (Pearson's) correlation measure implicit in said bivariate normal p.d.f. In a financial modelling context, the Gaussian copula suffers from at least three setbacks, namely its inability to capture (extreme) tail, asymmetric (upside vs. downside), and nonlinear (diminishing) dependency structures. Noting that various fixes have been proposed with respect to the former two issues, (i) this paper attempts to address the nonlinearity by proposing a bivariate 'Gaussian Slug' distribution, (ii) from which a derived copula density function quite naturally and parsimoniously captures a particular nonlinear dependency structure. In addition, (iii) this paper devises a simple, intuitive formulation of copula parameter estimation as the minimisation of a chi-square test statistic, (iv) whose resultant value readily lends itself to the widely popular statistical goodness-of-fit testing. Tests were performed comparing independent vs. Gaussian vs. 'Gaussian Slug' copulas on weekly US and Thai equity market index and individual stock returns data, all available on Reuters.
- Published
- 2009
22. Nonparametric Analysis of Household Labor Supply: Goodness-of-Fit and Power of the Unitary and the Collective Model
- Author
-
Frederic Vermeulen and Laurens Cherchye
- Subjects
Power (social and political) ,Microeconomics ,Power analysis ,Household survey ,Goodness of fit ,Nonparametric statistics ,Collective model ,Economics ,Unitary state ,Complement (set theory) - Abstract
We compare the empirical performance of unitary and collective labor supply models, using representative data from the Dutch DNB Household Survey. We conduct a nonparametric analysis that avoids the distortive impact of an erroneously specified functional form for the preferences and/or the intrahousehold bargaining process. Our analysis focuses on the goodness-of-fit of the two behavioral models. To guarantee a fair comparison, we complement this goodness-of-fit analysis with a power analysis. Our results strongly favor the collective approach to modeling the behavior of multi-person households.
- Published
- 2006
23. Limited Information Goodness-of-Fit Testing in Multidimensional Contingency Tables
- Author
-
Albert Maydeu-Olivares and Harry Joe
- Subjects
Contingency table ,Response model ,Applied Mathematics ,Maximum likelihood ,Estimator ,Multivariate normal distribution ,Type (model theory) ,Categorical data analysis ,Goodness of fit ,Item response theory ,Consistent estimator ,Statistics ,Null hypothesis ,Categorical variable ,General Psychology ,Statistical hypothesis testing ,Mathematics - Abstract
We introduce a family of goodness-of-fit statistics for testing composite null hypotheses in multidimensional contingency tables. These statistics are quadratic forms in marginal residuals up to order r. They are asymptotically chi-square under the null hypothesis when parameters are estimated using any asymptotically normal consistent estimator. For a widely used item response model, when r is small and multidimensional tables are sparse, the proposed statistics have accurate empirical Type I errors, unlike Pearson's X². For this model in nonsparse situations, the proposed statistics are also more powerful than X². In addition, the proposed statistics are asymptotically chi-square when applied to subtables, and can be used for a piecewise goodness-of-fit assessment to determine the source of misfit in poorly fitting models.
- Published
- 2005
24. Goodness-of-Fit Tests in Nonparametric Regression
- Author
-
Ingrid Van Keilegom and John H. J. Einmahl
- Subjects
Weak convergence ,Goodness of fit ,Statistics ,Estimator ,Sample (statistics) ,Function (mathematics) ,Empirical process ,Independence (probability theory) ,Mathematics ,Nonparametric regression - Abstract
Consider the nonparametric regression model Y = m(X) + e, where the function m is smooth but unknown, and e is independent of X. We construct omnibus goodness-of-fit tests, based on n independent copies of (X, Y), for the independence of e and X, and establish asymptotic results for the proposed test statistics. We investigate their finite-sample properties through a simulation study and present an econometric application to household data. One testing procedure is based on differences of neighboring Y's, whereas the other makes use of an estimator of m. The proofs are based on delicate weighted empirical process theory.
- Published
- 2006
25. Limited and Full Information Estimation and Goodness-of-Fit Testing in 2^n Contingency Tables
- Author
-
Harry Joe and Albert Maydeu-Olivares
- Subjects
Contingency table ,Multivariate statistics ,Goodness of fit ,Pooling ,Statistics ,Parametric model ,Estimator ,Lack-of-fit sum of squares ,Statistical hypothesis testing ,Mathematics - Abstract
High-dimensional contingency tables tend to be sparse, and standard goodness-of-fit statistics such as X² cannot be used without pooling categories. As an improvement on arbitrary pooling, for goodness-of-fit of large 2^n contingency tables we propose a class of quadratic form statistics based on the residuals of margins or multivariate moments up to order r. Further, the marginal residuals are useful for diagnosing lack of fit of parametric models. These classes of test statistics are asymptotically chi-square and have better small-sample properties than X². We also show that these classes of test statistics have better power than X² for some useful multivariate binary models. Related to this class of test statistics is a class of limited-information estimators based on low-dimensional margins. We show that these estimators have high efficiency for one commonly used latent trait model for binary data.
- Published
- 2003
26. Asymptotically Distribution-Free Goodness-of-Fit Testing for Copulas
- Author
-
Sami Umut Can, John H. J. Einmahl, and Roger J. A. Laeven
- Published
- 2017
27. Modified Distribution-Free Goodness of Fit Test Statistic
- Author
-
Alexander Shapiro, Michael W. Browne, and So Yeon Chun
- Subjects
symbols.namesake ,PRESS statistic ,F-test ,Sampling distribution ,Statistics ,Ancillary statistic ,Pearson's chi-squared test ,symbols ,Test statistic ,Econometrics ,Completeness (statistics) ,Statistic ,Mathematics - Abstract
Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness-of-fit of a model under analysis. One of the most popular test statistics is the asymptotically distribution-free (ADF) test statistic introduced by Browne in 1984. The ADF statistic can be used to test models without any specific distributional assumption (e.g., multivariate normality) for the observed data. Despite this advantage, various empirical studies have shown that unless sample sizes are extremely large, the ADF statistic can perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and propose a modified test statistic that improves performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.
- Published
- 2015
28. A Modified Regularized Goodness-of-Fit Test for Copulas
- Author
-
Jean-Marie Dufour, Christian Genest, and Wanling Huang
- Published
- 2012
29. Using Histograms for Goodness-of-Fit Testing, Model Indexing and Parameter Estimation in Stock Markets
- Author
-
Robert Dochow, Mike Kersch, and Günter Schmidt
- Published
- 2011
30. Chi-Square Goodness-of-Fit Test
- Author
-
Phillip E. Pfeifer
- Published
- 2008
31. Kernel Based Goodness-of-Fit Tests for Copulas with Fixed Smoothing Parameters
- Author
-
O. Scaillet
- Published
- 2005
32. Attenuation of Agglomeration Economies: Evidence from the Universe of Chinese Manufacturing Firms
- Author
-
Shimeng Liu, Jing Li, and Liyao Li
- Subjects
Distance decay ,Economics and Econometrics ,History ,Polymers and Plastics ,Economies of agglomeration ,Attenuation ,Pooling ,State sector ,Industrial and Manufacturing Engineering ,Urban Studies ,Goodness of fit ,Economics ,Econometrics ,Manufacturing firms ,Business and International Management ,China - Abstract
This paper examines the industry-specific attenuation speed of agglomeration economies and its interplay with the large presence of state-owned enterprises in China. We achieve this focus by taking advantage of unique geo-coded administrative data on the universe of Chinese manufacturing firms. The full-spectrum analysis also allows us to assess the goodness of fit of various spatial decay functional forms and to systematically evaluate the micro-foundations that govern the decay patterns across industry types. We obtain three main findings. First, agglomeration economies attenuate sharply with spatial distance in China, with large heterogeneity in the attenuation speed across ownership and industry types. Second, the spatial decay speed is positively linked with proxies for knowledge spillovers and labor market pooling but negatively linked with proxies for input sharing and the share of the state sector. Third, the inverse square distance decay function presents the best goodness of fit among the tested functional forms.
- Published
- 2020
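Comparing the goodness of fit of candidate spatial decay forms, as the abstract describes, reduces to fitting each form to effect-by-distance estimates and comparing residual error. A minimal sketch, assuming a single free scale parameter per form (closed-form least squares) and entirely hypothetical distance-band data; the paper's actual estimation is far richer.

```python
import math

def fit_scale(dists, ys, g):
    """Least-squares scale A for y ≈ A * g(d); closed form for a fixed shape g."""
    num = sum(y * g(d) for d, y in zip(dists, ys))
    den = sum(g(d) ** 2 for d in dists)
    return num / den

def sse(dists, ys, g, A):
    """Sum of squared errors of the fitted decay curve."""
    return sum((y - A * g(d)) ** 2 for d, y in zip(dists, ys))

# Hypothetical agglomeration-effect estimates by distance band (km)
dists = [1, 2, 5, 10, 20, 50]
effects = [1.0, 0.26, 0.04, 0.011, 0.002, 0.0005]

forms = {
    "inverse square": lambda d: 1.0 / d ** 2,
    "inverse distance": lambda d: 1.0 / d,
    "exponential": lambda d: math.exp(-0.5 * d),
}
fits = {name: sse(dists, effects, g, fit_scale(dists, effects, g))
        for name, g in forms.items()}
best = min(fits, key=fits.get)
print(best)
```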
33. A Fitting Return to Fitting Returns: Cryptocurrency Distributions Revisited
- Author
-
Savva Shanaev and Binam Ghimire
- Subjects
Cryptocurrency ,Goodness of fit ,Skewness ,Harmonic mean ,Kurtosis ,Applied mathematics ,Cauchy distribution ,Power function ,Power law ,Mathematics - Abstract
This study fits 22 theoretical distribution functions, four of them originally derived, onto the daily returns of 772 cryptocurrencies, with goodness-of-fit evaluated using Cramer-von Mises, Anderson-Darling, Kuiper, Kolmogorov-Smirnov, and Chi-squared tests, as well as a harmonic mean p-value synthetic criterion. Most cryptocurrency return distributions can be sufficiently approximated with a Johnson SU function or an asymmetric power function. Johnson SU, asymmetric Student, and asymmetric Laplace distributions have better fit for larger cryptocurrencies, while error, generalised Cauchy, and Hampel (a Gaussian-Cauchy mixture) distributions are more characteristic of smaller cryptocurrencies, with larger coins demonstrating better overall fit. Less than 8% of sample coins and less than 4% of the top quartile by size do not fit any of the investigated distributions, the three largest “misbehaving” cryptocurrencies being Litecoin, Dogecoin, and Decred. Bitcoin and Ethereum are best modelled with error and asymmetric power law distributions, respectively, with asymmetric power law distributions stable through time. More than 30% of sample cryptocurrencies, and 26% from the top quartile, have infinite theoretical variance, severely limiting the diversification potential with such cryptoassets. The three most prominent infinite-variance coins are Bitcoin SV, Tezos, and ZCash. This study has substantial implications for risk management, portfolio management, and cryptocurrency derivative pricing.
- Published
- 2021
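The harmonic mean p-value used above as a synthetic criterion has a one-line definition. Note that interpreting the raw HMP as a p-value is only approximate; the full procedure (Wilson, 2019) applies a calibration, which this sketch omits.

```python
def harmonic_mean_p(pvalues):
    """Harmonic mean p-value: a way of pooling dependent goodness-of-fit
    p-values (here imagined as those of the CvM, AD, Kuiper, K-S, and
    chi-squared tests for one coin). Dominated by the smallest p-values."""
    k = len(pvalues)
    return k / sum(1.0 / p for p in pvalues)

print(round(harmonic_mean_p([0.04, 0.2, 0.5, 0.8, 0.3]), 4))  # → 0.1367
```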
34. On the Shape of Timing Distributions in Free-Text Keystroke Dynamics Profiles
- Author
-
Nahuel Francisco Gonzalez, Enrique Calot, Jorge Salvador Ierache, and Waldo Hasperué
- Subjects
Keystroke dynamics ,Goodness of fit ,Ranking ,Skewness ,business.industry ,Computer science ,Histogram ,Soft biometrics ,Probability distribution ,Pattern recognition ,Artificial intelligence ,business ,Central element - Abstract
Keystroke dynamics is a soft biometric trait. Although the shape of the timing distributions in keystroke dynamics profiles is a central element for the accurate modeling of the behavioral patterns of the user, a simplified approach has been to presuppose normality. Careful consideration of the individual shapes for the timing models could lead to improvements in the error rates of current methods or possibly inspire new ones. The main objective of this study is to compare several heavy-tailed and positively skewed candidate distributions in order to rank them according to their merit for fitting timing histograms in keystroke dynamics profiles. Results are summarized in three ways: counting how many times each candidate distribution provides the best fit and ranking them in order of success, measuring average information content, and ranking candidate distributions according to the frequency of hypothesis rejection with an Anderson-Darling goodness of fit test. Seven distributions with two parameters and seven with three were evaluated against three publicly available free-text keystroke dynamics datasets. The results confirm the established use in the research community of the log-normal distribution, in its two- and three-parameter variations, as excellent choices for modeling the shape of timings histograms in keystroke dynamics profiles. However, the log-logistic distribution emerges as a clear winner among all two- and three-parameter candidates, consistently surpassing the log-normal and all the rest under the three evaluation criteria for both hold and flight times.
- Published
- 2021
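Ranking candidate distributions by fit on timing data, as this study does, can be sketched with two of its candidates. The comparison here uses maximum log-likelihood with closed-form MLEs (an assumption: the study ranks by best-fit counts, information content, and Anderson-Darling rejections, not raw likelihood), on synthetic hold times in milliseconds.

```python
import math, random

def normal_loglik(xs):
    """Maximized Gaussian log-likelihood: -n/2 * (log(2*pi*var) + 1)."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def lognormal_loglik(xs):
    """Maximized 2-parameter log-normal log-likelihood: Gaussian fit to
    log(x), minus the Jacobian term sum(log x)."""
    logs = [math.log(x) for x in xs]
    return normal_loglik(logs) - sum(logs)

random.seed(2)
timings = [random.lognormvariate(math.log(120), 0.5) for _ in range(500)]  # ms
ranking = sorted([("normal", normal_loglik(timings)),
                  ("log-normal", lognormal_loglik(timings))],
                 key=lambda t: -t[1])
print([name for name, _ in ranking])
```

On positively skewed timing data the log-normal wins comfortably, which mirrors the abstract's point that presupposing normality is a simplification.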
35. Global Assessment of the Impact of Masking on COVID-19: A Country Level Comparative and Retrospective Analyses Using the Richards Model
- Author
-
Vejerano Ep and Mamun Mm
- Subjects
Masking (art) ,History ,Polymers and Plastics ,Coronavirus disease 2019 (COVID-19) ,Industrial and Manufacturing Engineering ,Confidence interval ,law.invention ,Surgical mask ,Transmission (mechanics) ,Goodness of fit ,law ,Statistics ,Range (statistics) ,Curve fitting ,Business and International Management ,Mathematics - Abstract
Background: Within four months of the first reported case in Wuhan, China, coronavirus disease 2019 (COVID-19) spread to more than 200 countries. From the initially reported cases in each country until mid-August 2020, country-specific interventions on masking were decentralized. Many types of masks are effective under laboratory conditions, but people wear masks inconsistently and imperfectly in the real world. The extent to which masking, when mandated at a country level, reduces severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission has not been analyzed across multiple countries/regions. Additionally, it remains unclear whether the type of mask worn, i.e., cloth mask, surgical mask, or bandana, was effective in halting the transmission of SARS-CoV-2. Therefore, using the Richards model, a phenomenological method, we investigated differences in the infection rate (r), turning point (ti), and curve steepness (α) of the COVID-19 outbreak among 177 countries to assess the impact of masking policy and mask type in containing COVID-19. Methods: We used the daily cumulative infection cases from the first reported case until August 19, 2020 for 177 countries/regions, taken from www.ourworldindata.org , a publicly available COVID-19 data repository. Using data for each country, we derived r, ti, and α of COVID-19 by fitting them to the Richards model. Data fitting was performed manually in IgorPro software. We evaluated goodness of fit by minimizing χ². Findings: Our analysis yielded global COVID-19 estimates of α = 0.009–4.3 (95% confidence interval (CI95%): 0.005–0.680), r = 0.008–0.50 (CI95%: 0.0001–0.0045), and ti = 0.58–315.92 (CI95%: 0.02–36.85). The estimated ranges were within the same limits for countries with and without mask mandates, for both single- and multiple-wave cases. Additionally, an early masking mandate did not correlate with a shorter ti.
Interpretation: Based on the phenomenological Richards model, this retrospective study’s findings indicate no significant difference in r, ti, and α between countries with and without mask mandates. Many laboratory and modeling studies on masking therefore did not translate into a measurable real-world difference for many countries, based on our fitting. The effectiveness of masking coupled with other non-pharmaceutical interventions depends on the measures carried out by each specific country/region. This result implies that mask enforcement policy and the type of mask used (e.g., surgical, cloth) could not have significantly reduced SARS-CoV-2 transmission. More stringent protection, such as N95 respirators combined with other control measures such as enhanced indoor ventilation and a longer social distancing recommendation, may be necessary to disrupt SARS-CoV-2 transmission effectively. Selecting the mask type is therefore critical to effectively disrupting the transmission of SARS-CoV-2, which is primarily transmitted via aerosols. Funding Information: None. Declaration of Interests: We have no competing interest to declare.
- Published
- 2021
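The Richards-model fit described above can be sketched against synthetic data. The parameterization C(t) = K(1 + exp(-r·a·(t - ti)))^(-1/a) is a common one for epidemic curves and is an assumption here, as is the one-dimensional grid search; the study fit all parameters interactively in IgorPro by minimizing χ².

```python
import math

def richards(t, K, r, a, ti):
    """Richards growth curve: K = final size, r = infection rate,
    ti = turning point, a = steepness."""
    return K * (1.0 + math.exp(-r * a * (t - ti))) ** (-1.0 / a)

def chi2(params, data):
    """Pearson-type chi-square between observed cumulative counts and the curve."""
    return sum((obs - richards(t, *params)) ** 2 / max(richards(t, *params), 1e-9)
               for t, obs in data)

# Synthetic cumulative-case curve generated from known parameters
true = (10000.0, 0.15, 1.0, 60.0)
data = [(t, richards(t, *true)) for t in range(0, 121, 5)]

# Crude grid search over r with the other parameters held at their true values
best_r = min((chi2((true[0], r, true[2], true[3]), data), r)
             for r in [0.05 + 0.01 * i for i in range(21)])[1]
print(best_r)
```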
36. Modelling and Forecasting of the Nigerian Stock Exchange
- Author
-
Ibraheem Abiodun Yahayah
- Subjects
stomatognathic system ,Goodness of fit ,business.industry ,Stock exchange ,Econometrics ,Economics ,Distribution (economics) ,business ,Variance gamma process ,health care economics and organizations ,Stock price ,Laplace distribution ,Variance-gamma distribution - Abstract
In this research work, we model Nigerian stock prices using the Variance-Gamma distribution. We compare the model with closely related distributions and test the goodness of fit. Finally, we compare the Nigerian stock price model with a Johannesburg Stock Exchange model.
- Published
- 2020
37. The Development of Instructional Leadership Indicators of Private School Principal of Laos
- Author
-
Souksamone Pathammavong
- Subjects
Consistency (negotiation) ,Goodness of fit ,Professional development ,Applied psychology ,Curriculum development ,Organisation climate ,Psychology ,Structural equation modeling ,Confirmatory factor analysis ,Instructional leadership - Abstract
The objectives of this study were: 1) to develop instructional leadership indicators, 2) to examine the goodness of fit of the structural model of instructional leadership indicators against the empirical data, and 3) to propose guidelines for the instructional leadership indicators, as developed with the empirical data. There were 728 samples. Data were collected using a rating-scale questionnaire. The statistics included mean, standard deviation, chi-square, GFI, and AGFI. The findings of this study can be summarized as follows: 1) the instructional leadership indicators of private school principals were classified into 5 core factors, 21 sub-factors, and 89 indicators. 2) The confirmatory factor analysis showed that the models fit the empirical data, following the hypotheses, based on the statistics: χ² = 115.69, df = 100, P-value = 0.135, χ²/df = 1.16, GFI = 0.98, AGFI = 0.96, CFI = 1.00, SRMR = 0.01, RMSEA = 0.02, and CN = 713.86. 3) The results of the instructional leadership development comprised 69 guidelines as follows: (1) 16 guidelines for vision, (2) 15 guidelines for organizational climate, (3) 13 guidelines for curriculum development, (4) 13 guidelines for the professional development of teachers, and (5) 12 guidelines for management innovation development.
- Published
- 2020
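Two of the fit statistics reported in the entry above can be recomputed from the chi-square, df, and sample size alone. A sketch, assuming the textbook formulas χ²/df and RMSEA = sqrt(max(χ² - df, 0) / (df·(n - 1))); the reported RMSEA of 0.02 may come from a slightly different estimator (e.g., n rather than n - 1, or estimator-specific corrections).

```python
import math

def fit_indices(chi2, df, n):
    """Relative chi-square and point-estimate RMSEA from reported fit stats."""
    rel = chi2 / df
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return round(rel, 2), round(rmsea, 3)

print(fit_indices(115.69, 100, 728))   # → (1.16, 0.015)
```

The relative chi-square of 1.16 matches the entry exactly; a perfectly fitting model (χ² ≤ df) yields RMSEA = 0.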
38. Error Correction: An Interactive Model of Executive Functions
- Author
-
Jorge Cruz Cárdenas and Carlos Ramos-Galarza
- Subjects
Correlation ,Variables ,Goodness of fit ,media_common.quotation_subject ,Statistics ,Regression analysis ,Cognition ,Variance (accounting) ,Executive functions ,Structural equation modeling ,media_common ,Mathematics - Abstract
The ability to correct errors is a highly complex executive function that can be explained by an interactive model of high-level mental skills such as internal language regulating behavior, regulation of the limbic system, decision-making, verification of cognitive and behavioral activity, and inhibitory control of automatic responses. Based on this relationship of executive functions, a model was hypothesized and tested with structural equation modelling. The model was built with a random sample of 771 subjects (Mage = 39.86, SD = 15.47; 50.5% women). The results showed a significant correlation between the variables (r = .22 and .42, p =
- Published
- 2021
39. Exploring The Factors Influencing Millennials Intention-To-Purchase of Facebook Advertising in Bangladesh
- Author
-
Mollika Ghosh
- Subjects
Questionnaire based survey ,Customer engagement ,Theory of reasoned action ,Goodness of fit ,media_common.quotation_subject ,Critical success factor ,Conceptual model ,Theory of planned behavior ,Advertising ,Psychology ,media_common ,Event management - Abstract
This study investigates the influence of Facebook advertising on millennials' purchase behavior, focusing on fashion accessories, photography, and event management services in Bangladesh. The study uses both quantitative and qualitative approaches, collecting data through a questionnaire-based survey of 327 millennial Facebook users to determine their purchase intention while interacting with Facebook advertising. The paper draws on the theory of planned behavior and the theory of reasoned action developed by Ajzen and Fishbein (1980), and the hierarchy-of-effects model by Lavidge and Steiner (1961), and builds on previous models of purchase intention proposed by several researchers. It presents a new conceptual model, adding two new constructs capturing Bangladeshi millennials' interest in Facebook adverts. Reliability and validity analysis, factor analysis, goodness of fit, analysis of variance, and linear regression analysis were used to test the hypotheses, using SPSS 22. The results reveal that businesses should carefully manage Facebook ads with personalized customer engagement and reward influencer customers.
- Published
- 2019
40. Semiparametric Regression for Dual Population Mortality
- Author
-
Gary Venter and Şule Şahin
- Subjects
Statistics and Probability ,Shrinkage estimator ,Economics and Econometrics ,Maximum likelihood ,Actuarial science ,Population ,Bayesian statistical decision theory ,Context (language use) ,Statistics - Applications ,01 natural sciences ,010104 statistics & probability ,symbols.namesake ,Goodness of fit ,0502 economics and business ,Statistics ,Parameter estimation ,Econometrics ,Mathematics::Metric Geometry ,Statistics::Methodology ,Semiparametric regression ,0101 mathematics ,education ,Projection (set theory) ,Computer Science::Databases ,Shrinkage ,Mathematics ,education.field_of_study ,050208 finance ,Statistics::Applications ,05 social sciences ,Markov chain Monte Carlo ,Mortality--Mathematical models ,Dual (category theory) ,Statistics::Computation ,symbols ,Statistics, Probability and Uncertainty - Abstract
Parameter shrinkage applied optimally can always reduce error and projection variances from those of maximum likelihood estimation. Many variables that actuaries use are on numerical scales, like age or year, which require parameters at each point. Rather than shrinking these towards zero, nearby parameters are better shrunk towards each other. Semiparametric regression is a statistical discipline for building curves across parameter classes using shrinkage methodology. It is similar to but more parsimonious than cubic splines. We introduce it in the context of Bayesian shrinkage and apply it to joint mortality modeling for related populations. Bayesian shrinkage of slope changes of linear splines is an approach to semiparametric modeling that evolved in the actuarial literature. It has some theoretical and practical advantages, like closed-form curves, direct and transparent determination of degree of shrinkage and of placing knots for the splines, and quantifying goodness of fit. It is also relatively easy to apply to the many nonlinear models that arise in actuarial work. We find that it compares well to a more complex state-of-the-art statistical spline shrinkage approach on a popular example from that literature.
- Published
- 2019
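The "slope changes of linear splines" parameterization in the abstract above can be made concrete. A minimal sketch: a curve over age is an intercept plus an initial slope plus a slope change γₖ at each knot, and those γₖ are what get shrunk toward zero. The proportional shrinkage used here is purely illustrative; the paper determines the degree of shrinkage through a Bayesian prior, not a fixed factor.

```python
def linear_spline(t, beta0, slope0, slope_changes, knots):
    """Value of a linear spline: intercept + initial slope, plus a slope
    change gamma_k activating at each knot k (the quantities shrunk to zero)."""
    y = beta0 + slope0 * t
    for k, g in zip(knots, slope_changes):
        if t > k:
            y += g * (t - k)
    return y

knots = [2, 4, 6, 8]
gammas = [0.5, -0.8, 0.6, -0.4]           # raw (noisy) slope changes
lam = 0.5                                  # illustrative shrinkage weight
shrunk = [g * (1 - lam) for g in gammas]   # shrink slope changes toward zero

raw = [linear_spline(t, 0.0, 1.0, gammas, knots) for t in range(11)]
smooth = [linear_spline(t, 0.0, 1.0, shrunk, knots) for t in range(11)]
# total absolute slope change is halved after shrinkage, so the curve bends less
print(sum(abs(g) for g in gammas), sum(abs(g) for g in shrunk))
```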
41. The Consumption Function - Re-Modelled Using Logarithmic Regression
- Author
-
Praveen Bandla
- Subjects
Data point ,Logarithm ,Goodness of fit ,Linear model ,Applied mathematics ,Derivative ,Function (mathematics) ,Autonomous consumption ,Nonlinear regression ,Mathematics - Abstract
The conventional linear consumption function was compared to a proposed logarithmic consumption function, using collected data as the basis for studying the accuracy of each function. First, the current linear model was studied in full to lay the groundwork for sound comparison. Next, the data collected from 30 survey respondents were presented and assimilated to fit the data points in the form of the variables used in the study. Then, using the data points presented, two curves – a linear consumption function and a logarithmic consumption function – were developed using regression and technology. The next step was to estimate the goodness of fit of each function against the real data collected. Results concluded that the logarithmic model was 36.6% more accurate than its linear counterpart. For the logarithmic model to stand, it needs to abide by certain properties of the consumption function, i.e., the requirement for the derivative to lie between 0 and 1, and the existence of a finite multiplier effect. The validity of these two properties was tested in the logarithmic model, and both proved to be true, thus concluding the investigation: the logarithmic model is a viable alternative, as it meets all the properties and is far more accurate than the linear model. The applications of the study were then discussed, following an elaborate conclusion that explored the limitations of the experiment and ways to improve it should it be conducted again at a later point in time.
- Published
- 2019
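The two properties checked in the study above (derivative between 0 and 1, finite multiplier) follow mechanically once the logarithmic form C = a + b·ln(Y) is fitted. A sketch with hypothetical income/consumption data (the study's 30-respondent data are not reproduced here):

```python
import math

def ols(xs, ys):
    """Closed-form simple OLS: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical income (Y) and consumption (C) observations
income = [500, 1000, 2000, 4000, 8000]
consumption = [480, 760, 1050, 1330, 1610]

a, b = ols([math.log(y) for y in income], consumption)   # C = a + b*ln(Y)
# The marginal propensity to consume is the derivative dC/dY = b / Y,
# which must lie in (0, 1) at every observed income level
mpc = [b / y for y in income]
print(all(0 < m < 1 for m in mpc))
# The (finite) multiplier at each income level: 1 / (1 - MPC)
multipliers = [1 / (1 - m) for m in mpc]
```

Unlike the linear model's constant MPC, the logarithmic form gives an MPC that falls as income rises, so the (0, 1) requirement must be checked across the observed income range, not just once.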
42. Sama Circular Model on Forecasting Foreign Guest Nights in Anuradhapura of Sri Lanka
- Author
-
Udaya Konarasinghe
- Subjects
Geography ,Occupancy ,Goodness of fit ,Statistics ,Univariate ,Data series ,Autoregressive integrated moving average ,Sri lanka ,Tourism ,Model validation - Abstract
The Sama Circular Model (SCM) is a modern development in univariate time series. It is a powerful technique for capturing wave-like patterns with the trend of a data series. Forecasting occupancy is very useful for the tourism industry in Sri Lanka. Anuradhapura is one of the leading ancient cities of Sri Lanka and is highly occupied by foreign guests. Therefore, the study focused on forecasting foreign guest nights. Monthly data on foreign guest nights for the period January 2008 to December 2017 were obtained from the Sri Lanka Tourism Development Authority (SLTDA). The SCM and Seasonal Autoregressive Integrated Moving Average (SARIMA) models were tested for forecasting. The Anderson–Darling test, the Auto-Correlation Function (ACF), and the Ljung-Box Q (LBQ) test were used as goodness-of-fit tests in model validation. The best-fitting model was selected by comparing absolute measures of error. The study concluded that the Sama Circular Model performed better than SARIMA. Therefore, it is recommended to test the Sama Circular Model on other business activities in the tourism industry of Sri Lanka.
- Published
- 2019
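The Ljung-Box Q test used for model validation above has a short standard formula, sketched here on synthetic residuals (the guest-night data themselves are not reproduced):

```python
import random

def ljung_box_q(residuals, max_lag):
    """Ljung-Box Q statistic: n(n+2) * sum_k r_k^2 / (n-k), where r_k is the
    lag-k sample autocorrelation. Under the null of uncorrelated residuals it
    is asymptotically chi-square with max_lag degrees of freedom."""
    n = len(residuals)
    mean = sum(residuals) / n
    c0 = sum((x - mean) ** 2 for x in residuals)
    q = 0.0
    for k in range(1, max_lag + 1):
        ck = sum((residuals[i] - mean) * (residuals[i - k] - mean)
                 for i in range(k, n))
        rk = ck / c0
        q += rk * rk / (n - k)
    return n * (n + 2) * q

random.seed(3)
white = [random.gauss(0, 1) for _ in range(200)]
print(ljung_box_q(white, 10))   # compare with the chi-square(10) 5% critical value, 18.31
```

A well-fitted model should leave residuals whose Q falls below the critical value; autocorrelated residuals (a sign of lack of fit) inflate Q sharply.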
43. Estimation of Nested and Zero-Inflated Ordered Probit Models
- Author
-
David Dale, Andrei Sirchenko, RS: GSBE other - not theme-related research, and Data Analytics and Digitalisation
- Subjects
nop ,Zero inflation ,nested ordered probit ,Monte Carlo method ,Vuong test ,Ordered probit ,Information Criteria ,Mathematics (miscellaneous) ,Goodness of fit ,0502 economics and business ,Econometrics ,050207 economics ,ordinal outcomes ,Mathematics ,Estimation ,050208 finance ,05 social sciences ,Zero (complex analysis) ,Probabilistic logic ,Estimator ,zero inflation ,ziop3 ,sto0625 ,ziop2 ,Vuong's closeness test ,zero-inflated ordered probit ,federal funds rate target ,Akaike information criterion ,endogenous switching ,st0625 - Abstract
We introduce three new commands—nop, ziop2, and ziop3—for the estimation of a three-part nested ordered probit model, the two-part zero-inflated ordered probit models of Harris and Zhao (2007, Journal of Econometrics 141: 1073–1099) and Brooks, Harris, and Spencer (2012, Economics Letters 117: 683–686), and a three-part zero-inflated ordered probit model of Sirchenko (2020, Studies in Nonlinear Dynamics and Econometrics 24: 1) for ordinal outcomes, with both exogenous and endogenous switching. The three-part models allow the probabilities of positive, neutral (zero), and negative outcomes to be generated by distinct processes. The zero-inflated models address a preponderance of zeros and allow them to emerge in different latent regimes. We provide postestimation commands to compute probabilistic predictions and various measures of their accuracy, to assess the goodness of fit, and to perform model comparison using the Vuong test (Vuong, 1989, Econometrica 57: 307–333) with the corrections based on the Akaike and Schwarz information criteria. We investigate the finite-sample performance of the maximum likelihood estimators by Monte Carlo simulations, discuss the relations among the models, and illustrate the new commands with an empirical application to the U.S. federal funds rate target.
- Published
- 2018
44. A Multifactor Approach to Modelling the Impact of Wind Energy on Electricity Spot Prices
- Author
-
Paulina A. Rowińska, Almut E. D. Veraart, and Pierre Gruet
- Subjects
Variable (computer science) ,Spot contract ,Wind power ,Stochastic volatility ,Goodness of fit ,business.industry ,Econometrics ,Autoregressive–moving-average model ,business ,Futures contract ,Lévy process ,Mathematics - Abstract
We introduce a three-factor model of electricity spot prices, consisting of a deterministic seasonality and trend function as well as short- and long-term stochastic components, and derive a formula for futures prices. The long-term component is modelled as a Lévy process with increments belonging to the class of generalised hyperbolic distributions. We describe the short-term factor by Lévy semistationary processes: we start from a CARMA(2,1), i.e. a continuous-time ARMA model, and generalise it by adding a short-memory stochastic volatility. We further modify the model by including information about wind energy production as an exogenous variable. We fit our models to German and Austrian data including spot and futures prices as well as wind energy production and total load data. Empirical studies reveal that taking into account the impact of wind energy generation on prices improves the goodness of fit.
- Published
- 2018
45. Decisions Under Risk Dispersion and Skewness
- Author
-
Oben K. Bayrak and John D. Hey
- Subjects
Goodness of fit ,Skewness ,Utility theory ,Econometrics ,Allais paradox ,Standard theory ,Expected utility hypothesis ,Mathematics ,Valuation (finance) - Abstract
When people take decisions under risk, it is not only the expected utility that is important, but also the shape of the distribution of utility: clearly the dispersion is important, but also the skewness. For given mean and dispersion, decision-makers treat positively and negatively skewed prospects differently. This paper presents a new behaviourally-inspired model for decision making under risk, incorporating both dispersion and skewness. We run a horse-race of this new model against six other models of decision-making under risk and show that it outperforms many in terms of goodness of fit and shows a reasonable performance in predictive ability. It can incorporate the prominent anomalies of standard theory such as the Allais paradox, the valuation gap, and preference reversals, and also the behavioural patterns observed in experiments that cannot be explained by Rank Dependent Utility Theory.
- Published
- 2018
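The three ingredients of the model above (mean, dispersion, skewness of the utility distribution) are simple moments of a lottery. A sketch with two hypothetical prospects that share mean and dispersion but have opposite skewness, the case the abstract says decision-makers treat differently:

```python
def prospect_moments(outcomes, probs):
    """Mean, dispersion (standard deviation), and standardized skewness of a
    discrete lottery -- the three ingredients of a dispersion-skewness model."""
    mean = sum(x * p for x, p in zip(outcomes, probs))
    var = sum(p * (x - mean) ** 2 for x, p in zip(outcomes, probs))
    sd = var ** 0.5
    skew = sum(p * (x - mean) ** 3 for x, p in zip(outcomes, probs)) / sd ** 3
    return mean, sd, skew

# Same mean (0) and dispersion (3), mirrored skewness:
pos = prospect_moments([-1, 9], [0.9, 0.1])   # small chance of a big gain
neg = prospect_moments([1, -9], [0.9, 0.1])   # small chance of a big loss
print(pos)
print(neg)
```

Expected utility alone cannot separate such prospects once utility moments beyond the mean are ignored, which is exactly the gap the dispersion-skewness model targets.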
46. Modeling Loss Given Default
- Author
-
Xinlei Shelly Zhao, Xiaofei Zhang, and Phillip Li
- Subjects
Goodness of fit ,Mean squared error ,Computer science ,Model selection ,Parametric model ,Econometrics ,Statistical model ,Stress testing (software) ,Loss given default ,Parametric statistics - Abstract
We investigate the puzzle in the literature that various parametric loss given default (LGD) statistical models perform similarly by comparing their performance in a simulation framework. We find that, even using the full set of explanatory variables from the assumed data generating process, these models still show similar poor performance in terms of predictive accuracy and rank ordering when mean predictions and squared error loss functions are used. Therefore, the findings in the literature that predictive accuracy and rank ordering cluster in a very narrow range across different parametric models are robust. We argue, however, that predicted distributions as well as the models’ ability to accurately capture marginal effects are also important performance metrics for capital models and stress testing. We find that the sophisticated parametric models that are specifically designed to address the bi-modal distributions of LGD outperform the less sophisticated models by a large margin in terms of predicted distributions. Also, we find that stress testing poses a challenge to all LGD models because of limited data and relevant explanatory variable availability, and that model selection criteria based on goodness of fit may not serve the stress testing purpose well. Finally, the evidence here suggests that we do not need to use the most sophisticated parametric methods to model LGD.
- Published
- 2018
47. Consumer Choice with Consideration Set: Threshold Luce Model
- Author
-
Ruxian Wang
- Subjects
Competition (economics) ,symbols.namesake ,Mathematical optimization ,Optimization problem ,Goodness of fit ,Computer science ,Nash equilibrium ,Consumer choice ,Substitution (logic) ,symbols ,Independence of irrelevant alternatives ,Price optimization - Abstract
This paper investigates the threshold Luce model, a recently proposed choice model with a threshold for the consideration-set formation. Under the threshold Luce model, consumers first form their consideration set: If an alternative with significantly low utility is dominated by another one, it will not be included in the consideration set. The threshold Luce model can alleviate the independence of irrelevant alternatives (IIA) property and allow more flexible substitution patterns. We characterize the optimal strategy and develop efficient solutions for price and assortment optimization problems. Under the threshold Luce model, the price competition may have zero, one, two, or infinite Nash equilibria, depending on the magnitude of the threshold effect. Moreover, we also develop an efficient estimation method to calibrate the threshold Luce model. Our numerical study on synthetic and real data sets shows that the new model can improve the goodness of fit and prediction accuracy of consumer choice behavior, which suggests the threshold effect should be taken into account in decision making.
- Published
- 2018
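The two-stage choice rule described above can be sketched directly. This is one simple formalization (an assumption: an alternative is dropped when some other alternative's utility exceeds a fixed multiple of its own, and Luce's ratio rule is applied within the surviving consideration set); the paper's model and estimation are richer.

```python
def threshold_luce_probs(utilities, threshold):
    """Choice probabilities under a simple threshold Luce rule: alternative i
    is excluded from the consideration set if max(u) > threshold * u_i;
    Luce's rule (probability proportional to utility) then applies within
    the consideration set."""
    u_max = max(utilities)
    considered = [u if u_max <= threshold * u else 0.0 for u in utilities]
    total = sum(considered)
    return [u / total for u in considered]

u = [5.0, 4.0, 1.0]
print(threshold_luce_probs(u, float("inf")))   # plain Luce: everything considered
print(threshold_luce_probs(u, 2.0))            # utility 1.0 dominated (5 > 2*1), excluded
```

Because low-utility alternatives drop out entirely rather than merely receiving small probability, the rule relaxes the IIA property: adding or removing alternatives can reshape the consideration set, not just rescale probabilities.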
48. Calculating Degrees of Freedom in Multivariate Local Polynomial Regression
- Author
-
Christopher F. Parmeter and Nadine McCloud
- Subjects
Statistics and Probability ,Polynomial regression ,Statistics::Theory ,Polynomial ,Trace (linear algebra) ,Applied Mathematics ,05 social sciences ,Degrees of freedom (statistics) ,Estimator ,Conditional expectation ,01 natural sciences ,010104 statistics & probability ,Matrix (mathematics) ,Goodness of fit ,0502 economics and business ,Statistics::Methodology ,Applied mathematics ,0101 mathematics ,Statistics, Probability and Uncertainty ,050205 econometrics ,Mathematics - Abstract
The matrix that transforms the response variable in a regression to its predicted value is commonly referred to as the hat matrix. The trace of the hat matrix is a standard metric for calculating degrees of freedom. The two prominent theoretical frameworks for studying hat matrices to calculate degrees of freedom in local polynomial regressions – ANOVA and non-ANOVA – abstract from both mixed data and the potential presence of irrelevant covariates, both of which dominate empirical applications. In the multivariate local polynomial setup with a mix of continuous and discrete covariates, which include some irrelevant covariates, we formulate asymptotic expressions for the trace of both the non-ANOVA and ANOVA-based hat matrices from the estimator of the unknown conditional mean. The asymptotic expression of the trace of the non-ANOVA hat matrix associated with the conditional mean estimator is equal up to a linear combination of kernel-dependent constants to that of the ANOVA-based hat matrix. Additionally, we document that the trace of the ANOVA-based hat matrix converges to 0 in any setting where the bandwidths diverge. This attrition outcome can occur in the presence of irrelevant continuous covariates or it can arise when the underlying data generating process is in fact of polynomial order.
- Published
- 2019
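The hat-matrix trace discussed above is easy to compute for the simplest case: the local-constant (Nadaraya-Watson) smoother, here with a Gaussian kernel (a sketch of the non-ANOVA trace only; the ANOVA-based version the paper studies, which tends to 0 under diverging bandwidths, is not shown).

```python
import math

def hat_trace(xs, h):
    """Trace of the Nadaraya-Watson hat matrix with a Gaussian kernel:
    H[i][j] = K((x_i - x_j)/h) / sum_l K((x_i - x_l)/h), so the trace --
    the effective degrees of freedom -- is the sum of the diagonal weights."""
    K = lambda u: math.exp(-0.5 * u * u)
    tr = 0.0
    for xi in xs:
        weights = [K((xi - xj) / h) for xj in xs]
        tr += K(0.0) / sum(weights)
    return tr

xs = [i / 10 for i in range(50)]
print(hat_trace(xs, 0.01))   # tiny bandwidth: trace near n = 50 (interpolation)
print(hat_trace(xs, 1e6))    # diverging bandwidth: trace near 1 (global mean)
```

The two limits bracket the usual intuition: degrees of freedom run from 1 (fitting a constant) up to n (interpolating the data) as the bandwidth shrinks.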
49. Causal Paths and Exogeneity Tests in Generalcorr Package for Air Pollution and Monetary Policy
- Author
-
Hrishikesh D. Vinod
- Subjects
Goodness of fit ,Statistics ,Path (graph theory) ,Instrumental variable ,Monetary policy ,Econometrics ,Economics ,Stochastic dominance ,Kernel regression ,Endogeneity ,Partial correlation - Abstract
Since causal paths are important for all sciences, my package 'generalCorr' provides sophisticated R functions using four orders of stochastic dominance and generalized partial correlation coefficients. A new test (in Version 1.0.3) replaces Hausman-Wu medieval-style diagnosis of endogeneity relying on showing that a dubious cure (instrumental variables) works. An updated weighted index summarizes causal path results from three criteria: (Cr1) lower absolute gradients, (Cr2) lower absolute residuals, both quantified by stochastic dominance of four orders, and (Cr3) from goodness of fit. We illustrate with air-pollution data and causal strength of six variables driving 'excess bond premium,' a good predictor of US recessions.
- Published
- 2017
50. Parameter Reduction in Actuarial Triangle Models
- Author
-
Gary Venter, Qian Gao, and Roman Gutkovich
- Subjects
Parameter reduction ,Actuarial science ,Lasso (statistics) ,Goodness of fit ,Lag ,Statistics ,Almost surely ,Projection (set theory) ,Random effects model ,Mathematics - Abstract
Very similar modeling is done for actuarial models in loss reserving and mortality projection. Both start with incomplete data rectangles, traditionally called triangles, and model by year of origin, year of observation, and lag from origin to observation. Actuaries using these models almost always use some form of parameter reduction as there are too many parameters to fit reliably, but usually this is an ad hoc exercise. Here we try two formal statistical approaches to parameter reduction, random effects and Lasso, and discuss methods of comparing goodness of fit.
- Published
- 2017