98 results on "Alan T. K. Wan"
Search Results
2. Model averaging for interval-valued data
- Author
-
Yuying Sun, Alan T. K. Wan, Shouyang Wang, and Xinyu Zhang
- Subjects
Information Systems and Management ,General Computer Science ,Mean squared error ,Model selection ,Estimator ,Interval (mathematics) ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Autoregressive model ,Bias of an estimator ,Modeling and Simulation ,Statistics ,Range (statistics) ,Time series ,Mathematics - Abstract
In recent years, model averaging, by which estimates are obtained based on not one single model but a weighted ensemble of models, has received growing attention as an alternative to model selection. To date, methods for model averaging have been developed almost exclusively for point-valued data, despite the fact that interval-valued data are commonplace in many applications and that a substantial body of literature exists on estimation and inference methods for such data. This paper focuses on the special case of interval time series data, and assumes that the mid-point and log-range of the interval values are modelled by a two-equation vector autoregressive with exogenous covariates (VARX) model. We develop a methodology for combining models of varying lag orders based on a weight choice criterion that minimises an unbiased estimator of the squared error risk of the model average estimator. We prove that this method yields predictors of mid-points and ranges with an optimal asymptotic property. In addition, we develop a method for correcting the range forecasts, taking into account the forecast error variance. An extensive simulation experiment examines the performance of the proposed model averaging method in finite samples. We apply the method to an interval-valued data series on crude oil futures prices.
- Published
- 2022
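The abstract above mentions correcting the range forecasts for the forecast error variance. The sketch below shows the kind of back-transformation involved, assuming a lognormal-style correction for a log-range forecast; the correction rule and all numbers (log-range forecast, its error variance, mid-point forecast) are illustrative assumptions, not the paper's formula.

```python
# Illustrative only: a lognormal-style variance correction for a log-range forecast,
# assumed here as a stand-in for the paper's range-forecast correction.
import numpy as np

m, s2 = np.log(2.5), 0.09          # hypothetical log-range forecast and its error variance
mid = 80.0                         # hypothetical mid-point forecast (e.g. a price level)

naive_range = np.exp(m)            # ignores the forecast error variance
corrected_range = np.exp(m + 0.5 * s2)
low, high = mid - corrected_range / 2, mid + corrected_range / 2
print(f"naive range {naive_range:.3f}, corrected range {corrected_range:.3f}")
print(f"forecast interval: [{low:.2f}, {high:.2f}]")
```

Without such a correction, exp(m) systematically understates the expected range whenever the log-range forecast errors are roughly symmetric on the log scale.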
3. Missing data analysis with sufficient dimension reduction
- Author
-
Siming Zheng, Alan T. K. Wan, and Yong Zhou
- Subjects
Statistics and Probability ,Statistics, Probability and Uncertainty - Published
- 2022
4. Kernel Averaging Estimators
- Author
-
Alan T. K. Wan, Guohua Zou, Rong Zhu, and Xinyu Zhang
- Subjects
Statistics and Probability ,Economics and Econometrics ,Kernel (statistics) ,Applied mathematics ,Estimator ,Statistics, Probability and Uncertainty ,Social Sciences (miscellaneous) ,Mathematics - Published
- 2021
5. AdaBoost Semiparametric Model Averaging Prediction for Multiple Categories
- Author
-
Jun Liao, Alan T. K. Wan, Jing Lv, and Jialiang Li
- Subjects
Statistics and Probability ,Boosting (machine learning) ,Computer science ,Pattern recognition ,Semiparametric model ,Parametric model ,AdaBoost ,Artificial intelligence ,Statistics, Probability and Uncertainty ,Smoothing - Abstract
Model average techniques are very useful for model-based prediction. However, most earlier works in this field focused on parametric models and continuous responses. In this article, we study varyi...
- Published
- 2020
6. Model averaging in a multiplicative heteroscedastic model
- Author
-
Shangwei Zhao, Shouyang Wang, Alan T. K. Wan, Yanyuan Ma, and Xinyu Zhang
- Subjects
Economics and Econometrics ,Frequentist inference ,Multiplicative function ,Econometrics ,Heteroscedastic model ,Mathematics - Abstract
In recent years, the body of literature on frequentist model averaging in econometrics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the...
- Published
- 2020
7. A composite nonparametric product limit approach for estimating the distribution of survival times under length-biased and right-censored data
- Author
-
Shuqin Fan, Alan T. K. Wan, Wei Zhao, and Yong Zhou
- Subjects
Statistics and Probability ,Applied Mathematics ,Statistics ,Nonparametric statistics ,Strong consistency ,Product limit ,Mathematics - Published
- 2020
8. Jackknife model averaging for high-dimensional quantile regression
- Author
-
Guohua Zou, Xinyu Zhang, Kang You, Alan T. K. Wan, and Miaomiao Wang
- Subjects
Statistics and Probability ,General Immunology and Microbiology ,Applied Mathematics ,Estimator ,General Medicine ,General Biochemistry, Genetics and Molecular Biology ,Quantile regression ,Lasso (statistics) ,Frequentist inference ,Covariate ,Applied mathematics ,General Agricultural and Biological Sciences ,Jackknife resampling ,Empirical process ,Quantile ,Mathematics - Abstract
In this paper, we propose a frequentist model averaging method for quantile regression with high-dimensional covariates. Although research on these subjects has proliferated as separate approaches, no study has considered them in conjunction. Our method entails reducing the covariate dimensions through ranking the covariates based on marginal quantile utilities. The second step of our method implements model averaging on the models containing the covariates that survive the screening of the first step. We use a delete-one cross-validation method to select the model weights, and prove that the resultant estimator possesses an optimal asymptotic property uniformly over quantile indices in any compact subset of (0,1). Our proof, which relies on empirical process theory, is arguably more challenging than proofs of similar results in other contexts owing to the high-dimensional nature of the problem and our relaxation of the conventional assumption of the weights summing to one. Our investigation of finite-sample performance demonstrates that the proposed method exhibits very favorable properties compared to the least absolute shrinkage and selection operator (LASSO) and smoothly clipped absolute deviation (SCAD) penalized regression methods. The method is applied to a microarray gene expression data set.
- Published
- 2021
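The screening step described above, ranking covariates by marginal quantile utilities, can be sketched as follows. The utility measure (reduction in check loss from a univariate quantile fit), the simulated data, the quantile level and the cut-off of 10 retained covariates are illustrative stand-ins, and the delete-one cross-validation averaging step is omitted.

```python
# Sketch of a marginal-quantile-utility screening step (illustrative, not the paper's
# exact construction): rank covariates by the check-loss reduction of univariate fits.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p, tau = 200, 50, 0.5
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + rng.standard_t(df=4, size=n)

def check_loss(u, tau):
    return np.sum(u * (tau - (u < 0)))

def marginal_fit_loss(xj, y, tau):
    """Check loss of a univariate quantile regression y ~ 1 + xj."""
    def obj(b):
        return check_loss(y - b[0] - b[1] * xj, tau)
    return minimize(obj, x0=np.zeros(2), method="Nelder-Mead").fun

base = minimize(lambda b: check_loss(y - b[0], tau), x0=[0.0],
                method="Nelder-Mead").fun          # intercept-only benchmark
utility = np.array([base - marginal_fit_loss(X[:, j], y, tau) for j in range(p)])
keep = np.argsort(utility)[::-1][:10]              # keep the 10 highest-utility covariates
print("screened covariate indices:", keep)
```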
9. A varying-coefficient partially linear transformation model for length-biased data with an application to HIV vaccine studies
- Author
-
Alan T. K. Wan, Wei Zhao, Peter Gilbert, and Yong Zhou
- Subjects
Statistics and Probability ,General Medicine ,Statistics, Probability and Uncertainty - Abstract
Prevalent cohort studies in medical research often give rise to length-biased survival data that require special treatments. The recently proposed varying-coefficient partially linear transformation (VCPLT) model has the virtue of providing a more dynamic content of the effects of the covariates on survival times than the well-known partially linear transformation (PLT) model by allowing flexible interactions between the covariates. However, no existing analysis of the VCPLT model has considered length-biased sampling. In this paper, we consider the VCPLT model when the data are length-biased and right censored, thereby extending the reach of this flexible and powerful tool. We develop a martingale estimating function-based approach to the estimation of this model, provide theoretical underpinnings, evaluate finite sample performance via simulations, and showcase its practical appeal via an empirical application using data from two HIV vaccine clinical trials conducted by the U.S. National Institute of Allergy and Infectious Diseases.
- Published
- 2021
10. Model averaging estimators for the stochastic frontier model
- Author
-
Christopher F. Parmeter, Xinyu Zhang, and Alan T. K. Wan
- Subjects
Economics and Econometrics ,Frontier ,Model selection ,Statistics ,Monte Carlo method ,Econometrics ,Estimator ,Business and International Management ,Inefficiency ,Social Sciences (miscellaneous) ,Mathematics - Abstract
Model uncertainty is a prominent feature in many applied settings. This is certainly true in efficiency analysis, where the question of the proper distributional specification of the error components of a stochastic frontier model generally remains open, along with the question of which variables influence inefficiency. Given the concern over the impact that model uncertainty is likely to have on the stochastic frontier model in practice, the present research proposes two distinct model averaging estimators, one which averages over nested classes of inefficiency distributions and another that can average over distinct distributions of inefficiency. Both of these estimators are shown to produce optimal weights when the aim is to uncover conditional inefficiency at the firm level. We study the finite-sample performance of the model averaging estimators via Monte Carlo experiments, compare them with traditional model averaging estimators based on weights constructed from model selection criteria, and present a short empirical application.
- Published
- 2019
11. A semiparametric linear transformation model for general biased-sampling and right-censored data
- Author
-
Wenhua Wei, Alan T. K. Wan, and Yong Zhou
- Subjects
Statistics and Probability ,Linear map ,Applied Mathematics ,Statistics ,Estimating equations ,Sampling bias ,Mathematics - Published
- 2019
12. Reducing Simulation Input-Model Risk via Input Model Averaging
- Author
-
Guohua Zou, Alan T. K. Wan, Xinyu Zhang, Barry L. Nelson, and Xi Jiang
- Subjects
Control theory ,Computer science ,Maximum likelihood ,Input modeling ,Stochastic simulation ,General Engineering ,Model risk - Abstract
Input uncertainty is an aspect of simulation model risk that arises when the driving input distributions are derived or “fit” to real-world, historical data. Although there has been significant progress on quantifying and hedging against input uncertainty, there has been no direct attempt to reduce it via better input modeling. The meaning of “better” depends on the context and the objective: Our context is when (a) there are one or more families of parametric distributions that are plausible choices; (b) the real-world historical data are not expected to perfectly conform to any of them; and (c) our primary goal is to obtain higher-fidelity simulation output rather than to discover the “true” distribution. In this paper, we show that frequentist model averaging can be an effective way to create input models that better represent the true, unknown input distribution, thereby reducing model risk. Input model averaging builds from standard input modeling practice, is not computationally burdensome, requires no change in how the simulation is executed nor any follow-up experiments, and is available on the Comprehensive R Archive Network (CRAN). We provide theoretical and empirical support for our approach.
- Published
- 2021
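A rough sketch of the input-model-averaging idea above: fit several plausible parametric families to input data, convert a fit measure into weights, and sample simulation inputs from the weighted mixture rather than from a single fitted family. The AIC-based weights below are only a convenient stand-in for the frequentist model-averaging weights developed in the paper, and the data are simulated.

```python
# Illustrative input model averaging: MLE fits of several families, then sampling
# from a weighted mixture.  The weighting rule here (AIC-based) is an assumption,
# not the paper's method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.weibull(1.7, size=500) * 3.0          # "real-world" data; true family unknown

families = {"exponential": stats.expon, "gamma": stats.gamma, "lognormal": stats.lognorm}
fits, aic = {}, {}
for name, dist in families.items():
    params = dist.fit(data)                      # maximum-likelihood fit
    ll = np.sum(dist.logpdf(data, *params))
    fits[name] = params
    aic[name] = 2 * len(params) - 2 * ll

a = np.array(list(aic.values()))
w = np.exp(-0.5 * (a - a.min()))
w /= w.sum()                                     # mixture weights (illustrative)
print(dict(zip(aic.keys(), np.round(w, 3))))

def sample_input(size):
    """Draw simulation inputs from the weighted mixture of fitted families."""
    names = list(families)
    picks = rng.choice(len(names), size=size, p=w)
    return np.array([families[names[k]].rvs(*fits[names[k]], random_state=rng)
                     for k in picks])

print(sample_input(5))
```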
13. On the asymptotic non‐equivalence of efficient‐GMM and MEL estimators in models with missing data
- Author
-
Yong Zhou, Alan T. K. Wan, Xuerong Chen, and Yan Chen
- Subjects
Statistics and Probability ,Estimator ,Asymptotic distribution ,Missing data ,Moment (mathematics) ,Empirical likelihood ,Kernel (statistics) ,Applied mathematics ,Statistics, Probability and Uncertainty ,Quantile ,Mathematics ,Generalized method of moments - Abstract
The generalized method of moments (GMM) and empirical likelihood (EL) are popular methods for combining sample and auxiliary information. These methods are used in very diverse fields of research, where competing theories often suggest variables satisfying different moment conditions. Results in the literature have shown that the efficient-GMM (GMME) and maximum empirical likelihood (MEL) estimators have the same asymptotic distribution to order n^{-1/2} and that both estimators are asymptotically semiparametric efficient. In this paper, we demonstrate that when data are missing at random from the sample, the utilization of some well-known missing-data handling approaches proposed in the literature can yield GMME and MEL estimators with nonidentical properties; in particular, it is shown that the GMME estimator is semiparametric efficient under all the missing-data handling approaches considered but that the MEL estimator is not always efficient. A thorough examination of the reason for the nonequivalence of the two estimators is presented. A particularly strong feature of our analysis is that we do not assume smoothness in the underlying moment conditions. Our results are thus relevant to situations involving nonsmooth estimating functions, including quantile and rank regressions, robust estimation, the estimation of receiver operating characteristic (ROC) curves, and so on.
- Published
- 2018
14. A Mallows-Type Model Averaging Estimator for the Varying-Coefficient Partially Linear Model
- Author
-
Alan T. K. Wan, Guohua Zou, Xinyu Zhang, and Rong Zhu
- Subjects
Statistics and Probability ,Heteroscedasticity ,Linear model ,Estimator ,Frequentist inference ,Parametric model ,Applied mathematics ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
In the last decade, significant theoretical advances have been made in the area of frequentist model averaging (FMA); however, the majority of this work has emphasized parametric model setups. This...
- Published
- 2018
15. A model averaging approach for the ordered probit and nested logit models with applications
- Author
-
Alan T. K. Wan, Geoffrey K.F. Tso, Xinyu Zhang, and Longmei Chen
- Subjects
Statistics and Probability ,Model selection ,Monte Carlo method ,Ordered probit ,Empirical research ,Econometrics ,Hit rate ,Range (statistics) ,Statistics, Probability and Uncertainty ,Nested logit ,Mathematics - Abstract
This paper considers model averaging for the ordered probit and nested logit models, which are widely used in empirical research. Within the frameworks of these models, we examine a range of model averaging methods, including the jackknife method, which is proved to have an optimal asymptotic property in this paper. We conduct a large-scale simulation study to examine the behaviour of these model averaging estimators in finite samples, and draw comparisons with model selection estimators. Our results show that while neither averaging nor selection is a consistently better strategy, model selection results in the poorest estimates far more frequently than averaging, and more often than not, averaging yields superior estimates. Among the averaging methods considered, the one based on a smoothed version of the Bayesian Information criterion frequently produces the most accurate estimates. In three real data applications, we demonstrate the usefulness of model averaging in mitigating problems associated with the ‘replication crisis’ that commonly arises with model selection.
- Published
- 2018
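The smoothed Bayesian information criterion weighting singled out above has a simple closed form. A minimal sketch, assuming the candidate models' BIC values have already been computed elsewhere:

```python
# Smoothed-BIC model-average weights; the BIC values below are hypothetical.
import numpy as np

bic = np.array([412.3, 409.8, 415.1, 410.2])     # hypothetical BICs of 4 candidate models
delta = bic - bic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                         # smoothed-BIC weights
print(np.round(weights, 3))
```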
16. Partially linear transformation model for length-biased and right-censored data
- Author
-
Yong Zhou, Wenhua Wei, and Alan T. K. Wan
- Subjects
Statistics and Probability ,Biometrics ,Length biased sampling ,Estimating equations ,Linear map ,Statistics ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
In this paper, we consider a partially linear transformation model for data subject to length-biasedness and right-censoring which frequently arise simultaneously in biometrics and other fields. Th...
- Published
- 2018
17. EcoSta special issue on theoretical econometrics
- Author
-
Alain Hecq, Jean-Marie Dufour, and Alan T. K. Wan
- Subjects
Statistics and Probability ,Economics and Econometrics ,PANEL MODELS ,Econometrics ,Economics ,Statistics, Probability and Uncertainty ,COMMON - Published
- 2020
18. A varying coefficient approach to estimating hedonic housing price functions and their quantiles
- Author
-
Shangyu Xie, Alan T. K. Wan, and Yong Zhou
- Subjects
Statistics and Probability ,Kernel density estimation ,Regression analysis ,Function (mathematics) ,Quantile regression ,Nonparametric regression ,Statistics ,Econometrics ,Statistics, Probability and Uncertainty ,Curse of dimensionality ,Quantile ,Mathematics ,Parametric statistics - Abstract
The varying coefficient (VC) model introduced by Hastie and Tibshirani [26] is arguably one of the most remarkable recent developments in nonparametric regression theory. The VC model is an extension of the ordinary regression model where the coefficients are allowed to vary as smooth functions of an effect modifier possibly different from the regressors. The VC model reduces the modelling bias with its unique structure while also avoiding the ‘curse of dimensionality’ problem. While the VC model has been applied widely in a variety of disciplines, its application in economics has been minimal. The central goal of this paper is to apply VC modelling to the estimation of a hedonic house price function using data from Hong Kong, one of the world's most buoyant real estate markets. We demonstrate the advantages of the VC approach over traditional parametric and semi-parametric regressions in the face of a large number of regressors. We further combine VC modelling with quantile regression to examine ...
- Published
- 2016
19. Semiparametric GMM estimation and variable selection in dynamic panel data models with fixed effects
- Author
-
Jinhong You, Rui Li, and Alan T. K. Wan
- Subjects
Statistics and Probability ,Mathematical optimization ,Applied Mathematics ,Sampling (statistics) ,Estimator ,Feature selection ,Computational Mathematics ,Computational Theory and Mathematics ,Autoregressive model ,Applied mathematics ,Scad ,Additive model ,Generalized method of moments ,Mathematics ,Parametric statistics - Abstract
Often, the fixed-effects dynamic panel data model assumes parametric structures and an AR(1) dynamic order. The latter assumption is mainly for convenience and is not consistent with many sampling processes, especially when longer panels are available. A fixed-effects dynamic partially linear additive model with a finite autoregressive lag order is considered. Based on this setup, semiparametric Generalized Method of Moments (GMM) estimators of the unknown coefficients and functions using the B(asis)-spline approximation are developed. The asymptotic properties of these estimators are established. A procedure to identify the dynamic lag order and significant exogenous variables by employing the smoothly clipped absolute deviation (SCAD) penalty is developed. It is proven that the SCAD-based GMM estimators achieve the oracle property and are selection consistent. The usefulness of the proposed procedure is further illustrated in Monte Carlo studies and a real data example.
- Published
- 2016
20. A semiparametric generalized ridge estimator and link with model averaging
- Author
-
Xinyu Zhang, Aman Ullah, Huansha Wang, Alan T. K. Wan, and Guohua Zou
- Subjects
Economics and Econometrics ,Estimator ,Ridge (differential geometry) ,Efficient estimator ,Minimum-variance unbiased estimator ,Frequentist inference ,Statistics ,Consistent estimator ,Applied mathematics ,Minimax estimator ,Invariant estimator ,Mathematics - Abstract
In recent years, the suggestion of combining models as an alternative to selecting a single model from a frequentist perspective has been advanced in a number of studies. In this paper, we propose a new semi-parametric estimator of regression coefficients, which takes the form of the feasible generalized ridge estimator of Hoerl and Kennard (1970b) but with different biasing factors. We prove that the generalized ridge estimator is algebraically identical to the model average estimator. Further, the biasing factors that determine the properties of both the generalized ridge and semi-parametric estimators are directly linked to the weights used in model averaging. These are interesting results for the interpretations and applications of both semi-parametric and ridge estimators. Furthermore, we demonstrate that these estimators based on model averaging weights can have properties superior to the well-known feasible generalized ridge estimator in a large region of the parameter space. Two empirical examples are presented.
- Published
- 2015
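The feasible generalized ridge estimator referred to above takes the form beta(K) = (X'X + K)^{-1} X'y with a diagonal matrix K of biasing factors. The sketch below computes such an estimate on simulated data; the particular biasing factors used are arbitrary placeholders, not the model-averaging-based factors derived in the paper.

```python
# Generalized ridge fit with diagonal biasing factors; the choice of k is illustrative.
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, 0.0, -1.0, 0.0])
y = X @ beta + rng.normal(size=n)

b_ols = np.linalg.solve(X.T @ X, X.T @ y)
k = 1.0 / (b_ols ** 2 + 1e-8)                    # illustrative per-coefficient biasing factors
K = np.diag(k)
b_gr = np.linalg.solve(X.T @ X + K, X.T @ y)     # generalized ridge estimate
print("OLS:              ", np.round(b_ols, 3))
print("generalized ridge:", np.round(b_gr, 3))
```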
21. Quantile regression methods with varying-coefficient models for censored data
- Author
-
Alan T. K. Wan, Shangyu Xie, and Yong Zhou
- Subjects
Statistics and Probability ,Applied Mathematics ,Inverse probability weighting ,Nonparametric statistics ,Estimator ,Regression analysis ,Censoring (statistics) ,Quantile regression ,Computational Mathematics ,Computational Theory and Mathematics ,Resampling ,Statistics ,Econometrics ,Mathematics ,Quantile - Abstract
Considerable intellectual progress has been made to the development of various semiparametric varying-coefficient models over the past ten to fifteen years. An important advantage of these models is that they avoid much of the curse of dimensionality problem as the nonparametric functions are restricted only to some variables. More recently, varying-coefficient methods have been applied to quantile regression modeling, but all previous studies assume that the data are fully observed. The main purpose of this paper is to develop a varying-coefficient approach to the estimation of regression quantiles under random data censoring. We use a weighted inverse probability approach to account for censoring, and propose a majorize-minimize type algorithm to optimize the non-smooth objective function. The asymptotic properties of the proposed estimator of the nonparametric functions are studied, and a resampling method is developed for obtaining the estimator of the sampling variance. An important aspect of our method is that it allows the censoring time to depend on the covariates. Additionally, we show that this varying-coefficient procedure can be further improved when implemented within a composite quantile regression framework. Composite quantile regression has recently gained considerable attention due to its ability to combine information across different quantile functions. We assess the finite sample properties of the proposed procedures in simulated studies. A real data application is also considered.
- Published
- 2015
22. Efficient Quantile Regression Analysis With Missing Observations
- Author
-
Alan T. K. Wan, Yong Zhou, and Xuerong Chen
- Subjects
Statistics and Probability ,Independent and identically distributed random variables ,Statistics::Theory ,Heteroscedasticity ,Inverse probability weighting ,Statistics ,Nonparametric statistics ,Estimator ,Estimating equations ,Statistics, Probability and Uncertainty ,Missing data ,Quantile regression ,Mathematics - Abstract
This article examines the problem of estimation in a quantile regression model when observations are missing at random under independent and nonidentically distributed errors. We consider three approaches to handling this problem based on nonparametric inverse probability weighting, estimating equations projection, and a combination of both. An important distinguishing feature of our methods is their ability to handle missing response and/or partially missing covariates, whereas existing techniques can handle only one or the other, but not both. We prove that our methods yield asymptotically equivalent estimators that achieve the desirable asymptotic properties of unbiasedness, normality, and root-n consistency. Because we do not assume that the errors are identically distributed, our theoretical results are valid under heteroscedasticity, a particularly strong feature of our methods. Under the special case of identical error distributions, all of our proposed estimators achieve the semiparametric efficiency b...
- Published
- 2015
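The inverse-probability-weighting route described above can be compressed into a small sketch: estimate the probability that the response is observed given the covariate, then reweight the quantile check loss of the observed cases by the inverse of that probability. For brevity a parametric logit fit is used for the observation probability, whereas the paper works nonparametrically; the data, quantile level and missingness mechanism are made up.

```python
# IPW quantile regression with responses missing at random; the logit model for the
# observation probability is a simplifying assumption, not the paper's estimator.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, tau = 500, 0.5
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n) * (1 + 0.5 * np.abs(x))   # heteroscedastic errors
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))                   # true missingness mechanism
r = rng.uniform(size=n) < p_obs                                  # True: y observed

def logit_nll(g):                                # fit P(r=1 | x) by maximum likelihood
    eta = g[0] + g[1] * x
    return np.sum(np.log1p(np.exp(eta)) - r * eta)

g_hat = minimize(logit_nll, x0=np.zeros(2), method="BFGS").x
pi_hat = 1.0 / (1.0 + np.exp(-(g_hat[0] + g_hat[1] * x)))

def ipw_check(b):                                # weighted check loss over observed cases
    u = y[r] - b[0] - b[1] * x[r]
    rho = u * (tau - (u < 0))
    return np.sum(rho / pi_hat[r])

b_hat = minimize(ipw_check, x0=np.zeros(2), method="Nelder-Mead").x
print("IPW median-regression estimate (intercept, slope):", np.round(b_hat, 3))
```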
23. Post-J test inference in non-nested linear regression models
- Author
-
Guohua Zou, XinJie Chen, Yanqin Fan, and Alan T. K. Wan
- Subjects
Score test ,symbols.namesake ,Exact test ,F-test ,General Mathematics ,Statistics ,Sequential probability ratio test ,Pearson's chi-squared test ,symbols ,Test statistic ,Wald test ,Goldfeld–Quandt test ,Mathematics - Abstract
This paper considers post-J test inference in non-nested linear regression models. Post-J test inference means that the inference problem is considered by taking the first-stage J test into account. We first propose a post-J test estimator and derive its asymptotic distribution. We then consider the problem of testing the unknown parameters, and propose a Wald statistic based on the post-J test estimator. A simulation study shows that the proposed Wald statistic performs as well as the two-stage test in terms of empirical size and power in large samples, and performs even better when the sample size is small. As a result, the new Wald statistic can be used directly to test hypotheses on the unknown parameters in non-nested linear regression models.
- Published
- 2015
24. A Seemingly Unrelated Nonparametric Additive Model with Autoregressive Errors
- Author
-
Jinhong You, Riquan Zhang, and Alan T. K. Wan
- Subjects
Economics and Econometrics ,Autocorrelation ,Nonparametric statistics ,Asymptotic distribution ,Estimator ,Seemingly unrelated regressions ,Spline (mathematics) ,Autoregressive model ,Statistics ,Applied mathematics ,Additive model ,Mathematics - Abstract
This article considers a nonparametric additive seemingly unrelated regression model with autoregressive errors, and develops estimation and inference procedures for this model. Our proposed method first estimates the unknown functions by combining polynomial spline series approximations with least squares, and then uses the fitted residuals together with the smoothly clipped absolute deviation (SCAD) penalty to identify the error structure and estimate the unknown autoregressive coefficients. Based on the polynomial spline series estimator and the fitted error structure, a two-stage local polynomial improved estimator for the unknown functions of the mean is further developed. Our procedure applies a prewhitening transformation of the dependent variable, and also takes into account the contemporaneous correlations across equations. We show that the resulting estimator possesses an oracle property, and is asymptotically more efficient than estimators that neglect the autocorrelation and/or contemporaneous...
- Published
- 2014
25. A varying-coefficient approach to estimating multi-level clustered data models
- Author
-
Jinhong You, Yong Zhou, Alan T. K. Wan, and Shu Liu
- Subjects
Statistics and Probability ,Data set ,Polynomial ,Statistics ,Nonparametric statistics ,Asymptotic distribution ,Estimator ,Statistics, Probability and Uncertainty ,Cluster analysis ,Parametric statistics ,Mathematics ,Curse of dimensionality - Abstract
Most of the literature on clustered data models emphasizes two-level clustering, and within-cluster correlation. While multi-level clustered data models can arise in practice, analysis of multi-level clustered data models poses additional difficulties owing to the existence of error correlations both within and across the clusters. It is perhaps for this reason that existing approaches to multi-level clustered data models have been mostly parametric. The purpose of this paper is to develop a varying-coefficient nonparametric approach to the analysis of three-level clustered data models. Because the nonparametric functions are restricted only to some of the variables, this approach has the appeal of avoiding many of the curse of dimensionality problems commonly associated with other nonparametric methods. By applying an undersmoothing technique, taking into account the correlations within and across clusters, we develop an efficient two-stage local polynomial estimation procedure for the unknown coefficient functions. The large and finite sample properties of the resultant estimators are examined; in particular, we show that the resultant estimators are asymptotically normal, and exhibit considerably smaller asymptotic variability than the traditional local polynomial estimators that neglect the correlations within and among clusters. An application example is presented based on a data set extracted from the World Bank’s STARS database.
- Published
- 2014
26. Frequentist model averaging for multinomial and ordered logit models
- Author
-
Alan T. K. Wan, Xinyu Zhang, and Shouyang Wang
- Subjects
Mathematical optimization ,Mean squared error ,Frequentist inference ,Model selection ,Monte Carlo method ,Range (statistics) ,Econometrics ,Estimator ,Multinomial distribution ,Ordered logit ,Business and International Management ,Mathematics - Abstract
Multinomial and ordered Logit models are quantitative techniques which are used in a range of disciplines nowadays. When applying these techniques, practitioners usually select a single model using either information-based criteria or pretesting. In this paper, we consider the alternative strategy of combining models rather than selecting a single model. Our strategy of weight choice for the candidate models is based on the minimization of a plug-in estimator of the asymptotic squared error risk of the model average estimator. Theoretical justifications of this model averaging strategy are provided, and a Monte Carlo study shows that the forecasts produced by the proposed strategy are often more accurate than those produced by other common model selection and model averaging strategies, especially when the regressors are only mildly to moderately correlated and the true model contains few zero coefficients. An empirical example based on credit rating data is used to illustrate the proposed method. To reduce the computational burden, we also consider a model screening step that eliminates some of the very poor models before averaging.
- Published
- 2014
27. On estimation and inference in a partially linear hazard model with varying coefficients
- Author
-
Alan T. K. Wan, Yun-bei Ma, Xuerong Chen, and Yong Zhou
- Subjects
Statistics and Probability ,Estimation ,Mathematical optimization ,Proportional hazards model ,Convergence (routing) ,Covariate ,Hazard model ,Estimator ,Inference ,Asymptotic distribution ,Mathematics - Abstract
We study estimation and inference in a marginal proportional hazards model that can handle (1) linear effects, (2) non-linear effects and (3) interactions between covariates. The model under consideration is an amalgamation of three existing marginal proportional hazards models studied in the literature. Developing an estimation and inference procedure with desirable properties for the amalgamated model is rather challenging due to the co-existence of all three effects listed above. Much of the existing literature has avoided the problem by considering narrow versions of the model. The object of this paper is to show that an estimation and inference procedure that accommodates all three effects is within reach. We present a profile partial-likelihood approach for estimating the unknowns in the amalgamated model with the resultant estimators of the unknown parameters being root-n consistent and the estimated functions achieving optimal convergence rates. Asymptotic normality is also established for the estimators.
- Published
- 2013
28. Model averaging by jackknife criterion in models with dependent data
- Author
-
Alan T. K. Wan, Xinyu Zhang, and Guohua Zou
- Subjects
Economics and Econometrics ,Heteroscedasticity ,Asymptotically optimal algorithm ,Mean squared error ,Frequentist inference ,Applied Mathematics ,Econometrics ,Statistics::Methodology ,Estimator ,Covariance ,Jackknife resampling ,Cross-validation ,Mathematics - Abstract
The past decade witnessed a literature on model averaging by frequentist methods. For the most part, the asymptotic optimality of various existing frequentist model averaging estimators has been established under i.i.d. errors. Recently, Hansen and Racine [Hansen, B.E., Racine, J., 2012. Jackknife model averaging. Journal of Econometrics 167, 38–46] developed a jackknife model averaging (JMA) estimator, which has an important advantage over its competitors in that it achieves the lowest possible asymptotic squared error under heteroscedastic errors. In this paper, we broaden Hansen and Racine’s scope of analysis to encompass models with (i) a non-diagonal error covariance structure, and (ii) lagged dependent variables, thus allowing for dependent data. We show that under these set-ups, the JMA estimator is asymptotically optimal by a criterion equivalent to that used by Hansen and Racine. A Monte Carlo study demonstrates the finite sample performance of the JMA estimator in a variety of model settings.
- Published
- 2013
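For linear candidate models the jackknife weight choice behind the JMA estimator can be sketched directly: compute leave-one-out residuals with the hat-matrix shortcut e_i/(1 - h_ii) and pick weights minimising the squared norm of the weighted jackknife residuals over the unit simplex. The simulated data below are homoscedastic and the candidates nested, so the sketch does not exercise the paper's extensions to non-diagonal error covariances or lagged dependent variables.

```python
# Jackknife (delete-one cross-validation) model-averaging weights for nested OLS candidates.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, p = 150, 6
X = rng.normal(size=(n, p))
y = X[:, :3] @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=n)

def loo_residuals(Xm, y):
    """Leave-one-out residuals of an OLS fit via the hat-matrix shortcut."""
    H = Xm @ np.linalg.solve(Xm.T @ Xm, Xm.T)
    e = y - H @ y
    return e / (1.0 - np.diag(H))

# nested candidate models using the first m regressors, m = 1..p
E = np.column_stack([loo_residuals(X[:, :m], y) for m in range(1, p + 1)])

def cv_crit(w):
    return np.sum((E @ w) ** 2)

M = E.shape[1]
res = minimize(cv_crit, np.full(M, 1.0 / M), method="SLSQP",
               bounds=[(0.0, 1.0)] * M,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("JMA weights:", np.round(res.x, 3))
```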
29. Adaptive LASSO for varying-coefficient partially linear measurement error models
- Author
-
Alan T. K. Wan, HaiYing Wang, and Guohua Zou
- Subjects
Statistics and Probability ,Observational error ,Estimation theory ,Applied Mathematics ,Model selection ,Linear model ,Feature selection ,Lasso (statistics) ,Statistics ,Covariate ,Statistics, Probability and Uncertainty ,Scad ,Algorithm ,Mathematics - Abstract
This paper extends the adaptive LASSO (ALASSO) for simultaneous parameter estimation and variable selection to a varying-coefficient partially linear model where some of the covariates are subject to measurement errors of an additive form. We draw comparisons with the SCAD, and prove that both the ALASSO and the SCAD attain the oracle property under this setup. We further develop an algorithm in the spirit of LARS for finding the solution path of the ALASSO in practical applications. Finite sample properties of the proposed methods are examined in a simulation study, and a real data example based on the U.S. Department of Agriculture's Continuing Survey of Food Intakes by Individuals (CSFII) is considered.
- Published
- 2013
30. Focused Information Criteria, Model Selection, and Model Averaging in a Tobit Model With a Nonzero Threshold
- Author
-
Xinyu Zhang, Sherry Z. Zhou, and Alan T. K. Wan
- Subjects
Statistics and Probability ,Censored regression model ,Economics and Econometrics ,Mean squared error ,Model selection ,Focused information criterion ,Estimator ,Feature selection ,Parametric model ,Statistics ,Econometrics ,Tobit model ,Statistics, Probability and Uncertainty ,Social Sciences (miscellaneous) ,Mathematics - Abstract
Claeskens and Hjort (2003) have developed a focused information criterion (FIC) for model selection that selects different models based on different focused functions with those functions tailored to the parameters singled out for interest. Hjort and Claeskens (2003) also have presented model averaging as an alternative to model selection, and suggested a local misspecification framework for studying the limiting distributions and asymptotic risk properties of post-model selection and model average estimators in parametric models. Despite the burgeoning literature on Tobit models, little work has been done on model selection explicitly in the Tobit context. In this article we propose FICs for variable selection allowing for such measures as mean absolute deviation, mean squared error, and expected linear exponential errors in a type I Tobit model with an unknown threshold. We also develop a model average Tobit estimator using values of a smoothed version of the FIC as weights. We study the finite...
- Published
- 2012
31. Combining least-squares and quantile regressions
- Author
-
Yuan Yuan, Alan T. K. Wan, and Yong Zhou
- Subjects
Statistics and Probability ,Applied Mathematics ,Estimator ,Estimating equations ,Method of moments (statistics) ,Moment (mathematics) ,Empirical likelihood ,Statistics ,Econometrics ,Statistics::Methodology ,Statistics, Probability and Uncertainty ,Smoothing ,Mathematics ,Generalized method of moments ,Quantile - Abstract
Least-squares and quantile regressions are method of moments techniques that are typically used in isolation. A leading example where efficiency may be gained by combining least-squares and quantile regressions is one where some information on the error quantiles is available but the error distribution cannot be fully specified. This estimation problem may be cast in terms of solving an over-determined estimating equation (EE) system for which the generalized method of moments (GMM) and empirical likelihood (EL) are approaches of recognized importance. The major difficulty with implementing these techniques here is that the EEs associated with the quantiles are non-differentiable. In this paper, we develop a kernel-based smoothing technique for non-smooth EEs, and derive the asymptotic properties of the GMM and maximum smoothed EL (MSEL) estimators based on the smoothed EEs. Via a simulation study, we investigate the finite sample properties of the GMM and MSEL estimators that combine least-squares and quantile moment relationships. Applications to real datasets are also considered.
- Published
- 2011
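A toy version of the combination idea above: stack least-squares moment conditions with kernel-smoothed median (tau = 0.5) moment conditions, the indicator in the quantile estimating equation being replaced by a normal CDF with a small bandwidth, and estimate the coefficients by one-step GMM with an identity weighting matrix. Bandwidth, data and weighting matrix are illustrative choices rather than the paper's.

```python
# One-step GMM combining least-squares and kernel-smoothed median moment conditions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)
n, tau, h = 300, 0.5, 0.2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta0 = np.array([1.0, 2.0])
y = X @ beta0 + rng.standard_t(df=3, size=n)     # symmetric errors: mean = median = 0

def moment_means(b):
    u = y - X @ b
    g_ls = X * u[:, None]                            # least-squares moments
    g_qr = X * (tau - norm.cdf(-u / h))[:, None]     # smoothed median moments
    return np.concatenate([g_ls.mean(axis=0), g_qr.mean(axis=0)])

def gmm_obj(b):
    g = moment_means(b)
    return g @ g                                     # identity weighting matrix

b_hat = minimize(gmm_obj, x0=np.zeros(2), method="Nelder-Mead").x
print("combined LS + smoothed-median GMM estimate:", np.round(b_hat, 3))
```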
32. Optimal Weight Choice for Frequentist Model Average Estimators
- Author
-
Alan T. K. Wan, Xinyu Zhang, Guohua Zou, and Hua Liang
- Subjects
Statistics and Probability ,Mathematical optimization ,Bias of an estimator ,Mean squared error ,Frequentist inference ,Model selection ,Monte Carlo method ,Linear regression ,Estimator ,Statistics, Probability and Uncertainty ,Weighting ,Mathematics - Abstract
There has been increasing interest recently in model averaging within the frequentist paradigm. The main benefit of model averaging over model selection is that it incorporates rather than ignores the uncertainty inherent in the model selection process. One of the most important, yet challenging, aspects of model averaging is how to optimally combine estimates from different models. In this work, we suggest a procedure of weight choice for frequentist model average estimators that exhibits optimality properties with respect to the estimator’s mean squared error (MSE). As a basis for demonstrating our idea, we consider averaging over a sequence of linear regression models. Building on this base, we develop a model weighting mechanism that involves minimizing the trace of an unbiased estimator of the model average estimator’s MSE. We further obtain results that reflect the finite sample as well as asymptotic optimality of the proposed mechanism. A Monte Carlo study based on simulated and real data evaluates...
- Published
- 2011
33. Weighted average least squares estimation with nonspherical disturbances and an application to the Hong Kong housing market
- Author
-
Alan T. K. Wan, Jan R. Magnus, and Xinyu Zhang
- Subjects
Statistics and Probability ,Applied Mathematics ,Monte Carlo method ,Bayesian probability ,Sampling (statistics) ,Estimator ,Bayesian inference ,Least squares ,Computational Mathematics ,Computational Theory and Mathematics ,Frequentist inference ,Statistics ,Econometrics ,Weighted arithmetic mean ,Mathematics - Abstract
The recently proposed ‘weighted average least squares’ (WALS) estimator is a Bayesian combination of frequentist estimators. It has been shown that the WALS estimator possesses major advantages over standard Bayesian model averaging (BMA) estimators: the WALS estimator has bounded risk, allows a coherent treatment of ignorance and its computational effort is negligible. However, the sampling properties of the WALS estimator as compared to BMA estimators are heretofore unexamined. The WALS theory is further extended to allow for nonspherical disturbances, and the estimator is illustrated with data from the Hong Kong real estate market. Monte Carlo evidence shows that the WALS estimator performs significantly better than standard BMA and pretest alternatives.
- Published
- 2011
34. Frequentist Model Averaging with missing observations
- Author
-
Alan T. K. Wan, Michael Schomaker, and Christian Heumann
- Subjects
Statistics and Probability ,Logistic distribution ,Applied Mathematics ,Model selection ,Estimator ,Regression analysis ,Missing data ,Computational Mathematics ,Computational Theory and Mathematics ,Frequentist inference ,Prior probability ,Statistics ,Imputation (statistics) ,Mathematics - Abstract
Model averaging or combining is often considered as an alternative to model selection. Frequentist Model Averaging (FMA) is considered extensively and strategies for the application of FMA methods in the presence of missing data based on two distinct approaches are presented. The first approach combines estimates from a set of appropriate models which are weighted by scores of a missing data adjusted criterion developed in the recent literature of model selection. The second approach averages over the estimates of a set of models with weights based on conventional model selection criteria but with the missing data replaced by imputed values prior to estimating the models. For this purpose three easy-to-use imputation methods that have been programmed in currently available statistical software are considered, and a simple recursive algorithm is further adapted to implement a generalized regression imputation in a way such that the missing values are predicted successively. The latter algorithm is found to be quite useful when one is confronted with two or more missing values simultaneously in a given row of observations. Focusing on a binary logistic regression model, the properties of the FMA estimators resulting from these strategies are explored by means of a Monte Carlo study. The results show that in many situations, averaging after imputation is preferred to averaging using weights that adjust for the missing data, and model average estimators often provide better estimates than those resulting from any single model. As an illustration, the proposed methods are applied to a dataset from a study of Duchenne muscular dystrophy detection.
- Published
- 2010
35. Wavelet analysis of change-points in a non-parametric regression with heteroscedastic variance
- Author
-
Yong Zhou, Xiaojing Wang, Alan T. K. Wan, and Shangyu Xie
- Subjects
Normal distribution ,Economics and Econometrics ,Heteroscedasticity ,Applied Mathematics ,Statistics ,Generalized extreme value distribution ,Test statistic ,Asymptotic distribution ,Applied mathematics ,Extreme value theory ,Conditional variance ,Nonparametric regression ,Mathematics - Abstract
In this paper we develop wavelet methods for detecting and estimating jumps and cusps in the mean function of a non-parametric regression model. An important characteristic of the model considered here is that it allows for conditional heteroscedastic variance, a feature frequently encountered with economic and financial data. Wavelet analysis of change-points in this model has been considered in a limited way in a recent study by Chen et al. (2008) with a focus on jumps only. One problem with the aforementioned paper is that the test statistic developed there has an extreme value null limit distribution. The results of other studies have shown that the rate of convergence to the extreme value distribution is usually very slow, and critical values derived from this distribution tend to be much larger than the true ones. Here, we develop a new test and show that the test statistic has a convenient null limit N(0,1) distribution. This feature gives the proposed approach an appealing advantage over the existing approach. Another attractive feature of our results is that the asymptotic theory developed here holds for both jumps and cusps. Implementation of the proposed method for multiple jumps and cusps is also examined. The results from a simulation study show that the new test has excellent power and the estimators developed also yield very accurate estimates of the positions of the discontinuities.
- Published
- 2010
36. An empirical model of daily highs and lows of West Texas Intermediate crude oil prices
- Author
-
Angela W.W. He, Alan T. K. Wan, and Jerry T.K. Kwok
- Subjects
Economics and Econometrics ,General Energy ,Exchange rate ,Cointegration ,Technical analysis ,West Texas Intermediate ,Economics ,Econometrics ,Trading strategy ,Price level ,Autoregressive integrated moving average ,Volatility (finance) - Abstract
There is a large collection of literature on energy price forecasting, but most studies typically use monthly average or close-to-close daily price data. In practice, the daily price range constructed from the daily high and low also contains useful information on price volatility and is used frequently in technical analysis. The interaction between the daily high and low and the associated daily range has been examined in several recent studies on stock price and exchange rate forecasts. The present paper adopts a similar approach to analyze the behaviour of the West Texas Intermediate (WTI) crude oil price over a ten-year period. We find that daily highs and lows of the WTI oil price are cointegrated, with the error correction term being closely approximated by the daily price range. Two forecasting models, one based on a vector error correction mechanism and the other based on a transfer function framework with the range taken as a driver variable, are presented for forecasting the daily highs and lows. The results show that both of these models offer significant advantages over the naive random walk and univariate ARIMA models in terms of out-of-sample forecast accuracy. A trading strategy that makes use of the daily high and low forecasts is further developed. It is found that this strategy generally yields very reasonable trading returns over an evaluation period of about two years.
- Published
- 2010
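The error-correction structure described above, with the lagged range approximating the error-correction term, amounts to a simple regression. The sketch below simulates cointegrated log highs and lows and regresses the change in the log high on the lagged range and one lag of the changes; the data-generating process and lag length are purely illustrative.

```python
# Error-correction style regression for daily log highs/lows with the lagged range
# as the error-correction term (simulated, illustrative data).
import numpy as np

rng = np.random.default_rng(7)
T = 500
level = np.cumsum(rng.normal(scale=0.01, size=T)) + 4.0       # common stochastic trend
daily_range = np.abs(rng.normal(scale=0.02, size=T)) + 0.01   # stationary daily range
log_high = level + 0.5 * daily_range
log_low = level - 0.5 * daily_range

d_high, d_low = np.diff(log_high), np.diff(log_low)
range_lag = daily_range[1:-1]                                 # lagged range (ECM term)
Z = np.column_stack([np.ones(T - 2), range_lag, d_high[:-1], d_low[:-1]])
target = d_high[1:]
coef, *_ = np.linalg.lstsq(Z, target, rcond=None)
print("ECM coefficients [const, lagged range, d_high(-1), d_low(-1)]:",
      np.round(coef, 3))
```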
37. Least squares model averaging by Mallows criterion
- Author
-
Alan T. K. Wan, Guohua Zou, and Xinyu Zhang
- Subjects
Economics and Econometrics ,Mathematical optimization ,Mean squared error ,Basis (linear algebra) ,Continuous modelling ,Applied Mathematics ,Estimator ,Mallows's Cp ,Stepwise regression ,Linear combination ,Least squares ,Mathematics - Abstract
This paper is in response to a recent paper by Hansen (2007) who proposed an optimal model average estimator with weights selected by minimizing a Mallows criterion. The main contribution of Hansen’s paper is a demonstration that the Mallows criterion is asymptotically equivalent to the squared error, so the model average estimator that minimizes the Mallows criterion also minimizes the squared error in large samples. We are concerned with two assumptions that accompany Hansen’s approach. The first is the assumption that the approximating models are strictly nested in a way that depends on the ordering of regressors. Often there is no clear basis for the ordering and the approach does not permit non-nested models which are more realistic from a practical viewpoint. Second, for the optimality result to hold the model weights are required to lie within a special discrete set. In fact, Hansen noted both difficulties and called for extensions of the proof techniques. We provide an alternative proof which shows that the result on the optimality of the Mallows criterion in fact holds for continuous model weights and under a non-nested set-up that allows any linear combination of regressors in the approximating models that make up the model average estimator. These results provide a stronger theoretical basis for the use of the Mallows criterion in model averaging by strengthening existing findings.
- Published
- 2010
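The Mallows criterion at issue above, allowing continuous weights and non-nested candidate models, is C(w) = ||y - sum_m w_m mu_hat_m||^2 + 2 sigma^2 sum_m w_m k_m, minimised over the unit simplex. A minimal sketch follows; the data, the candidate regressor sets and the sigma^2 plug-in (taken from the largest model) are illustrative choices.

```python
# Mallows model-averaging weights over non-nested linear candidates, continuous weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n = 200
X = rng.normal(size=(n, 4))
y = X @ np.array([1.0, 0.5, 0.0, -0.8]) + rng.normal(size=n)

candidates = [[0, 1], [0, 3], [1, 2, 3], [0, 1, 2, 3]]   # non-nested regressor sets
fits, sizes = [], []
for cols in candidates:
    Xm = X[:, cols]
    H = Xm @ np.linalg.solve(Xm.T @ Xm, Xm.T)
    fits.append(H @ y)
    sizes.append(len(cols))
F, k = np.column_stack(fits), np.array(sizes)

resid_full = y - fits[-1]                                # sigma^2 plug-in from largest model
sigma2 = resid_full @ resid_full / (n - k[-1])

def mallows(w):
    return np.sum((y - F @ w) ** 2) + 2.0 * sigma2 * (k @ w)

M = len(candidates)
res = minimize(mallows, np.full(M, 1.0 / M), method="SLSQP",
               bounds=[(0.0, 1.0)] * M,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("Mallows model-average weights:", np.round(res.x, 3))
```

Because the weights are continuous rather than restricted to a discrete grid, the criterion can be handed to any constrained quadratic optimiser.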
38. A trading strategy based on Callable Bull/Bear Contracts
- Author
-
Yin-Wong Cheung, Yan Leung Stephen Cheung, Alan T. K. Wan, and Angela W.W. He
- Subjects
Economics and Econometrics ,Alternative trading system ,Financial economics ,Pairs trade ,computer.software_genre ,Electronic trading ,Callable bond ,Open outcry ,Trading strategy ,Business ,Algorithmic trading ,Volatility (finance) ,computer ,Finance - Abstract
The Callable Bull/Bear Contract is a barrier options contract recently introduced to the Hong Kong market. In this study, we propose a trading strategy that defines the entry point and exit point using information on the contract's call price and mandatory call event. Using data on contracts based on the Hong Kong Hang Seng Index, it is shown that the proposed trading strategy, on average, yields some decent trading returns that vary quite substantially across individual trades. Exploratory analyses indicate that trading returns are associated with volatility observed during a contract's lifespan and, to a lesser extent, with volatility in the pre-issuance period. Further, an issuer's relative issuing frequency may bear some implications for the trading strategy's performance.
- Published
- 2010
39. Robustness of Stein-type estimators under a non-scalar error covariance structure
- Author
-
Ti Chen, Alan T. K. Wan, Guohua Zou, and Xinyu Zhang
- Subjects
Statistics and Probability ,Shrinkage estimator ,Numerical Analysis ,Extremum estimator ,Statistics ,Autocorrelation ,Linear regression ,Estimator ,White noise ,Statistics, Probability and Uncertainty ,Covariance ,M-estimator ,Mathematics - Abstract
The Stein-rule (SR) and positive-part Stein-rule (PSR) estimators are two popular shrinkage techniques used in linear regression, yet very little is known about the robustness of these estimators to the disturbances’ deviation from the white noise assumption. Recent studies have shown that the OLS estimator is quite robust, but whether this is so for the SR and PSR estimators is less clear as these estimators also depend on the F statistic which is highly susceptible to covariance misspecification. This study attempts to evaluate the effects of misspecifying the disturbances as white noise on the SR and PSR estimators by a sensitivity analysis. Sensitivity statistics of the SR and PSR estimators are derived and their properties are analyzed. We find that the sensitivity statistics of these estimators exhibit very similar properties and both estimators are extremely robust to MA(1) disturbances and reasonably robust to AR(1) disturbances except for the cases of severe autocorrelation. The results are useful in light of the rising interest of the SR and PSR techniques in the applied literature.
- Published
- 2009
40. Predicting daily highs and lows of exchange rates: a cointegration analysis
- Author
-
Angela W.W. He and Alan T. K. Wan
- Subjects
Statistics and Probability ,Exchange rate ,Cointegration ,Currency ,Statistics ,Range (statistics) ,Econometrics ,Autoregressive integrated moving average ,Statistics, Probability and Uncertainty ,Implied volatility ,Random walk ,Term (time) ,Mathematics - Abstract
This article presents empirical evidence that links the daily highs and lows of exchange rates of the US dollar against two other major currencies over a 15 year period. We find that the log high and log low of an exchange rate are cointegrated, and the error correction term is well-approximated by the range, which is defined as the difference between the log high and log low. We further assess the empirical relevance of jointly analyzing the highs, lows and the ranges by comparing the range forecasts generated from the cointegration framework with those from random walk and autoregressive integrated moving average (ARIMA) specifications. The ability of range forecasts as predictors of implied volatility for a European style currency option is also evaluated. Our results show that aside from a limited set of exceptions, the cointegration framework generally outperforms the random walk and ARIMA models in an out-of-sample forecast contest.
- Published
- 2009
41. On the sensitivity of the one-sided t test to covariance misspecification
- Author
-
Guohua Zou, Huaizhen Qin, and Alan T. K. Wan
- Subjects
Statistics and Probability ,Numerical Analysis ,Null (mathematics) ,MA(1) ,Covariance ,AR(1) ,Sensitivity ,Size ,F-test ,Power ,Statistics ,Econometrics ,Nuisance parameter ,Sensitivity (control systems) ,Linear regression ,Statistics, Probability and Uncertainty ,Rule of thumb ,Statistic ,Statistical hypothesis testing ,Mathematics ,t-statistic - Abstract
Sensitivity analysis stands in contrast to diagnostic testing in that sensitivity analysis aims to answer the question of whether it matters that a nuisance parameter is non-zero, whereas a diagnostic test ascertains explicitly if the nuisance parameter is different from zero. In this paper, we introduce and derive the finite sample properties of a sensitivity statistic measuring the sensitivity of the t statistic to covariance misspecification. Unlike the earlier work by Banerjee and Magnus [A. Banerjee, J.R. Magnus, On the sensitivity of the usual t- and F-tests to covariance misspecification, Journal of Econometrics 95 (2000) 157–176] on the sensitivity of the F statistic, the theorems derived in the current paper hold under both the null and alternative hypotheses. Also, in contrast to Banerjee and Magnus’ [see the above cited reference] results on the F test, we find that the decision to accept the null using the OLS based one-sided t test is not necessarily robust against covariance misspecification and depends much on the underlying data matrix. Our results also indicate that autocorrelation does not necessarily weaken the power of the OLS based t test.
- Published
- 2009
42. On the Use of Model Averaging in Tourism Research
- Author
-
Xinyu Zhang and Alan T. K. Wan
- Subjects
Computer science ,Tourism, Leisure and Hospitality Management ,Regional science ,Development ,Tourism - Published
- 2009
43. Stein-type improved estimation of standard error under asymmetric LINEX loss function
- Author
-
Guohua Zou, Alan T. K. Wan, Jie Zeng, and Zhong Guan
- Subjects
Statistics and Probability ,Bayes estimator ,Minimum-variance unbiased estimator ,Efficient estimator ,Bias of an estimator ,Mean squared error ,Stein's unbiased risk estimate ,Statistics ,James–Stein estimator ,Applied mathematics ,Stein's example ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
This paper considers the estimation of standard error. More than 40 years ago, Stein [C. Stein, Inadmissibility of the usual estimator for the variance of a normal distribution with unknown mean, Ann. Institute Statist. Math. 16 (1964), pp. 155–160] proposed a classical improved estimator over the minimum risk equivariant estimator under quadratic loss. This is a textbook result. A generalization of quadratic loss is the LINEX loss, which penalizes overestimation and underestimation asymmetrically. What is the corresponding version of Stein's improved estimator under LINEX loss? The problem has not been solved yet. This paper gives an answer. Our method also applies to some other loss functions such as quadratic loss and entropy loss.
- Published
- 2009
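The LINEX loss used above is L(d) = b(exp(a*d) - a*d - 1) for estimation error d, with the sign of a deciding whether over- or under-estimation is penalised more heavily. A minimal numerical illustration with arbitrary parameter values:

```python
# LINEX loss and its asymmetry; parameter values are arbitrary.
import numpy as np

def linex(d, a=1.0, b=1.0):
    return b * (np.exp(a * d) - a * d - 1.0)

errors = np.array([-0.5, -0.1, 0.1, 0.5])
print("a = +1 (overestimation costly): ", np.round(linex(errors, a=1.0), 3))
print("a = -1 (underestimation costly):", np.round(linex(errors, a=-1.0), 3))
```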
44. Wage Earnings of Chinese Immigrants: A Semi-Parametric Analysis
- Author
-
Alan T. K. Wan
- Subjects
Earnings ,Immigration ,Economics ,Wage ,Demographic economics ,Classical economics ,Semiparametric model - Published
- 2008
45. Estimating Equations Inference With Missing Data
- Author
-
Alan T. K. Wan, Xiaojing Wang, and Yong Zhou
- Subjects
Statistics and Probability ,Inference ,Estimator ,Estimating equations ,Missing data ,Empirical likelihood ,Econometrics ,Kernel regression ,Imputation (statistics) ,Statistics, Probability and Uncertainty ,Generalized method of moments ,Mathematics - Abstract
There is a large and growing body of literature on estimating equations (EEs) as an estimation approach. One basic property of EEs that has been universally adopted in practice is unbiasedness, and there are deep conceptual reasons why unbiasedness is a desirable characteristic of an EE. This article deals with inference from EEs when data are missing at random. The investigation is motivated by the observation that direct imputation of missing data in EEs generally leads to EEs that are biased and thus violates a basic assumption of the EE approach. The main contribution of this article is that it goes beyond existing imputation methods and proposes a procedure whereby the effects of missing data are mitigated through a reformulation of the EEs, with the missing values imputed by a kernel regression method. These (modified) EEs then constitute a basis for inference by the generalized method of moments (GMM) and empirical likelihood (EL). Asymptotic properties of the GMM and EL estimators of the unknown parameters are derived ...
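The following Python sketch conveys the flavour of kernel-based imputation feeding an estimating equation, under simplifying assumptions; it is an illustration, not the paper's estimator (the data-generating process, bandwidth, and the simple method-of-moments solve in place of full GMM/EL are all choices made here for brevity):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

# Simulated data: y = theta0 * x + noise, with y missing at random (missingness depends on x only).
n, theta0 = 500, 2.0
x = rng.uniform(0.0, 2.0, n)
y = theta0 * x + rng.normal(0.0, 0.5, n)
observed = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(x - 1.0))

def nw_impute(x_new, x_obs, y_obs, h=0.2):
    """Nadaraya-Watson kernel regression estimate of E[y | x] at the points x_new."""
    u = (x_new[:, None] - x_obs[None, :]) / h
    w = np.exp(-0.5 * u ** 2)            # Gaussian kernel weights
    return (w @ y_obs) / w.sum(axis=1)

# Impute the missing responses using the kernel fit from the complete cases.
y_work = y.copy()
y_work[~observed] = nw_impute(x[~observed], x[observed], y[observed])

# Estimating equation g(x, y, theta) = x * (y - theta * x); solve the sample moment for theta.
def sample_moment(theta):
    return np.mean(x * (y_work - theta * x))

theta_hat = brentq(sample_moment, -10.0, 10.0)
print(f"theta_hat = {theta_hat:.3f}  (true value {theta0})")
```

In the paper the imputed moment conditions are instead combined via GMM or empirical likelihood, which also delivers standard errors and tests; the sketch above only shows where the kernel imputation enters.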
- Published
- 2008
46. On the sensitivity of the restricted least squares estimators to covariance misspecification
- Author
-
Alan T. K. Wan, Huaizhen Qin, and Guohua Zou
- Subjects
Economics and Econometrics ,Autocorrelation ,Ordinary least squares ,Econometrics ,Estimator ,Generalized least squares ,Sensitivity (control systems) ,Variance (accounting) ,Covariance ,Least squares ,Mathematics - Abstract
Traditional econometrics has long stressed the serious consequences of non-spherical disturbances for estimation and testing procedures developed under the spherical disturbance setting: the procedures become invalid and can give rise to misleading results. In practice, however, it is not unusual to find that the parameter estimates do not change much after fitting the more general structure. This suggests that the usual procedures may well be robust to covariance misspecification. Banerjee and Magnus (1999) proposed sensitivity statistics to decide whether the ordinary least squares estimators of the coefficients and of the disturbance variance are sensitive to deviations from the spherical error assumption. This paper extends their work by investigating the sensitivity of the restricted least squares estimator to covariance misspecification, where the restrictions may or may not be correct. Large-sample results providing analytical support for some of the numerical findings reported in Banerjee and Magnus (1999) are also obtained.
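For context, the restricted least squares estimator under linear restrictions is the standard textbook expression below (notation assumed here rather than taken from the paper):

```latex
% Restricted least squares under the (possibly incorrect) restrictions R\beta = r:
\hat\beta_{R}
  = \hat\beta_{\mathrm{OLS}}
    - (X^{\top}X)^{-1}R^{\top}\!\left[ R(X^{\top}X)^{-1}R^{\top} \right]^{-1}
      \!\left( R\hat\beta_{\mathrm{OLS}} - r \right),
\qquad \hat\beta_{\mathrm{OLS}} = (X^{\top}X)^{-1}X^{\top}y .
% The question studied is how \hat\beta_R and the associated variance estimator move when the
% disturbance covariance is \sigma^{2}\Omega(\theta) rather than \sigma^{2}I, in the spirit of
% the Banerjee-Magnus sensitivity statistics.
```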
- Published
- 2007
47. Estimation of regression coefficients of interest when other regression coefficients are of no interest: The case of non-normal errors
- Author
-
Xiaoyong Wu, Guohua Zou, Alan T. K. Wan, and Ti Chen
- Subjects
Statistics and Probability ,Normal distribution ,Estimation ,Probability theory ,Optimal estimation ,Statistics ,Linear regression ,Estimator ,Statistics, Probability and Uncertainty ,Equivalence (measure theory) ,Large sample ,Mathematics - Abstract
This note considers the problem of estimating regression coefficients when some other coefficients in the model are of no interest. For the case of normal errors, Magnus and Durbin [1999. Estimation of regression coefficients of interest when other regression coefficients are of no interest. Econometrica 67, 639–643] and Danilov and Magnus [2004. On the harm that ignoring pretesting can cause. J. Econometrics 122, 27–46] studied this problem and established an equivalence theorem, which states that the problem of estimating the coefficients of interest is equivalent to that of finding an optimal estimator of the vector of coefficients of no interest given a single observation from a normal distribution. The aim of this note is to generalize their findings to the large-sample case with non-normal errors. Some applications of our results are also given.
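A minimal sketch of the setting, in standard (assumed) notation, partitions the regression into the coefficients of interest and the nuisance coefficients:

```latex
% Partitioned model: \beta is of interest, \gamma is of no interest.
y = X\beta + Z\gamma + \varepsilon .
% The Magnus-Durbin equivalence theorem (normal-errors case) reduces the problem of estimating
% \beta to that of finding an optimal estimator of the normalised nuisance vector from a single
% observation of a normal vector; the note argues that an analogous reduction holds
% asymptotically when the errors need not be normal.
```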
- Published
- 2007
48. Comparison of the Stein and the usual estimators for the regression error variance under the Pitman nearness criterion when variables are omitted
- Author
-
Alan T. K. Wan and Kazuhiro Ohtani
- Subjects
Statistics and Probability ,Efficiency ,Efficient estimator ,Mean squared error ,Stein's unbiased risk estimate ,Statistics ,James–Stein estimator ,Estimator ,Regression analysis ,Stein's example ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
This paper compares the Stein and the usual estimators of the error variance under the Pitman nearness (PN) criterion in a regression model that is mis-specified through the omission of relevant explanatory variables. The exact expression for the PN probability is derived and evaluated numerically. Contrary to the well-known result under mean squared error (MSE), under the PN criterion the Stein variance estimator is uniformly dominated by the usual estimator when no relevant variables are excluded from the model. As the degree of model mis-specification increases, neither estimator strictly dominates the other.
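For reference, the Pitman nearness criterion used above can be stated as follows (a standard definition, not specific to this paper): one estimator is Pitman-closer to the target than another if it is closer with probability exceeding one half.

```latex
% Pitman nearness: \hat\sigma^{2}_{1} is Pitman-closer to \sigma^{2} than \hat\sigma^{2}_{2} if
PN\!\left(\hat\sigma^{2}_{1}, \hat\sigma^{2}_{2}; \sigma^{2}\right)
  = \Pr\!\left( \left|\hat\sigma^{2}_{1} - \sigma^{2}\right|
                < \left|\hat\sigma^{2}_{2} - \sigma^{2}\right| \right) > \tfrac{1}{2}.
% The paper evaluates this probability exactly for the Stein and the usual variance estimators
% when relevant regressors are omitted from the model.
```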
- Published
- 2007
49. The power of autocorrelation tests near the unit root in models with possibly mis-specified linear restrictions
- Author
-
Anurag N. Banerjee, Guohua Zou, and Alan T. K. Wan
- Subjects
Economics and Econometrics ,Proper linear model ,Linear regression ,Autocorrelation ,Statistics ,Linear model ,Applied mathematics ,Unit root ,Constant (mathematics) ,Finance ,Term (time) ,Mathematics ,Power (physics) - Abstract
It is well known that the Durbin–Watson and several other tests for first-order autocorrelation have limiting power of either zero or one in a linear regression model without an intercept, and limiting power equal to a constant lying strictly between these values when an intercept term is present. This paper considers the limiting power of these tests in models with possibly incorrect restrictions on the coefficients. It is found that, with linear restrictions on the coefficients, the limiting power can still drop to zero even when an intercept is included in the regression. Our results also accommodate the situation of a possibly mis-specified linear model.
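For reference, the Durbin–Watson statistic computed from regression residuals is the standard expression below (not taken from the paper):

```latex
% Durbin-Watson statistic based on the regression residuals e_1,\dots,e_T:
d = \frac{\sum_{t=2}^{T} (e_t - e_{t-1})^{2}}{\sum_{t=1}^{T} e_t^{2}},
% with d \approx 2(1 - \hat\rho) for the estimated first-order autocorrelation \hat\rho,
% so values near 0 (or 4) indicate strong positive (or negative) first-order autocorrelation.
% The limiting-power results summarised above concern the behaviour of such tests as the
% autocorrelation parameter approaches the unit root.
```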
- Published
- 2007
50. Improved Estimators of Hedonic Housing Price Models
- Author
-
Helen X. H. Bao and Alan T. K. Wan
- Subjects
Estimation ,Computer science ,Economics, Econometrics and Finance (miscellaneous) ,Hedonic index ,Hedonic pricing ,Estimator ,Real estate ,Variance (accounting) ,jel:L85 ,Lease ,Value (economics) ,Economics ,Econometrics ,Valuation (finance) - Abstract
In hedonic housing price modeling, real estate researchers and practitioners are often not completely ignorant about the parameters to be estimated. Experience and expertise usually provide them with a tacit understanding of the likely values of the true parameters. In this scenario, the subjective knowledge about the parameter values can be incorporated as non-sample information in the hedonic price model. In this paper, we consider a class of Generalized Stein Variance Double k-class (GSVKK) estimators that allows real estate practitioners to introduce potentially useful information about the parameter values into the estimation of hedonic pricing models. The GSVKK estimator is a generalization of a family of shrinkage estimators introduced by Ohtani and Wan (2002, Econometric Reviews). Data from the Hong Kong real estate market are used to investigate the estimators' performance empirically. Compared with the traditional ordinary least squares approach, the GSVKK estimators have smaller predictive mean squared errors and lead to more precise parameter estimates. Some results on the theoretical properties of the GSVKK estimators are also presented.
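A minimal sketch of the estimation setting follows; the log-linear functional form, the notation, and the generic shrinkage representation are assumptions for illustration, not the paper's exact GSVKK specification.

```latex
% Log-linear hedonic price equation with attributes x_{i1},\dots,x_{ik}
% (e.g. floor area, age, floor level):
\log P_i = \beta_0 + \sum_{j=1}^{k} \beta_j x_{ij} + \varepsilon_i .
% Non-sample information from experience or expertise enters as linear restrictions R\beta = r.
% A Stein-type shrinkage estimator combines the unrestricted OLS fit with the restricted fit,
% shrinking towards the prior values via a data-dependent weight w:
\hat\beta_{\text{shrink}} = w\,\hat\beta_{R} + (1 - w)\,\hat\beta_{\mathrm{OLS}}, \qquad 0 \le w \le 1 .
% The GSVKK family chooses this weight in a particular double k-class form; see Ohtani and
% Wan (2002) for the exact expression.
```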
- Published
- 2007