23 results for "Standard errors"
Search Results
2. Combination of three global Moho density contrast models by a weighted least-squares procedure
- Author
- Sjöberg, Lars and Abrehdary, M.
- Abstract
Due to the differing structures of the Earth's crust and mantle, there is a significant density contrast at their boundary, the Moho Density Contrast (MDC for short). It is frequently assumed that the MDC is about 600 kg/m3, but seismic and gravimetric data show considerable variation from region to region; such studies are still few, and global models are utterly rare. This research determines a new global model, called MDC21, as a weighted least-squares combination of three available MDC models, pixel by pixel at a resolution of 1° × 1°. For proper weighting among the models, the study starts by estimating the missing standard errors and the (frequently high) correlations among them. The numerical investigation shows that MDC21 varies from 21 to 504 kg/m3 in ocean areas and from 132 to 629 kg/m3 in continental regions. The global average is 335 kg/m3. The estimated standard errors are mostly below 40 kg/m3 in ocean regions but grow to 80 kg/m3 in continental regions. Most standard errors are small, but they reach notable values in some specific regions. The estimated MDCs (as well as Moho depths) at mid-ocean ridges are small but show significant variations in value and quality.
- Published
- 2022
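The combination step described in result 2 is, per pixel, a weighted least-squares merge of correlated estimates: the BLUE weights are proportional to Sigma^{-1} 1, where Sigma is built from the standard errors and correlations. A minimal numpy sketch of that kind of combination; the function name and all numbers are hypothetical, not taken from the paper:

```python
import numpy as np

def combine_estimates(x, cov):
    """Weighted least-squares (BLUE) combination of k correlated estimates
    of the same quantity: weights proportional to Sigma^{-1} 1."""
    ones = np.ones(len(x))
    s = np.linalg.solve(cov, ones)          # Sigma^{-1} 1
    est = (s @ x) / (s @ ones)              # weighted combination
    se = np.sqrt(1.0 / (s @ ones))          # SE of the combined estimate
    return est, se

# Hypothetical pixel: three MDC values (kg/m^3), their SEs, and correlations
x = np.array([320.0, 350.0, 335.0])
se = np.array([40.0, 60.0, 50.0])
corr = np.array([[1.0, 0.7, 0.5],
                 [0.7, 1.0, 0.6],
                 [0.5, 0.6, 1.0]])
print(combine_estimates(x, np.outer(se, se) * corr))
```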
3. LMest: Generalized Latent Markov Models for longitudinal continuous and categorical data
- Author
- Bartolucci, F., Pandolfi, S., Pennoni, F., Farcomeni, A., and Serafini, A.
- Abstract
The package LMest is a framework for specifying and fitting Latent (or Hidden) Markov (LM) models, tailored for the analysis of longitudinal continuous and categorical data. Covariates are included in the model specification through suitable parameterizations. Different LM models are estimated through specific functions requiring a data frame in long format. Maximum likelihood estimation of the model parameters is performed through the Expectation-Maximization algorithm, implemented by relying on Fortran routines. The package can deal with missing responses, including drop-out and non-monotone missingness, under the missing-at-random assumption. Standard errors for the parameter estimates are obtained by exact computation of the information matrix or through reliable numerical approximations of this matrix. The package also provides examples and real and simulated data sets.
- Published
- 2019
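LMest itself is an R package; as a language-neutral illustration of the "numerical approximation of the information matrix" route to standard errors mentioned in the abstract, here is a Python sketch that finite-differences the negative log-likelihood at the MLE and inverts the resulting observed information. The helper name and the toy Gaussian likelihood are assumptions for illustration, not LMest's internals:

```python
import numpy as np

def se_from_hessian(negloglik, theta_hat, eps=1e-4):
    """Standard errors via a central finite-difference Hessian of the
    negative log-likelihood at the MLE (observed information matrix)."""
    p = len(theta_hat)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            def f(di, dj):
                t = np.array(theta_hat, dtype=float)
                t[i] += di
                t[j] += dj
                return negloglik(t)
            H[i, j] = (f(eps, eps) - f(eps, -eps)
                       - f(-eps, eps) + f(-eps, -eps)) / (4 * eps**2)
    return np.sqrt(np.diag(np.linalg.inv(H)))

# Toy check: Gaussian mean with known variance sigma^2 = 4
data = np.random.default_rng(0).normal(5.0, 2.0, size=200)
nll = lambda th: 0.5 * np.sum((data - th[0])**2) / 4.0
print(se_from_hessian(nll, np.array([data.mean()])))   # ~ 2/sqrt(200)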
4. Inference in Linear Regression Models with Many Covariates and Heteroscedasticity
- Author
- Cattaneo, M. D., Jansson, M., and Newey, W. K.
- Abstract
The linear regression model is widely used in empirical work in economics, statistics, and many other disciplines. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroscedasticity. Our results are obtained using high-dimensional approximations, where the number of included covariates is allowed to grow as fast as the sample size. We find that all of the usual versions of Eicker–White heteroscedasticity consistent standard error estimators for linear models are inconsistent under this asymptotics. We then propose a new heteroscedasticity consistent standard error formula that is fully automatic and robust to both (conditional) heteroscedasticity of unknown form and the inclusion of possibly many covariates. We apply our findings to three settings: parametric linear models with many covariates, linear panel models with many fixed effects, and semiparametric semi-linear models with many technical regressors. Simulation evidence consistent with our theoretical results is provided, and the proposed methods are also illustrated with an empirical application. Supplementary materials for this article are available online.
- Published
- 2018
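For reference, the classical Eicker-White sandwich estimators that this paper shows to be inconsistent under many-covariate asymptotics look like the sketch below; the paper's own corrected formula is not reproduced here. All names and the simulated heteroscedastic data are assumptions for illustration:

```python
import numpy as np

def hc_standard_errors(X, y, hc1=True):
    """Classical Eicker-White heteroscedasticity-consistent SEs (HC0/HC1).
    These are the estimators shown to be inconsistent when the number of
    covariates grows proportionally to the sample size."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta                       # residuals
    meat = X.T @ (X * e[:, None]**2)       # sum_i e_i^2 x_i x_i'
    V = XtX_inv @ meat @ XtX_inv           # sandwich
    if hc1:
        V *= n / (n - k)                   # HC1 degrees-of-freedom correction
    return beta, np.sqrt(np.diag(V))

rng = np.random.default_rng(1)
n, k = 500, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.arange(1.0, k + 1.0) + rng.normal(size=n) * (1 + X[:, 1]**2)
print(hc_standard_errors(X, y))
```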
6. Biases of generic species-specific allometric models in local estimation of tree biomass of pine, cedar, and fir
- Author
- Усольцев, В. А., Колчин, К. В., Норицина, Ю. В., Азаренок, М. В., and Богословская, О. А.
- Abstract
On the basis of a compiled database of tree biomass for 1234 sample trees of two- and five-needled pines and firs, allometric models of four structural forms are designed, each including a block of dummy variables. These models make it possible to give regional estimates of tree above-ground biomass from known morphometric indices (stem diameter, crown diameter, and tree height). The proposed allometric models are adequate for the actual data, with coefficients of determination between 0.725 and 0.990 (the only exception being the dependence of fir biomass on crown diameter, with a coefficient of determination of 0.430), and can be applied in regional estimation of above-ground biomass of the three tree species. However, generic allometric models built on the total set of actual data give excessively large standard errors in individual ecoregions (up to 572%) and unacceptable biases of both signs (from +315% to -92%), which excludes any possibility of their application at regional levels.
- Published
- 2017
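The "allometric model with dummy variables" structure described in this abstract (and in result 7 below) can be illustrated as a log-log regression with region-specific intercepts and a common slope. A numpy sketch on simulated data; the coefficients, regions, and sample sizes are invented, not the authors' values:

```python
import numpy as np

# Hypothetical tree records: stem diameter D (cm), biomass, ecoregion id
rng = np.random.default_rng(2)
region = rng.integers(0, 3, size=300)                 # three ecoregions
D = rng.uniform(5, 60, size=300)
true_a = np.array([-2.0, -1.8, -2.2])[region]         # region-specific level
lnP = true_a + 2.4 * np.log(D) + rng.normal(0, 0.3, 300)

# Design matrix: region dummies (absorbing the intercept) plus ln D,
# i.e. ln P = a_r + b ln D  -- the dummy-variable allometric form
X = np.column_stack([(region == r).astype(float) for r in range(3)] +
                    [np.log(D)])
coef, *_ = np.linalg.lstsq(X, lnP, rcond=None)
resid = lnP - X @ coef
r2 = 1 - resid.var() / lnP.var()
print("region intercepts:", coef[:3], "common slope:", coef[3],
      "R2:", round(r2, 3))
```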
7. Dummy variables and biases of generic allometric models in local estimation of tree biomass (a case study of Picea L.)
- Author
- Усольцев, В. А., Колчин, К. В., and Воронов, М. П.
- Abstract
Forests play an important role in reducing the amount of greenhouse gases in the atmosphere and preventing climate change. One way to quantify carbon exchange in forest cover is to estimate changes in its biomass and carbon pools over time. Biomass estimation per unit area starts with harvesting sample trees and weighing their biomass. There is a strong and stable allometric relationship between tree biomass and stem diameter (simple allometry), or between tree biomass and several mass-forming (morphometric) indices (multi-factor allometry). At present, studies in different countries and on different continents are examining the applicability of so-called generic (generalized, common) allometric models that would give acceptable accuracy in estimating forest biomass. In this article, on the basis of a compiled database of tree biomass for 1065 sample trees of Picea, allometric models of four modifications are designed, each including a block of dummy variables. These models make it possible to give regional estimates of tree biomass from known morphometric indices (stem diameter, crown diameter, and tree height). The proposed allometric models are adequate for the actual data (coefficients of determination from 0.959 to 0.984) and can be applied in regional estimation of spruce tree biomass. However, generic allometric models built on the total set of actual data give excessively large standard errors in individual ecoregions (up to 402%) and unacceptable biases of both signs (from +311% to -86%), which excludes any possibility of their application at regional levels.
- Published
- 2017
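The per-ecoregion standard errors and biases reported in results 6 and 7 (up to 402-572%, and +311% to -92%) can be illustrated by fitting one pooled "generic" model and scoring it within each region. A sketch under assumed definitions of relative bias and relative standard error; the authors' exact formulas may differ, and the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)
region = rng.integers(0, 3, size=300)
D = rng.uniform(5, 60, size=300)
lnP = (np.array([-2.0, -1.5, -2.5])[region] + 2.4 * np.log(D)
       + rng.normal(0, 0.3, 300))
P = np.exp(lnP)

# Fit one "generic" model ln P = a + b ln D on the pooled data
A = np.column_stack([np.ones_like(D), np.log(D)])
(a, b), *_ = np.linalg.lstsq(A, lnP, rcond=None)
P_hat = np.exp(a + b * np.log(D))

# Relative bias and relative SE of the generic model inside each ecoregion
for r in range(3):
    m = region == r
    err = P_hat[m] - P[m]
    bias_pct = 100 * err.mean() / P[m].mean()
    se_pct = 100 * err.std(ddof=1) / P[m].mean()
    print(f"region {r}: bias {bias_pct:+.1f}%  SE {se_pct:.1f}%")
```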
8. Item response theory observed-score kernel equating
- Author
- Andersson, Björn and Wiberg, Marie
- Abstract
Item response theory (IRT) observed-score kernel equating is introduced for the non-equivalent groups with anchor test equating design using either chain equating or post-stratification equating. The equating function is treated in a multivariate setting and the asymptotic covariance matrices of IRT observed-score kernel equating functions are derived. Equating is conducted using the two-parameter and three-parameter logistic models with simulated data and data from a standardized achievement test. The results show that IRT observed-score kernel equating offers small standard errors and low equating bias under most settings considered.
- Published
- 2017
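The continuization step at the heart of kernel equating replaces each discrete score distribution with a Gaussian-kernel-smoothed CDF and then maps scores equipercentile-wise. A sketch of that mechanic; in the IRT observed-score version the score distributions would come from a fitted 2PL/3PL model rather than being specified directly, as these hypothetical ones are:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def continuize(scores, probs, h):
    """Gaussian-kernel continuization of a discrete score distribution,
    with a shrinkage factor that preserves the mean and variance."""
    mu = probs @ scores
    var = probs @ (scores - mu)**2
    a = np.sqrt(var / (var + h**2))
    def cdf(x):
        return probs @ norm.cdf((x - a * scores - (1 - a) * mu) / (a * h))
    return cdf

def equate(x, cdf_X, cdf_Y, lo=-10, hi=60):
    """Equipercentile equating: map score x on form X to the Y scale."""
    p = cdf_X(x)
    return brentq(lambda y: cdf_Y(y) - p, lo, hi)

# Hypothetical observed-score distributions on two 40-item forms
scores = np.arange(41.0)
pX = np.exp(-0.5 * ((scores - 22) / 6)**2); pX /= pX.sum()
pY = np.exp(-0.5 * ((scores - 25) / 7)**2); pY /= pY.sum()
FX, FY = continuize(scores, pX, h=2.0), continuize(scores, pY, h=2.0)
print(equate(20.0, FX, FY))   # a form-X score of 20 on the form-Y scale
```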
10. Sensitivity of Econometric Estimates to Item Non-response Adjustment
- Author
- Sanchez, Juana
- Abstract
Non-response in establishment surveys is an important problem that can bias the results of statistical analysis. The bias can be considerable when the survey data are used for multivariate analyses involving several variables with different response rates, which can reduce the effective sample size considerably. Fixing the non-response, however, could potentially cause other econometric problems. This paper uses an operational approach to analyze the sensitivity of multivariate analysis results to multiple imputation procedures applied to the U.S. Census Bureau/NSF's Business Research and Development and Innovation Survey (BRDIS) to address item non-response. In scenario 1, multiple imputation is applied using data from all survey units and periods for which there are data. Scenario 2 involves separate imputation for units that have participated in the survey only once and for those that repeat. Scenario 3 involves no imputation. Sensitivity analysis is done by comparing the model estimates and their standard errors, along with measures of the additional uncertainty created by the imputation procedure. In all cases, unit non-response is addressed by using the adjusted weights that accompany the BRDIS microdata. The results suggest that substantial benefit may be derived from multiple imputation, not only because it helps provide more accurate measures of the uncertainty due to item non-response but also because it provides alternative estimates of effect sizes and population totals.
- Published
- 2016
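Multiple-imputation results such as these are conventionally pooled with Rubin's rules: the total variance of a pooled estimate is the mean within-imputation variance plus (1 + 1/M) times the between-imputation variance, and the share of the total due to the between component gauges the extra uncertainty created by imputation. A small sketch with invented numbers; the paper's own pooling details are not reproduced:

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Combine point estimates and variances from M multiply imputed
    data sets: total variance = within + (1 + 1/M) * between."""
    q = np.asarray(estimates, float)
    u = np.asarray(variances, float)
    M = len(q)
    qbar = q.mean()                        # pooled point estimate
    within = u.mean()                      # average within-imputation variance
    between = q.var(ddof=1)                # between-imputation variance
    total = within + (1 + 1 / M) * between
    fmi = (1 + 1 / M) * between / total    # rough fraction of missing info
    return qbar, np.sqrt(total), fmi

# Hypothetical coefficient estimates and squared SEs from M = 5 imputations
print(rubins_rules([0.52, 0.48, 0.55, 0.50, 0.47],
                   [0.010, 0.011, 0.009, 0.010, 0.012]))
```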
11. The Buckley-James estimator and induced smoothing
- Author
- Wang, You-Gan, Zhao, Yudong, and Fu, Liya
- Abstract
The Buckley-James (BJ) estimator is known to be consistent and efficient for a linear regression model with censored data. However, its application in practice is handicapped by the lack of a reliable numerical algorithm for finding the solution. For a given data set, the iterative approach may yield multiple solutions, or no solution at all. To alleviate this problem, we modify the induced smoothing approach originally proposed by Brown & Wang in 2005. The resulting estimating functions become smooth, eliminating the tendency of the iterative procedure to oscillate between different parameter values. In addition to facilitating point estimation, the smoothing approach enables easy evaluation of the projection matrix, thus providing a means of calculating standard errors. Extensive simulation studies were carried out to evaluate the performance of the different estimators. In general, smoothing greatly alleviates the numerical issues that arise in the estimation process. In particular, the one-step smoothing estimator eliminates non-convergence problems and performs similarly to full iteration until convergence. The proposed estimation procedure is illustrated using a dataset from a multiple myeloma study.
- Published
- 2016
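The induced-smoothing idea used here (and in result 19 below) is to replace the step-function indicator inside a rank-based estimating equation with a normal CDF whose scale is driven by the parameter covariance, making the equation smooth in beta. A sketch for a Gehan-type estimating function under censoring; the function name, the choice Sigma = I/n, and the simulated data are assumptions for illustration, not the authors' implementation:

```python
import numpy as np
from scipy.stats import norm

def gehan_smooth(beta, logT, delta, X, Sigma):
    """Induced-smoothed Gehan estimating function for the AFT model:
    I(e_i <= e_j) is replaced by Phi((e_j - e_i) / r_ij), where
    r_ij^2 = (x_i - x_j)' Sigma (x_i - x_j)."""
    e = logT - X @ beta
    n = len(e)
    U = np.zeros(X.shape[1])
    for i in range(n):
        if not delta[i]:
            continue                       # censored: no comparison made
        dX = X[i] - X                      # (n, p) pairwise differences
        r = np.sqrt(np.einsum('ij,jk,ik->i', dX, Sigma, dX).clip(1e-12))
        U += (norm.cdf((e - e[i]) / r) * dX.T).sum(axis=1)
    return U / n**2

# Hypothetical censored AFT data with p = 2 covariates
rng = np.random.default_rng(4)
n = 100
X = rng.normal(size=(n, 2))
logT = X @ np.array([1.0, -0.5]) + rng.gumbel(size=n)
C = rng.normal(2.0, 1.0, size=n)
delta = logT <= C                          # event indicator
logT = np.minimum(logT, C)
print(gehan_smooth(np.array([1.0, -0.5]), logT, delta, X, np.eye(2) / n))
```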
13. Issues on the estimation of latent variable and latent class models
- Author
- Pennoni, Fulvia
- Abstract
This book comprises several research problems that have in common the presence of latent variables. In the first part, undirected and directed graphical models are considered, and in the case of Gaussian continuous variables the author shows that the specification of a complex multivariate distribution through univariate regressions induced by a Directed Acyclic Graph (DAG) can be regarded as a simplification. The Expectation-Maximization algorithm is considered for maximum likelihood estimation of the model parameters, and the author illustrates a method for obtaining an explicit formula for the observed information matrix using the missing information principle. An essential background on the latent class model is given, as well as its extension to the study of latent changes over time. The Hidden Markov model is presented, consisting of hidden states and observed variables, both varying over time. The latent class cluster model is extended by proposing a latent model that also incorporates the longitudinal structure of the data using a local likelihood approach. Examples illustrate the use of the models in criminology and education. A detailed bibliography is provided.
- Published
- 2014
15. Effect of merging levels of locomotion scores for dairy cows on intra- and interrater reliability and agreement
- Author
- Schlageter-Tello, A., Bokkers, E.A.M., Groot Koerkamp, P.W.G., van Hertem, T., Viazzi, S., Romanini, C.E.B., Halachmi, I., Bahr, C., Berckmans, D., and Lokhorst, K.
- Abstract
Locomotion scores are used for lameness detection in dairy cows. In research, locomotion scores with 5 levels are used most often. Analysis of scores, however, is done after transformation of the original 5-level scale into a 4-, 3-, or 2-level scale to improve reliability and agreement. The objective of this study was to evaluate different ways of merging levels to optimize resolution, reliability, and agreement of locomotion scores for dairy cows. Locomotion scoring was done using a 5-level scale and 10 experienced raters in 2 different scoring sessions from videos of 58 cows. Intra- and interrater reliability and agreement were calculated as the weighted kappa coefficient (κw) and the percentage of agreement (PA), respectively. Overall intra- and interrater reliability and agreement and specific intra- and interrater agreement were determined for the 5-level scale and after transformation into 4-, 3-, and 2-level scales by merging different combinations of adjacent levels. Intrarater reliability (κw) ranged from 0.63 to 0.86, whereas intrarater agreement (PA) ranged from 60.3 to 82.8% for the 5-level scale. Interrater κw = 0.28 to 0.84 and interrater PA = 22.6 to 81.8% for the 5-level scale. The specific intrarater agreement was 76.4% for locomotion level 1, 68.5% for level 2, 65% for level 3, 77.2% for level 4, and 80% for level 5. Specific interrater agreement was 64.7% for locomotion level 1, 57.5% for level 2, 50.8% for level 3, 60% for level 4, and 45.2% for level 5. Specific intra- and interrater agreement suggested that levels 2 and 3 were more difficult to score consistently compared with the other levels in the 5-level scale. The acceptance threshold for overall intra- and interrater reliability (κw and κ ≥ 0.6) and agreement (PA ≥ 75%) and specific intra- and interrater agreement (≥75% for all levels within the locomotion score) was exceeded only for the 2-level scale when the 5 levels were merged as (12)(345) or (123)(45). In conclusion, when locomotion scoring is perf…
- Published
- 2014
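Weighted kappa and percentage agreement for the original and merged scales can be computed directly; a sketch using scikit-learn's cohen_kappa_score on invented rater data (the 15-cow scores below are hypothetical; the study's 58-cow videos and 10 raters are not reproduced):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical locomotion scores (1-5) from two raters on the same cows
rater1 = np.array([1, 2, 3, 3, 4, 5, 2, 1, 3, 4, 5, 2, 3, 4, 1])
rater2 = np.array([1, 3, 3, 2, 4, 5, 2, 2, 3, 5, 5, 3, 3, 4, 1])

def merged(scores, groups):
    """Map original levels to merged levels, e.g. (123)(45): {1,2,3}->0, {4,5}->1."""
    lookup = {lvl: g for g, lvls in enumerate(groups) for lvl in lvls}
    return np.array([lookup[s] for s in scores])

for groups in [[(1,), (2,), (3,), (4,), (5,)],     # original 5-level scale
               [(1, 2), (3, 4, 5)],                # 2-level merge (12)(345)
               [(1, 2, 3), (4, 5)]]:               # 2-level merge (123)(45)
    a, b = merged(rater1, groups), merged(rater2, groups)
    kw = cohen_kappa_score(a, b, weights='linear')  # weighted kappa
    pa = 100 * (a == b).mean()                      # percentage agreement
    print(groups, f"kappa_w={kw:.2f}  PA={pa:.1f}%")
```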
16. Calculating errors for measures derived from choice modelling estimates
- Author
- Daly, Andrew, Hess, Stephane, and de Jong, Gerard
- Abstract
The calibration of choice models produces a set of parameter estimates and an associated covariance matrix, usually based on maximum likelihood estimation. However, in many cases the values of interest to analysts are in fact functions of these parameters rather than the parameters themselves. It is thus also crucial to have a measure of variance for these derived quantities, and it is preferable that this can be guaranteed to have the maximum likelihood properties, such as minimum variance. While the calculation of standard errors using the Delta method has been described for a number of such measures in the literature, including the ratio of two parameters, these results are often seen as approximate calculations and do not claim maximum likelihood properties. In this paper, we show that many measures commonly used in transport studies and elsewhere are themselves maximum likelihood estimates and that the standard errors are thus exact, a point we illustrate for a substantial number of commonly used functions. We also discuss less appropriate methods, notably highlighting the issues with using simulation to obtain the variance of a function of estimates.
- Published
- 2012
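The core calculation here is the delta method: for a function g of the estimated parameters, Var(g) is approximated by grad(g)' Sigma grad(g), with Sigma the ML covariance matrix. A sketch for the ratio of two parameters (e.g. a value of time: time coefficient over cost coefficient); the coefficient and covariance values are invented for illustration:

```python
import numpy as np

def ratio_se(b1, b2, cov):
    """Delta-method standard error of the ratio b1/b2, given the 2x2
    covariance block of (b1, b2) from maximum likelihood estimation."""
    g = np.array([1.0 / b2, -b1 / b2**2])   # gradient of b1/b2 w.r.t. (b1, b2)
    return b1 / b2, np.sqrt(g @ cov @ g)

# Hypothetical time and cost coefficients and their covariance block
b_time, b_cost = -0.045, -0.0030
cov = np.array([[2.5e-5, 1.0e-7],
                [1.0e-7, 4.0e-8]])
vot, se = ratio_se(b_time, b_cost, cov)
print(f"value of time = {vot:.2f}, SE = {se:.2f}")
```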
17. Simple means to improve the interpretability of regression coefficients
- Author
- Schielzeth, Holger
- Abstract
1. Linear regression models are an important statistical tool in evolutionary and ecological studies. Unfortunately, these models often yield some uninterpretable estimates and hypothesis tests, especially when models contain interactions or polynomial terms. Furthermore, the standard errors for treatment groups, although often of interest for inclusion in a publication, are not directly available in a standard linear model. 2. Centring and standardization of input variables are simple means to improve the interpretability of regression coefficients. Further, refitting the model with a slightly modified model structure allows extracting the appropriate standard errors for treatment groups directly from the model. 3. Centring will make main effects biologically interpretable even when involved in interactions and thus avoids the potential misinterpretation of main effects. This also applies to the estimation of linear effects in the presence of polynomials. Categorical input variables can also be centred, and this sometimes assists interpretation. 4. Standardization (z-transformation) of input variables results in the estimation of standardized slopes or standardized partial regression coefficients. Standardized slopes are comparable in magnitude within models as well as between studies. They have some advantages over partial correlation coefficients and are often the more interesting standardized effect size. 5. The thoughtful removal of intercepts or main effects allows extracting treatment means or treatment slopes and their appropriate standard errors directly from a linear model. This provides a simple alternative to the more complicated calculation of standard errors from contrasts and main effects. 6. The simple methods presented here put the focus on parameter estimation (point estimates as well as confidence intervals) rather than on significance thresholds. They allow fitting complex but meaningful models that can be concisely presented and interpreted.
- Published
- 2010
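Two of the devices described in this abstract, centring an input variable and refitting without an intercept so that treatment means and their standard errors can be read straight off the model, can be shown in a few lines of statsmodels. The data frame, group labels, and effect sizes below are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "group": np.repeat(["A", "B", "C"], 30),
    "x": rng.normal(10, 3, 90),
})
df["y"] = (df["group"].map({"A": 2.0, "B": 3.5, "C": 1.0})
           + 0.5 * df["x"] + rng.normal(0, 1, 90))

# Centring the covariate makes group estimates interpretable as
# predictions at the mean of x rather than at x = 0
df["x_c"] = df["x"] - df["x"].mean()

# Removing the intercept re-parameterises the model so each group gets
# its own coefficient: treatment means with their own standard errors
fit = smf.ols("y ~ 0 + C(group) + x_c", data=df).fit()
print(fit.params)   # adjusted group means and the slope
print(fit.bse)      # standard errors for each treatment group directly
```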
18. Conceptual Design Prediction of the Buffet Envelope of Transport Aircraft
- Author
- Bérard, Adrien and Isikveren, Askin
- Abstract
This paper describes a methodology that inexpensively predicts the buffet envelope of new transport airplane wing geometries at the conceptual design stage. The parameters that demonstrate a strong functional sensitivity to buffet onset were identified and their relative effect was quantified. To estimate the buffet envelope of any target aircraft geometry, the method uses fractional change transformations in concert with a generic reference buffet onset curve provided by the authors or the buffet onset of a known seed airplane. The explicit design variables required to perform buffet onset prediction are those describing the wing planform and the wingtip section. The mutually exclusive nature of the method's analytical construct provides considerable freedom in deciding the scope of free-design-variable complexity. The method has been shown to be adequately robust and flexible enough to deal with a wide variety of transport airplane designs. For the example transport airplanes considered, irrespective of aircraft morphology and en route flight phase, the relative error in prediction was found to be mostly within +/-5.0%, with occasional excursions not exceeding a +/-9.0% bandwidth. The standard error of estimate for the lift coefficient at 1.0 g buffet onset at a given Mach number was calculated to be 0.0262.
- Published
- 2009
19. Induced smoothing for rank regression with censored survival times
- Author
- Brown, B. and Wang, You-Gan
- Abstract
Adaptations of weighted rank regression to the accelerated failure time model for censored survival data have been successful in yielding asymptotically normal estimates and flexible weighting schemes to increase statistical efficiency. However, only for one simple weighting scheme, Gehan or Wilcoxon weights, are the estimating equations guaranteed to be monotone in the parameter components, and even in this case they are step functions, requiring the equivalent of linear programming for computation. The lack of smoothness makes standard error or covariance matrix estimation even more difficult. An induced smoothing technique has overcome these difficulties in various problems involving monotone but pure-jump estimating equations, including conventional rank regression. The present paper applies induced smoothing to the Gehan-Wilcoxon weighted rank regression for the accelerated failure time model, for the more difficult case of survival time data subject to censoring, where the inapplicability of permutation arguments necessitates a new method of estimating the null variance of the estimating functions. Smooth monotone parameter estimation and rapid, reliable standard error or covariance matrix estimation are obtained.
- Published
- 2007
20. Socioeconomic inequalities in health: Measurement, computation, and statistical inference
- Author
- Kakwani, N., Wagstaff, A. (Adam), and Doorslaer, E.K.A. (Eddy) van
- Abstract
This paper clarifies the relationship between two widely used indices of health inequality and explains why these are superior to other indices used in the literature. It also develops asymptotic estimators for their variances and clarifies the role that demographic standardization plays in the analysis of socioeconomic inequalities in health. Empirical illustrations are presented for Dutch health survey data.
- Published
- 1997
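One of the indices in this literature, the concentration index, has a convenient covariance form: C = 2 cov(h, r) / mean(h), with r the fractional rank in the socioeconomic distribution. A sketch on simulated income-graded health data; the estimator shown is the plain covariance formula, not the paper's asymptotic variance estimator:

```python
import numpy as np

def concentration_index(health, ses):
    """Concentration index via C = 2 cov(h, r) / mean(h), where r is the
    fractional rank in the socioeconomic (e.g. income) distribution."""
    order = np.argsort(ses)
    h = np.asarray(health, float)[order]
    n = len(h)
    r = (np.arange(1, n + 1) - 0.5) / n     # fractional ranks
    return 2 * np.cov(h, r, ddof=0)[0, 1] / h.mean()

rng = np.random.default_rng(6)
income = rng.lognormal(3, 1, 1000)
health = 60 + 5 * np.log(income) + rng.normal(0, 5, 1000)  # pro-rich gradient
print(concentration_index(health, income))   # > 0: inequality favouring the rich
```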
21. Item Response Theory: Some Standard Errors
- Author
-
MCFANN GRAY AND ASSOCIATES INC SAN ANTONIO TX, Thissen, David, Wainer, Howard, MCFANN GRAY AND ASSOCIATES INC SAN ANTONIO TX, Thissen, David, and Wainer, Howard
- Abstract
The mathematics required to calculate the asymptotic standard errors of the parameters of three commonly used logistic item response models is described and used to generate values for some common situations. It is shown that the maximum likelihood estimation of a lower asymptote reduces the accuracy of estimation of a location parameter. If one requires accurate estimates of location parameters (e.g., for purposes of test linking/equating or for computerized adaptive testing), the sample sizes required for acceptable accuracy may be so large as to make maximum likelihood estimation infeasible in most applications. It is suggested that other estimation methods be used if the three-parameter model is applied in these situations.
- Published
- 1983
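The asymptotic standard errors discussed in result 21 come from inverting the expected Fisher information for the item parameters. A sketch for a single two-parameter logistic item with ability integrated over a standard normal distribution; the quadrature grid and the specific a, b, N values are illustrative, and the three-parameter model the paper also treats is not shown:

```python
import numpy as np
from scipy.stats import norm

def item_se_2pl(a, b, N, grid=np.linspace(-6, 6, 601)):
    """Asymptotic SEs of 2PL item parameters (a, b) from the expected
    Fisher information; P(theta) = 1 / (1 + exp(-a (theta - b)))."""
    w = norm.pdf(grid)
    w /= w.sum()                           # quadrature weights over ability
    P = 1 / (1 + np.exp(-a * (grid - b)))
    PQ = P * (1 - P)
    I = np.zeros((2, 2))
    I[0, 0] = N * np.sum(w * PQ * (grid - b)**2)    # d logit/da = theta - b
    I[0, 1] = I[1, 0] = N * np.sum(w * PQ * -a * (grid - b))
    I[1, 1] = N * np.sum(w * PQ * a**2)             # d logit/db = -a
    return np.sqrt(np.diag(np.linalg.inv(I)))

# SEs shrink like 1/sqrt(N); compare a medium vs. a hard item location
for N in (500, 5000):
    print(N, item_se_2pl(a=1.2, b=0.0, N=N), item_se_2pl(a=1.2, b=2.0, N=N))
```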
22. Propellant Surveillance Report LGM-30 F & G Stage I. Phase G. Series VIII, TP-H1011.
- Author
-
OGDEN AIR LOGISTICS CENTER HILL AFB UT PROPELLANT LAB SECTION, Thompson,John A, OGDEN AIR LOGISTICS CENTER HILL AFB UT PROPELLANT LAB SECTION, and Thompson,John A
- Abstract
This report contains propellant test results from cartons of TP-H1011 bulk propellant representing LGM-30 F and G First Stage Minuteman motors. The report uses a statistical approach to analyze the bulk carton propellant data. Testing was accomplished in accordance with MMWRM Project M04046C-WNL01529. The data from this test period are combined with data from previous testing and entered into the G085 computer for storage and regression analysis. From the statistical analysis of all data tested to date (fourteen years for F and G), significant degradation of the propellant does not appear likely for at least two years past the oldest data point. Each point on the regression plot represents the mean of all samples at that particular age. The number of samples at each point is indicated on the sample size summary sheet on the page accompanying each regression plot or group of regression plots. The data range at any age can be found by suitable inquiry of the G085 system.
- Published
- 1980
23. A study of the variability of estimates of heritability and their standard errors derived by paternal half-sib techniques using simulated data : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Animal Science at Massey University
- Author
- Rendel, John Martin
- Abstract
Data sets were generated that varied in the number of sires (20, 50, 100, 150, and 200) and progeny per sire (means of 20, 50, 70, and 100). These data sets were generated for balanced data and, in an effort to approximate actual flock data, for unbalanced data based on a normal distribution of progeny per sire with standard deviations of 2, 5, and 7. In addition, data sets were generated with standard deviations of 14, 25, and 29 progeny per sire, but only for the data-set size of 100 sires with a mean of 100 progeny per sire. Numbers of progeny per sire and numbers of sires from 6 actual flocks were also used to generate data sets. The sets were generated to conform with a 1-way random model, with the sire variance set at 0.6783 and the error variance at 11.0106, giving a paternal half-sib heritability of 0.2321. Each combination of number of sires and progeny per sire was generated 100 times (i.e., 100 replicates) at each level of unbalance. Sire and error variances and heritabilities, as well as their standard errors, were estimated for each replicate using Henderson's Method 1 (HM), Maximum Likelihood (ML), and Restricted Maximum Likelihood (REML). There was good agreement between the population heritabilities and sire and error variances and the corresponding means of the replicates that made up each data set. There was also little difference between the results of the 3 methods of estimating the variance components. The Mean Squared Error (MSE) was similar for each method except for the data sets based on the flocks, where the MSE of the sire variances for HM was larger than those for ML and REML. The MSE was largest for data sets consisting of 20 sires and 50 sires with a mean of 20 progeny per sire. The standard errors of the heritability and sire and error variances appear to be good indicators of the variation of estimates within data sets regardless of the level of unbalance or method of estimation. The differences between heritability estimates from 31 flocks for weani…
- Published
- 1989
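The thesis design can be mimicked directly: simulate a balanced one-way sire model at the stated variances (0.6783 and 11.0106, i.e. h2 = 0.2321), estimate h2 per replicate by half-sib ANOVA, and take the empirical spread across 100 replicates as the standard error. A sketch of the balanced Henderson's Method 1 case only; unbalanced designs and ML/REML, which the thesis also covers, are not shown:

```python
import numpy as np

def half_sib_h2(y, sire, k):
    """Paternal half-sib heritability from a balanced one-way ANOVA
    (Henderson's Method 1): sigma2_s = (MSB - MSW)/k, h2 = 4 sigma2_s / total."""
    groups = [y[sire == s] for s in np.unique(sire)]
    means = np.array([g.mean() for g in groups])
    msb = k * means.var(ddof=1)                       # between-sire mean square
    msw = np.mean([g.var(ddof=1) for g in groups])    # within-sire mean square
    s2s = (msb - msw) / k
    return 4 * s2s / (s2s + msw)

# Thesis parameters: sigma2_s = 0.6783, sigma2_e = 11.0106 (h2 = 0.2321)
rng = np.random.default_rng(7)
n_sires, k = 100, 50
h2 = []
for _ in range(100):                                  # 100 replicates, as in the thesis
    sire = np.repeat(np.arange(n_sires), k)
    y = (rng.normal(0, np.sqrt(0.6783), n_sires)[sire]
         + rng.normal(0, np.sqrt(11.0106), n_sires * k))
    h2.append(half_sib_h2(y, sire, k))
print(np.mean(h2), np.std(h2, ddof=1))                # mean and empirical SE of h2
```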