Search Results
16 results for "Oberski, Daniel"
2. Comparability of Survey Measurements
- Author
- Oberski, Daniel L. and Gideon, Lior (editor)
- Published
- 2012
- Full Text
- View/download PDF
3. Questionnaire Science
- Author
- Oberski, Daniel, Atkeson, Lonna Rae (book editor), and Alvarez, R. Michael (book editor)
- Published
- 2018
- Full Text
- View/download PDF
4. Estimating Measurement Error in Longitudinal Data Using the Longitudinal MultiTrait MultiError Approach.
- Author
- Cernat, Alexandru and Oberski, Daniel
- Subjects
- Panel analysis; Measurement errors; Social desirability; Social impact; Experimental design; Social science research; Latent variables
- Abstract
Longitudinal data make it possible to investigate change over time and its causes. While this type of data is becoming more popular, little is known about the measurement errors involved, their stability over time, and how they bias estimates of change. In this paper we apply a new method, the MultiTrait MultiError approach, to estimate multiple types of errors concurrently in longitudinal data. This method uses a combination of experimental design and latent variable modelling to disentangle random error, social desirability, acquiescence, and method effects. Using data from the Understanding Society Innovation Panel in the UK, we investigate the stability of these measurement errors across three waves. Results show that while social desirability is very stable over time, method effects are not. Implications for social research are discussed.
- Published
- 2023
- Full Text
- View/download PDF
5. Estimating stochastic survey response errors using the multitrait‐multierror model
- Author
- Cernat, Alexandru and Oberski, Daniel L. (Leerstoel Oberski; Methodology and statistics for the behavioural and social sciences)
- Subjects
- Multitrait; Attitudes towards immigrants; Latent variable modelling; Measurement error; Design of experiments (DoE); Statistics and Probability; Economics and Econometrics; Statistics, Probability and Uncertainty; Social Sciences (miscellaneous)
- Abstract
Surveys are well known to contain response errors of several types simultaneously, including acquiescence, social desirability, common method variance and random error. Nevertheless, most methods developed to estimate and correct for such errors consider only a single error source at a time. Consequently, estimation of response errors is inefficient, their relative importance is unknown, and the optimal question format may not be discoverable. To remedy this situation, we demonstrate how multiple types of errors can be estimated concurrently with the recently introduced 'multitrait-multierror' (MTME) approach. MTME combines the theory of design of experiments with latent variable modelling to estimate the response error variances of different error types simultaneously. This allows researchers to evaluate which errors are most impactful, and aids in the discovery of optimal question formats. We apply this approach using representative data from the United Kingdom to six survey items measuring attitudes towards immigrants that are commonly used across public opinion studies.
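The logic of such designs can be sketched with a small simulation. This is a generic multitrait-multimethod-style variance decomposition with invented variance components, not the authors' actual MTME model: asking several traits with several question formats ("methods") lets covariances isolate trait, method, and random-error variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # simulated respondents

# Hypothetical variance components (invented for illustration)
var_t = 1.00   # trait (true attitude) variance
var_m1 = 0.30  # systematic error variance of question format ("method") 1
var_m2 = 0.10  # systematic error variance of question format 2
var_e = 0.50   # random error variance

t1 = rng.normal(0, np.sqrt(var_t), n)   # two independent traits
t2 = rng.normal(0, np.sqrt(var_t), n)
m1 = rng.normal(0, np.sqrt(var_m1), n)  # person-level method effects, shared
m2 = rng.normal(0, np.sqrt(var_m2), n)  # by all items asked the same way

def item(trait, method):
    """One observed survey item: trait + systematic method effect + noise."""
    return trait + method + rng.normal(0, np.sqrt(var_e), n)

y11, y21 = item(t1, m1), item(t2, m1)  # both traits asked with method 1
y12 = item(t1, m2)                     # trait 1 asked again with method 2

# Same method, different traits: the covariance isolates method variance.
# Same trait, different methods: the covariance isolates trait variance.
est_method1 = np.cov(y11, y21)[0, 1]
est_trait1 = np.cov(y11, y12)[0, 1]
est_random = np.var(y11, ddof=1) - est_trait1 - est_method1
print(est_trait1, est_method1, est_random)
```

With independent traits, the three moment estimates recover the trait, method, and random components of a single item's variance; the full MTME model does this within one latent variable model and for more error types.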
- Published
- 2022
6. Modelling error dependence in categorical longitudinal data
- Author
- Pavlopoulos, Dimitris, Pankowska, Paulina, Bakker, Bart, Oberski, Daniel, Cernat, Alexandru, and Sakshaug, Joseph W. (Methodology and statistics for the behavioural and social sciences; Leerstoel Klugkist)
- Subjects
- Measurement error; Hidden Markov model; Local independence; Mathematics (all); Taverne
- Abstract
Hidden Markov models (HMMs) offer an attractive way of accounting for and correcting measurement error in longitudinal data, as they do not require a 'gold standard' data source as a benchmark. However, while the standard HMM assumes the errors to be independent or random, some common situations in survey and register data cause measurement error to be systematic. HMMs can correct for systematic error as well if the local independence assumption is relaxed. In this chapter, we present several (mixed) HMMs that relax this assumption with the use of two independent indicators for the variable of interest. Finally, we illustrate some of these HMMs with an example of employment mobility, using linked survey-register data from the Netherlands.
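The local independence assumption being relaxed here can be made concrete with a toy simulation of the standard setting: a hidden two-state chain observed by two error-prone indicators whose errors are independent given the truth. All parameter values are invented, and this is the baseline assumption, not the authors' mixed-HMM specification.

```python
import numpy as np

rng = np.random.default_rng(1)
n, waves = 100_000, 3  # respondents and panel waves (invented sizes)

# Hypothetical two-state labour-market chain: 0 = employed, 1 = not employed
P = np.array([[0.90, 0.10],   # rows: current state; columns: next state
              [0.20, 0.80]])
E = np.array([[0.95, 0.05],   # rows: true state; columns: observed category
              [0.10, 0.90]])  # e.g. P(observe 1 | truly 0) = 0.05

# Simulate the hidden true states over the waves
true = np.empty((n, waves), dtype=int)
true[:, 0] = (rng.random(n) < 0.4).astype(int)
for t in range(1, waves):
    true[:, t] = (rng.random(n) < P[true[:, t - 1], 1]).astype(int)

def observe(states):
    # Local independence: each indicator errs independently given the truth,
    # so survey and register errors are uncorrelated within a person-wave.
    return (rng.random(states.shape) < E[states, 1]).astype(int)

survey, register = observe(true), observe(true)

# Conditioning on the true state, the two indicators' error rates multiply:
mask = true == 0
p_s, p_r = survey[mask].mean(), register[mask].mean()
joint = (survey[mask] * register[mask]).mean()  # both wrong at once
print(p_s, p_r, joint)  # joint should be close to p_s * p_r
```

Systematic error in real linked data breaks exactly this factorisation (both sources err on the same person-waves more often than chance), which is why the chapter's models relax it.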
- Published
- 2021
7. Achieving Fair Inference Using Error-Prone Outcomes
- Author
- Boeschoten, Laura, van Kesteren, Erik-Jan, Bagheri, Ayoub, and Oberski, Daniel L. (Leerstoel Klugkist; Methodology and statistics for the behavioural and social sciences; Leerstoel Heijden)
- Subjects
- Fair machine learning; Algorithmic bias; Measurement error; Measurement invariance; Latent variable model; Error analysis; Inference; Machine learning; Artificial intelligence; Statistics and Probability; Computer Science Applications
- Abstract
Recently, an increasing amount of research has focused on methods to assess and account for fairness criteria when predicting ground-truth targets in supervised learning. However, recent literature has shown that prediction unfairness can potentially arise due to measurement error when target labels are error-prone. In this study we demonstrate that existing methods to assess and calibrate fairness criteria do not extend to the true target variable of interest when an error-prone proxy target is used. As a solution to this problem, we suggest a framework that combines two existing fields of research: fair ML methods, such as those found in the counterfactual fairness literature, and measurement models found in the statistical literature. First, we discuss these approaches and how they can be combined to form our framework. We then show that, in a healthcare decision problem, a latent variable model that accounts for measurement error removes the unfairness detected previously.
- Published
- 2021
8. Predicting measurement error variance in social surveys
- Author
- Oberski, Daniel L. and DeCastellarnau, Anna
- Subjects
- Measurement error; Machine learning; Prediction; SQP; Multitrait-multimethod
- Abstract
Social science commonly studies relationships among variables by employing survey questions. Answers to these questions will contain some degree of measurement error, distorting the relationships of interest. Such distortions can be removed by standard statistical methods, provided these are supplied with knowledge of a question's measurement error variance. However, acquiring this information routinely necessitates additional experimentation, which is infeasible in practice. We use three decades' worth of survey experiments combined with machine learning methods to show that survey measurement error variance can be predicted from the way a question was asked. By predicting experimentally obtained estimates of survey measurement error variance from question characteristics, we enable researchers to obtain estimates of the extent of measurement error in a survey question without requiring additional data collection. Our results suggest that only some commonly accepted best practices in survey design have a noticeable impact on study quality, and that predicting measurement error variance is a useful approach to removing this impact in future social surveys.
Acknowledgements: The authors thank Willem Saris for support and comments; the American Association for Public Opinion Research (AAPOR) for its support of a previous version of this work; and Wiebke Weber, Melanie Revilla and Diana Zavala-Rojas for comments on an earlier version of this manuscript.
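The core idea, predicting error variance from question characteristics, can be sketched with a synthetic example. The characteristics, coefficients, and simple linear model below are invented for illustration; they are not the SQP predictors or the machine learning models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_q = 5_000  # hypothetical bank of experimentally evaluated questions

# Invented question characteristics
n_cat = rng.integers(2, 12, n_q)     # number of answer categories
labelled = rng.integers(0, 2, n_q)   # 1 if all categories are labelled
length = rng.integers(5, 40, n_q)    # question length in words

# Invented data-generating relation for the log error variance
log_var = 0.5 - 0.04 * n_cat - 0.30 * labelled + 0.01 * length \
          + rng.normal(0, 0.10, n_q)

# Fit a linear predictor by least squares (a stand-in for the ML models)
X = np.column_stack([np.ones(n_q), n_cat, labelled, length])
coef, *_ = np.linalg.lstsq(X, log_var, rcond=None)

# Predict the error variance of a new, unseen question:
# 7 categories, fully labelled, 20 words (a made-up example)
new_q = np.array([1.0, 7.0, 1.0, 20.0])
predicted_var = float(np.exp(new_q @ coef))
```

Once such a predictor is trained on experimentally obtained error variances, a new question's predicted variance can feed directly into the standard correction methods the abstract mentions, without new data collection.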
- Published
- 2021
9. Fair inference on error-prone outcomes
- Author
- Boeschoten, Laura, van Kesteren, Erik-Jan, Bagheri, Ayoub, and Oberski, Daniel L.
- Subjects
- Fairness; Fair machine learning; Algorithmic bias; Measurement error; Measurement invariance; Differential item functioning; Item bias; Latent variable model; Machine Learning (cs.LG; stat.ML); Computers and Society (cs.CY)
- Abstract
Fair inference in supervised learning is an important and active area of research, yielding a range of useful methods to assess and account for fairness criteria when predicting ground-truth targets. As shown in recent work, however, when target labels are error-prone, prediction unfairness can arise from measurement error. In this paper, we show that, when an error-prone proxy target is used, existing methods to assess and calibrate fairness criteria do not extend to the true target variable of interest. To remedy this problem, we suggest a framework resulting from the combination of two existing literatures: fair ML methods, such as those found in the counterfactual fairness literature, on the one hand, and measurement models found in the statistical literature on the other. We discuss these approaches and the connection between them that results in our framework. In a healthcare decision problem, we find that using a latent variable model to account for measurement error removes the unfairness detected previously.
Comment: Online supplementary code is available at https://dx.doi.org/10.5281/zenodo.3708150
- Published
- 2020
10. Dependent interviewing: A remedy or a curse for measurement error in surveys?
- Author
- Pankowska, Paulina, Bakker, Bart, Oberski, Daniel, and Pavlopoulos, Dimitris (Leerstoel Oberski; Methodology and statistics for the behavioural and social sciences; Leerstoel Klugkist; Sociology; Social Inequality and the Life Course (SILC); A-LAB)
- Subjects
- Measurement error; Dependent interviewing (DI); Hidden Markov model (HMM); Panel survey; Labour force survey; Register data; SDG 8 - Decent Work and Economic Growth; Education
- Abstract
Longitudinal surveys often rely on dependent interviewing (DI) to lower the levels of random measurement error in survey data and reduce the incidence of spurious change. DI refers to a data collection technique that incorporates information from prior interview rounds into subsequent waves. While this method is considered an effective remedy for random measurement error, it can also introduce more systematic errors, in particular when respondents are first reminded of their previously provided answer and then asked about their current status. The aim of this paper is to assess the impact of DI on measurement error in employment mobility. We take advantage of a unique experimental situation created by the roll-out of dependent interviewing in the Dutch Labour Force Survey (LFS). We apply hidden Markov modelling (HMM) to linked LFS and Employment Register (ER) data that cover a period before and after dependent interviewing was abolished, which in turn enables the modelling of systematic errors in the LFS data. Our results indicate that DI lowered the probability of obtaining random measurement error but had no significant effect on the systematic component of the error. The lack of a significant effect might be partially due to the fact that the probability of repeating the same error was extremely high at baseline (i.e., when using standard, independent interviewing); therefore the use of DI could not increase this probability any further.
Survey Research Methods, Vol 15 No 2 (2021)
- Published
- 2021
11. Questionnaire science
- Author
- Oberski, Daniel L., Atkeson, Lonna Rae, and Alvarez, R. Michael (Leerstoel Klugkist; Methodology and statistics for the behavioural and social sciences)
- Subjects
- Questionnaire design; Question crafting; Best practices; Survey methodology; Survey quality; Survey experiments; Measurement error; Reliability; Expert systems; Social Sciences (all); Taverne
- Abstract
Some textbooks on questionnaire design claim it is an art. That would make the criterion for a “good” question entirely subjective—a worrying conclusion given that surveys are often used to discover important facts about people. Are our discoveries about people also entirely subjective? This chapter shows that it is possible to study what a “good” or a “bad” question is by experimentation. There is already a body of scientific evidence on questionnaire design that can be taken into account when designing a questionnaire. The chapter reviews some of this evidence and shows how it can be used to the advantage of the survey researcher. Questionnaire science is far from complete. On the one hand, this means that some of our conclusions may still be more art than science. On the other, it means that we can agree on one aspect of questionnaire science: more of it is needed.
- Published
- 2018
12. Reconciliation of inconsistent data sources using hidden Markov models.
- Author
- Pankowska, Paulina, Pavlopoulos, Dimitris, Bakker, Bart, and Oberski, Daniel L.
- Abstract
This paper discusses how National Statistical Institutes (NSIs) can use hidden Markov models (HMMs) to produce consistent official statistics for categorical, longitudinal variables using inconsistent sources. Two main challenges are addressed. First, the reconciliation of inconsistent sources with multi-indicator HMMs requires linking the sources on the micro level; such linkage might introduce bias due to linkage error. Second, applying and estimating HMMs regularly is a complicated and expensive procedure, so it is preferable to re-use the error parameter estimates as a correction factor for a number of years. However, this might lead to biased structural estimates if measurement error changes over time or if the data collection process changes. Our results on these issues are highly encouraging and imply that the suggested method is appropriate for NSIs. Specifically, linkage error only leads to (substantial) bias in very extreme scenarios. Moreover, measurement error parameters are largely stable over time if no major changes in the data collection process occur. However, when a substantial change in the data collection process does occur, such as a switch from dependent (DI) to independent (INDI) interviewing, re-using measurement error estimates is not advisable.
- Published
- 2020
- Full Text
- View/download PDF
13. Reconciliation of inconsistent data sources by correction for measurement error: The feasibility of parameter re-use.
- Author
- Pankowska, Paulina, Bakker, Bart, Oberski, Daniel L., and Pavlopoulos, Dimitris
- Subjects
- Hidden Markov models; Data quality; Labor market; Measurement errors; Descriptive statistics
- Abstract
National Statistical Institutes (NSIs) often obtain information about a single variable from separate data sources. Administrative registers and surveys, in particular, often provide overlapping information on a range of phenomena of interest to official statistics. However, even though the two sources overlap, they both contain measurement error that prevents identical units from yielding identical values. Reconciling such separate data sources and providing accurate statistics, an important challenge for NSIs, is typically achieved through macro-integration. In this study we investigate the feasibility of an alternative method based on applying previously obtained results from a recently introduced extension of the hidden Markov model (HMM) to newer data. The method allows a reconciliation of separate error-prone data sources without having to repeat the full HMM analysis, provided the estimated measurement error processes are stable over time. As we find that these processes are indeed stable over time, the proposed method can be used effectively for macro-integration, to reconcile both first-order statistics – e.g. the size of temporary employment in the Netherlands – and second-order statistics – e.g. the amount of mobility from temporary to permanent employment.
- Published
- 2018
- Full Text
- View/download PDF
14. The Latent Class Multitrait-Multimethod Model.
- Author
- Oberski, Daniel L., Hagenaars, Jacques A. P., and Saris, Willem E.
- Subjects
- Multitrait-multimethod techniques; Estimation theory; Social surveys; Error analysis in mathematics; Parameter estimation
- Abstract
A latent class multitrait-multimethod (MTMM) model is proposed to estimate random and systematic measurement error in categorical survey questions while making fewer assumptions than have been made so far in such evaluations, allowing for possible extreme response behavior and other nonmonotone effects. The method is a combination of the MTMM research design of Campbell and Fiske (1959), the basic response model for survey questions of Saris and Andrews (1991), and the latent class factor model of Vermunt and Magidson (2004, pp. 227-230). The latent class MTMM model thus combines an existing design, model, and method to allow for the estimation of the degree to which, and the manner in which, survey questions are affected by systematic measurement error. Starting from a general form of the response function for a survey question, we present the MTMM experimental approach to identification of the response function's parameters. A "trait-method biplot" is introduced as a means of interpreting the estimates of systematic measurement error, whereas the quality of the questions can be evaluated by item information curves and the item information function. An experiment from the European Social Survey is analyzed and the results are discussed, yielding valuable insights into the functioning of a set of example questions on the role of women in society in two countries.
- Published
- 2015
- Full Text
- View/download PDF
15. Measurement Error Models With Uncertainty About the Error Variance.
- Author
- Oberski, Daniel L. and Satorra, Albert
- Subjects
- Measurement errors; Estimation bias; Regression analysis; Variances; Structural equation modeling; Computer software
- Abstract
It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing for consistent estimation of the relevant regression parameters. In many instances, however, embedding the measurement model into structural equation models is not possible because the model would not be identified. To correct for measurement error one then has no other recourse than to provide the exact values of the variances of the measurement error terms of the model, although in practice such variances cannot be ascertained exactly, but only estimated from an independent study. The usual approach so far has been to treat the estimated values of error variances as if they were known, exact population values in the subsequent structural equation modeling (SEM) analysis. In this article we show that fixing measurement error variance estimates as if they were true values can make the reported standard errors of the structural parameters of the model smaller than they should be. Inferences about the parameters of interest will be incorrect if the estimated nature of the variances is not taken into account. For general SEM, we derive an explicit expression that provides the terms to be added to the standard errors provided by the standard SEM software that treats the estimated variances as exact population values. Interestingly, we find there is a differential impact of the corrections to be added to the standard errors depending on which parameter of the model is estimated. The theoretical results are illustrated with simulations and also with empirical data on a typical SEM model.
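The phenomenon can be illustrated with a small Monte Carlo sketch, using a simple errors-in-variables regression rather than a full SEM; all sample sizes and parameter values are invented. Plugging in an error variance estimated from a small independent study leaves the corrected slope essentially unbiased, but its true variability exceeds that of the "known variance" case, so standard errors that ignore the estimation are too small.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_err, reps = 2_000, 100, 800   # invented sample sizes
beta, var_e = 0.5, 0.5             # true slope; measurement error variance

slopes_known, slopes_est = [], []
for _ in range(reps):
    x = rng.normal(0, 1, n)                   # true covariate (unobserved)
    y = beta * x + rng.normal(0, 1, n)        # outcome
    w = x + rng.normal(0, np.sqrt(var_e), n)  # error-prone measurement of x
    s_wy = np.cov(w, y)[0, 1]
    s_ww = np.var(w, ddof=1)
    # Errors-in-variables correction with the error variance treated as known...
    slopes_known.append(s_wy / (s_ww - var_e))
    # ...versus plugging in a variance estimated from a small independent study
    var_e_hat = np.var(rng.normal(0, np.sqrt(var_e), n_err), ddof=1)
    slopes_est.append(s_wy / (s_ww - var_e_hat))

sd_known, sd_est = np.std(slopes_known), np.std(slopes_est)
# The plug-in estimator varies more, which is exactly the extra term a
# correct standard error must include:
print(sd_known, sd_est)
```

The article derives this extra standard-error term analytically for general SEM; the simulation only shows that the naive "treat the estimate as exact" variability understates the truth.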
- Published
- 2013
- Full Text
- View/download PDF
16. Measurement error: estimation, correction, and analysis of implications
- Author
- Pankowska, P.K., Bakker, Bart, Pavlopoulos, Dimitris, and Oberski, Daniel (Social Inequality and the Life Course (SILC); Sociology)
- Subjects
- Measurement error; Hidden Markov models; Data quality; Latent variable models
- Published
- 2020
Discovery Service for Jio Institute Digital Library