1,115 results for "Antwortverhalten"
Search Results
2. Ein integratives Kommunikationsmodell nach Hargie und Kollegen
- Author
-
Röhner, Jessica, Schütz, Astrid, Kriz, Jürgen, Series Editor, Bühner, Markus, Advisory Editor, Goschke, Thomas, Advisory Editor, Lohaus, Arnold, Advisory Editor, Müsseler, Jochen, Advisory Editor, and Schütz, Astrid, Advisory Editor
- Published
- 2020
- Full Text
- View/download PDF
3. Endgerätespezifische und darstellungsabhängige Bearbeitungszeit- und Antwortverhaltensunterschiede in Webbefragungen
- Author
-
Schlosser, Stephan, Silber, Henning, Mays, Anja, editor, Dingelstedt, André, editor, Hambauer, Verena, editor, Schlosser, Stephan, editor, Berens, Florian, editor, Leibold, Jürgen, editor, and Höhne, Jan Karem, editor
- Published
- 2020
- Full Text
- View/download PDF
4. Antwortskalenrichtung und Umfragemodus
- Author
-
Krebs, Dagmar, Höhne, Jan Karem, Mays, Anja, editor, Dingelstedt, André, editor, Hambauer, Verena, editor, Schlosser, Stephan, editor, Berens, Florian, editor, Leibold, Jürgen, editor, and Höhne, Jan Karem, editor
- Published
- 2020
- Full Text
- View/download PDF
5. In der Mitte ist Platz für mehrere Meinungen : Vergleich von partiell- und vollverbalisierten Skalen mit unterschiedlicher Formulierung der Skalenmitte
- Author
-
Krebs, Dagmar, Faulbaum, Frank, Series Editor, Kley, Stefanie, Series Editor, Pfau-Effinger, Birgit, Series Editor, Schupp, Jürgen, Series Editor, Schröder, Jette, Series Editor, Wolf, Christof, Series Editor, Menold, Natalja, editor, and Wolbring, Tobias, editor
- Published
- 2019
- Full Text
- View/download PDF
6. Einflüsse unterschiedlicher Formen der Verbalisierung von Antwortskalen auf das Antwortverhalten von Befragungspersonen
- Author
-
Rosebrock, Antje, Schlosser, Stephan, Höhne, Jan Karem, Kühnel, Steffen M., Faulbaum, Frank, Series Editor, Kley, Stefanie, Series Editor, Pfau-Effinger, Birgit, Series Editor, Schupp, Jürgen, Series Editor, Schröder, Jette, Series Editor, Wolf, Christof, Series Editor, Menold, Natalja, editor, and Wolbring, Tobias, editor
- Published
- 2019
- Full Text
- View/download PDF
7. Liefern Jugendliche valide Informationen zum Bildungsstand ihrer Eltern in standardisierten Erhebungen? Befunde zu Schülerinnen und Schülern der 9. Jahrgangsstufe in Deutschland.
- Author
-
Hovestadt, Till and Schneider, Thorsten
- Abstract
Copyright of Zeitschrift für Erziehungswissenschaft is the property of Springer Nature and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2021
- Full Text
- View/download PDF
8. Ein integratives Kommunikationsmodell nach Hargie und Kollegen
- Author
-
Röhner, Jessica, Schütz, Astrid, and Kriz, Jürgen, Series Editor
- Published
- 2016
- Full Text
- View/download PDF
9. Measuring Congruence Between Voters and Parties in Online Surveys: Does Question Wording Matter?
- Author
-
Bruinsma, Bastiaan
- Abstract
Congruence on policies between political parties and voters is a frequently assumed requirement for democracy. To be able to study this, we should be able to calculate accurate and precise measures of policy congruence in political systems. This could then tell us more about the political system we study, and the "distances" that exist between parties and voters on either issues or broader ideological dimensions. Here, I draw on experimental data from a Voting Advice Application to show that the wording of the issues can influence the degree of congruence one measures. Yet, this comes with the complication that this influence depends on the type of issue, the characteristics of the voters themselves, and the party the congruence is calculated with. These findings should serve as a warning for those who aim to measure congruence that even minor changes in question-wording can (but do not have to) cause relatively large changes in congruence, especially when many parties are involved and the differences between the congruences are small.
- Published
- 2023
10. Relying on External Information Sources When Answering Knowledge Questions in Web Surveys
- Author
-
Gummer, Tobias and Kunz, Tanja
- Abstract
Knowledge questions frequently are used in survey research to measure respondents’ topic-related cognitive ability and memory. However, in self-administered surveys, respondents can search external sources for additional information to answer a knowledge question correctly. In this case, the knowledge question measures accessible and procedural memory. Depending on what the knowledge question aims at, the validity of this measure is limited. Thus, in this study, we conducted three experiments using a web survey to investigate the effects of task difficulty, respondents’ ability, and respondents’ motivation on the likelihood of searching external sources for additional information as a form of over-optimizing response behavior when answering knowledge questions. We found that the respondents who are highly educated and more interested in a survey are more likely to invest additional efforts to answer knowledge questions correctly. Most importantly, our data showed that for these respondents, a more difficult question design further increases the likelihood of over-optimizing response behavior.
- Published
- 2023
11. Using Only Numeric Labels Instead of Verbal Labels: Stripping Rating Scales to Their Bare Minimum in Web Surveys
- Author
-
Gummer, Tobias and Kunz, Tanja
- Abstract
With the increasing use of smartphones in web surveys, considerable efforts have been devoted to reduce the amount of screen space taken up by questions. An emerging stream of research in this area is aimed at optimizing the design elements of rating scales. One suggestion that has been made is to completely abandon verbal labels and use only numeric labels instead. This approach deliberately shifts the task of scale interpretation to the respondents and reduces the information given to them with an intention to reduce their response burden while still preserving the scale meaning. Following prior research, and by drawing on the established model of the cognitive response process, we critically tested these assumptions. Based on a web survey experiment, we found that omitting verbal labels and using only numeric labels instead pushed respondents to focus their responses on the endpoints of a rating scale. Moreover, drawing on response time paradata, we showed that their response burden was not reduced when presented with only numeric labels; quite the opposite was the case, especially when respondents answered the scale with only numeric labels for the first time, which seemed to entail additional cognitive effort. Based on our findings, we advise against using only numeric labels for rating scales in web surveys.
- Published
- 2023
12. Understanding Respondents' Attitudes Toward Web Paradata Use
- Author
-
Kunz, Tanja and Gummer, Tobias
- Abstract
The collection and use of paradata is gaining in importance, especially in web surveys. From a research ethics’ perspective, respondents should be asked for their consent to the collection and use of web paradata. In this context, a positive attitude toward paradata use has been deemed to be a prerequisite for respondents’ willingness to share their paradata. The present study aimed to identify factors affecting respondents’ attitudes toward paradata use. Our findings revealed that adequately informing survey respondents about what paradata are and why they are used was an important determinant of their attitudes toward paradata use. Moreover, we found that respondents with a positive attitude toward the survey were more likely to have a favorable opinion of paradata use. Our findings suggest that a thorough understanding of the factors that contribute to a positive attitude toward paradata use provides the basis for improved paradata consent procedures, which in turn will increase rates of consent to paradata use and help attenuate the risk of consent bias in web surveys.
- Published
- 2023
13. How Effective Are Eye-Tracking Data in Identifying Problematic Questions?
- Author
-
Neuert, Cornelia
- Abstract
To collect high-quality data, survey designers aim to develop questions that each respondent can understand as intended. A critical step to this end is designing questions that minimize the respondents' burden by reducing the cognitive effort required to comprehend and answer them. One promising technique for identifying problematic survey questions is eye tracking. This article investigates the potential of eye movements and pupil dilations as indicators for evaluating survey questions. Respondents were randomly assigned to either a problematic or an improved version of six experimental questions. By analyzing fixation times, fixation counts, and pupil diameters, it was examined whether these parameters could be used to distinguish between the two versions. Identifying the improved version worked best by comparing fixation times, whereas in most cases, it was not possible to differentiate between versions on the basis of pupil data. Limitations and practical implications of the findings are discussed.
- Published
- 2023
14. Using Apples and Oranges to Judge Quality? Selection of Appropriate Cross-National Indicators of Response Quality in Open-Ended Questions
- Author
-
Meitinger, Katharina, Behr, Dorothée, and Braun, Michael
- Abstract
Methodological studies usually gauge response quality in narrative open-ended questions with the proportion of nonresponse, response length, response time, and number of themes mentioned by respondents. However, not all of these indicators may be comparable and appropriate for evaluating open-ended questions in a cross-national context. This study assesses the cross-national appropriateness of these indicators and their potential bias. For the analysis, we use data from two web surveys conducted in May 2014 with 2,685 respondents and in June 2014 with 2,689 respondents and compare responses from Germany, Great Britain, the United States, Mexico, and Spain. We assess open-ended responses for a variety of topics (e.g., national identity, gender attitudes, and citizenship) with these indicators and evaluate whether they arrive at similar or contradictory conclusions about response quality. We find that all indicators are potentially biased in a cross-national context due to linguistic and cultural reasons and that the bias differs in prevalence across topics. Therefore, we recommend using multiple indicators as well as items covering a range of topics when evaluating response quality in open-ended questions across countries.
- Published
- 2023
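The four quality indicators named in this abstract (proportion of nonresponse, response length, response time, and number of themes) can be computed directly from coded open-ended responses. A minimal Python sketch; all field and function names here are illustrative, not taken from the study:

```python
def response_quality(answers):
    """Compute simple quality indicators for open-ended answers.

    `answers` is a list of dicts with illustrative keys:
      'text'    - the raw response string ('' marks item nonresponse)
      'seconds' - response time for the item
      'themes'  - number of distinct themes coded for the answer
    """
    n = len(answers)
    nonresponse = sum(1 for a in answers if not a["text"].strip()) / n
    valid = [a for a in answers if a["text"].strip()]
    return {
        "item_nonresponse": nonresponse,
        "mean_words": sum(len(a["text"].split()) for a in valid) / len(valid),
        "mean_seconds": sum(a["seconds"] for a in valid) / len(valid),
        "mean_themes": sum(a["themes"] for a in valid) / len(valid),
    }
```

The study's point is that such indicators can diverge across languages and cultures, so no single one of them should be read as "the" quality measure.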
15. Effects of the Number of Open-Ended Probing Questions on Response Quality in Cognitive Online Pretests
- Author
-
Neuert, Cornelia and Lenzner, Timo
- Abstract
Cognitive online pretests have, in recent years, become recognized as a promising tool for evaluating questions prior to their use in actual surveys. While existing research has shown that cognitive online pretests produce similar results to face-to-face cognitive interviews with regard to the problems detected and the item revisions suggested, little is known about the ideal design of a cognitive online pretest. This study examines whether the number of open-ended probing questions asked during a cognitive online pretest has an effect on the quality and depth of respondents' answers as well as on respondents' satisfaction with the survey. We conducted an experiment in which we varied the number of open-ended probing questions that respondents received during a cognitive online pretest. The questionnaire consisted of 26 survey questions, and respondents received either 13 probing questions (n = 120, short version) or 21 probing questions (n = 120, long version). The findings suggest that asking a greater number of open-ended probes in a cognitive online pretest does not undermine the quality of respondents' answers represented by the response quality indicators: (1) amount of probe nonresponse, (2) number of uninterpretable answers, (3) number of dropouts, (4) number of words, (5) response times, and (6) number and type of themes covered by the probes. Furthermore, the respondents' satisfaction with the survey is not affected by the number of probes being asked.
- Published
- 2023
16. Linking Twitter and Survey Data: The Impact of Survey Mode and Demographics on Consent Rates Across Three UK Studies
- Author
-
Al Baghal, Tarek, Sloan, Luke, Jessop, Curtis, Williams, Matthew L., and Burnap, Pete
- Abstract
In light of issues such as increasing unit nonresponse in surveys, several studies argue that social media sources such as Twitter can be used as a viable alternative. However, there are also a number of shortcomings with Twitter data such as questions about its representativeness of the wider population and the inability to validate whose data you are collecting. A useful way forward could be to combine survey and Twitter data to supplement and improve both. To do so, consent within a survey is first needed. This study explores the consent decisions in three large representative surveys of the adult British population to link Twitter data to survey responses and the impact that demographics and survey mode have on these outcomes. Findings suggest that consent rates for data linkage are relatively low, and this is in part mediated by mode, where face-to-face surveys have higher consent rates than web versions. These findings are important to understand the potential for linking Twitter and survey data but also to the consent literature generally.
- Published
- 2023
17. Using Instructed Response Items as Attention Checks in Web Surveys: Properties and Implementation
- Author
-
Gummer, Tobias, Roßmann, Joss, and Silber, Henning
- Abstract
Identifying inattentive respondents in self-administered surveys is a challenging goal for survey researchers. Instructed response items (IRIs) provide a measure for inattentiveness in grid questions that is easy to implement. The present article adds to the sparse research on the use and implementation of attention checks by addressing three research objectives. In a first study, we provide evidence that IRIs identify respondents who show an elevated use of straightlining, speeding, item nonresponse, inconsistent answers, and implausible statements throughout a survey. Excluding inattentive respondents, however, did not alter the results of substantive analyses. Our second study suggests that respondents' inattentiveness partially changes as the context in which they complete the survey changes. In a third study, we present experimental evidence that a mere exposure to an IRI does not negatively or positively affect response behavior within a survey. A critical discussion on using IRI attention checks concludes this article.
- Published
- 2023
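An instructed response item simply tells the respondent which option to select in a grid; answering otherwise flags inattentiveness. A minimal sketch of that check, together with the straightlining indicator the abstract mentions as elevated among IRI failers (all names are illustrative, not from the article):

```python
def failed_iri(grid_answers, iri_item, instructed_value):
    """True if the instructed response item (e.g., "please select
    'strongly disagree' here") was not answered as instructed."""
    return grid_answers.get(iri_item) != instructed_value

def straightlined(grid_answers, substantive_items):
    """True if every substantive grid item received the same answer."""
    values = [grid_answers[i] for i in substantive_items]
    return len(set(values)) == 1
```

Note the article's second finding: failing such a check can reflect deliberate noncompliance as well as inattention (see the following record), so these flags are screening indicators, not proof.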
18. The Issue of Noncompliance in Attention Check Questions: False Positives in Instructed Response Items
- Author
-
Silber, Henning, Roßmann, Joss, and Gummer, Tobias
- Abstract
Attention checks detect inattentiveness by instructing respondents to perform a specific task. However, while respondents may correctly process the task, they may choose to not comply with the instructions. We investigated the issue of noncompliance in attention checks in two web surveys. In Study 1, we measured respondents’ attitudes toward attention checks and their self-reported compliance. In Study 2, we experimentally varied the reasons given to respondents for conducting the attention check. Our results showed that while most respondents understand why attention checks are conducted, a nonnegligible proportion of respondents evaluated them as controlling or annoying. Most respondents passed the attention check; however, among those who failed the test, 61% seem to have failed the task deliberately. These findings reinforce that noncompliance is a serious concern with attention check instruments. The results of our experiment showed that more respondents passed the attention check if a comprehensible reason was given.
- Published
- 2023
19. Combining Clickstream Analyses and Graph-Modeled Data Clustering for Identifying Common Response Processes
- Author
-
Ulitzsch, Esther, He, Qiwei, Ulitzsch, Vincent, Molter, Hendrik, Nichterlein, André, Niedermeier, Rolf, and Pohl, Steffi
- Abstract
Complex interactive test items are becoming more widely used in assessments. Being computer-administered, assessments using interactive items allow logging time-stamped action sequences. These sequences pose a rich source of information that may facilitate investigating how examinees approach an item and arrive at their given response. There is a rich body of research leveraging action sequence data for investigating examinees’ behavior. However, the associated timing data have been considered mainly on the item-level, if at all. Considering timing data on the action-level in addition to action sequences, however, has vast potential to support a more fine-grained assessment of examinees’ behavior. We provide an approach that jointly considers action sequences and action-level times for identifying common response processes. In doing so, we integrate tools from clickstream analyses and graph-modeled data clustering with psychometrics. In our approach, we (a) provide similarity measures that are based on both actions and the associated action-level timing data and (b) subsequently employ cluster edge deletion for identifying homogeneous, interpretable, well-separated groups of action patterns, each describing a common response process. Guidelines on how to apply the approach are provided. The approach and its utility are illustrated on a complex problem-solving item from PIAAC 2012.
- Published
- 2023
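The core idea of combining action sequences with action-level timing can be illustrated in toy form. The sketch below uses a plain Levenshtein distance over actions and the relative difference in total time; the paper's actual similarity measures and its cluster-edge-deletion step are considerably more involved, so treat this only as an intuition aid:

```python
def edit_distance(a, b):
    """Standard Levenshtein distance between two action sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def similarity(seq_a, times_a, seq_b, times_b, w=0.5):
    """Toy joint similarity in [0, 1]: a weighted mix of an action
    component (normalized edit distance) and a timing component
    (relative difference in total time)."""
    act_sim = 1 - edit_distance(seq_a, seq_b) / max(len(seq_a), len(seq_b))
    ta, tb = sum(times_a), sum(times_b)
    time_sim = 1 - abs(ta - tb) / max(ta, tb)
    return w * act_sim + (1 - w) * time_sim
```

Pairwise similarities of this kind are the input to the graph-based clustering stage, where each resulting cluster is interpreted as a common response process.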
20. Developing and Applying IR-Tree Models: Guidelines, Caveats, and an Extension to Multiple Groups
- Author
-
Plieninger, Hansjörg
- Abstract
IR-tree models assume that categorical item responses can best be explained by multiple response processes. In the present article, guidelines are provided for the development and interpretation of IR-tree models. In more detail, the relationship between a tree diagram, the model equations, and the analysis on the basis of pseudo-items is described. Moreover, it is shown that IR-tree models do not allow conclusions about the sequential order of the processes, and that mistakes in the model specification can have serious consequences. Furthermore, multiple-group IR-tree models are presented as a novel extension of IR-tree models to data from heterogeneous units. This allows, for example, to investigate differences across countries or organizations with respect to core parameters of the IR-tree model. Finally, an empirical example on organizational commitment and response styles is presented.
- Published
- 2023
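The pseudo-item analysis the abstract refers to recodes each observed category into binary indicators, one per hypothesized response process. A minimal sketch for the common midpoint/direction/extremity tree on a 5-point scale (a standard Böckenholt-style recoding used to fit such models, not code from the article):

```python
def pseudo_items(response):
    """Recode a 5-point response (1..5) into pseudo-items for a
    midpoint/direction/extremity IR-tree.

    Returns (midpoint, direction, extremity); None marks branches the
    tree never reaches for that category, i.e., structurally missing
    pseudo-item values.
    """
    if response == 3:                 # midpoint process terminates the tree
        return (1, None, None)
    direction = 1 if response > 3 else 0       # agree vs. disagree side
    extremity = 1 if response in (1, 5) else 0 # endpoint chosen or not
    return (0, direction, extremity)
```

The article's caution applies here too: the recoding fixes a branching structure, but it does not by itself justify any claim about the sequential order in which respondents execute these processes.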
21. The Relationship Between Response Probabilities and Data Quality in Grid Questions
- Author
-
Gummer, Tobias, Bach, Ruben L., Daikeler, Jessica, and Eckman, Stephanie
- Abstract
Response probabilities are used in adaptive and responsive survey designs to guide data collection efforts, often with the goal of diversifying the sample composition. However, if response probabilities are also correlated with measurement error, this approach could introduce bias into survey data. This study analyzes the relationship between response probabilities and data quality in grid questions. Drawing on data from the probability-based GESIS panel, we found low propensity cases to more frequently produce item nonresponse and nondifferentiated answers than high propensity cases. However, this effect was observed only among long-time respondents, not among those who joined more recently. We caution that using adaptive or responsive techniques may increase measurement error while reducing the risk of nonresponse bias.
- Published
- 2023
22. Using a Responsive Survey Design to Innovate Self-Administered Mixed-Mode Surveys
- Author
-
Gummer, Tobias, Christmann, Pablo, Verhoeven, Sascha, and Wolf, Christof
- Subjects
survey research, survey, response behavior, experiment, survey costs, survey error, survey experiments, mixed-mode, responsive survey design, EVS, Federal Republic of Germany, statistics and probability, economics and econometrics, methods and techniques of data collection and data analysis, social sciences
Implementing innovations in surveys often results in uncertainty concerning how different design decisions will affect key performance indicators such as response rates, nonresponse bias, or survey costs. Thus, responsive survey designs have been developed to better cope with such situations. In the present study, we propose a responsive survey design that relies on experimentation in the earlier phases of the survey to decide between different design choices of which—prior to data collection—their impact on performance indicators is uncertain. We applied this design to the European Values Study 2017/2018 in Germany that advanced its general social survey-type design away from the traditional face-to-face mode to self-administered modes. These design changes resulted in uncertainty as to how different incentive strategies and mode choice sequences would affect response rates, nonresponse bias, and survey costs. We illustrate the application and operation of the proposed responsive survey design, as well as an efficiency issue that accompanies it. We also compare the performance of the responsive survey design to a traditional survey design that would have kept all design characteristics static during the field period.
- Published
- 2022
- Full Text
- View/download PDF
23. Interviewer-Observed Paradata in Mixed-Mode and Innovative Data Collection
- Author
-
Kunz, Tanja, Daikeler, Jessica, and Ackermann-Piek, Daniela
- Subjects
evidence map, contact history information, interviewer observations, interviewer evaluations, face-to-face interview, CAPI-plus, video interview, knock-to-nudge, interview, survey research, survey, response behavior, data quality, data capture, methods and techniques of data collection and data analysis, social sciences
In this research note, we address the potentials of using interviewer-observed paradata, typically collected during face-to-face-only interviews, in mixed-mode and innovative data collection methods that involve an interviewer at some stage (e.g., during the initial contact or during the interview). To this end, we first provide a systematic overview of the types and purposes of the interviewer-observed paradata most commonly collected in face-to-face interviews—contact form data, interviewer observations, and interviewer evaluations—using the methodology of evidence mapping. Based on selected studies, we illustrate the main purposes of interviewer-observed paradata we identified—including fieldwork management, propensity modeling, nonresponse bias analysis, substantive analysis, and survey data quality assessment. Based on this, we discuss the possible use of interviewer-observed paradata in mixed-mode and innovative data collection methods. We conclude with thoughts on new types of interviewer-observed paradata and the potential of combining paradata from different survey modes.
- Published
- 2023
24. Early and Late Participation during the Field Period: Response Timing in a Mixed-Mode Probability-Based Panel Survey
- Author
-
Struminskaya, Bella and Gummer, Tobias
- Subjects
panel survey, panel surveys, survey research, field period, mixed-mode, participation, reluctant respondents, response timing, survey, response behavior, sociology and political science, methods and techniques of data collection and data analysis, social sciences
Reluctance of respondents to participate in surveys has long drawn the attention of survey researchers. Yet, little is known about what drives a respondent’s decision to answer the survey invitation early or late during the field period. Moreover, we still lack evidence on response timing in longitudinal surveys. That is, the questions on whether response timing is a rather stable respondent characteristic and what—if anything—affects change in response timing across different interviews remain open. We relied on data from a mixed-mode general population panel survey collected between 2014 and 2016 to study the stability of response timing across 18 panel waves and factors that influence the decision to participate early or late in the field period. Our results suggest that the factors which had effects on response timing are different in the mail and web modes. Moreover, we found that experience with prior panel waves affected the respondent’s decision to participate early or late. Overall, the present study advocates understanding response timing as a metric variable and, consequently, the need to reflect this in modeling strategies.
- Published
- 2023
25. Survey participation as a function of democratic engagement, trust in institutions, and perceptions of surveys
- Author
-
Silber, Henning, Moy, Patricia, Johnson, Timothy P., Neumann, Rico, Stadtmüller, Sven, and Repke, Lydia
- Subjects
trust, confidence, democracy, institution, perception, participation, survey research, survey climate, survey response, survey attitudes, survey, response behavior, methods and techniques of data collection and data analysis, social sciences
Objective: With response rates of large-scale surveys having decreased significantly over the years and rebounds seeming unlikely, many studies now examine how response rates vary with methodological design and incentives. This investigation delves into how individual-level factors shape survey participation. Specifically, we examine the influence of individuals’ democratic engagement and their trust in institutions on intent to participate in surveys, both directly and indirectly through their perceptions of surveys. Methods: We collected survey data from a probability sample of adults (N = 1343) in Mannheim, Germany, from November 2019 to March 2020. Structural equation models were estimated to test the hypothesized relationships. Results: The analyses support most, but not all, hypothesized relationships. Democratic engagement bolstered intent to participate, directly as well as indirectly through perceptions of surveys. Institutional trust, on the other hand, only influenced the outcome measure indirectly. Perceptions of surveys had a strong bearing overall effect on intent to participate. Conclusion: The study's results suggest that the response rates and larger issues related to the perceived legitimacy of public opinion and survey research might be intertwined with orientations related to people's civic and political life. The article discusses potential ways survey researchers can counteract distrust in surveys.
- Published
- 2022
- Full Text
- View/download PDF
26. Anonymisieren oder personalisieren?
- Author
-
Langenmaier, A.-M., Metje, E., Klasen, B., Brinkschmidt, T., Karst, M., and Amelung, V.
- Abstract
Copyright of Der Schmerz is the property of Springer Nature and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2018
- Full Text
- View/download PDF
27. Using Google Trends Data to Learn More About Survey Participation
- Author
-
Gummer, Tobias and Oehrlein, Anne-Sophie
- Abstract
As response rates continue to decline, the need to learn more about the survey participation process remains an important task for survey researchers. Search engine data may be one possible source for learning about what information some potential respondents are looking up about a survey when they are making a participation decision. In the present study, we explored the potential of search engine data for learning about survey participation and how it can inform survey design decisions. We drew on freely available Google Trends (GT) data to learn about the use of Google Search with respect to our case study: participation in the Family Research and Demographic Analysis (FReDA) panel survey. Our results showed that some potential respondents were using Google Search to gather information on the FReDA survey. We also showed that the additional data obtained via GT can help survey researchers to discover topics of interest to respondents and geographically stratified search patterns. Moreover, we introduced different approaches for obtaining data via GT, discussed the challenges that come with these data, and closed with practical recommendations on how survey researchers might utilize GT data to learn about survey participation.
- Published
- 2022
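Entry 27's use of Google Trends can be made concrete with a small sketch. GT does not report absolute search counts; it reports interest rescaled so that each series peaks at 100. Below is a minimal re-implementation of that normalization on invented weekly counts (the function name and all values are hypothetical, not from the study):

```python
def scale_like_trends(counts):
    """Rescale raw counts to 0-100 'relative interest'; the peak week becomes 100."""
    peak = max(counts)
    if peak == 0:
        return [0] * len(counts)
    return [round(100 * c / peak) for c in counts]

# Invented weekly counts of searches for a survey's name around mail-out dates.
weekly_counts = [3, 12, 48, 30, 6]
relative_interest = scale_like_trends(weekly_counts)
print(relative_interest)
```

In practice, such series are pulled from the GT website itself or via an unofficial client such as the pytrends Python package rather than reconstructed from raw counts, which Google does not release.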
28. A Comparison of Three Designs for List-style Open-ended Questions in Web Surveys
- Author
-
Kunz, Tanja and Meitinger, Katharina
- Abstract
Although list-style open-ended questions generally help us gain deeper insights into respondents’ thoughts, opinions, and behaviors, the quality of responses is often compromised. We tested a dynamic and a follow-up design to motivate respondents to give higher quality responses than with a static design, but without overburdening them. Our results showed that a follow-up design achieved longer responses with more themes and theme areas than a static design. In contrast, the dynamic design produced the shortest answers with the fewest themes and theme areas. No differences in item nonresponse and only minor differences in additional response burden were found among the three list-style designs. Our study shows that design features and timing are crucial to clarify the desired response format and motivate respondents to give high-quality answers to list-style open-ended questions.
- Published
- 2022
29. Now, later, or never? Using response-time patterns to predict panel attrition
- Author
-
Minderop, Isabella and Weiß, Bernd
- Abstract
Preventing panel members from attriting is a fundamental challenge for panel surveys. Research has shown that response behavior in earlier waves (response or nonresponse) is a good predictor of panelists' response behavior in upcoming waves. However, response behavior can be described in greater detail by considering the time until the response is returned. In the present study, we investigated whether respondents who habitually return their survey late and respondents who switch between early and late response in multiple waves are more likely to attrit from a panel. Using data from the GESIS Panel, we found that later response is related to a higher likelihood of attrition (AME = 0.087) and that response-time stability is related to a lower likelihood of attrition (AME = −0.013). Our models predicted most cases of attrition; thus, survey practitioners could potentially predict future attriters by applying these models to their own data.
- Published
- 2022
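The average marginal effect (AME) figures reported in entry 29 can be illustrated with a toy logistic model. This is not the authors' model; the coefficients, covariates, and data below are invented, and the sketch only shows the mechanics of an AME for a binary "late response" flag:

```python
import math

def p_attrit(beta, late, stability):
    """P(attrition) under a toy logistic model (all coefficients invented)."""
    b0, b_late, b_stab = beta
    return 1 / (1 + math.exp(-(b0 + b_late * late + b_stab * stability)))

def ame_late(beta, sample):
    """Average marginal effect of the binary 'late response' flag:
    mean difference in predicted attrition probability when the flag is
    switched 1 vs. 0, holding each panelist's other covariate fixed."""
    diffs = [p_attrit(beta, 1, stab) - p_attrit(beta, 0, stab)
             for _, stab in sample]
    return sum(diffs) / len(diffs)

beta = (-2.0, 0.9, -0.4)   # intercept, late response, response-time stability
panel = [(1, 0.2), (0, 1.5), (1, 0.8), (0, 0.0)]  # (late, stability) rows
print(round(ame_late(beta, panel), 3))
```

A positive coefficient on the flag yields a positive AME, matching the direction of the paper's reported AME = 0.087 for late response.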
30. The effects of question, respondent and interviewer characteristics on two types of item nonresponse
- Author
-
Silber, Henning, Roßmann, Joss, Gummer, Tobias, Zins, Stefan, and Weyandt, Kai Willem
- Abstract
In this article, we examine two types of item nonresponse in a face-to-face population survey: 'don’t know' (DK) and 'item refusal' (REF). Based on the cognitive model of survey response, the theory of survey satisficing, and previous research, we derive explanatory variables on three levels: question, respondent, and interviewer characteristics. The results of our cross-classified model show that while question and respondent characteristics affected both types of item nonresponse, interviewer characteristics affected only DK answers. Our results also confirm that DK and REF are substantially different item nonresponse types resulting from distinguishable disruptions of the cognitive response process. Since most results are in line with prior theoretical predictions, they suggest that survey practitioners are well advised to continue following the large body of practical guidance derived from the theories tested here.
- Published
- 2022
31. The Impact of Presentation Format on Conjoint Designs: A Replication and an Extension
- Author
-
Cassel, Sophie, Magnusson, Josefine, and Lundmark, Sebastian
- Abstract
In recent years, conjoint experiments have been in vogue across the social sciences. A reason for their popularity is that they allow researchers to estimate the causal effects of many components of stimuli simultaneously. However, for conjoint experiments to produce valid results, respondents need to be able to process and understand the wide range of dimensions presented to them in the experiment. If the information processing is too demanding or too complicated, respondents are likely to turn to satisficing strategies, leading to poorer data quality and subsequently decreasing the researcher’s ability to make accurate causal inferences. One factor that may lead to the adoption of satisficing strategies is the presentation format used for the conjoint experiment (i.e., presenting the information within a text paragraph or a table). In the present paper, a direct replication of the single conjoint presentation format experiment described in Shamon, Dülmer, and Giza's (2019) paper in Sociological Methods & Research is presented, and their work is extended to paired conjoint experiments. The results of the direct replication showed that respondents evaluated the questionnaire more favorably when reading the table format but were, on the other hand, less likely to participate in subsequent panel waves. Although the number of break-offs, refusals, and non-responses did not differ between the two formats, respondents who saw the table format evaluated the scenarios with more consistency and less dimension reduction, thus favoring the table presentation format. For paired conjoint experiments, the presentation format did not affect survey evaluations or panel participation, but the table format heavily outperformed the text format on every data quality measure except for dimension reduction. Although it does not directly replicate the findings of Shamon, Dülmer, and Giza (2019), the present manuscript concludes that the table format appears preferable.
- Published
- 2022
32. Are You...? Asking Questions on Sex with a Third Category in Germany
- Author
-
Hadler, Patricia, Neuert, Cornelia, Ortmanns, Verena, and Stiegler, Angelika
- Abstract
A question asking for respondents' sex is one of the standard sociodemographic characteristics collected in a survey. Until now, it typically consisted of a simple question (e.g., "Are you…?") with two answer categories ("male" and "female"). In 2019, Germany implemented the additional sex designation divers for intersex people. In survey methodology, this has led to an ongoing discussion about how to include a third category in questionnaires. We investigate respondents' understanding of the third category, and whether introducing it affects data quality. Moreover, we investigate the understanding of the German term Geschlecht for sex and gender. To answer our research questions, we implemented different question wordings asking for sex/gender in a non-probability-based online panel in Germany and combined them with open-ended questions. Findings and implications for surveys are discussed. The sex designation divers was introduced in Germany at the beginning of 2019 for intersex people, that is, people who, on the basis of their primary or secondary sex characteristics, are neither clearly male nor female. In March 2019, the authors collected and analyzed data on the understanding of this category in a web probing study. The results show that about half of the respondents have a basic understanding of the term divers, for example being able to define it as "not male or female". However, some respondents took divers to be a category for transgender people, for example people who are biologically male but identify as women. This association was in fact considerably more common than the correct understanding of divers as a sex designation for intersexuality. This finding is consistent with a further focus of the article, which shows that most respondents do not distinguish between their biological and their felt sex. We examined this in an analysis of the terms Geschlecht, "officially registered sex", and "gender identity". Furthermore, the article shows that the share of negative reactions to the term, in the form of survey break-off or hostile comments about non-binary gender, is small.
- Published
- 2022
33. Using Double Machine Learning to Understand Nonresponse in the Recruitment of a Mixed-Mode Online Panel
- Author
-
Felderer, Barbara, Kueck, Jannis, and Spindler, Martin
- Abstract
Survey scientists increasingly face the problem of high-dimensionality in their research as digitization makes it much easier to construct high-dimensional (or "big") data sets through tools such as online surveys and mobile applications. Machine learning methods are able to handle such data, and they have been successfully applied to solve predictive problems. However, in many situations, survey statisticians want to learn about causal relationships to draw conclusions and be able to transfer the findings of one survey to another. Standard machine learning methods provide biased estimates of such relationships. We introduce into survey statistics the double machine learning approach, which gives approximately unbiased estimators of parameters of interest, and show how it can be used to analyze survey nonresponse in a high-dimensional panel setting. The double machine learning approach here assumes unconfoundedness of variables as its identification strategy. In high-dimensional settings, where the number of potential confounders to include in the model is too large, the double machine learning approach secures valid inference by selecting the relevant confounding variables.
- Published
- 2022
34. Do shorter stated survey length and inclusion of a QR code in an invitation letter lead to better response rates?
- Author
-
Lugtig, Peter
- Abstract
Invitation letters to web surveys often contain information on how long it will take to complete the survey. When the stated length in an invitation is short, it could help to convince respondents to participate. When it is long, respondents may choose not to participate, and when the actual length exceeds the stated length there may be a risk of dropout. This paper reports on a Randomised Controlled Trial (RCT) embedded in a cross-sectional survey conducted in the Netherlands. The RCT included different versions of the stated length of the survey and the inclusion of a Quick Response (QR) code as ways to communicate to potential respondents whether the survey was short. Results from the RCT show no effect of the stated length on actual participation in the survey, nor do we find an effect on dropout. We do, however, find that inclusion of a QR code leads respondents to be more likely to use a smartphone, and we find some evidence for a different composition of our respondent sample in terms of age.
- Published
- 2022
35. Shaking hands in a busy waiting room: The effects of the surveyor’s introduction and people present in the waiting room on the response rate
- Author
-
Ongena, Yfke P., Haan, Marieke, Kwee, Thomas C., and Yakar, Derya
- Abstract
Although waiting room surveys are frequently conducted, methodological studies on this topic are scarce. The behaviour of surveyors in waiting rooms can easily be controlled, and these surveys also allow for the collection of paradata: relevant information on the circumstances of a request to participate in survey research. In this paper, we present the results of an experiment systematically manipulating surveyors' handshakes and the verbal introduction of their names. Patients scheduled for radiological examinations were approached to take part in a survey. An observer noted circumstances in the waiting room (CT or MRI), such as the number of people present. In the CT waiting room, willingness to participate was higher when no other people were filling out the survey than when others were. Thus, scarcity effects seemed to play a major role in the decision to participate. In addition, a patient waiting alone was more likely to fully complete the questionnaire than patients accompanied by one or more caregivers. There was no effect of the surveyor's handshake or verbal name introduction on survey participation, a fortunate outcome in light of the social distancing measures to fight COVID-19.
- Published
- 2022
36. Risk of Nonresponse Bias and the Length of the Field Period in a Mixed-Mode General Population Panel
- Author
-
Struminskaya, Bella and Gummer, Tobias
- Abstract
Survey researchers are often confronted with the question of how long to set the length of the field period. Longer fielding time might lead to greater participation yet requires survey managers to devote more of their time to data collection efforts. With the aim of facilitating the decision about the length of the field period, we investigated whether a longer fielding time reduces the risk of nonresponse bias to judge whether field periods can be ended earlier without endangering the performance of the survey. By using data from six waves of a probability-based mixed-mode (online and mail) panel of the German population, we analyzed whether the risk of nonresponse bias decreases over the field period by investigating how day-by-day coefficients of variation develop during the field period. We then determined the optimal cut-off points for each mode after which data collection can be terminated without increasing the risk of nonresponse bias and found that the optimal cut-off points differ by mode. Our study complements prior research by shifting the perspective in the investigation of the risk of nonresponse bias to panel data as well as to mixed-mode surveys, in particular. Our proposed method of using coefficients of variation to assess whether the risk of nonresponse bias decreases significantly with each additional day of fieldwork can aid survey practitioners in finding the optimal field period for their mixed-mode surveys.
- Published
- 2022
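The day-by-day coefficient-of-variation criterion described in entry 36 can be sketched roughly as follows. The propensity values and the 0.05 flattening threshold below are invented; the actual method works with properly estimated response propensities and a statistical test of whether the CV still changes significantly:

```python
def coefficient_of_variation(values):
    """CV = population standard deviation divided by the mean."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return sd / mean

# Invented response propensities of respondents in arrival order,
# two arrivals per field day; early days are dominated by eager responders.
propensities = [0.9, 0.8, 0.7, 0.6, 0.5, 0.45, 0.42, 0.41, 0.40, 0.40]
daily_cv = [coefficient_of_variation(propensities[:n])
            for n in range(2, len(propensities) + 1, 2)]

# Candidate cut-off: the first day whose CV changed by less than an
# (invented) threshold relative to the previous day.
cutoff = next(d for d in range(1, len(daily_cv))
              if abs(daily_cv[d] - daily_cv[d - 1]) < 0.05)
print(daily_cv, cutoff)
```

In the paper's logic, once the CV stabilizes from one field day to the next, additional fieldwork no longer meaningfully changes the risk of nonresponse bias, so data collection could stop at that point.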
37. German Health Update (GEDA 2019/2020-EHIS) - Background and methodology
- Author
-
Allen, Jennifer, Born, Sabine, Damerow, Stefan, Kuhnert, Ronny, Lemcke, Johannes, Müller, Anja, Weihrauch, Tim, and Wetzstein, Matthias
- Abstract
Between April 2019 and September 2020, 23,001 people aged 15 or over responded to questions about their health and living conditions for the German Health Update (GEDA 2019/2020-EHIS). The results are representative of the German resident population aged 15 or above. The response rate was 21.6%. The study used a questionnaire based on the third wave of the European Health Interview Survey (EHIS), which was carried out in all EU member states. EHIS consists of four modules on health status, health care provision, health determinants, and socioeconomic variables. The data are collected in a harmonised manner and therefore have a high degree of international comparability. They constitute an important source of information for European health policy and health reporting and are made available by the Statistical Office of the European Union (Eurostat). They also form the basis of the Federal Health Reporting undertaken in Germany. Data collection began in April 2019, just under a year before the beginning of the SARS-CoV-2 pandemic, and continued into its initial phase, as of March 2020. As such, data from the current GEDA wave can also be used to conduct research into the health impact of the SARS-CoV-2 pandemic.
- Published
- 2022
38. Gesundheit in Deutschland aktuell (GEDA 2019/2020-EHIS) - Hintergrund und Methodik
- Author
-
Allen, Jennifer, Born, Sabine, Damerow, Stefan, Kuhnert, Ronny, Lemcke, Johannes, Müller, Anja, Weihrauch, Tim, and Wetzstein, Matthias
- Abstract
In the German Health Update study (GEDA 2019/2020-EHIS), 23,001 people aged 15 or over answered questions about their health and living situation between April 2019 and September 2020. The results are representative of the resident population of Germany aged 15 or over. The response rate was 21.6%. The questionnaire content is based on the third wave of the European Health Interview Survey (EHIS), which was conducted in all EU member states. It comprises the four modules health status, health care, health determinants, and socioeconomic variables. The harmonised EHIS data offer a high degree of international comparability. They are an important source of information for European health policy and health reporting and are made available by the Statistical Office of the European Union (Eurostat). The data also form the basis of the Federal Health Reporting in Germany. The data collection period, beginning in April 2019, covered just under a year before the SARS-CoV-2 pandemic and then, from March 2020, fell within its initial phase. The current GEDA wave thus provides data for research on health effects in temporal connection with the SARS-CoV-2 pandemic.
- Published
- 2022
39. Assessing the Quality of Same-Sex Partnership Reports in the German Microcensus
- Author
-
GESIS - Leibniz-Institut für Sozialwissenschaften and Lengerer, Andrea
- Abstract
Since 1996, reports on cohabiting same-sex partnerships have been collected in the German Microcensus. However, it is unclear how reliable these reports are. Compared with other data sources, the Microcensus shows only a small number of cohabiting same-sex couples (less than 0.5% of all cohabiting couples in 2016), so under-reporting is assumed. But, because the "true" values are unknown, it is difficult to determine whether under-reporting is actually occurring. In this paper a procedure is proposed where the response behaviour of respondents is analysed depending on the composition of their household. It was found that non-response to the question about a partner in the household is highest among respondents in whose household there is a possible same-sex partner. This indicates under-reporting of cohabiting same-sex couples, and this under-reporting decreased only slightly, at most, over the period considered here (1996 to 2016). This is not the case for registered same-sex partnerships: a comparison with the Census shows that these are reliably recorded in the German Microcensus.
- Published
- 2022
40. Gender and Survey Participation: An Event History Analysis of the Gender Effects of Survey Participation in a Probability-based Multi-wave Panel Study with a Sequential Mixed-mode Design
- Author
-
Becker, Rolf
- Abstract
In cross-sectional surveys, as well as in longitudinal panel studies, systematic gender differences in survey participation are routinely observed. Since there has been little research on this issue, this study seeks to reveal this association for web-based online surveys and computer-assisted telephone interviews in the context of a sequential mixed-mode design with a push-to-web method. Based on diverse versions of benefit-cost theories relating to deliberative and heuristic decision-making, several hypotheses are deduced and then tested by longitudinal data in the context of a multi-wave panel study on the educational and occupational trajectories of juveniles. Employing event history data on the survey participation of young panelists living in German-speaking cantons in Switzerland and matching them with geographical data at the macro level and panel characteristics at the meso level, none of the hypotheses is confirmed empirically. It is concluded that indirect measures of an individual’s perceptions of a situation, and of the benefits and costs as well as the process and mechanisms of the decision relating to survey participation, are insufficient to explain this gender difference. Direct tests of these theoretical approaches are needed in future.
- Published
- 2022
41. Do Web and Telephone Produce the Same Number of Changes and Events in a Panel Survey?
- Author
-
Lipps, Oliver and Voorpostel, Marieke
- Abstract
Measuring change over time is one of the main purposes of longitudinal surveys. With the increasing use of the web as a mode of data collection, it is important to assess whether the web mode differs from other modes with respect to the number of changes and events that are captured. We examine whether telephone and web data collection modes are comparable with respect to measuring changes over time or experiencing events. Using experimental data from a two-wave pilot of the Swiss Household Panel, we investigate this question for several variables in the domain of work and family. We find differences for the work-related variables, with web respondents more likely to report changes. These differences do not disappear once the socio-demographic composition of the sample is taken into consideration. This suggests that the differences are not driven by observed characteristics of respondents who may have self-selected into one or the other mode. Contrary to the work-related variables, a termination of a relationship was more common in the telephone group. This shows that one mode does not necessarily measure more change or events than another; it may depend on the variable in question. In addition, the difference in protocol mattered: a web respondent in a household that participated fully by web sometimes differed from a web respondent in a household that had a household interview by phone. Nonetheless, the telephone group differed more from the various web protocols than the web protocols did among themselves. With more household panel surveys introducing web questionnaires in combination with more traditional face-to-face and telephone interviews, this study adds to our understanding of the potential consequences of mixing modes with respect to longitudinal data analysis.
- Published
- 2022
42. Top Incomes and Inequality Measurement: A Comparative Analysis of Correction Methods Using the EU SILC Data
- Author
-
Hlasny, Vladimir and Verme, Paolo
- Abstract
It is sometimes observed and frequently assumed that top incomes in household surveys worldwide are poorly measured and that this problem biases the measurement of income inequality. This paper tests this assumption and compares the performance of reweighting and replacing methods designed to correct inequality measures for top-income biases generated by data issues such as unit or item non-response. Results for the European Union’s Statistics on Income and Living Conditions survey indicate that survey response probabilities are negatively associated with income and bias the measurement of inequality downward. Correcting for this bias with reweighting, the Gini coefficient for Europe is revised upwards by 3.7 percentage points. Similar results are reached by replacing top incomes with values from the Pareto distribution when the cut point for the analysis is below the 95th percentile. For higher cut points, results with replacing are inconsistent, suggesting that popular parametric distributions do not mimic real data well at the very top of the income distribution.
- Published
- 2022
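The correction methods compared in entry 42 rest on two standard building blocks: the Gini coefficient and the replacement of top incomes with values from a fitted Pareto distribution. A minimal sketch with toy data follows; the Pareto quantile function shown is the textbook inverse CDF, not the paper's estimator, and all incomes and parameters are invented:

```python
def gini(incomes):
    """Gini coefficient via mean absolute difference (no small-sample correction)."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return mad / (2 * mean)

def pareto_quantile(u, xm, alpha):
    """Inverse CDF of a Pareto(xm, alpha): the income at quantile u."""
    return xm * (1 - u) ** (-1 / alpha)

def replace_top(incomes, cut, replacement_values):
    """Swap all incomes above `cut` for externally modelled top values."""
    return [x for x in incomes if x <= cut] + list(replacement_values)

observed = [1, 1, 1, 1, 4]   # toy incomes with an understated top income
corrected = replace_top(observed, 2, [pareto_quantile(0.9, 2, 1.5)])
print(gini(observed), gini(corrected))
```

Because the Pareto draw is heavier than the understated observed top income, the corrected Gini is higher, which is the direction of the paper's reported upward revision.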
43. Measuring subjective social stratification: how does the graphical layout of rating scales affect response distributions, response effort, and criterion validity in web surveys?
- Author
-
Lenzner, Timo and Höhne, Jan Karem
- Abstract
Previous research has shown that question characteristics, such as the shape of rating scales, can affect how respondents interpret and respond to questions. For example, earlier studies reported different response distributions for questions employing rating scales in the form of a ladder and in the form of a pyramid. The current experiment, implemented in a probability-based online panel (N = 4,377), revisits and extends this research by examining how the two graphical layouts (ladder vs. pyramid) affect response behavior and data quality of a question on subjective social stratification. In line with the earlier results, we found that respondents rated their social status lower in the pyramid than in the ladder condition. No differences between the two layouts were found regarding response effort; however, the ladder layout was associated with higher criterion validity. We therefore recommend employing the ladder layout when measuring subjective social stratification.
- Published
- 2022
44. Who Is Willing to Use Audio and Voice Inputs in Smartphone Surveys, and Why?
- Author
-
Lenzner, Timo and Höhne, Jan Karem
- Abstract
The ever-growing number of respondents completing web surveys via smartphones is paving the way for leveraging technological advances to improve respondents’ survey experience and, in turn, the quality of their answers. Smartphone surveys enable researchers to incorporate audio and voice features into web surveys, that is, having questions read aloud to respondents using pre-recorded audio files and collecting voice answers via the smartphone’s microphone. Moving from written to audio and voice communication channels might be associated with several benefits, such as humanizing the communication process between researchers and respondents. However, little is known about respondents’ willingness to undergo this change in communication channels. Replicating and extending earlier research, we examine the extent to which respondents are willing to use audio and voice channels in web surveys, the reasons for their (non)willingness, and respondent characteristics associated with (non)willingness. The results of a web survey conducted in a nonprobability online panel in Germany (N = 2146) reveal that more than 50% of respondents would be willing to have the questions read aloud (audio channel) and about 40% would also be willing to give answers via voice input (voice channel). While respondents mostly name a general openness to new technologies for their willingness, they mostly name preference for written communication for their nonwillingness. Finally, audio and voice channels in smartphone surveys appeal primarily to frequent and competent smartphone users as well as younger and tech-savvy respondents.
- Published
- 2022
45. Assessing Trends and Decomposing Change in Nonresponse Bias: The Case of Bias in Cohort Distributions
- Author
-
Gummer, Tobias
- Abstract
Survey research is still confronted by a trend of increasing nonresponse rates. In this context, several methodological advances have been made to stimulate participation and avoid bias. Yet, despite the growing number of tools and methods to deal with nonresponse, little is known about whether nonresponse biases show similar trends as nonresponse rates and what mechanisms (if any) drive changes in bias. Our article focuses on biases in cohort distributions in the U.S. and German general social surveys from 1980 to 2012 as one of the key variables in the social sciences. To supplement our cross-national comparison of these trends, we decompose changes into within-cohort change (WCC) and between-cohort change. We find that biases in cohort distributions have remained relatively stable and at a relatively low level in both countries. Furthermore, WCC (i.e., survey climate) accounts for the major part of the change in nonresponse bias.
- Published
- 2022
46. Instationäres aerodynamisches Verhalten einer bewegten Störklappe
- Author
-
Geisbauer, Sven
- Subjects
Experiment, validation, RANS, unsteady, moving spoiler, response behavior (Antwortverhalten), TAU, CFD, simulation, aerodynamics - Published
- 2022
47. Conducting quantitative studies with the participation of political elites: best practices for designing the study and soliciting the participation of political elites
- Author
-
Barbara Vis and Sjoerd Stolwijk
- Subjects
Statistics and Probability, Future studies, political elite, Best practice, Population, 0211 other engineering and technologies, Umfrageforschung, 02 engineering and technology, Antwortverhalten, Experiment, Politics, survey research, Political science, 050602 political science & public administration, Large-n interviews, AES Australian Election Study, ATES Asahi-Todai Elite Survey, BES British Election Study, BLS Brazilian Legislative Survey, EPRG European Parliament Research Group survey, GLES German Longitudinal Election Study, data quality, politische Elite, survey, response behavior, Interview, Datengewinnung, education, Social sciences, sociology, anthropology, Erhebungstechniken und Analysetechniken der Sozialwissenschaften, Response rate (survey), 021110 strategic, defence & security studies, education.field_of_study, Sozialwissenschaften, Soziologie, Datenqualität, business.industry, Stichprobe, 05 social sciences, quantitative Methode, Information processing, General Social Sciences, methodology, Befragung, Methodologie, Public relations, quantitative method, sample, 0506 political science, data capture, Methods and Techniques of Data Collection and Data Analysis, Statistical Methods, Computer Methods, Elite, ddc:300, business
Conducting quantitative research (e.g., surveys, large numbers of interviews, experiments) with the participation of political elites is typically challenging. Given that a population of political elites is by definition small, a particular challenge is obtaining a sufficiently high number of observations and, thus, a certain response rate. This paper focuses on two questions related to this challenge: (1) What are best practices for designing the study? And (2) what are best practices for soliciting the participation of political elites? To arrive at these best practices, we (a) examine which factors explain the variation in response rates across surveys within and between large-scale, multi-wave survey projects by statistically analyzing a newly compiled dataset of 342 political elite surveys from eight projects, spanning 30 years and 58 countries, (b) integrate the typically scattered findings from the existing literature, and (c) discuss results from an original expert survey among researchers with experience of such research (n = 23). By compiling a comprehensive list of best practices, systematically testing some widely held beliefs about response rates, and providing benchmarks for response rates depending on country, survey mode, and elite type, we aim to facilitate future studies in which the participation of political elites is required. This will contribute to our knowledge and understanding of political elites’ opinions, information processing and decision making, and thereby of the functioning of representative democracies.
- Published
- 2020
- Full Text
- View/download PDF
48. Response Quality in Nonprobability and Probability-based Online Panels
- Author
-
Annelies G. Blom and Carina Cornesse
- Subjects
Sociology and Political Science, computer science, probability, online survey, Federal Republic of Germany, nonprobability sampling, statistics, GESIS Panel, nonprobability sample, probability-based sample, online panel, satisficing, response quality, response behavior, sample, panel, social sciences, sociology, anthropology, Methods and Techniques of Data Collection and Data Analysis, Statistical Methods, Computer Methods, Social Sciences (miscellaneous) - Abstract
Recent years have seen a growing number of studies investigating the accuracy of nonprobability online panels; however, response quality in nonprobability online panels has not yet received much attention. To fill this gap, we investigate response quality in a comprehensive study of seven nonprobability online panels and three probability-based online panels with identical fieldwork periods and questionnaires in Germany. Three response quality indicators typically associated with survey satisficing are assessed: straight-lining in grid questions, item nonresponse, and midpoint selection in visual design experiments. Our results show that there is significantly more straight-lining in the nonprobability online panels than in the probability-based online panels. However, contrary to our expectations, there is no generalizable difference between nonprobability online panels and probability-based online panels with respect to item nonresponse. Finally, neither respondents in nonprobability online panels nor respondents in probability-based online panels are significantly affected by the visual design of the midpoint of the answer scale.
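Two of the three satisficing indicators the study assesses, straight-lining in grid questions and item nonresponse, can be operationalized very simply. The following is an illustrative sketch with fabricated data (missing answers coded as `None`), not the authors' actual coding scheme:

```python
# Illustrative sketch of two response-quality indicators associated with
# satisficing: straight-lining in a grid question and item nonresponse.
# Data layout is hypothetical; None marks a skipped item.

def straight_lined(grid_answers):
    """True if all answered items in a grid share one identical value."""
    answered = [a for a in grid_answers if a is not None]
    return len(answered) > 1 and len(set(answered)) == 1

def item_nonresponse_rate(grid_answers):
    """Share of grid items left unanswered."""
    return sum(a is None for a in grid_answers) / len(grid_answers)

respondent = [3, 3, 3, 3, None]           # a 5-item grid, one item skipped
print(straight_lined(respondent))         # True: all answered items identical
print(item_nonresponse_rate(respondent))  # 0.2
```

Aggregating such flags across respondents is what allows the panel-level comparison the abstract reports.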
- Published
- 2020
- Full Text
- View/download PDF
49. Effects of Partner Presence During the Interview on Survey Responses: The Example of Questions Concerning the Division of Household Labor
- Author
-
Jette Schröder and Claudia Schmiedeberg
- Subjects
division of labor, Sociology and Political Science, survey research, bystander effects, data quality, response behavior, data capture, interview, housework, interview privacy, spouse presence, third-party presence, demographic economics, psychology, Methods and Techniques of Data Collection and Data Analysis, Statistical Methods, Computer Methods, Social Sciences (miscellaneous) - Abstract
Despite the fact that third parties are present during a substantial share of face-to-face interviews, bystander influence on response behavior is not yet fully understood. We use nine waves of the German Family Panel pairfam and apply fixed effects panel regression models to analyze effects of third-party presence on items regarding the sharing of household tasks between partners. We find that both male and female respondents report doing a smaller share of household tasks when their partner is present during the interview than when their partner is not present. Moreover, if the respondent’s partner is present, the two partners’ reports correspond more closely and are less likely to add up to unrealistically high sums. These results indicate that for items concerning household labor, partner presence does not compromise data quality but may in fact improve it.
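The fixed-effects logic behind such an analysis can be sketched with a within-transformation: demean outcome and predictor within each respondent across waves, which absorbs all time-constant respondent characteristics, then regress the demeaned values. The data below are fabricated for illustration; the authors' actual models include further controls:

```python
# Minimal fixed-effects sketch via within-respondent demeaning.
# Columns: respondent_id, partner_present (0/1), reported share of housework.
# Fabricated example data, not from pairfam.
import numpy as np

data = np.array([
    [1, 0, 0.60], [1, 1, 0.50], [1, 0, 0.62],
    [2, 0, 0.40], [2, 1, 0.30], [2, 1, 0.32],
])
ids, x, y = data[:, 0], data[:, 1].copy(), data[:, 2].copy()

# Within-transformation: subtract each respondent's own mean, removing
# time-constant respondent effects.
for i in np.unique(ids):
    mask = ids == i
    x[mask] -= x[mask].mean()
    y[mask] -= y[mask].mean()

beta = (x @ y) / (x @ x)  # slope of the within-respondent regression
# Negative: a smaller reported share when the partner is present.
print(f"FE estimate of partner presence: {beta:.3f}")  # -0.100
```

Dedicated panel estimators (e.g., entity-fixed-effects OLS in econometrics libraries) implement the same demeaning internally and additionally correct the standard errors.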
- Published
- 2020
- Full Text
- View/download PDF
50. What can we learn from open questions in surveys? A case study on non-voting reported in the 2013 German longitudinal election study
- Author
-
Henning Silber, Cornelia Zuell, and Steffen-M. Kuehnel
- Subjects
election research, election, survey research, German, voting, voting behavior, data quality, survey, response behavior, data capture, open questions, non-voting, random imputation, turnout, Pre- and Post-election Cross Section (Cumulation) (GLES 2013, ZA5702), predictive power, psychology, social psychology, General Psychology, General Social Sciences, Methods and Techniques of Data Collection and Data Analysis, Statistical Methods, Computer Methods - Abstract
Open survey questions are often used to evaluate closed questions. However, they can fulfil this function only if there is a strong link between answers to open questions and answers to related closed questions. Using reasons for non-voting reported in the German Longitudinal Election Study 2013, we investigated this link by examining whether the reported reasons for non-voting are substantive or merely ex-post legitimations. We tested five theoretically derived hypotheses about respondents who gave, or did not give, a specific reason. Results showed that (a) answers to open questions were indeed related to answers to closed questions and could be used in explanatory turnout models to predict voting behavior, and (b) the relationship between answers to open and closed questions, and the predictive power of the reasons given in response to the open questions, were stronger in the post-election survey (reported behavior) than in the pre-election survey (intended behavior).
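The open/closed-question link the study tests amounts to checking whether an indicator coded from open answers is associated with a closed-question report. As a minimal sketch with entirely fabricated counts (the reason category and numbers below are hypothetical, not the study's results), one can cross-tabulate the two and compute an odds ratio:

```python
# Illustrative sketch: association between an open-answer indicator
# (mentioned a specific reason for non-voting) and a closed-question
# report (voted in the follow-up election). Counts are fabricated.

# 2x2 cross-tab: (mentioned_reason, voted_next_time) -> respondent count
table = {(1, 1): 30, (1, 0): 10,
         (0, 1): 40, (0, 0): 40}

odds_mentioned = table[(1, 1)] / table[(1, 0)]  # odds of voting, reason given
odds_not = table[(0, 1)] / table[(0, 0)]        # odds of voting, reason not given
odds_ratio = odds_mentioned / odds_not
print(f"Odds ratio: {odds_ratio:.2f}")  # 3.00
```

An odds ratio far from 1 would indicate the kind of open/closed linkage that justifies using open answers in explanatory turnout models.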
- Published
- 2020
- Full Text
- View/download PDF