1,204 results for "Debiasing"
Search Results
2. Debiasing Surgeon: Fantastic Weights and How to Find Them
- Author
-
Nahon, Rémi, De Moura Matos, Ivan Luiz, Nguyen, Van-Tam, Tartaglione, Enzo, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Simultaneous Unlearning of Multiple Protected User Attributes From Variational Autoencoder Recommenders Using Adversarial Training
- Author
-
Escobedo, Gustavo, Ganhör, Christian, Brandl, Stefan, Augstein, Mirjam, Schedl, Markus, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Bellogin, Alejandro, editor, Boratto, Ludovico, editor, Kleanthous, Styliani, editor, Lex, Elisabeth, editor, Malloci, Francesca Maridina, editor, and Marras, Mirko, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Longitudinal Impact of Preference Biases on Recommender Systems' Performance.
- Author
-
Zhou, Meizi, Zhang, Jingjing, and Adomavicius, Gediminas
- Abstract
Recommender systems are ubiquitous on various online platforms and provide significant value to users by helping them find relevant content/items to consume. After item consumption, users can often provide feedback (i.e., their preference ratings for the item) to the system. Research studies have shown that recommender systems' predictions that are observed by users can cause biases in users' postconsumption preference ratings. This can happen as part of standard, normal system use, where biases are typically caused by the system's inherent prediction errors (i.e., because of the less-than-perfect accuracy of recommendation methods). Because users' preference ratings are typically fed back to the system as training data for future predictions, this process is likely to influence the performance of the system in the long run. We use a simulation approach to study the longitudinal impact of preference biases (and their magnitude) on the dynamics of recommender systems' performance. Our simulation results show that preference biases significantly impair the system's prediction performance (i.e., prediction accuracy) as well as users' consumption outcomes (i.e., consumption relevance and diversity) over time. The impact is nonlinear in the size of the bias; that is, larger biases cause disproportionately large negative effects. Also, items that are less popular and less distinctive (in terms of their content) are affected more by preference biases. Furthermore, given the impact of preference bias on recommender systems' performance, we explore the problem of debiasing user-submitted ratings. We empirically demonstrate that relying solely on historical rating data is unlikely to be effective in debiasing. We also propose and evaluate two debiasing approaches that take into account additional relevant information that can be collected by recommendation platforms. Our findings provide important implications for the design of recommender systems. History: Olivia Liu Sheng, Senior Editor; Huimin Zhao, Associate Editor. Supplemental Material: The e-companion is available at https://doi.org/10.1287/isre.2021.0133. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
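The feedback loop described in the abstract above lends itself to a small simulation sketch. The following toy model is a hypothetical illustration under assumed dynamics, not the authors' simulation design: shown predictions anchor post-consumption ratings through a `bias_strength` parameter, and the biased ratings become the next round's training data.

```python
import numpy as np

def simulate_feedback_loop(bias_strength, n_users=300, n_items=100, rounds=25, seed=0):
    """Toy illustration (not the paper's model): predictions shown to users anchor
    their post-consumption ratings, and the biased ratings are fed back as training
    data for the next round of predictions."""
    rng = np.random.default_rng(seed)
    item_quality = rng.normal(3.5, 1.0, size=n_items)                 # latent item effect
    true_pref = item_quality + rng.normal(0, 0.5, size=(n_users, n_items))
    observed = np.full((n_users, n_items), np.nan)
    item_mean = np.full(n_items, 3.5)                                  # initial "predictions"

    for _ in range(rounds):
        # each user consumes one random item; the shown prediction biases the new rating
        users = np.arange(n_users)
        items = rng.integers(0, n_items, size=n_users)
        unbiased = true_pref[users, items] + rng.normal(0, 0.3, size=n_users)
        observed[users, items] = (1 - bias_strength) * unbiased + bias_strength * item_mean[items]
        # crude "recommender" retrained on the accumulated (possibly biased) feedback
        counts = np.sum(~np.isnan(observed), axis=0)
        item_mean = np.where(counts > 0, np.nansum(observed, axis=0) / np.maximum(counts, 1), 3.5)

    return float(np.sqrt(np.mean((item_mean[None, :] - true_pref) ** 2)))

print("prediction RMSE without preference bias:", simulate_feedback_loop(0.0))
print("prediction RMSE with strong bias (0.6): ", simulate_feedback_loop(0.6))
```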
5. Efficient Image-Space Shape Splatting for Monte Carlo Rendering.
- Author
-
Tong, Xiaochun and Hachisuka, Toshiya
- Subjects
MARKOV chain Monte Carlo, PIXELS
- Abstract
A typical Monte Carlo rendering method contributes one light path to only a single pixel at a time. Reusing light paths across multiple pixels, however, can amortize the cost and improve efficiency. The state of the art in path reuse employs shift mapping to reduce the cost of reuse, but its computation cost remains proportional to the number of pixels processed by shift mapping. We propose a general framework for efficiently reusing light paths across multiple pixels arranged in arbitrary two-dimensional shapes. A shape is defined as a set of multiple pixels, and the framework allows us to reuse light paths among pixels in a shape faster than simply evaluating all pixels via shift mapping. The key idea is to sparsely evaluate the contribution of shifted paths at random pixels within the shape and interpolate the contribution to the other pixels. We apply a debiasing estimator to ensure unbiasedness. Our method can be integrated with many existing rendering methods and brings consistent improvement over its single-pixel counterpart. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
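The "sparse evaluation plus debiasing" idea in the abstract above can be illustrated with a generic control-variate-style estimator. This sketch is my own construction under simplified assumptions, not the paper's shift-mapping estimator; `cheap_interp` and `exact_eval` are hypothetical placeholders for an interpolated and a fully evaluated shifted-path contribution.

```python
import random

def debiased_shape_estimate(pixels, cheap_interp, exact_eval, sample_prob=0.2, rng=None):
    """For each pixel p, return an estimate whose expectation equals exact_eval(p):
    estimate(p) = cheap_interp(p) + [p sampled] * (exact_eval(p) - cheap_interp(p)) / sample_prob.
    The residual term is added with probability sample_prob and scaled by 1/sample_prob,
    so the cheap interpolation is debiased in expectation (Russian-roulette style)."""
    rng = rng or random.Random(0)
    estimates = {}
    for p in pixels:
        value = cheap_interp(p)                       # cheap everywhere
        if rng.random() < sample_prob:                # exact only at sparsely sampled pixels
            value += (exact_eval(p) - cheap_interp(p)) / sample_prob
        estimates[p] = value
    return estimates

# toy check: the exact per-pixel contribution is p*p, the interpolation is off by a constant
pixels = range(100)
est = debiased_shape_estimate(pixels, cheap_interp=lambda p: p * p - 50.0, exact_eval=lambda p: p * p)
print(sum(est.values()) / len(est), sum(p * p for p in pixels) / len(pixels))  # close on average
```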
6. Correlation adjusted debiased Lasso: debiasing the Lasso with inaccurate covariate model.
- Author
-
Celentano, Michael and Montanari, Andrea
- Subjects
INFERENTIAL statistics, NUISANCES
- Abstract
We consider the problem of estimating a low-dimensional parameter in high-dimensional linear regression. Constructing an approximately unbiased estimate of the parameter of interest is a crucial step towards performing statistical inference. Several authors suggest orthogonalizing both the variable of interest and the outcome with respect to the nuisance variables, and then regressing the residual outcome on the residual variable. This is possible if the covariance structure of the regressors is perfectly known, or is sufficiently structured that it can be estimated accurately from data (e.g., the precision matrix is sufficiently sparse). Here we consider a regime in which the covariate model can only be estimated inaccurately, and hence existing debiasing approaches are not guaranteed to work. We propose the correlation adjusted debiased Lasso, which nearly eliminates this bias in some cases, including cases in which the estimation errors are neither negligible nor orthogonal. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
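For readers unfamiliar with the baseline the abstract above builds on, here is a minimal sketch of the classical debiased (desparsified) Lasso for a single coordinate, with a node-wise Lasso forming the decorrelation vector. It is not the correlation-adjusted estimator proposed in the paper, and the regularization levels are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def debiased_lasso_coord(X, y, j=0, lam=0.1, lam_node=0.1):
    """Classical one-coordinate debiased Lasso:
    theta_d = theta_lasso[j] + z'(y - X theta_lasso) / (z' X[:, j]),
    where z is the residual of a node-wise Lasso of X[:, j] on the remaining columns."""
    theta = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    X_rest = np.delete(X, j, axis=1)
    gamma = Lasso(alpha=lam_node, fit_intercept=False).fit(X_rest, X[:, j]).coef_
    z = X[:, j] - X_rest @ gamma                    # decorrelation vector
    theta_debiased = theta[j] + z @ (y - X @ theta) / (z @ X[:, j])
    return theta[j], theta_debiased

# toy example with a sparse ground truth
rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[0] = 1.0; beta[3] = -2.0
y = X @ beta + rng.normal(scale=0.5, size=n)
print(debiased_lasso_coord(X, y, j=0))              # Lasso estimate vs. debiased estimate
```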
7. Hindsight Bias in Forensic Mental Health Novices and Experts: An Exploratory Study.
- Author
-
Weber, Michael A., Albrecht, Joëlle N., Endrass, Jérôme, Humbel, Delia, Meier, Dominique R., Singh, Jay P., and Gerth, Juliane
- Subjects
-
*COGNITIVE bias, *CONSCIOUSNESS raising, *MENTAL health, *DECISION making, *RISK assessment
- Abstract
Decision-making processes are vulnerable to cognitive biases like hindsight bias, with particularly fateful consequences in forensic contexts. However, while debiasing strategies have been effective in various areas, their impact in forensics is underexplored. We investigated hindsight bias and a simple awareness-based debiasing strategy in novices (n = 52) and forensic professionals (n = 49). Participants were assigned to baseline, biased, or debiased conditions and rated an offender’s risk of re-offending using case vignettes. Significant hindsight bias was found in novices, but not in experts, who were also more aware of biases. Debiasing proved effective in novices, indicating that raising awareness may enhance equitable forensic decision-making. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. A Feasible Estimation of a "Corrected" EQ-5D Social Tariff.
- Author
-
Abellan-Perpiñan, Jose-Maria, Martinez-Perez, Jorge-Eduardo, Sanchez-Martinez, Fernando-Ignacio, and Pinto-Prades, Jose-Luis
- Subjects
-
*UTILITY theory, *DISCRIMINATION against overweight persons, *SOCIAL values, *TARIFF, *SAMPLE size (Statistics)
- Abstract
To demonstrate the feasibility of estimating a social tariff free of utility curvature and probability weighting biases and to test transferability between riskless and risky contexts. Valuations for a selection of EQ-5D-3L health states were collected from a large and representative sample (N = 1676) of the Spanish general population through computer-assisted personal interviewing. Two elicitation methods were used: the traditional time trade-off (TTO) and a novel risky-TTO procedure. Both methods are equivalent for better than death states, which allowed us to test transferability of utilities across riskless and risky contexts. Corrective procedures applied are based on rank-dependent utility theory, identifying parameter estimates at the individual level. All corrections are health-state specific, which is a unique feature of our corrective approach. Two corrected value sets for the EQ-5D-3L system are estimated, highlighting the feasibility of developing national tariffs under nonexpected utility theories, such as rank-dependent utility. Furthermore, transferability was not supported for at least half of the health states valued by our sample. It is feasible to estimate a social tariff by using interviewing techniques, sample sizes, and sample representativeness equivalent to prior studies designed to generate national value sets for the EQ-5D. Utilities obtained in distinct contexts may not be interchangeable. Our findings caution against routinely taking transferability of utility for granted.
• To date there is no national social tariff or value set fully corrected under nonexpected utility assumptions.
• This article presents 2 value sets for the EQ-5D-3L system estimated by applying corrections based on rank-dependent utility theory, identifying parameter estimates at individual level. All corrections done are health-state specific, which is a unique feature of our corrective approach. These findings highlight the feasibility of developing national tariffs under nonexpected utility theories.
• Our results suggest that utilities obtained in distinct contexts may not be interchangeable. Transferability across riskless and risky domains was not supported for at least half of the health states valued by our sample. Consequently, it seems that ensuring tariffs are not context dependent is as relevant and challenging as correcting value sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Conceptualizing Conspiratorial Thinking: Explicating Public Conspiracism for Effective Debiasing Strategy.
- Author
-
Kim, Jeong-Nam and Lee, Seungyoon
- Subjects
-
*PROBLEM solving, *SOCIAL institutions, *COMMUNICATION in management, *TRUST, *MANAGEMENT philosophy, *CONSPIRACY theories, *STRATEGIC communication
- Abstract
We use the situational theory of problem solving to explicate how publics engage in conspiratorial thinking as a form of cognitive problem solving. In doing so, we develop a new typology of conspiracy theories and introduce conceptual definitions and operationalizations of conspiratorial thinking as both dispositional and situational. We conduct two survey studies which provide evidence of the measures' reliability and validity. We also investigate the debiasing effects of relational and informational strategies employed by social institutions. Finally, we present institutional behavioral strategies, grounded in the strategic behavioral theory of communication management, that can reduce conspiratorial thinking. Our new concepts and measurement approaches describe the origins and processes of public conspiracism and provide a foundation for implementing ethical and effective interventions, ultimately contributing to a more informed and educated society. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Biased Calibration: Exacerbating Instead of Mitigating Entrepreneurial Overplacement with Reference Values.
- Author
-
Blaseg, Daniel and Schwienbacher, Armin
- Subjects
BUSINESSPEOPLE, COGNITIVE bias, REFERENCE values, SELF-evaluation, CROWD funding
- Abstract
Nascent entrepreneurs often believe that their chances of success are better than those of others due to imperfect information about the competencies and accomplishments of other entrepreneurs, leading to overplacement. Theory suggests that the provision of historical outcome data of comparable projects could help entrepreneurs develop more realistic plans and expectations by closing the information gap and enabling the calibration of their beliefs. However, effectively calibrating beliefs by incorporating new reference information requires effortful cognitive processing and rational integration of the data, which may be impeded by the same cognitive biases leading to overplacement initially. Drawing from a unique dataset that allows us to observe substantial parts of the planning process of 971 entrepreneurs, we investigate the effectiveness of providing reference values as a debiasing tool. Our findings suggest that, rather than rationally leveraging the information for honest self-assessment, entrepreneurs use it to differentiate themselves even more from the reference group after they see the historical values. This, in turn, results in an even higher level of overplacement. JEL Classifications: G41, D91, L26, L25. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Persian offensive language detection.
- Author
-
Kebriaei, Emad, Homayouni, Ali, Faraji, Roghayeh, Razavi, Armita, Shakery, Azadeh, Faili, Heshaam, and Yaghoobzadeh, Yadollah
- Subjects
PERSIAN language, LANGUAGE models, INVECTIVE, SOCIAL impact, SOCIAL networks
- Abstract
With the proliferation of social networks and their impact on human life, one of the growing problems in this environment is the spread of verbal and written insults and hatred. As one of the major platforms for distributing text-based content, Twitter frequently hosts its users' abusive remarks. The initial stage in recognizing objectionable phrases is building a model, which requires a comprehensive collection of offensive sentences. In addition, despite the abundance of resources in English and other languages, there are limited resources and studies on identifying hateful and offensive statements in Persian. In this study, we compiled a 38K-tweet dataset of Persian Hate and Offensive language using keyword-based data selection strategies. A Persian offensive lexicon and nine hatred target group lexicons were gathered through crowdsourcing for this purpose. The dataset was annotated manually so that each tweet was investigated by at least two annotators. In addition, to analyze the effect of the lexicons used on language model behavior, we employed two assessment criteria (FPED and pAUCED) to measure the dataset's potential bias. Then, by configuring the dataset based on the results of the bias measurement, we mitigated the effect of word bias in tweets on language model performance. The results indicate that bias is significantly diminished while the F1 score is reduced by less than one hundredth. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
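FPED, one of the bias criteria named in the abstract above, is commonly computed from per-group false positive rates on truly non-offensive examples. The sketch below assumes the "sum of absolute deviations from the overall FPR" form; the paper's exact definition and group lexicons may differ, and the toy data are invented.

```python
def false_positive_rate(labels, preds, indices):
    """FPR over the given indices, computed on truly non-offensive examples (label == 0)."""
    negatives = [i for i in indices if labels[i] == 0]
    if not negatives:
        return 0.0
    return sum(preds[i] == 1 for i in negatives) / len(negatives)

def fped(labels, preds, groups):
    """Assumed FPED variant: sum over identity groups of |overall FPR - group FPR|."""
    overall = false_positive_rate(labels, preds, range(len(labels)))
    return sum(
        abs(overall - false_positive_rate(labels, preds, [i for i, g in enumerate(groups) if g == group]))
        for group in set(groups)
    )

# toy example: a classifier that over-flags tweets mentioning identity group "b"
labels = [0, 0, 0, 0, 1, 0, 0, 1]   # 1 = offensive
preds  = [0, 1, 0, 1, 1, 0, 1, 1]
groups = ["a", "a", "a", "b", "b", "b", "b", "a"]
print(fped(labels, preds, groups))
```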
12. Making Alice Appear Like Bob: A Probabilistic Preference Obfuscation Method For Implicit Feedback Recommendation Models
- Author
-
Escobedo, Gustavo, Moscati, Marta, Muellner, Peter, Kopeinik, Simone, Kowald, Dominik, Lex, Elisabeth, Schedl, Markus, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bifet, Albert, editor, Davis, Jesse, editor, Krilavičius, Tomas, editor, Kull, Meelis, editor, Ntoutsi, Eirini, editor, and Žliobaitė, Indrė, editor
- Published
- 2024
- Full Text
- View/download PDF
13. Modular Debiasing of Latent User Representations in Prototype-Based Recommender Systems
- Author
-
Melchiorre, Alessandro B., Masoudian, Shahed, Kumar, Deepak, Schedl, Markus, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bifet, Albert, editor, Davis, Jesse, editor, Krilavičius, Tomas, editor, Kull, Meelis, editor, Ntoutsi, Eirini, editor, and Žliobaitė, Indrė, editor
- Published
- 2024
- Full Text
- View/download PDF
14. The Near-Miss Bias
- Author
-
Federspiel, Florian M., Dillon-Merrill, Robin, Seifert, Matthias, Rodriguez, Sofía, Price, Camille C., Series Editor, Zhu, Joe, Associate Editor, Hillier, Frederick S., Founding Editor, Borgonovo, Emanuele, Editorial Board Member, Nelson, Barry L., Editorial Board Member, Patty, Bruce W., Editorial Board Member, Pinedo, Michael, Editorial Board Member, Vanderbei, Robert J., Editorial Board Member, Federspiel, Florian M., editor, Montibeller, Gilberto, editor, Seifert, Matthias, editor, and Kleinmuntz, Don N., Foreword by
- Published
- 2024
- Full Text
- View/download PDF
15. El control de los sesgos cognitivos en el contexto jurídico procesal: medidas preventivas y correctivas y deberes de responsabilidad epistémica [Controlling cognitive biases in the procedural legal context: preventive and corrective measures and duties of epistemic responsibility].
- Author
-
Bustamante Requena, José Francisco
- Subjects
LEGAL procedure, LEGAL norms, LEGAL judgments, JUDICIAL process, FAIRNESS
- Published
- 2024
- Full Text
- View/download PDF
16. Off by 100% Bias: The Effects of Percentage Changes Greater than 100% on Magnitude Judgments and Consumer Choice.
- Author
-
Fisher, Matthew and Mormann, Milica
- Subjects
PERCENTILES, CONSUMER preferences, PREJUDICES, JUDGMENT (Psychology), MARKETING personnel, CONSUMER credit, SAVINGS, HEURISTIC
- Abstract
Percentage changes greater than 100% are frequently used in consumer contexts; for example, a cordless vacuum cleaner may boast "125% longer runtime" compared to competitors. Via six studies (n = 2,395) and 11 supplementary studies (n = 3,249), the current research shows that consumers systematically underestimate the magnitude of percentage changes greater than 100%. Specifically, many consumers apply the relative size usage (e.g. "125% of," equivalent to 25% more) instead of the appropriate relative change (e.g. "125% more," equivalent to 100% more + 25% more), which leads them to be off by exactly 100% in their magnitude estimates. The rate of bias decreases when the difference between these two usages is emphasized. The Off by 100% bias occurs across a variety of consumer contexts, influencing behavioral intentions and incentive-compatible choice. The findings make theoretical contributions to research on processing of percentages, probability versus frequency formats, and magnitude judgments. Finally, understanding how different presentation formats of the same information can lead to different magnitude judgments enables marketers and policymakers to ensure more effective communication. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
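The arithmetic behind the bias described above can be made concrete with a quick worked check (the baseline value is invented for illustration): "125% of" a baseline and "125% more than" a baseline differ by exactly 100% of that baseline.

```python
baseline = 40.0  # e.g., minutes of runtime for the competitor's vacuum

relative_size = 1.25 * baseline               # "125% of"   -> 50.0 (only 25% more)
relative_change = baseline + 1.25 * baseline  # "125% more" -> 90.0 (125% more)

print(relative_size, relative_change)                 # 50.0 90.0
print((relative_change - relative_size) / baseline)   # 1.0 -> off by exactly 100% of the baseline
```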
17. People see more of their biases in algorithms.
- Author
-
Celiktutan, Begum, Cadario, Romain, and Morewedge, Carey K.
- Subjects
-
*ALGORITHMIC bias, *RACE, *HUMAN research subjects, *PARTICIPANT observation
- Abstract
Algorithmic bias occurs when algorithms incorporate biases in the human decisions on which they are trained. We find that people see more of their biases (e.g., age, gender, race) in the decisions of algorithms than in their own decisions. Research participants saw more bias in the decisions of algorithms trained on their decisions than in their own decisions, even when those decisions were the same and participants were incentivized to reveal their true beliefs. By contrast, participants saw as much bias in the decisions of algorithms trained on their decisions as in the decisions of other participants and algorithms trained on the decisions of other participants. Cognitive psychological processes and motivated reasoning help explain why people see more of their biases in algorithms. Research participants most susceptible to bias blind spot were most likely to see more bias in algorithms than self. Participants were also more likely to perceive algorithms than themselves to have been influenced by irrelevant biasing attributes (e.g., race) but not by relevant attributes (e.g., user reviews). Because participants saw more of their biases in algorithms than themselves, they were more likely to make debiasing corrections to decisions attributed to an algorithm than to themselves. Our findings show that bias is more readily perceived in algorithms than in self and suggest how to use algorithms to reveal and correct biased human decisions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. A large-scale study and six-month follow-up of an intervention to reduce causal illusions in high school students
- Author
-
Naroa Martínez, Helena Matute, Fernando Blanco, and Itxaso Barberia
- Subjects
cognitive bias, science education, large-scale study, follow-up, causal illusion, debiasing, Science
Causal illusions consist of believing that there is a causal relationship between events that are actually unrelated. This bias is associated with pseudoscience, stereotypes and other unjustified beliefs. Thus, it seems important to develop educational interventions to reduce them. To our knowledge, the only debiasing intervention designed to be used at schools was developed by Barberia et al. (Barberia et al. 2013 PLoS One 8, e71303 (doi:10.1371/journal.pone.0071303)), focusing on base rates, control conditions and confounding variables. Their assessment used an active causal illusion task where participants could manipulate the candidate cause. The intervention reduced causal illusions in adolescents but was only tested in a small experimental project. The present research evaluated it in a large-scale project through a collaboration with the Spanish Foundation for Science and Technology (FECYT), and was conducted in schools to make it ecologically valid. It included a pilot study (n = 287), a large-scale implementation (n = 1668; 40 schools) and a six-month follow-up (n = 353). Results showed medium-to-large and long-lasting effects on the reduction of causal illusions. To our knowledge, this is the first research showing the efficacy and long-term effects of a debiasing intervention against causal illusions that can be used on a large scale through the educational system.
- Published
- 2024
- Full Text
- View/download PDF
19. Easy-fix attentional focus manipulation boosts the intuitive and deliberate use of base-rate information
- Author
-
Boissin, Esther, Caparos, Serge, Abi Hana, John, Bernard, Cyann, and De Neys, Wim
- Published
- 2024
- Full Text
- View/download PDF
20. A Non‐Intrusive Machine Learning Framework for Debiasing Long‐Time Coarse Resolution Climate Simulations and Quantifying Rare Events Statistics.
- Author
-
Barthel Sorensen, B., Charalampopoulos, A., Zhang, S., Harrop, B. E., Leung, L. R., and Sapsis, T. P.
- Subjects
-
*MACHINE learning, *EXTREME weather, *ATMOSPHERIC rivers, *ATMOSPHERIC models, *STATISTICS
- Abstract
Due to the rapidly changing climate, the frequency and severity of extreme weather is expected to increase over the coming decades. As fully‐resolved climate simulations remain computationally intractable, policy makers must rely on coarse‐models to quantify risk for extremes. However, coarse models suffer from inherent bias due to the ignored "sub‐grid" scales. We propose a framework to non‐intrusively debias coarse‐resolution climate predictions using neural‐network (NN) correction operators. Previous efforts have attempted to train such operators using loss functions that match statistics. However, this approach falls short with events that have longer return period than that of the training data, since the reference statistics have not converged. Here, the scope is to formulate a learning method that allows for correction of dynamics and quantification of extreme events with longer return period than the training data. The key obstacle is the chaotic nature of the underlying dynamics. To overcome this challenge, we introduce a dynamical systems approach where the correction operator is trained using reference data and a coarse model simulation nudged toward that reference. The method is demonstrated on debiasing an under‐resolved quasi‐geostrophic model and the Energy Exascale Earth System Model (E3SM). For the former, our method enables the quantification of events that have return period two orders longer than the training data. For the latter, when trained on 8 years of ERA5 data, our approach is able to correct the coarse E3SM output to closely reflect the 36‐year ERA5 statistics for all prognostic variables and significantly reduce their spatial biases. Plain Language Summary: We present a general framework to design machine learned correction operators to improve the predicted statistics of low‐resolution climate simulations. We illustrate the approach, which acts on existing data in a post‐processing manner, on a simplified prototype climate model as well as a realistic climate model, namely the Energy Exascale Earth System Model (E3SM) with 110 km resolution. For the latter, we show that the developed approach is able to correct the low‐resolution E3SM output to closely reflect the climate statistics of historical observations as quantified by the ERA5 data set. We also demonstrate that our model significantly improves the prediction of atmospheric rivers, an example of extreme weather events resolvable by the low resolution model. Key Points: (a) development of non‐intrusive correction operators for coarse scale climate simulations; (b) design of a training procedure that improves dynamics and allows for characterization of extremes with return period longer than the training data; (c) application to the Energy Exascale Earth System Model and demonstration of improvement on global and regional statistics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
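The nudging-based training idea in the abstract above can be sketched on a toy one-dimensional system. This is purely illustrative (not E3SM and not the authors' code), with a cubic polynomial standing in for the neural-network correction operator and an invented "sub-grid" term playing the role of the missing physics.

```python
import numpy as np

# toy 1-D "climate": the reference dynamics contain a term the coarse model ignores
def f_ref(x):    return -x + np.sin(2.0 * x)   # "truth" (resolved + sub-grid tendency)
def f_coarse(x): return -x                     # coarse model missing the sub-grid term

dt, tau, steps, sigma = 0.01, 0.1, 20000, 0.5
rng = np.random.default_rng(0)

# 1) reference trajectory (stochastic forcing keeps it exploring a range of states)
x_ref = np.empty(steps)
x_ref[0] = 1.0
for t in range(steps - 1):
    x_ref[t + 1] = x_ref[t] + dt * f_ref(x_ref[t]) + sigma * np.sqrt(dt) * rng.normal()

# 2) run the coarse model nudged toward the reference; record state and nudging tendency
x, states, tendencies = x_ref[0], [], []
for t in range(steps - 1):
    nudge = (x_ref[t] - x) / tau
    states.append(x)
    tendencies.append(nudge)
    x = x + dt * (f_coarse(x) + nudge)

# 3) "train" the correction operator on state -> nudging tendency
#    (a cubic polynomial fit stands in for the neural network used in the paper)
correction = np.poly1d(np.polyfit(states, tendencies, deg=3))

# the learned correction should roughly recover the missing sin(2x) term
print("learned correction at x=1.0:", correction(1.0), "  true missing term:", np.sin(2.0))
```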
21. Discounting constituent attitudes: motivated reasoning, ambiguity, and policymaker perceptions of constituent characteristics.
- Author
-
Bergan, Daniel E, Shulman, Hillary C, and Carnahan, Dustin
- Subjects
-
*EXPERT evidence, *AMBIGUITY, *RESEARCH personnel, *ATTITUDE (Psychology)
- Abstract
In experimental work, researchers have found that policymakers discount the opinions of constituents with whom they disagree. We build on these results with a national sample of local policymakers in the United States, exploring whether communicators can prevent policymakers from discounting their opinions by providing evidence of their own knowledge about a topic. We find that policymakers discount the opinions of hypothetical constituents with whom they disagree, but there is evidence that providing unambiguous evidence about a letter-writer's positive traits can reduce this discounting. We conclude with a discussion of implications for theory as well as practical implications for communicating with policymakers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Artificial intelligence and judicial decision-making: Evaluating the role of AI in debiasing.
- Author
-
Lopes, Giovana
- Subjects
ARTIFICIAL intelligence, JUDICIAL process, RISK assessment, DECISION making, FAIR trial
- Published
- 2024
- Full Text
- View/download PDF
23. Adieu Bias: Debiasing Intuitions Among French Speakers
- Author
-
Nina Franiatte, Esther Boissin, Alexandra Delmas, and Wim De Neys
- Subjects
reasoning, heuristics and biases, debiasing, intuition, French, Psychology, BF1-990
Recent debiasing studies have shown that a short, plain-English explanation of the correct solution strategy can improve reasoning performance. However, these studies have predominantly focused on English-speaking populations, who were tested with problem contents designed for an English-speaking test environment. Here we explore whether the key findings of previous debiasing studies can be extended to native French speakers living in continental Europe (France). We ran a training session with a battery of three reasoning tasks (i.e., base-rate neglect, conjunction fallacy, and bat-and-ball) on 147 native French speakers. We used a two-response paradigm in which participants first gave an initial intuitive response, under time pressure and cognitive load, and then gave a final response after deliberation. Results showed a clear training effect, as early as the initial (intuitive) stage. Immediately after training, most participants solved the problems correctly, without the need for a deliberation process. The findings confirm that the intuitive debiasing training effect extends to native French speakers.
- Published
- 2024
- Full Text
- View/download PDF
24. Dealing with Biases in Emergency Medicine
- Author
-
Ginsburg, Joshua, Olympia, Robert P., editor, Werley, Elizabeth Barrall, editor, Lubin, Jeffrey S., editor, and Yoon-Flannery, Kahyun, editor
- Published
- 2023
- Full Text
- View/download PDF
25. Debiasing Counterfactuals in the Presence of Spurious Correlations
- Author
-
Kumar, Amar, Fathi, Nima, Mehta, Raghav, Nichyporuk, Brennan, Falet, Jean-Pierre R., Tsaftaris, Sotirios, Arbel, Tal, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wesarg, Stefan, editor, Puyol Antón, Esther, editor, Baxter, John S. H., editor, Erdt, Marius, editor, Drechsler, Klaus, editor, Oyarzun Laura, Cristina, editor, Freiman, Moti, editor, Chen, Yufei, editor, Rekik, Islem, editor, Eagleson, Roy, editor, Feragen, Aasa, editor, King, Andrew P., editor, Cheplygina, Veronika, editor, Ganz-Benjaminsen, Melani, editor, Ferrante, Enzo, editor, Glocker, Ben, editor, Moyer, Daniel, editor, and Petersen, Eikel, editor
- Published
- 2023
- Full Text
- View/download PDF
26. EPVT: Environment-Aware Prompt Vision Transformer for Domain Generalization in Skin Lesion Recognition
- Author
-
Yan, Siyuan, Liu, Chi, Yu, Zhen, Ju, Lie, Mahapatra, Dwarikanath, Mar, Victoria, Janda, Monika, Soyer, Peter, Ge, Zongyuan, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Greenspan, Hayit, editor, Madabhushi, Anant, editor, Mousavi, Parvin, editor, Salcudean, Septimiu, editor, Duncan, James, editor, Syeda-Mahmood, Tanveer, editor, and Taylor, Russell, editor
- Published
- 2023
- Full Text
- View/download PDF
27. Targeting the Source: Selective Data Curation for Debiasing NLP Models
- Author
-
Gaci, Yacine, Benatallah, Boualem, Casati, Fabio, Benabdeslem, Khalid, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Koutra, Danai, editor, Plant, Claudia, editor, Gomez Rodriguez, Manuel, editor, Baralis, Elena, editor, and Bonchi, Francesco, editor
- Published
- 2023
- Full Text
- View/download PDF
28. A Decision Support System Including Feedback to Sensitize for Certainty Interval Size
- Author
-
Balla, Nathalie, Barbosa-Povoa, Ana Paula, Editorial Board Member, de Almeida, Adiel Teixeira, Editorial Board Member, Gans, Noah, Editorial Board Member, Gupta, Jatinder N. D., Editorial Board Member, Heim, Gregory R., Editorial Board Member, Hua, Guowei, Editorial Board Member, Kimms, Alf, Editorial Board Member, Li, Xiang, Editorial Board Member, Masri, Hatem, Editorial Board Member, Nickel, Stefan, Editorial Board Member, Qiu, Robin, Editorial Board Member, Shankar, Ravi, Editorial Board Member, Slowiński, Roman, Editorial Board Member, Tang, Christopher S., Editorial Board Member, Wu, Yuzhe, Editorial Board Member, Zhu, Joe, Editorial Board Member, Zopounidis, Constantin, Editorial Board Member, Grothe, Oliver, editor, Rebennack, Steffen, editor, and Stein, Oliver, editor
- Published
- 2023
- Full Text
- View/download PDF
29. Uncertain yet Rational - Uncertainty as an Evaluation Measure of Rational Privacy Decision-Making in Conversational AI
- Author
-
Leschanowsky, Anna, Popp, Birgit, Peters, Nils, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Salvendy, Gavriel, editor, and Wei, June, editor
- Published
- 2023
- Full Text
- View/download PDF
30. Forecasting in Organizations: Reinterpreting Collective Judgment Through Mindful Organizing
- Author
-
Montes, Efrain Rosemberg, Price, Camille C., Series Editor, Zhu, Joe, Associate Editor, Hillier, Frederick S., Founding Editor, Borgonovo, Emanuele, Editorial Board Member, Nelson, Barry L., Editorial Board Member, Patty, Bruce W., Editorial Board Member, Pinedo, Michael, Editorial Board Member, Vanderbei, Robert J., Editorial Board Member, and Seifert, Matthias, editor
- Published
- 2023
- Full Text
- View/download PDF
31. A²: Adaptive Augmentation for Effectively Mitigating Dataset Bias
- Author
-
An, Jaeju, Kim, Taejune, Ko, Donggeun, Lee, Sangyup, Woo, Simon S., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wang, Lei, editor, Gall, Juergen, editor, Chin, Tat-Jun, editor, Sato, Imari, editor, and Chellappa, Rama, editor
- Published
- 2023
- Full Text
- View/download PDF
32. Artifact-Based Domain Generalization of Skin Lesion Models
- Author
-
Bissoto, Alceu, Barata, Catarina, Valle, Eduardo, Avila, Sandra, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Karlinsky, Leonid, editor, Michaeli, Tomer, editor, and Nishino, Ko, editor
- Published
- 2023
- Full Text
- View/download PDF
33. Analysis and Mitigation of Religion Bias in Indonesian Natural Language Processing Datasets
- Author
-
Muhammad Arief Fauzan and Ari Saptawijaya
- Subjects
natural language processing, Indonesian NLP, social bias, debiasing, Systems engineering, TA168, Information technology, T58.5-58.64
- Abstract
Previous studies have shown the existence of misrepresentation regarding various religious identities in Indonesian media. Misrepresentations of other marginalized identities in natural language processing (NLP) datasets have been recorded to inflict harm against such marginalized identities in cases such as automated content moderation, and as such must be mitigated. In this paper, we analyze, for the first time, several Indonesian NLP datasets to see whether they contain unwanted bias and the effects of debiasing on them. We find that two of the three data sets analyzed in this study contain unwanted bias, whose effects trickle down to downstream performance in the form of allocation and representation harm. The results of debiasing at the dataset level, as a response to the biases previously discovered, are consistently positive for the respective dataset. However, depending on the data set and embedding used to train the model, they vary greatly at the downstream performance level. In particular, the same debiasing technique can decrease bias on a combination of datasets and embedding, yet increase bias on another, particularly in the case of representation harm.
- Published
- 2023
- Full Text
- View/download PDF
34. A Non‐Intrusive Machine Learning Framework for Debiasing Long‐Time Coarse Resolution Climate Simulations and Quantifying Rare Events Statistics
- Author
-
B. Barthel Sorensen, A. Charalampopoulos, S. Zhang, B. E. Harrop, L. R. Leung, and T. P. Sapsis
- Subjects
climate modeling, extreme event statistics, debiasing, nudging, machine learning, Physical geography, GB3-5030, Oceanography, GC1-1581
Abstract Due to the rapidly changing climate, the frequency and severity of extreme weather is expected to increase over the coming decades. As fully‐resolved climate simulations remain computationally intractable, policy makers must rely on coarse‐models to quantify risk for extremes. However, coarse models suffer from inherent bias due to the ignored “sub‐grid” scales. We propose a framework to non‐intrusively debias coarse‐resolution climate predictions using neural‐network (NN) correction operators. Previous efforts have attempted to train such operators using loss functions that match statistics. However, this approach falls short with events that have longer return period than that of the training data, since the reference statistics have not converged. Here, the scope is to formulate a learning method that allows for correction of dynamics and quantification of extreme events with longer return period than the training data. The key obstacle is the chaotic nature of the underlying dynamics. To overcome this challenge, we introduce a dynamical systems approach where the correction operator is trained using reference data and a coarse model simulation nudged toward that reference. The method is demonstrated on debiasing an under‐resolved quasi‐geostrophic model and the Energy Exascale Earth System Model (E3SM). For the former, our method enables the quantification of events that have return period two orders longer than the training data. For the latter, when trained on 8 years of ERA5 data, our approach is able to correct the coarse E3SM output to closely reflect the 36‐year ERA5 statistics for all prognostic variables and significantly reduce their spatial biases.
- Published
- 2024
- Full Text
- View/download PDF
35. Artificial intelligence and judicial decision-making: Evaluating the role of AI in debiasing
- Author
-
Giovana Lopes
- Subjects
judicial decision-making, judicial biases, artificial intelligence, risk assessment, debiasing, Social sciences (General), H1-99, Technology (General), T1-995
As arbiters of law and fact, judges are supposed to decide cases impartially, basing their decisions on authoritative legal sources and not being influenced by irrelevant factors. Empirical evidence, however, shows that judges are often influenced by implicit biases, which can affect the impartiality of their judgment and pose a threat to the right to a fair trial. In recent years, artificial intelligence (AI) has been increasingly used for a variety of applications in the public domain, often with the promise of being more accurate and objective than biased human decision-makers. Given this backdrop, this research article identifies how AI is being deployed by courts, mainly as decision-support tools for judges. It assesses the potential and limitations of these tools, focusing on their use for risk assessment. Further, the article shows how AI can be used as a debiasing tool, i. e., to detect patterns of bias in judicial decisions, allowing for corrective measures to be taken. Finally, it assesses the mechanisms and benefits of such use.
- Published
- 2024
- Full Text
- View/download PDF
36. A debiasing intervention to reduce the causality bias in undergraduates: the role of a bias induction phase.
- Author
-
Martínez, Naroa, Rodríguez-Ferreiro, Javier, Barberia, Itxaso, and Matute, Helena
- Subjects
RADICALISM, JUDGMENT (Psychology), PSEUDOSCIENCE, UNDERGRADUATES, COGNITIVE bias
- Abstract
The causality bias, or causal illusion, occurs when people believe that there is a causal relationship between events that are actually uncorrelated. This bias is associated with many problems in everyday life, including pseudoscience, stereotypes, prejudices, and ideological extremism. Some evidence-based educational interventions have been developed to reduce causal illusions. To the best of our knowledge, these interventions have included a bias induction phase prior to the training phase, but the role of this bias induction phase has not yet been investigated. The aim of the present research was to examine it. Participants were randomly assigned to one of three groups (induction + training, training, and control, as a function of the phases they received before assessment). We evaluated their causal illusion using a standard contingency judgment task. In a null contingency scenario, the causal illusion was reduced in the training and induction-training groups as compared to the control group, suggesting that the intervention was effective regardless of whether or not the induction phase was included. In addition, in a positive contingency scenario, the induction + training group generated lower causal judgments than the control group, indicating that sometimes the induction phase may produce an increase in general skepticism. The raw data of this experiment are available at the Open Science Framework at https://osf.io/k9nes/ [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. Affective debiasing: Focusing on emotion during consumption attenuates attribute framing effects.
- Author
-
Poor, Morgan and Isaac, Mathew S.
- Subjects
FRAMES (Social sciences), AFFECT (Psychology), EMOTIONS, THEORY-practice relationship
- Abstract
One of the most pervasive findings in attribute framing research is the valence consistent shift; that is, positively valenced frames (e.g., 95% natural ingredients) are preferred over semantically equivalent but negatively valenced frames (e.g., 5% artificial ingredients). Despite the robustness of this finding, it has primarily been observed in judgments of prospective or hypothetical consumption. When valenced frames are presented during or immediately prior to an actual consumption experience, evidence for the valence consistent shift is weaker and less conclusive. In the present research, we propose and show that individuals' susceptibility to a valenced frame encountered around the time of a related consumption experience depends on whether they focus primarily on their cognitions or their emotions during the experience. Specifically, five experiments provide evidence that the valence consistent shift is attenuated in visual, auditory, and (simulated) gustatory consumption contexts when individuals are prompted to rely more on affective (vs. cognitive) inputs. Implications for both theory and practice are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Statistical inference and large-scale multiple testing for high-dimensional regression models.
- Author
-
Cai, T. Tony, Guo, Zijian, and Xia, Yin
- Abstract
This paper presents a selective survey of recent developments in statistical inference and multiple testing for high-dimensional regression models, including linear and logistic regression. We examine the construction of confidence intervals and hypothesis tests for various low-dimensional objectives such as regression coefficients and linear and quadratic functionals. The key technique is to generate debiased and desparsified estimators for the targeted low-dimensional objectives and estimate their uncertainty. In addition to covering the motivations for and intuitions behind these statistical methods, we also discuss their optimality and adaptivity in the context of high-dimensional inference. We further review recent developments in statistical inference based on multiple regression models and advances in large-scale multiple testing for high-dimensional regression. The R package SIHR implements some of the high-dimensional inference methods discussed in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. Marketers Project Their Personal Preferences onto Consumers: Overcoming the Threat of Egocentric Decision Making.
- Author
-
Herzog, Walter, Hattula, Johannes D., and Dahl, Darren W.
- Subjects
CONSUMER preferences, DECISION making, MARKETING, PILOT projects, CONSENSUS (Social sciences)
- Abstract
This research explores how marketers can avoid the so-called "false consensus effect"—the egocentric tendency to project personal preferences onto consumers. Two pilot studies show that most marketers have a surprisingly strong lay intuition about the existence of this inference bias, admit that they are frequently affected by it, and try to avoid it when predicting consumer preferences. Moreover, the pilot studies indicate that most marketers use a very natural and straightforward approach to avoid the false consensus effect in practice, that is, they simply try to "suppress" (i.e., ignore) their personal preferences when predicting consumer preferences. Ironically, four subsequent studies show that this frequently used tactic can backfire and increase marketers' susceptibility to the false consensus effect. Specifically, the results suggest that these backfire effects are most likely to occur for marketers with a low level of preference certainty. In contrast, the results imply that preference suppression does not backfire but instead decreases the false consensus effect for marketers with a high level of preference certainty. Finally, the studies explore the mechanism behind these results and show how marketers can ultimately avoid the false consensus effect—regardless of their level of preference certainty and without risking backfire effects. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
40. From Waste to Taste: How "Ugly" Labels Can Increase Purchase of Unattractive Produce.
- Author
-
Mookerjee, Siddhanth (Sid), Cornil, Yann, and Hoegg, JoAndrea
- Subjects
LABEL design, FOOD production, FOOD industry, FOOD labeling, DISCOUNT prices, FOOD industrial waste, IMPERFECTION, FOOD presentation
- Abstract
Food producers and retailers throw away large amounts of perfectly edible produce that fails to meet appearance standards, contributing to the environmental issue of food waste. The authors examine why consumers discard aesthetically unattractive produce, and they test a low-cost, easy-to-implement solution: emphasizing the produce's aesthetic flaw through "ugly" labeling (e.g., labeling cucumbers with cosmetic defects "Ugly Cucumbers" on store displays or advertising). Seven experiments, including two conducted in the field, demonstrate that "ugly" labeling corrects for consumers' biased expectations regarding key attributes of unattractive produce—particularly tastiness—and thus increases purchase likelihood. "Ugly" labeling is most effective when associated with moderate (rather than steep) price discounts. Against managers' intuition, it is also more effective than alternative labeling that does not exclusively point out the aesthetic flaw, such as "imperfect" labeling. This research provides clear managerial recommendations on the labeling and the pricing of unattractive produce while addressing the issue of food waste. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
41. Bias versus error: why projects fall short
- Author
-
Ika, Lavagnon, Pinto, Jeffrey K., Love, Peter E.D., and Pache, Gilles
- Published
- 2023
- Full Text
- View/download PDF
42. Expanding Nature's storytelling: extended reality and debiasing strategies for an eco-agency.
- Author
-
Reis, Cristina M. and Câmara, António
- Subjects
STORYTELLING, COGNITIVE bias, SHARED virtual environments, AUGMENTED reality, SOCIAL dynamics
- Abstract
Communication in sustainability and environmental sciences is primed to be substantially changed by extended reality technology, as the emergent Metaverse gives momentum to building an urgent pro-environmental mindset. Our work focuses on immersive econarratives, supported by virtual and augmented realities, and their potential to favor an improved relationship with the environment. Considering social aggregation dynamics and cognitive bias, this article intends to (1) make the case for a new environmental narrative; (2) position extended reality as a privileged setting to sustain this narrative; and (3) suggest that this storytelling should be informed by Nature's empirical evidence, i.e., ecosystem data. We see this as a chance to shape this Metaverse with an embedded environmental consciousness, informed by behavior-change research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. Content preserving image translation with texture co-occurrence and spatial self-similarity for texture debiasing and domain adaptation.
- Author
-
Kang, Myeongkyun, Won, Dongkyu, Luna, Miguel, Chikontwe, Philip, Hong, Kyung Soo, Ahn, June Hong, and Park, Sang Hyun
- Subjects
-
*CLASSIFICATION
- Abstract
Models trained on datasets with texture bias usually perform poorly on out-of-distribution samples since biased representations are embedded into the model. Recently, various image translation and debiasing methods have attempted to disentangle texture biased representations for downstream tasks, but accurately discarding biased features without altering other relevant information is still challenging. In this paper, we propose a novel framework that leverages image translation to generate additional training images using the content of a source image and the texture of a target image with a different bias property to explicitly mitigate texture bias when training a model on a target task. Our model ensures texture similarity between the target and generated images via a texture co-occurrence loss while preserving content details from source images with a spatial self-similarity loss. Both the generated and original training images are combined to train improved classification or segmentation models robust to inconsistent texture bias. Evaluation on five classification- and two segmentation-datasets with known texture biases demonstrates the utility of our method, and reports significant improvements over recent state-of-the-art methods in all cases. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
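The two loss terms named in the abstract above can be approximated in spirit with standard building blocks. The sketch below is not the paper's formulation: a Gram-matrix style loss is substituted for the texture co-occurrence term, the self-similarity term compares pairwise cosine-similarity maps, and random tensors stand in for encoder feature maps.

```python
import torch
import torch.nn.functional as F

def texture_loss(feat_gen, feat_target):
    """Stand-in texture statistic: match Gram matrices of feature maps
    (a classic style loss, used here instead of the paper's co-occurrence histogram)."""
    def gram(f):                                   # f: (B, C, H, W) -> (B, C, C)
        b, c, h, w = f.shape
        f = f.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)
    return F.mse_loss(gram(feat_gen), gram(feat_target))

def self_similarity_loss(feat_gen, feat_source):
    """Spatial self-similarity: preserve the pattern of pairwise cosine similarities
    between spatial locations, a texture-insensitive proxy for content layout."""
    def sim(f):                                    # f: (B, C, H, W) -> (B, HW, HW)
        b, c, h, w = f.shape
        f = F.normalize(f.reshape(b, c, h * w), dim=1)
        return f.transpose(1, 2) @ f
    return F.mse_loss(sim(feat_gen), sim(feat_source))

# toy usage with random "feature maps" standing in for encoder activations
gen, src, tgt = (torch.randn(2, 64, 16, 16) for _ in range(3))
total = texture_loss(gen, tgt) + 0.5 * self_similarity_loss(gen, src)
print(total.item())
```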
44. Cognitive biases in internal medicine: a scoping review.
- Author
-
Loncharich, Michael F., Robbins, Rachel C., Durning, Steven J., Soh, Michael, and Merkebu, Jerusalem
- Subjects
-
*COGNITIVE bias, *CONFIRMATION bias, *INTERNAL medicine, *MEDICAL logic, *MEDICAL errors
- Abstract
Medical errors account for up to 440,000 deaths annually, and cognitive errors outpace knowledge deficits as causes of error. Cognitive biases are predispositions to respond in predictable ways, and they don't always result in error. We conducted a scoping review exploring which biases are most prevalent in Internal Medicine (IM), if and how they influence patient outcomes, and what, if any, debiasing strategies are effective. We searched PubMed, OVID, ERIC, SCOPUS, PsychINFO, and CINAHL. Search terms included variations of "bias", "clinical reasoning", and IM subspecialties. Inclusion criteria were: discussing bias, clinical reasoning, and physician participants. Fifteen of 334 identified papers were included. Two papers looked beyond general IM: one each in Infectious Diseases and Critical Care. Nine papers distinguished bias from error, whereas four referenced error in their definition of bias. The most commonly studied outcomes were diagnosis, treatment, and physician impact in 47 % (7), 33 % (5), and 27 % (4) of studies, respectively. Three studies directly assessed patient outcomes. The most commonly cited biases were availability bias (60 %, 9), confirmation bias (40 %, 6), anchoring (40 %, 6), and premature closure (33 %, 5). Proposed contributing features were years of practice, stressors, and practice setting. One study found that years of practice negatively correlated with susceptibility to bias. Ten studies discussed debiasing; all reported weak or equivocal efficacy. We found 41 biases in IM and 22 features that may predispose physicians to bias. We found little evidence directly linking biases to error, which could account for the weak evidence of bias countermeasure efficacy. Future study clearly delineating bias from error and directly assessing clinical outcomes would be insightful. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
45. Political polarization: a curse of knowledge?.
- Author
-
Beattie, Peter and Beattie, Marguerite
- Subjects
POLARIZATION (Social sciences), POLITICAL attitudes, OUTGROUPS (Social groups), ATTRIBUTION of news, PARTISANSHIP, EXPERIMENTAL design
- Abstract
Purpose: Could the curse of knowledge influence how antagonized we are towards political outgroups? Do we assume others know what we know but still disagree with us? This research investigates how the curse of knowledge may affect us politically, i.e., be a cause of political polarization. Background: Research on the curse of knowledge has shown that even when people are incentivized to act as if others do not know what they know, they are still influenced by the knowledge they have. Methods: This research comprises five studies using experimental and non-experimental, within- and between-subjects survey designs, each with samples of 152–1,048. Results: Partisans on both sides overestimate the extent to which stories from their news sources were familiar to contrapartisans. Introducing novel, unknown facts to support their political opinion made participants rate political outgroup members more negatively. In an experimental design, there was no difference in judging an opponent who did not know the same issue-relevant facts and someone who did know the same facts. However, when asked to compare those who know to those who do not, participants judged those who do not know more favorably, and their ratings of all issue-opponents were closer to those issue-opponents who shared the same knowledge. In a debiasing experiment, those who received an epistemological treatment judged someone who disagreed more favorably. Conclusion: This research provides evidence that the curse of knowledge may be a contributing cause of affective political polarization. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
46. Debiasing Causal Inferences: Over and Beyond Suboptimal Sampling.
- Author
-
Rodríguez-Ferreiro, Javier, Vadillo, Miguel A., and Barberia, Itxaso
- Subjects
-
*CAUSAL inference, *ACTIVE learning, *CRITICAL thinking, *COGNITIVE bias, *PSYCHOLOGY, *CONTROL groups
- Abstract
Background: We have previously presented two educational interventions aimed to diminish causal illusions and promote critical thinking. In both cases, these interventions reduced causal illusions developed in response to active contingency learning tasks, in which participants were able to decide whether to introduce the potential cause in each of the learning trials. The reduction of causal judgments appeared to be influenced by differences in the frequency with which the participants decided to apply the potential cause, hence indicating that the intervention affected their information sampling strategies. Objective: In the present study, we investigated whether one of these interventions also reduces causal illusions when covariation information is acquired passively. Method: Forty-one psychology undergraduates received our debiasing intervention, while 31 students were assigned to a control condition. All participants completed a passive contingency learning task. Results: We found weaker causal illusions in students who participated in the debiasing intervention, compared to the control group. Conclusion: The intervention affects not only the way participants look for new evidence, but also the way they interpret given information. Teaching implications: Our data extend previous results on evidence-based educational interventions aimed at promoting critical thinking to situations in which we act as mere observers. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
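The entry above describes its intervention only narratively. As a reading aid, the following minimal Python sketch shows how causal illusions are commonly quantified in a passive contingency learning task of this kind: the normative contingency is the Delta-P index, and the illusion is the causal judgment in excess of it. The Trial structure, function names, and numbers are illustrative assumptions, not the authors' materials.

```python
# Minimal sketch (not the authors' materials): quantifying a causal illusion
# in a passive contingency learning task. Each trial records whether the
# potential cause was present and whether the outcome occurred; the normative
# contingency is Delta P.
from dataclasses import dataclass

@dataclass
class Trial:
    cause_present: bool
    outcome_present: bool

def delta_p(trials: list[Trial]) -> float:
    """Delta P = P(outcome | cause) - P(outcome | no cause)."""
    with_cause = [t for t in trials if t.cause_present]
    without_cause = [t for t in trials if not t.cause_present]
    p_o_given_c = sum(t.outcome_present for t in with_cause) / len(with_cause)
    p_o_given_not_c = sum(t.outcome_present for t in without_cause) / len(without_cause)
    return p_o_given_c - p_o_given_not_c

def illusion_score(judgment_0_100: float, trials: list[Trial]) -> float:
    """Causal illusion: judged causal strength in excess of the actual contingency."""
    return judgment_0_100 / 100.0 - delta_p(trials)

# Example: a null-contingency task (outcome equally likely with or without the cause).
trials = [Trial(True, True)] * 6 + [Trial(True, False)] * 2 + \
         [Trial(False, True)] * 6 + [Trial(False, False)] * 2
print(delta_p(trials))             # 0.0 -> no real causal relation
print(illusion_score(70, trials))  # 0.7 -> a sizable causal illusion
```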
47. Towards an integrated debiasing framework for consumer financial decisions: A reflection on debiasing research.
- Author
-
Jugnandan, Shreeya and Willows, Gizelle D.
- Subjects
CONSUMERS ,DECISION support systems ,DECISION making in investments ,COGNITIVE bias ,CONSUMER behavior - Abstract
Consumers are subject to cognitive biases, which impede the rationality of their financial decisions. This is problematic given the onus on individuals to make their own investment and savings decisions, so there is an impetus for research to identify mitigation strategies. This qualitative review surveys the debiasing literature to identify the prevalent debiasing approaches and proposes an integrated debiasing model. The identified core debiasing strategies (education and training, decision support systems, information aspects, experience, and financial advice) are organized and integrated into a single model using the 'Antecedents, Decisions, Outcomes' format developed by Paul and Benito. We also propose an agenda for future debiasing research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
48. Dynamic Debiasing Network for Visual Commonsense Generation
- Author
-
Jungeun Kim, Jinwoo Park, Jaekwang Seok, and Junyeong Kim
- Subjects
Multimodal reasoning ,visual commonsense generation ,VisualCOMET ,dataset bias ,debiasing ,causal inference ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
The task of Visual Commonsense Generation (VCG) delves into the deeper narrative behind a static image, aiming to comprehend not just its immediate content but also the surrounding context. The VCG model generates three types of captions for each image: 1) the events preceding the image, 2) the characters' current intents, and 3) the anticipated subsequent events. However, a significant challenge in VCG research is the prevalent yet under-addressed issue of dataset bias, which can result in spurious correlations during model training. This occurs when a model, influenced by biased data, infers associations that frequently appear in the dataset but may not provide accurate or contextually appropriate interpretations. The issue becomes even more complex in multimodal tasks, where different types of data, such as text and images, bring their own unique biases. When these modalities are combined as inputs to a model, one modality might exhibit a stronger bias than the others. To address this, we introduce the Dynamic Debiasing Network (DDNet) for Visual Commonsense Generation. DDNet is designed to identify the biased modality and dynamically counteract modality-specific biases using causal relationships. By considering biases from multiple modalities, DDNet avoids over-focusing on any single modality and effectively combines information from all modalities. The experimental results on the VisualCOMET dataset demonstrate that our proposed network fosters more accurate commonsense inferences. This emphasizes the critical need for debiasing in multimodal tasks and enhances the reliability of machine-generated commonsense narratives. (An illustrative modality-fusion sketch follows this entry.)
- Published
- 2023
- Full Text
- View/download PDF
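The DDNet architecture is described above only at a high level. The following PyTorch-style sketch illustrates one plausible way to dynamically re-weight text and image features before fusion so that a more biased modality contributes less. The module names, the gating mechanism, and the dimensions are assumptions made for illustration, not the paper's actual design (which additionally relies on causal relationships to identify the biased modality).

```python
# Illustrative sketch only: one way a fusion model might dynamically
# down-weight the more biased modality before generation, loosely in the
# spirit of the DDNet description above. The learned gate over projected
# modality features is an assumption, not the paper's mechanism.
import torch
import torch.nn as nn

class DynamicModalityFusion(nn.Module):
    def __init__(self, text_dim: int, image_dim: int, hidden_dim: int):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # Gate predicts per-sample weights for the two modalities.
        self.gate = nn.Sequential(nn.Linear(2 * hidden_dim, 2), nn.Softmax(dim=-1))

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        t = self.text_proj(text_feat)    # (batch, hidden)
        v = self.image_proj(image_feat)  # (batch, hidden)
        weights = self.gate(torch.cat([t, v], dim=-1))  # (batch, 2)
        # Weighted combination: a modality judged more biased receives a smaller
        # weight, so its spurious correlations do not dominate the fused features.
        return weights[:, 0:1] * t + weights[:, 1:2] * v

# Usage: fuse a 768-d text embedding with a 2048-d image embedding.
fusion = DynamicModalityFusion(text_dim=768, image_dim=2048, hidden_dim=512)
fused = fusion(torch.randn(4, 768), torch.randn(4, 2048))
print(fused.shape)  # torch.Size([4, 512])
```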
49. HUMAN-AI COLLABORATION IN CONTENT MODERATION: THE EFFECTS OF INFORMATION CUES AND TIME CONSTRAINTS.
- Author
-
Haoyan Li and Michael Chau
- Subjects
HUMAN-machine systems ,PROBLEM solving ,USER-generated content ,ARTIFICIAL intelligence ,NUDGE theory - Abstract
With the rapid development of online social media, an enormous amount of user-generated content is produced worldwide every day. Content moderation has emerged to ensure the quality of posts on various social media platforms. This process typically demands collaboration between humans and AI because the two agents complement each other in different facets. To understand how AI can better assist humans in making final judgments in the "machine-in-the-loop" paradigm, we propose a lab experiment exploring how different types of AI-provided cues, delivered through a nudging approach, and time constraints influence human moderators' performance. The proposed study contributes to the literature on AI-assisted decision making and helps social media platforms create an effective human-AI collaboration framework for content moderation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
50. Modeling silent behavior for synthesizing analysts' earnings forecasts: a joint likelihood approach.
- Author
-
Enze Jin, Cong Wang, Kai Guo, Xunhua Guo, and Bin Ke
- Subjects
PROFITABILITY ,ARTIFICIAL intelligence ,INFORMATION technology ,DIGITAL technology ,TECHNOLOGICAL innovations ,ARTIFICIAL neural networks - Abstract
Accurately estimating future firm earnings is crucial for evaluating corporate profitability and stock valuation. Conventional market consensus measures based on professional analysts' forecasts are prone to errors due to analysts' systematic behavioral biases, necessitating effective debiasing methods. In this paper, we propose a new approach to debiasing that focuses on analysts' silent behavior, uncovering the underlying forecasts of silent analysts by explicitly modeling their self-selected silence during the process of estimating and releasing earnings forecasts. We formulate analysts' selective forecasting as a data-missing-not-at-random problem and develop a joint likelihood optimization method to infer silent analysts' unrevealed forecasts and consolidate them with the observed ones, yielding more accurate estimates of annual earnings per share. Using a large sample of firms spanning twenty years, we evaluate our method and find that it outperforms market consensus forecasts in terms of forecast bias and forecast accuracy, both statistically and economically. Moreover, this performance superiority is robust across subsamples with different firm-level characteristics. [ABSTRACT FROM AUTHOR] (An illustrative joint-likelihood sketch follows this entry.)
- Published
- 2023
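The entry above gives no implementation detail. The following self-contained sketch illustrates the general idea of a joint likelihood for forecasts that are missing not at random: revealed forecasts and the decision to stay silent are modeled together, and the fitted model imputes silent analysts' expected forecasts to form a consolidated consensus. The simple logistic selection model, the simulated data, and all function names are assumptions for illustration, not the authors' estimator.

```python
# Toy illustration (not the paper's estimator): a joint likelihood for forecasts
# that are missing not at random because analysts choose whether to speak.
# Latent forecast f_i ~ N(mu, sigma^2); an analyst reveals it with probability
# sigmoid(a + b * f_i), so silence itself is informative about f_i.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from scipy.integrate import trapezoid

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_joint_loglik(params, revealed, n_silent, grid):
    mu, log_sigma, a, b = params
    sigma = np.exp(log_sigma)
    # Revealed analysts contribute forecast density times P(reveal | forecast).
    ll = np.sum(norm.logpdf(revealed, mu, sigma)
                + np.log(sigmoid(a + b * revealed) + 1e-12))
    # Silent analysts contribute P(silent) = integral of density * (1 - P(reveal)).
    dens = norm.pdf(grid, mu, sigma)
    p_silent = trapezoid(dens * (1.0 - sigmoid(a + b * grid)), grid)
    ll += n_silent * np.log(p_silent + 1e-12)
    return -ll

# Simulated example: true mean forecast 2.0; low forecasts are more often withheld,
# so the naive consensus over revealed forecasts is biased upward.
rng = np.random.default_rng(0)
latent = rng.normal(2.0, 0.5, size=200)
reveal = rng.random(200) < sigmoid(-2.0 + 1.5 * latent)
revealed, n_silent = latent[reveal], int((~reveal).sum())
grid = np.linspace(-1.0, 5.0, 2001)

fit = minimize(neg_joint_loglik, x0=[revealed.mean(), 0.0, 0.0, 0.0],
               args=(revealed, n_silent, grid), method="Nelder-Mead")
mu, log_sigma, a, b = fit.x
# Impute silent analysts' expected forecast under the fitted selection model,
# then consolidate with the observed forecasts.
dens = norm.pdf(grid, mu, np.exp(log_sigma)) * (1.0 - sigmoid(a + b * grid))
imputed_silent = trapezoid(grid * dens, grid) / trapezoid(dens, grid)
consensus = (revealed.sum() + n_silent * imputed_silent) / 200
print(f"naive consensus {revealed.mean():.3f}, debiased {consensus:.3f}, true mean 2.000")
```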