Emotion artificial intelligence (AI) has been shown to vary systematically in its ability to accurately identify emotions, and this variation creates potential biases. In this paper, we conduct an experiment involving three commercially available emotion AI systems and a group of human labelers tasked with identifying emotions from two image data sets. The study focuses on the alignment between facial expressions and the emotion labels assigned by both the AI and the humans. Importantly, the human labelers are given the AI's scores and informed about its algorithmic fairness measures. The paper presents several key findings. First, the labelers' scores are affected by the emotion AI's scores, consistent with the anchoring effect. Second, information transparency about the AI's fairness does not uniformly affect human labeling across different emotions; moreover, information transparency can even increase human inconsistencies. In addition, significant inconsistencies in scoring among the different emotion AI models cast doubt on their reliability. Overall, the study highlights the limitations of individual decision making and of information transparency about algorithmic fairness measures in addressing algorithmic bias. These findings underscore the complexity of integrating emotion AI into practice and emphasize the need for careful policies on its use.

Emotion artificial intelligence (AI), or emotion recognition AI, may systematically vary in its recognition of facial expressions and emotions across demographic groups, creating inconsistencies and disparities in its scoring. This paper explores the extent to which individuals can compensate for these disparities and inconsistencies in emotion AI, considering two opposing factors: although humans evolved to recognize emotions, particularly happiness, they are also subject to cognitive biases, such as the anchoring effect. To help understand these dynamics, this study tasks three commercially available emotion AIs and a group of human labelers with identifying emotions from faces in two image data sets. The scores generated by the emotion AIs and the human labelers are examined for inference inconsistencies (i.e., misalignment between facial expression and emotion label). The human labelers are also provided with the emotion AI's scores and with measures of its scoring fairness (or lack thereof). We observe that even when human labelers operate in this context of information transparency, they may still rely on the emotion AI's scores, perpetuating its inconsistencies. Several findings emerge from this study. First, the anchoring effect appears to be moderated by the type of inference inconsistency and is weaker for easier emotion recognition tasks. Second, when human labelers are provided with information transparency regarding the emotion AI's fairness, the effect is not uniform across emotions. Third, there is no evidence that information transparency leads to the selective anchoring necessary to offset emotion AI disparities; in fact, some evidence suggests that information transparency increases human inference inconsistencies. Lastly, the different emotion AI models are highly inconsistent in their scores, raising doubts about emotion AI more generally. Collectively, these findings provide evidence of the potential limitations of addressing algorithmic bias through individual decisions, even when those individuals are supported with information transparency.

History: Alessandro Acquisti, Senior Editor; Monideepa Tarafdar, Associate Editor.
Supplemental Material: The online appendix is available at https://doi.org/10.1287/isre.2019.0493.