Personality perception is implicitly biased by many subjective factors, such as culture, social context, gender, and appearance. Approaches developed for automatic personality perception are not expected to predict the real personality of the target, but rather the personality that external observers attribute to the target. Hence, they have to deal with human bias, which is inherently transferred to the training data. However, bias analysis in personality computing is an almost unexplored area. In this article, we study different possible sources of bias affecting personality perception, including emotions from facial expressions, attractiveness, age, gender, and ethnicity, as well as their influence on prediction ability for apparent personality estimation. To this end, we propose a multimodal deep neural network that combines raw audio and visual information with the predictions of attribute-specific models to regress apparent personality. We also analyze spatio-temporal aggregation schemes and the effect of different time intervals on first impressions. We base our study on the ChaLearn First Impressions dataset, consisting of one-person conversational videos. Our model achieves state-of-the-art results in regressing apparent personality based on the Big-Five model. Furthermore, given the interpretable nature of our network design, we provide an incremental analysis of the impact of each possible source of bias on the final network predictions.
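The fusion scheme described above can be sketched as a late-fusion regression: modality embeddings and attribute-specific predictions are concatenated and mapped to the five trait scores. The following is a minimal, hypothetical illustration; the feature dimensions, the linear head, and all variable names are assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pooled embeddings for one video (dimensions are illustrative)
audio_feat = rng.normal(size=(32,))      # raw-audio branch embedding
visual_feat = rng.normal(size=(64,))     # visual branch embedding
attribute_preds = rng.normal(size=(5,))  # outputs of attribute-specific models
                                         # (e.g. emotion, attractiveness, age,
                                         #  gender, ethnicity cues)

# Late fusion: concatenate all cues into a single vector
fused = np.concatenate([audio_feat, visual_feat, attribute_preds])

# A simple linear regression head mapping the fused vector to five trait scores
# (a real network would learn these weights end to end)
W = rng.normal(scale=0.01, size=(5, fused.shape[0]))
b = np.zeros(5)
big_five = W @ fused + b  # one score per Big-Five trait

print(big_five.shape)  # (5,)
```

Because the attribute predictions enter the fused vector as explicit inputs, each one can be ablated or inspected separately, which is what makes the incremental bias analysis described in the abstract possible.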