As recommender systems play an increasingly important role in daily human decision-making, users demand convincing explanations of why they receive specific recommendation results. Although a number of explainable recommender systems have recently been proposed, an understanding of what users really need in a recommendation explanation is still lacking. The actual reason behind a user's intention to examine and consume an item (e.g., to click on and watch a movie) can be a window onto this question, and is termed self-explanation in this work. In addition, humans usually accompany their recommendations with explanations, yet few studies have examined how humans explain and what we can learn from human-generated explanations. To investigate these questions, we conduct a novel multi-role, multi-session user study in which users interact with multiple types of system-generated explanations as well as human-generated explanations, termed peer-explanations. During the study, users' intentions, expectations, and experiences are tracked in several phases: before and after the users are presented with an explanation, and after the content is examined. Through comprehensive investigation, we make three main findings. First, we observe not only positive but also negative effects of explanations, and the impact varies across explanation types. Moreover, the human-generated explanation, peer-explanation, performs better at increasing user intention and helping users construct their preferences, which results in higher user satisfaction. Second, based on users' self-explanations, we measure information accuracy and find it to be a major factor associated with user satisfaction. Other factors, such as unfamiliarity and similarity, are also discovered and summarized. Third, by annotating the information aspects used in human-generated self-explanations and peer-explanations, we investigate patterns of how humans explain, including what information and how much information they use. In addition, based on these findings, we propose a human-inspired explanation approach and find that it increases user satisfaction, revealing the potential of further incorporating human patterns into recommendation explanations. These findings shed light on a deeper understanding of recommendation explanations and inform further research on their evaluation and generation. Furthermore, the collected data, including explanations generated both by external peers and by the users themselves, will be released to support future research on explanation evaluation.