Since the advent of the internet, and especially during the COVID-19 pandemic, digital learning has gained in importance (Allen & Seaman, 2017; Barnard et al., 2009; Köller et al., 2020; Schwam et al., 2020). Particularly in online learning and home schooling contexts, students need to self-regulate their learning processes efficiently to be successful learners (Boston & Ice, 2011; Broadbent & Poon, 2017; Schwam et al., 2020). For students to self-regulate their learning effectively, their monitoring processes must be well calibrated, meaning that self-assessment judgements should correspond to actual performance (Baas et al., 2015; Boud et al., 2013; Hacker et al., 2008; Labuhn et al., 2010). As Bandura (1986) suggests, the ability to accurately assess one's own abilities is key to successful academic performance. Indeed, self-assessment has an overall positive effect on academic performance (Li & Zhang, 2020; Panadero et al., 2020; Sitzmann et al., 2010; Yan et al., 2021). Fostering students' self-assessment should therefore be implemented in the curriculum to support students in developing skills and strategies for accurate self-assessment that can improve their learning processes and outcomes (Brown & Harris, 2014; Yan, 2020). In light of this potential to support learning processes and outcomes, self-assessment has received growing attention in the literature (Dochy et al., 1999; Yan et al., 2021), and an increasing number of empirical studies have focused on the effect of self-assessment on performance in school (Li & Zhang, 2021; Yan et al., 2021; Youd, 2019). Other studies have focused on different interventions (e.g., different types of feedback or rubrics) and their impact on self-assessment. However, there is a significant lack of studies investigating which of these interventions is more strongly related to self-assessment. The current study therefore aims to examine and compare the effects of common interventions on self-assessment accuracy in a digital learning environment.

Providing feedback is a common intervention to help students become aware of their miscalibration and assess their performance accurately (Panadero et al., 2020; Zimmerman, 2000). One example of giving students feedback is grading, but presenting only grades often fails to improve students' self-assessment accuracy (e.g., Foster et al., 2017). Research shows that to promote students' self-assessment accuracy, performance-referenced feedback has to stimulate self-assessment and monitoring processes (Butler & Winne, 1995; Stone, 2000). For example, feedback that communicates the current state and the target state can help students assess their current performance and initiate self-assessment processes (Labuhn et al., 2010; Zimmerman, 2002). In particular, elaborated feedback that provides explicit information on the current state and the target state in addition to corrective information can foster students' self-assessment (Nicol, 2021; Nietfeld et al., 2006). Simple corrective feedback, which provides information about mistakes or correctness, also communicates discrepancies between the current state and the target state, albeit indirectly. Elaborated feedback, however, additionally answers the question of how to get there ("Where to next?"; Hattie & Timperley, 2007) by providing feedback on possible solutions and instructions (Narciss, 2006). Students can use this additional information as cues to assess their own performance (Butler & Winne, 1995; Labuhn et al., 2010). In sum, elaborated feedback offers more explicit cues than corrective feedback that students can use to calibrate their self-assessments.
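Although operationalizations vary across the studies cited above, self-assessment accuracy (calibration) is commonly quantified as the deviation between a student's judgement and their actual performance; the following formalization is an illustrative sketch rather than the specific measure used in this study:
\[
\text{bias} = J - P, \qquad \text{absolute accuracy} = \lvert J - P \rvert,
\]
where \(J\) denotes the self-assessment judgement and \(P\) the actual performance score, expressed on the same scale. A positive bias indicates overconfidence, a negative bias indicates underconfidence, and smaller absolute values indicate better calibration.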
Rubrics are another means of supporting students' self-assessment accuracy (Andrade, 2019; Jonsson, 2007; Panadero et al., 2014; Panadero & Romero, 2014). Well-designed rubrics communicate assessment criteria or performance standards and help students compare and assess their work against these standards (Bol et al., 2008; Covill, 2012). Rubric-referenced self-assessment can thus support students in making more accurate judgements about their performance and in calibrating their self-assessment (Jonsson, 2014; Reddy & Andrade, 2010). Compared to performance feedback, however, rubrics require students to evaluate more information themselves (e.g., whether the communicated performance standard has already been reached or whether the task still needs to be revised). On the one hand, rubrics therefore stimulate self-assessment and monitoring processes to a greater extent than performance feedback and may, in turn, have a greater potential to support students' self-assessment accuracy. On the other hand, students may assess themselves less accurately, because rubrics communicate fewer cues that students can use to calibrate their performance judgements.

Overall, findings on the effects of feedback and rubrics on self-assessment accuracy are inconsistent. Furthermore, most studies compare the effects of performance feedback against no feedback (e.g., Butler et al., 2008; Finn & Tauber, 2015; Labuhn et al., 2010; Pulford & Colman, 1997) or the use of rubrics against no rubrics (e.g., Baker & Dunlosky, 2007; Jonsson & Svingby, 2007; Panadero & Romero, 2014). To the authors' knowledge, however, these interventions have not yet been compared directly in an empirical study, a gap the present study aims to address. The main goal is therefore to compare the effect sizes of performance-referenced rubrics, corrective feedback, and elaborated feedback on self-assessment accuracy. In doing so, we investigate how different feedback conditions relate to students' self-assessment accuracy in the domain of English as a foreign language writing. We will compare whether students show greater self-assessment accuracy after receiving a rubric, corrective feedback, or elaborated feedback on a given writing task than students who do not receive any feedback information. Further, we will compare the effects of rubrics, corrective feedback, and elaborated feedback on students' self-assessment accuracy to investigate which type of feedback information has the largest effect. Because an empirical database for this comparison is lacking, it is not possible to formulate concrete hypotheses at this point; we will therefore address this research question with explorative analyses (see the explorative analyses). In addition, we will examine these effects on a subsequent, similar task in which students do not receive any feedback information, to investigate a potential learning effect. Finally, we investigate whether students' prior performance level moderates the effect of the intervention on self-assessment accuracy (e.g., Kruger & Dunning, 1999; Stone, 2000; Zimmerman, 2002). Lower-performing students seem to benefit more from feedback-generated standards than higher-performing students, as these standards provide them with more valid cues for self-assessment (Thiede et al., 2010), and low performers have more room to improve (Bol et al., 2005; Ehrlinger et al., 2008; Kruger & Dunning, 1999).
Thus, we will compare whether low-performing students who receive a rubric, corrective feedback, or elaborated feedback show greater self-assessment accuracy than high-performing students.
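As a minimal sketch of how such a moderation could be modelled (the variable names here are hypothetical and this is not the preregistered analysis), the interaction between intervention condition and prior performance could be tested in a regression of the form
\[
\text{Accuracy}_{i} = \beta_{0} + \beta_{1}\,\text{Condition}_{i} + \beta_{2}\,\text{PriorPerformance}_{i} + \beta_{3}\,(\text{Condition}_{i} \times \text{PriorPerformance}_{i}) + \varepsilon_{i},
\]
where a non-zero interaction coefficient \(\beta_{3}\) would indicate that the effect of the intervention on self-assessment accuracy depends on students' prior performance level.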