Background: Systematic reviews and meta-analyses of interventions can provide critical evidence for educators and policymakers by offering insights into the effectiveness of related interventions. The quality of the included studies forms the foundation of a high-quality review, while bias in those studies increases the risk of unreliable evidence. For example, non-randomized study designs are more likely to produce biased results than randomized controlled trial (RCT) designs (Farrah et al., 2019). It is therefore crucial to employ quality control strategies, such as a risk of bias (RoB) analysis, in systematic reviews. Ideally, an RoB analysis thoroughly demonstrates the quality of the included studies or provides the audience with a caveat regarding the use of the evidence (Higgins et al., 2017). However, to the best of the authors' knowledge, existing tools for RoB analyses are all designed for health or clinical interventions. These tools may not suit educational settings, given differences in the nature of health and education interventions. For example, some tools evaluate the blinding of intervention conditions (e.g., the JBI Checklist for Randomized Controlled Trials; Joanna Briggs Institute, 2017), which may be unrealistic for education interventions, where treatment assignment often cannot be blinded. Thus, it is important to develop and use RoB tools tailored to systematic reviews of interventions and evaluations in educational settings.

Research Questions: This study evaluates the use of RoB tools in published systematic reviews and meta-analyses of PK-12 education interventions. Three research questions guide this study:
1. How do systematic reviews and meta-analyses of interventions in PK-12 education address study quality?
2. What RoB tools are used in systematic reviews and meta-analyses of interventions in PK-12 education?
3. How are these tools being used in these systematic reviews/meta-analyses?

Research Design: This study applied a systematic review approach involving literature searching, screening, full-text review, coding, and a critical summary of results. During screening, all studies were double-screened until adequate inter-rater reliability was established, after which screening moved to a single-reviewer process. All full texts were reviewed by two independent reviewers. Data were extracted from the included studies into a standardized form. A senior researcher resolved conflicts throughout the process.

Data Collection and Analysis:
Literature Search Procedures: We used a web-based tool, Paperfetcher (Pallath & Zhang, 2022), to search for all studies published in the journal Review of Educational Research (RER) between January 2002 and December 2022. The resulting references were saved as .ris files and uploaded to Covidence.
Inclusion Criteria: The inclusion criteria were designed to minimize selection bias and provide reliable information on quality control in systematic reviews of PK-12 education interventions (Table 1).
Review and Coding Procedures: All studies were screened and reviewed against the inclusion criteria in Covidence. Studies were first screened for relevance, and the full texts were then reviewed against the inclusion criteria. The PRISMA screening process is presented in Figure 1. Of the 87 studies that met the inclusion criteria, a random sample of 50 was selected for analysis.
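For illustration, the sketch below shows how this search-and-sampling step might be reproduced in Python: a Crossref query (the index Paperfetcher draws on) for RER records in the target window, followed by a seeded random draw of 50 of the 87 eligible studies. The ISSN, date filters, and seed are illustrative assumptions, not the settings used in this study.

```python
# Minimal sketch (not the authors' actual pipeline): retrieve RER records
# from the Crossref REST API, then draw a random sample of eligible studies.
import random
import requests

ISSN = "0034-6543"  # Review of Educational Research print ISSN (assumed)
url = f"https://api.crossref.org/journals/{ISSN}/works"
params = {
    "filter": "from-pub-date:2002-01-01,until-pub-date:2022-12-31",
    "rows": 1000,  # Crossref's per-request maximum
}
records = requests.get(url, params=params, timeout=30).json()["message"]["items"]
print(f"Retrieved {len(records)} records")

# After screening, 87 studies met the inclusion criteria; draw the
# random sample of 50 used in the analysis.
eligible = list(range(87))  # placeholder IDs for the 87 eligible studies
random.seed(2022)           # arbitrary seed, shown only for reproducibility
sampled = random.sample(eligible, 50)
```

In practice, the study exported references as .ris files for screening in Covidence rather than sampling programmatically; the seeded draw above simply makes the 50-of-87 selection reproducible.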
Details of the included studies were extracted and coded into a standardized spreadsheet.
Data Analysis: Descriptive results were summarized from the coded data. In addition, a critical qualitative analysis was conducted of how quality control strategies and RoB tools were used.

Results: In total, fifty randomly sampled RER studies were included in the analysis (Figure 1). Most were published in the last 10 years (n=38, 76%) and mainly included elementary students (n=46, 92%). Most studies (n=45, 90%) controlled study quality to some extent, including setting inclusion criteria, assessing study quality with RoB tools, and controlling or assessing the quality of evidence statistically (e.g., through moderators). Thirty-one studies (62%) used rigorous inclusion criteria to constrain study designs (e.g., randomized controlled trials) and outcomes (e.g., standardized outcome measures). Seven studies (14%) used existing RoB tools (Table 2); of these, six used different versions of the Cochrane Risk of Bias Tools, and one used the standards for evidence-based practices in special education developed by the Council for Exceptional Children. The latter included the reporting of participants' demographic information (e.g., gender and race/ethnicity) as a quality indicator. Two meta-analyses stated that they tailored the RoB tools to their analyses, applying only the items relevant to their studies. The quality control strategies were applied in various ways across the included studies. Beyond selecting high-quality studies through rigorous inclusion criteria, five studies excluded studies with a high risk of bias or outlying values, or adjusted those values, to minimize the biasing effect of such studies on the analysis. Furthermore, some studies calculated overall effect sizes weighted for outliers or study quality, or otherwise argued for the validity of their evidence. In addition, nineteen studies (38%) examined moderators related to quality indicators, which may also reduce bias in the evaluation of the evidence.

Conclusions: Although based on a small sample, the findings of this study reveal that existing RoB tools are not widely used in interventional systematic reviews and meta-analyses in educational settings. In addition, existing tools may not fully align with the specific needs of practice and research in educational settings. Moreover, not all of them treat reporting quality (e.g., of participant demographics) as a quality indicator. Thus, there is a need to develop an RoB tool tailored to PK-12 educational settings that can facilitate evidence-based research and practice. The main limitation of this study is its small sample drawn from a single journal; future studies should examine larger samples from multiple journals. Moreover, to better understand the role of quality control strategies in educational intervention research, future research should investigate whether reviews that use RoB tools report different average effect sizes than reviews that do not.
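To illustrate how the descriptive results above could be tabulated from the coding spreadsheet, the sketch below tallies binary quality-control indicators and reports each as a count and percentage of the 50 included studies. The file name and column names are hypothetical, not the authors' actual coding form.

```python
# Hypothetical sketch: summarize coded quality-control indicators.
# File and column names are illustrative; the actual coding form differs.
import pandas as pd

df = pd.read_csv("coded_studies.csv")  # one row per included study (n = 50)
n = len(df)

indicators = [
    "any_quality_control",          # e.g., 45/50 = 90%
    "rigorous_inclusion_criteria",  # e.g., 31/50 = 62%
    "used_rob_tool",                # e.g., 7/50 = 14%
    "quality_moderators",           # e.g., 19/50 = 38%
]
for col in indicators:
    k = int(df[col].sum())  # columns coded 0/1 per study
    print(f"{col}: n={k} ({k / n:.0%})")
```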