Plain language summary

Policy briefs make systematic reviews easier to understand but little evidence of impact on use of study findings

It is likely that evidence summaries are easier to understand than complete systematic reviews. Whether these summaries increase the use of evidence from systematic reviews in policymaking is not clear.

What is this review about?

Systematic reviews are long and technical documents that may be hard for policymakers to use when making decisions. Evidence summaries are short documents that describe the research findings of systematic reviews, and they may make systematic reviews easier to use. Other names for evidence summaries are policy briefs, evidence briefs, summaries of findings, or plain language summaries. The goal of this review was to learn whether evidence summaries help policymakers use evidence from systematic reviews. This review also aimed to identify the best ways to present evidence summaries to increase the use of the evidence.

What is the aim of this review?

This review summarizes the evidence from six randomized controlled trials that assessed the effectiveness of systematic review summaries on policymakers' decision making, or the most effective ways to present evidence summaries to increase policymakers' use of the evidence.

What are the main findings of this review?

This review included six randomized controlled studies. A randomized controlled study is one in which the participants are divided randomly (by chance) into separate groups to compare different treatments or other interventions.
This method of dividing people into groups means that the groups will be similar and that the effects of the treatments they receive can be compared more fairly. At the time the study is done, it is not known which treatment is the better one.

The researchers who did these studies invited people from Europe, North America, South America, Africa, and Asia to take part in them. Two studies looked at “policy briefs,” one study looked at an “evidence summary,” two looked at a “summary of findings table,” and one compared a “summary of findings table” to an evidence summary.

None of these studies looked at how policymakers directly used evidence from systematic reviews in their decision making, but two studies found that there was little to no difference in how they used the summaries. The studies relied on reports from decision makers and included questions such as, “Is this summary easy to understand?” Some of the studies looked at users' knowledge, understanding, beliefs, or how credible (trustworthy) they believed the summaries to be. The studies that looked at these outcomes found little to no difference.

Study participants rated the graded entry format, which allows the reader to select how much information they want to read, higher for usability than the full systematic review. The study participants felt that all evidence summary formats were easier to understand than full systematic reviews.

What do the findings of this review mean?

Our review suggests that evidence summaries help policymakers to better understand the findings presented in systematic reviews. In short, evidence summaries should be developed to make it easier for policymakers to understand the evidence presented in systematic reviews. However, there is currently very little evidence on the best way to present systematic review evidence to policymakers.

How up to date is this review?

The authors of this review searched for studies through June 2016.

Executive summary/Abstract

Background

Systematic reviews are important for decision makers. They offer many potential benefits but are often written in technical language, are too long, and do not contain contextual details, which makes them hard to use for decision‐making. Strategies to promote the use of evidence by decision makers are required, and evidence summaries have been suggested as one such facilitator. Evidence summaries include policy briefs, briefing papers, briefing notes, evidence briefs, abstracts, summary of findings tables, and plain language summaries. Many organizations develop and disseminate systematic review evidence summaries for different populations or subsets of decision makers. However, evidence on the usefulness and effectiveness of systematic review summaries is lacking. We present an overview of the available evidence on systematic review evidence summaries.

Objectives

This systematic review aimed to 1) assess the effectiveness of evidence summaries on policy‐makers' use of the evidence and 2) identify the most effective summary components for increasing policy‐makers' use of the evidence.
Search methods

We searched several online databases (Medline, EMBASE, CINAHL, Cochrane Central Register of Controlled Trials, Global Health Library, Popline, Africa‐wide, Public Affairs Information Services, Worldwide Political Science Abstracts, Web of Science, and DFID), websites of research groups and organizations that produce evidence summaries, and reference lists of included summaries and related systematic reviews. These databases were searched in March‐April 2016.

Selection criteria

Eligible studies included randomised controlled trials (RCTs), non‐randomised controlled trials (NRCTs), controlled before‐after (CBA) studies, and interrupted time series (ITS) studies. We included studies of policymakers at all levels as well as health system managers. We included studies examining any type of “evidence summary”, “policy brief”, or other product derived from systematic reviews that presented evidence in a summarized form. These interventions could be compared to active comparators (e.g. other summary formats) or to no intervention. The primary outcomes were: 1) use of systematic review summaries in decision‐making (e.g. self‐reported use of the evidence in policy‐making or decision‐making) and 2) policymaker understanding, knowledge, and/or beliefs (e.g. changes in knowledge scores about the topic included in the summary). We also assessed perceived relevance, credibility, usefulness, understandability, and desirability (e.g. format) of the summaries.

Results

Our database search combined with our grey literature search yielded 10,113 references after removal of duplicates. Of these, 54 were reviewed in full text, and we included 6 studies (reported in 7 papers, 1661 participants) as well as protocols from 2 ongoing studies. Two studies assessed the use of evidence summaries in decision‐making and found little to no difference in effect. There was also little to no difference in effect for knowledge, understanding, or beliefs (4 studies) and for perceived usefulness or usability (3 studies). Summary of Findings tables and graded entry summaries were perceived as slightly easier to understand compared to complete systematic reviews. Two studies assessed formatting changes and found that for Summary of Findings tables, certain elements, such as reporting study event rates and absolute differences, were preferred, as was avoiding the use of footnotes. No studies assessed adverse effects. The risks of bias in these studies were mainly assessed as unclear or low; however, two studies were assessed as high risk of bias for incomplete outcome data due to very high rates of attrition.

Authors' conclusions

Evidence summaries may be easier to understand than complete systematic reviews. However, their ability to increase the use of systematic review evidence in policymaking is unclear.