Background and Aims

The critical appraisal of the evidence presented in research papers is an early step in the process of evidence-based design (EBD), which aims to transfer research findings into design practice (Stichler, 2010a). However, because architects and designers are often not trained in research methods, they may feel overwhelmed by the terminology used in the methodology sections of scientific papers. Therefore, when designing healthcare facilities, they may not feel confident discussing the evidence available from research studies with members of the medical professions, who often have a profound knowledge of research methods. Architects and designers may also find it difficult to evaluate the quality of the evidence from numerous research papers, which sometimes arrive at conflicting results.

The purpose of this article is to develop a simple method for appraising the quality of research papers suitable for EBD. The most common study designs will be explained, and an algorithm that could help guide architects and designers in assessing the quality of evidence will be introduced.

Appraising the Quality of Evidence in the Process of EBD

In building design, there is a broad spectrum of evidence sources, such as experts' opinions, guidelines, observational studies, and systematic reviews. In the design process, it is necessary to determine which sources are scientifically valid and how the resulting findings can best be incorporated into design decisions. The methodology of EBD offers a way of acquiring, appraising, weighting, and transferring the existing evidence into design practice (Rosswurm & Larrabee, 1999). This circular process is shown in Figure 1.

The circle of the evidence-based design process starts with the definition of the research question and the development of a research strategy in step one. In the second step, relevant papers are acquired and selected.
In the third step, which comprises the critical appraisal of the research studies' quality, this article comes into use by providing a simple algorithm for architects, designers, and other practitioners. It should be noted that this appraisal rates only the methodological quality of a given study. When "weighing the evidence" in step four, the results of several studies on the chosen research question are combined. Additionally, external factors, such as an estimation of the evidence's consistency, relevance, and external validity, are taken into account (SIGN 50, 2008). In step five, design recommendations, which are often scaled between weak and strong (Guyatt et al., 2008a), are developed. This judgment is made on the basis of the quality of the evidence (see step three) and its assigned weight (see step four). In the sixth and last step, the design decisions made on the basis of the design recommendations are implemented and evaluated.

As outlined above, the critical appraisal of the quality of scientific evidence is a crucial step in EBD, answering the question of the extent to which the results of a research study are valid. But what determines the "quality of evidence"? In general, it is defined as "the extent to which a study's design, conduct, and analysis has minimized selection, measurement, and confounding bias" (West et al., 2002). The better a study protects against bias and error, the higher the quality of the results it produces. Methods to protect against bias include, among others, group allocation, randomization, and blinding. Group allocation means dividing the participants into at least two groups: a treatment group and a control group. Randomization is the masked, random allocation of participants to the treatment or the control group. Blinding also refers to group allocation: in the best case, during the intervention, neither the participant nor the researcher knows which group a participant belongs to (a double-blind study). …
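The ideas of group allocation and randomization described above can be illustrated with a minimal sketch. This is only a didactic example, not a procedure from the article; the function name and the participant identifiers are hypothetical.

```python
import random

def randomize_participants(participant_ids, seed=None):
    """Randomly allocate participants to a treatment or a control group.

    Shuffling the list before splitting it ensures that neither the
    participants nor the researchers can influence who receives the
    intervention, which protects against selection bias.
    """
    rng = random.Random(seed)          # seed only for reproducible demos
    ids = list(participant_ids)
    rng.shuffle(ids)                   # masked, random ordering
    half = len(ids) // 2
    # First half goes to the treatment group, second half to control.
    return {pid: ("treatment" if i < half else "control")
            for i, pid in enumerate(ids)}

# Hypothetical participant codes; in a blinded study, only these
# anonymous codes (not the group labels) would be visible during
# the intervention.
alloc = randomize_participants(["P01", "P02", "P03", "P04"], seed=42)
print(alloc)
```

In a double-blind study, the mapping from participant code to group would be held by a third party and revealed only after data collection, so that neither participants nor researchers know the allocation while the study runs.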