
Beyond the Methodological Gold Standards of Behavioral Research: Considerations for Practice and Policy. Social Policy Report. Volume 18, Number 2

Authors:
Society for Research in Child Development
McCall, Robert B.
Green, Beth L.
Source:
Society for Research in Child Development. 2004.
Publication Year:
2004

Abstract

Research methods are tools that can be variously applied, depending on the stage of knowledge in a particular area, the type of research question being asked, and the context of the research. The field of program evaluation, critical for social policy development, often has not adequately embraced the full range of methodological tools needed to understand and capture the complexity of these issues. The dominant paradigm, or "gold standard," for program evaluation remains the experimental method. This standard has merit, particularly because experimental research has the capacity to draw conclusions about cause and effect ("internal validity"). This paper identifies the benefits, common problems, and limitations of three characteristics of experimental studies: theory-driven hypotheses; random assignment of subjects to intervention groups; and experimenter-controlled, uniformly applied interventions. Research situations are identified in which reliance on the experimental method can lead to inappropriate conclusions. For example, theory-driven hypotheses, if validated, provide a broader base of understanding of an issue and intervention; but some questions should be studied in the absence of theory simply because practice or policy needs the answer. Random assignment can produce cause-and-effect conclusions, but public services are never randomly assigned, and their effectiveness may well depend on participants' motivation or belief in the service (as signaled by their choice to participate). Experimenter-controlled uniform treatment administration ensures that we know precisely the nature of the treatment documented to work by the evaluation, but it prohibits tailoring treatment to the individual needs of participants, which is a major "best practice" of service delivery.
Suggestions are offered for ways to incorporate alternative research methods that emphasize "external validity" (match to real-life circumstances) and complement results derived from experimental research designs on social programs. The field of program evaluation, and the policy decisions that rest on it, should utilize and value each research method relative to its merits, the purpose of the study, the specific questions to be asked, the circumstances under which the research is conducted, and the use to which the results will be put. [Commentaries in this issue of "Social Policy Report" include: (1) Beyond Advocacy: Putting History and Research on Research into Debates about the Merits of Social Experiments (Thomas D. Cook); (2) Why We Need More, Not Fewer, Gold Standard Evaluations (Phoebe Cottingham); (3) Don't Throw out the Baby with the Bathwater: Incorporating Behavioral Research into Evaluations (Jeanne Brooks-Gunn); and (4) On Randomized Trials and Bathwater: A Response to Cottingham and Brooks-Gunn (Robert B. McCall).]

Details

Language:
English
ISSN:
1075-7031
Database:
ERIC
Journal:
Society for Research in Child Development
Publication Type:
Periodical
Accession Number:
ED595517
Document Type:
Collected Works - Serial; Reports - Evaluative