69 results for "Chris Riley"
Search Results
2. Developing a direct rating behavior scale for depression in middle school students
- Author
Michael P. Van Wie, Stephen P. Kilgus, T. Chris Riley-Tillman, Keith C. Herman, and James Sinclair
- Subjects
Male, Adolescent, Emotions, Applied psychology, Child Behavior, Student engagement, Test validity, Education, Rating scale, Developmental and Educational Psychology, Humans, Mass Screening, Child, Students, At-risk students, Depressive Disorder, Depression, Item analysis, Discriminant validity, Reproducibility of Results, Direct Behavior Rating, Behavior Rating Scale, Female, Psychology
- Abstract
Research has supported the applied use of Direct Behavior Rating Single-Item Scale (DBR-SIS) targets of "academic engagement" and "disruptive behavior" for a range of purposes, including universal screening and progress monitoring. Though useful in evaluating social behavior and externalizing problems, these targets have limited utility in evaluating emotional behavior and internalizing problems. Thus, the primary purpose of this study was to support the initial development and validation of a novel DBR-SIS target of "unhappy," which was intended to tap into the specific construct of depression. A particular focus of this study was on the novel target's utility within universal screening. A secondary purpose was to further validate the aforementioned existing DBR-SIS targets. Within this study, 87 teachers rated 1,227 students across two measures (i.e., DBR-SIS and the Teacher Observation of Classroom Adaptation-Checklist [TOCA-C]) and time points (i.e., fall and spring). Correlational analyses supported the test-retest reliability of each DBR-SIS target, as well as its convergent and discriminant validity across concurrent and predictive comparisons. Receiver operating characteristic (ROC) curve analyses further supported (a) the overall diagnostic accuracy of each target (as indicated by the area under the curve [AUC] statistic), as well as (b) the selection of cut scores found to accurately differentiate at-risk and not at-risk students (as indicated by conditional probability statistics). A broader review of findings suggested that across the majority of analyses, the existing DBR-SIS targets outperformed the novel "unhappy" target.
- Published
- 2019
3. Methods matter: A multi-trait multi-method analysis of student behavior
- Author
Faith G. Miller, Megan E. Welsh, Gregory A. Fabiano, T. Chris Riley-Tillman, Austin H. Johnson, D. Betsy McCoach, Sandra M. Chafouleas, and Huihui Yu
- Subjects
Male, Adolescent, Child Behavior, Bivariate analysis, Structural equation modeling, Education, Rating scale, Multitrait-multimethod, Developmental and Educational Psychology, Humans, Child, Students, Reliability (statistics), Schools, Reproducibility of Results, Construct validity, Adolescent Behavior, Direct Behavior Rating, Female, Psychology, Cognitive psychology
- Abstract
Reliable and valid data form the foundation for evidence-based practices, yet surprisingly few studies of school-based behavioral assessments have implemented one of the most fundamental approaches to construct validation: the multitrait-multimethod matrix (MTMM). To this end, the current study examined the reliability and validity of data derived from three commonly utilized school-based behavioral assessment methods (Direct Behavior Rating – Single Item Scales, systematic direct observations, and behavior rating scales) across three common constructs of interest: academically engaged, disruptive, and respectful behavior. Further, this study included data from different sources, including student self-report, teacher report, and external observers. A total of 831 students in grades 3–8 and 129 teachers served as participants. Data were analyzed using bivariate correlations within the MTMM, as well as single- and multi-level structural equation modeling. Results suggested the presence of strong method effects for all the assessment methods utilized, as well as significant relations between the constructs of interest. Implications for practice and future research are discussed.
- Published
- 2018
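The multitrait-multimethod logic summarized in the abstract above can be illustrated with a small, self-contained sketch. The data here are simulated, not the study's dataset, and the trait and method labels are assumptions chosen to echo its design:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of students

# Three traits crossed with three methods, echoing the study's design:
# DBR-SIS (dbr), systematic direct observation (sdo), behavior rating scale (brs).
traits = ["engaged", "disruptive", "respectful"]
methods = ["dbr", "sdo", "brs"]

# Each score = shared trait factor + method-specific factor + noise, so
# correlations among same-method scores reflect a "method effect".
trait_scores = rng.normal(size=(n, len(traits)))
data = {}
for m in methods:
    method_effect = rng.normal(scale=0.8, size=n)
    for t_i, t in enumerate(traits):
        data[f"{t}_{m}"] = (trait_scores[:, t_i] + method_effect
                            + rng.normal(scale=0.5, size=n))

cols = list(data)
X = np.column_stack([data[c] for c in cols])
mtmm = np.corrcoef(X, rowvar=False)  # 9 x 9 MTMM correlation matrix

# Monotrait-heteromethod ("validity diagonal"): same trait, different methods.
i, j = cols.index("engaged_dbr"), cols.index("engaged_sdo")
print(f"engaged, DBR vs. SDO: r = {mtmm[i, j]:.2f}")
```

Inspecting the monotrait-heteromethod cells against the heterotrait-monomethod cells of `mtmm` is the core of the MTMM argument: convergent validity wants the former high, and method effects show up when the latter are inflated.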
4. Examining the Concurrent Criterion-Related Validity of Direct Behavior Rating–Single Item Scales With Students With Social Competence Deficits
- Author
Sarah Owens, Janine P. Stichter, Alexander M. Schoemann, T. Chris Riley-Tillman, and Stephen P. Kilgus
- Subjects
Single item, Education, Autism spectrum disorder, Direct Behavior Rating, Interpersonal competence, General Health Professions, Developmental and Educational Psychology, Criterion validity, Social competence, Psychology, Reliability (statistics), Cognitive psychology
- Abstract
A line of research has supported the development and validation of Direct Behavior Rating–Single Item Scales (DBR-SIS) for use in progress monitoring. Yet, this research was largely conducted within the general education setting with typically developing children. It is unknown whether the tool may be defensibly used with students exhibiting more substantial concerns, including students with social competence difficulties. The purpose of this investigation was to examine the concurrent validity of DBR-SIS in a middle school sample of students exhibiting substantial social competence concerns (n = 58). Students were assessed using both DBR-SIS and systematic direct observation (SDO) across three target behaviors. Each student was enrolled in one of two conditions: the Social Competence Intervention or a business-as-usual control condition. Students were assessed across three time points: baseline, mid-intervention, and postintervention. A review of across-time correlations indicated small to moderate correlations between DBR-SIS and SDO data (r = .25–.45). Results further suggested that the relationships between DBR-SIS and SDO targets were small to large at baseline. Correlations attenuated over time, though differences across time points were not statistically significant, with the exception of academic engagement correlations, which remained moderate to high across all time points.
- Published
- 2018
5. Confirmation of models for interpretation and use of the Social and Academic Behavior Risk Screener (SABRS)
- Author
Wesley A. Sims, T. Chris Riley-Tillman, Stephen P. Kilgus, and Nathaniel P. von der Embse
- Subjects
Academic achievement, Child Behavior Disorders, Models (Psychological), Risk Assessment, Likert scale, Developmental psychology, Education, Goodness of fit, Behavioral and Social Science, Developmental and Educational Psychology, Humans, Child, Reliability (statistics), At-risk students, Psychiatric Status Rating Scales, Academic year, Prevention, Social Behavior Disorders, Confirmatory factor analysis, Inter-rater reliability, Early Diagnosis, Behavior Rating Scale, Specialist Studies in Education, Psychology
- Abstract
The purpose of this investigation was to evaluate the models for interpretation and use that serve as the foundation of an interpretation/use argument for the Social and Academic Behavior Risk Screener (SABRS). The SABRS was completed by 34 teachers with regard to 488 students in a Midwestern high school during the winter portion of the academic year. Confirmatory factor analysis supported interpretation of SABRS data, suggesting the fit of a bifactor model specifying 1 broad factor (General Behavior) and 2 narrow factors (Social Behavior [SB] and Academic Behavior [AB]). The interpretive model was further supported by analyses indicative of the internal consistency and interrater reliability of scores from each factor. In addition, latent profile analyses indicated the adequate fit of the proposed 4-profile SABRS model for use. When cross-referenced with SABRS cut scores identified via previous work, results revealed students could be categorized as (a) not at-risk on both SB and AB, (b) at-risk on SB but not on AB, (c) at-risk on AB but not on SB, or (d) at-risk on both SB and AB. Taken together, results contribute to growing evidence supporting the SABRS within universal screening. Limitations, implications for practice, and future directions for research are discussed herein.
- Published
- 2019
6. Establishing Interventions via a Theory-Driven Single Case Design Research Cycle
- Author
Stephen P. Kilgus, Thomas R. Kratochwill, and T. Chris Riley-Tillman
- Subjects
Research design, Contextualization, Management science, Psychological intervention, Publication bias, Single-subject design, Education, Risk analysis (engineering), Intervention (counseling), Intervention research, Developmental and Educational Psychology, Experimental methods, Psychology
- Abstract
Recent studies have suggested single case design (SCD) intervention research is subject to publication bias, wherein studies are more likely to be published if they possess large or statistically significant effects and use rigorous experimental methods. The nature of SCD and the purposes for which it might be used could suggest that large effects and rigorous methods should not always be expected. The purpose of the current paper is to propose and describe a theory-driven cycle of SCD intervention research. The proposed SCD-specific cycle serves several purposes including (a) defining the purposes for which SCD research might be adopted, (b) specifying the types of evidence to be collected in establishing an intervention for applied use, and (c) illustrating the phases of SCD-based intervention research (i.e., development, efficacy, effectiveness, contextualization, and implementation). The proposed model is intended to serve as an intermediary between theory and research, facilitating the consi...
- Published
- 2016
7. Current Advances and Future Directions in Behavior Assessment
- Author
Austin H. Johnson and T. Chris Riley-Tillman
- Subjects
Evidence-based practice, Screening test, Management science, Education, Educational research, General Health Professions, Evaluation methods, Developmental and Educational Psychology, Engineering ethics, Psychology
- Abstract
Multi-tiered problem-solving models that focus on promoting positive outcomes for student behavior continue to be emphasized within educational research. Although substantial work has been conducted to support systems-level implementation and intervention for behavior, concomitant advances in behavior assessment have been limited. This is despite the central role that data derived from behavior assessment methods must play in making defensible multi-tiered decisions such as those for screening and progress monitoring. In this commentary, the role of assessment in the evidence-based practice movement is described, alongside necessary features of behavior assessment methods utilized in multi-tiered systems. The relevance of these features to articles in this special issue is described. Finally, observations and suggestions for future directions regarding the current state of behavior assessment in educational research are offered.
- Published
- 2016
8. Direct Behavior Rating Instrumentation
- Author
T. Chris Riley-Tillman, Faith G. Miller, Alyssa A. Schardt, and Sandra M. Chafouleas
- Subjects
Scale (ratio), Education, Direct Behavior Rating, General Health Professions, Developmental and Educational Psychology, Psychology, Social psychology
- Abstract
The purpose of this study was to investigate the impact of two different Direct Behavior Rating–Single Item Scale (DBR-SIS) formats on rating accuracy. A total of 119 undergraduate students participated in one of two study conditions, each utilizing a different DBR-SIS scale format: one that included percentage of time anchors on the DBR-SIS scale and an explicit reference to duration of the target behavior (percent group) and one that did not include percentage anchors nor a reference to duration of the target behavior (no percent group). Participants viewed nine brief video clips and rated student behavior using one of the two DBR-SIS formats. Rating accuracy was determined by calculating the absolute difference between participant ratings and two criterion measures: systematic direct observation scores and DBR-SIS expert ratings. Statistically significant differences between groups were found on only two occasions, pertaining to ratings of academically engaged behavior. Limitations and directions for future research are discussed.
- Published
- 2016
9. Taking the Guesswork out of Locating Evidence-Based Mathematics Practices for Diverse Learners
- Author
Elizabeth M. Hughes, Erica S. Lembke, Sarah R. Powell, and T. Chris Riley-Tillman
- Subjects
Program evaluation, Medical education, Health (social science), Evidence-based practice, Best practice, Education, Psychological intervention, Targeted interventions, Intervention (counseling), Learning disability, Developmental and Educational Psychology, Mathematics education, Psychology
- Abstract
Legislation mandates that educators use evidence-based practices (EBPs) that are supported by scientifically based research. EBPs have a demonstrated likelihood of working for students with disabilities. EBPs should match the targeted needs of the student receiving the instruction, which sometimes requires educators to search for the best intervention to meet specific student needs. This article discusses the impetus for practices supported by evidence, where to find interventions and strategies, and what to do when targeted interventions do not exist. Additionally, this article emphasizes the need to evaluate the effectiveness of an intervention at the student level.
- Published
- 2016
10. Evaluating the technical adequacy of DBR-SIS in tri-annual behavioral screening: A multisite investigation
- Author
T. Chris Riley-Tillman, Megan E. Welsh, Faith G. Miller, Sandra M. Chafouleas, Gregory A. Fabiano, and Austin H. Johnson
- Subjects
Male, Child Behavior, Child Behavior Disorders, Education, Statistics, Developmental and Educational Psychology, Humans, Mass Screening, Point estimation, Child, Students, Bootstrapping (statistics), Schools, Receiver operating characteristic, Predictive value, Direct Behavior Rating, Female, Psychology, Social psychology
- Abstract
The implementation of multi-tiered systems in schools necessitates the use of screening assessments that produce valid and reliable data to identify students in need of tiered supports. Data derived from these screening assessments may be evaluated according to their classification accuracy, or the degree to which cut scores correctly identify individuals as "at-risk" or "not-at-risk." The current study examined the performance of mean scores derived from over 1,700 students in Grades 1, 2, 4, 5, 7, and 8 using Direct Behavior Rating–Single Item Scales. Students were rated across three time points (fall, winter, spring) by their teachers in three areas: (a) academically engaged behavior, (b) disruptive behavior, and (c) respectful behavior. Classification accuracy indices and comparisons among behaviors were derived using receiver operating characteristic (ROC) curve analyses, partial area under the curve (pAUC) tests, and bootstrapping methods to evaluate the degree to which mean behavior ratings accurately identified students who demonstrated elevated behavioral symptomology on the Behavioral and Emotional Screening System. Results indicated that optimal cut scores for mean behavior ratings and a composite rating demonstrated high levels of specificity, sensitivity, and negative predictive value, with sensitivity point estimates for optimal cut scores exceeding .70 for individual behaviors and .75 for composite scores across grade groups and time points.
- Published
- 2016
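The classification-accuracy indices this abstract reports (sensitivity, specificity, negative predictive value) can be computed for a single cut score as in the sketch below. The simulated ratings, the 0–10 scale, and the cut score are hypothetical stand-ins, not values from the study:

```python
import numpy as np

# Hypothetical ratings and a binary at-risk criterion (True = at-risk on
# the criterion screener); simulated data, not the study's dataset.
rng = np.random.default_rng(1)
n = 500
at_risk = rng.random(n) < 0.2
# At-risk students get lower engagement ratings on a 0-10 DBR-style scale.
ratings = np.clip(rng.normal(np.where(at_risk, 4.0, 8.0), 1.5), 0, 10)

def classification_indices(ratings, at_risk, cut):
    """Flag students at or below the cut score, then compute accuracy indices."""
    flagged = ratings <= cut
    tp = np.sum(flagged & at_risk)       # correctly flagged
    tn = np.sum(~flagged & ~at_risk)     # correctly passed over
    fp = np.sum(flagged & ~at_risk)
    fn = np.sum(~flagged & at_risk)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),
    }

print(classification_indices(ratings, at_risk, cut=6.0))
```

Sweeping `cut` over the rating scale and plotting sensitivity against 1 − specificity traces the ROC curve the abstract refers to; the "optimal" cut score is then chosen from that trade-off.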
11. The Next Big Idea: A Framework for Integrated Academic and Behavioral Intensive Intervention
- Author
Amy Peterson, Louis Danielson, Laura Berry Kuchle, Rebecca Zumeta Edmonds, and T. Chris Riley-Tillman
- Subjects
Teamwork, Medical education, Health (social science), Big Idea, Teaching method, Psychological intervention, Academic achievement, Special education, Education, Learning disability, Developmental and Educational Psychology, Mathematics education, Psychology
- Abstract
Despite advances in evidence-based core instruction and intervention, many students with disabilities continue to achieve poor academic and behavioral outcomes. Many of these students are not sufficiently responsive to standardized programs and require more intensive, individualized supports. While many interventions and school problem-solving teams focus primarily on either academic or behavioral concerns, students with the most intensive needs often have interrelated needs in both areas. The next big idea in special education should be to merge these efforts, building upon all that we have learned about problem solving at all levels of support, to improve outcomes for these students. Data-based individualization provides a framework for integrating academic and behavioral problem solving and intervention.
- Published
- 2015
12. A comparison of measures to screen for social, emotional, and behavioral risk
- Author
Sandra M. Chafouleas, T. Chris Riley-Tillman, Faith G. Miller, Megan E. Welsh, Gregory A. Fabiano, and Daniel Cohen
- Subjects
Male, Adolescent, Referral, Emotions, Education, Sample (statistics), Child Behavior Disorders, Social skills, Risk Factors, Rating scale, Developmental and Educational Psychology, Humans, Child, Students, At-risk students, School Health Services, Psychiatric Status Rating Scales, Mental health, United States, Early Diagnosis, Direct Behavior Rating, Female, Psychology, Clinical psychology
- Abstract
The purpose of this study was to examine the relation between teacher-implemented screening measures used to identify social, emotional, and behavioral risk. To this end, 5 screening options were evaluated: (a) Direct Behavior Rating - Single Item Scales (DBR-SIS), (b) Social Skills Improvement System - Performance Screening Guide (SSiS), (c) Behavioral and Emotional Screening System - Teacher Form (BESS), (d) Office discipline referrals (ODRs), and (e) School nomination methods. The sample included 1974 students who were assessed tri-annually by their teachers (52% female, 93% non-Hispanic, 81% white). Findings indicated that teacher ratings using standardized rating measures (DBR-SIS, BESS, and SSiS) resulted in a larger proportion of students identified at-risk than ODRs or school nomination methods. Further, risk identification varied by screening option, such that a large percentage of students were inconsistently identified depending on the measure used. Results further indicated weak to strong correlations between screening options. The relation between broad behavioral indicators and mental health screening was also explored by examining classification accuracy indices. Teacher ratings using DBR-SIS and SSiS correctly identified between 81% and 91% of the sample as at-risk using the BESS as a criterion. As less conservative measures of risk, DBR-SIS and SSiS identified more students as at-risk relative to other options. Results highlight the importance of considering the aims of the assessment when selecting broad screening measures to identify students in need of additional support.
- Published
- 2015
13. Using Consensus Building Procedures With Expert Raters to Establish Comparison Scores of Behavior for Direct Behavior Rating
- Author
Sayward E. Harrison, Rose Jaffery, Sandra M. Chafouleas, T. Chris Riley-Tillman, Mark C. Bowler, and Austin H. Johnson
- Subjects
Alternative methods, Psychometrics, Best practice, Applied psychology, Direct observation, Expert consensus, Education, Behavioral data, Direct Behavior Rating, General Health Professions, Developmental and Educational Psychology, Data mining, Psychology
- Abstract
To date, rater accuracy when using Direct Behavior Rating (DBR) has been evaluated by comparing DBR-derived data to scores yielded through systematic direct observation. The purpose of this study was to evaluate an alternative method for establishing the comparison scores used to evaluate rating accuracy: expert-completed DBR combined with best practices in consensus-building exercises. Standard procedures for obtaining expert data were established and implemented across two sites, and agreement indices and comparison scores were derived. Findings indicate that the expert consensus-building sessions resulted in high agreement between expert raters, lending support to this alternative method for identifying comparison scores for behavioral data.
- Published
- 2015
14. Formative Assessment Using Direct Behavior Ratings: Evaluating Intervention Effects of Daily Behavior Report Cards
- Author
Daniel Cohen, Wesley A. Sims, and Chris Riley-Tillman
- Subjects
Education, Intervention effect, Formative assessment, Direct Behavior Rating, Intervention (counseling), General Health Professions, Developmental and Educational Psychology, Specialist Studies in Education, Psychology, Social psychology, Clinical psychology
- Abstract
This study examined the treatment sensitivity of Direct Behavior Rating–Single Item Scales (DBR-SIS) in response to an evidence-based intervention delivered in a single-case, multiple-baseline design. DBR-SIS was used as a formative assessment in conjunction with a frequently used intervention in schools, a Daily Behavior Report Card (DRC). The intervention and concurrent assessment were conducted by five teachers in a rural Midwestern elementary school with five male students displaying mild to moderate behavioral challenges in the classroom. Study findings indicated that DBR-SIS displays appropriate treatment sensitivity following intervention implementation. Agreement in the documentation of response and nonresponse to intervention implementation between DBR-SIS and systematic direct observation (SDO) data was evident across visual and empirical analyses. In addition, through a multiple-baseline design, this study documented negligible to no change in student behavior following implementation of a DRC in an applied classroom setting. These findings support previous calls for continued examination of the forms and components of DRCs employed in schools. Finally, educators rated the combined use of a DRC intervention and DBR-SIS progress monitoring favorably.
- Published
- 2017
15. Teacher Perceptions of the Usability of School-Based Behavior Assessments
- Author
Gregory A. Fabiano, Faith G. Miller, Sandra M. Chafouleas, and T. Chris Riley-Tillman
- Subjects
Teacher perceptions, Medical education, Screening test, Usability, Education, Program validation, Clinical Psychology, Pedagogy, Evaluation methods, Developmental and Educational Psychology, School based, Attitude change, Psychology
- Abstract
Teacher perceptions of school-based behavior assessments were assessed over the course of a school year. Specifically, the utility and relevance of Direct Behavior Ratings–Single Item Scales, a hybrid direct observation method, relative to two school-based behavioral rating scales, the Social Skills Improvement System–Performance Screening Guide and the Behavioral and Emotional Screening System–Teacher Form, were examined. Participants included 65 teachers who completed the Usage Rating Profile-Assessment on each measure after three assessment periods (fall, winter, and spring). Results indicated that although overall usability ratings did not differ, factor scores differed as a function of both measure and assessment period. Implications for practice and directions for future research are discussed.
- Published
- 2014
16. Direct Behavior Rating: An evaluation of time-series interpretations as consequential validity
- Author
Ethan R. Van Norman, Peter M. Nelson, T. Chris Riley-Tillman, Theodore J. Christ, and Sandra M. Chafouleas
- Subjects
Male, Teacher perceptions, Schools, Child Behavior, Intervention effect, Generalized linear mixed model, Education, Data point, Direct Behavior Rating, Statistics, Developmental and Educational Psychology, Humans, Female, Child, Students, Empirical evidence, Psychology, Social psychology, Categorical variable
- Abstract
Direct Behavior Rating (DBR) is a repeatable and efficient method of behavior assessment that is used to document teacher perceptions of student behavior in the classroom. Time-series data can be graphically plotted and visually analyzed to evaluate patterns of behavior or intervention effects. This study evaluated the decision accuracy of novice raters who were presented with single-phase graphical plots of DBR data. Three behaviors (i.e., academically engaged, disruptive, and respectful) and three graphical trends (i.e., positive, no trend, and negative) were analyzed by 27 graduate and 5 undergraduate participants who had minimal visual analysis experience. All graphs were unique, with data points arranged to form one of three "true" trends. Raters correctly classified graphs with positive, no, and negative trends in an average of 76%, 98%, and 67% of instances, respectively. A generalized linear mixed model was used to conduct significance tests on the categorical data. Results indicate that accuracy was influenced by trend direction, with the most accurate ratings occurring in the no-trend condition. Despite the significant effect of trend direction, the current study provides empirical evidence for the accuracy of DBR trend interpretations: novice raters and visual analysts yielded accurate decisions regarding the trend of plotted data for student behavior.
- Published
- 2014
17. Direct behavior rating as a school-based behavior universal screener: Replication across sites
- Author
Sandra M. Chafouleas, T. Chris Riley-Tillman, Stephen P. Kilgus, Theodore J. Christ, and Megan E. Welsh
- Subjects
Male, Schools, Receiver operating characteristic, Concurrent validity, Child Behavior, Child Behavior Disorders, Bivariate analysis, Sensitivity and Specificity, Education, Developmental psychology, Direct Behavior Rating, Scale (social sciences), Developmental and Educational Psychology, Predictive power, Humans, Mass Screening, Female, Child, Students, Psychology
- Abstract
The purpose of this study was to evaluate the utility of Direct Behavior Rating Single Item Scale (DBR-SIS) targets of disruptive, engaged, and respectful behavior within school-based universal screening. Participants included 31 first-, 25 fourth-, and 23 seventh-grade teachers and their 1108 students, sampled from 13 schools across three geographic locations (northeast, southeast, and midwest). Each teacher rated approximately 15 of their students across three measures, including DBR-SIS, the Behavioral and Emotional Screening System (Kamphaus & Reynolds, 2007), and the Student Risk Screening Scale (Drummond, 1994). Moderate to high bivariate correlations and area under the curve statistics supported concurrent validity and diagnostic accuracy of DBR-SIS. Receiver operating characteristic curve analyses indicated that although respectful behavior cut scores recommended for screening remained constant across grade levels, cut scores varied for disruptive behavior and academic engaged behavior. Specific cut scores for first grade included 2 or less for disruptive behavior, 7 or greater for academically engaged behavior, and 9 or greater for respectful behavior. In fourth and seventh grades, cut scores changed to 1 or less for disruptive behavior and 8 or greater for academically engaged behavior, and remained the same for respectful behavior. Findings indicated that disruptive behavior was particularly appropriate for use in screening at first grade, whereas academically engaged behavior was most appropriate at both fourth and seventh grades. Each set of cut scores was associated with acceptable sensitivity (.79-.87), specificity (.71-.82), and negative predictive power (.94-.96), but low positive predictive power (.43-.44). 
DBR-SIS multiple gating procedures, through which students were only considered at risk overall if they exceeded cut scores on 2 or more DBR-SIS targets, were also determined acceptable in first and seventh grades, as the use of both disruptive behavior and academically engaged behavior in defining risk yielded acceptable conditional probability indices. Overall, the current findings are consistent with previous research, yielding further support for the DBR-SIS as a universal screener. Limitations, implications for practice, and directions for future research are discussed.
- Published
- 2014
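The multiple-gating procedure this abstract describes (a student is considered at risk overall only when cut scores are exceeded on two or more DBR-SIS targets) reduces to a simple count over per-target risk flags. A minimal sketch, with hypothetical target names and flags:

```python
# Hedged sketch of the multiple-gating rule: flag a student overall only
# when two or more DBR-SIS targets exceed their cut scores. Target names
# and flag values are illustrative, not the study's data.
def multiple_gate(flags, min_targets=2):
    """flags: dict mapping target name -> bool (True = exceeded that target's cut score)."""
    return sum(flags.values()) >= min_targets

student = {"disruptive": True, "engaged": True, "respectful": False}
print(multiple_gate(student))  # → True
```

Requiring agreement across targets trades sensitivity for positive predictive power, which is why the abstract evaluates the gating rule with conditional probability indices rather than each target's accuracy alone.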
18. The Impact of Target, Wording, and Duration on Rating Accuracy for Direct Behavior Rating
- Author
Rose Jaffery, Sandra M. Chafouleas, Theodore J. Christ, Rohini Sen, and T. Chris Riley-Tillman
- Subjects
Prosocial behavior, Learner engagement, Direct Behavior Rating, General Health Professions, Developmental and Educational Psychology, Video technology, Psychology, Social psychology, Education, Cognitive psychology
- Abstract
The purpose of this study was to extend evaluation of rater accuracy using Direct Behavior Rating–Single-Item Scales (DBR-SIS). Extension of prior research was accomplished through use of criterion ratings derived from both systematic direct observation and expert DBR-SIS scores, and also through control of the durations over which the target behavior was displayed in each clip. Rating targets included academically engaged, disruptive, and respectful behavior. Undergraduate participants (n = 113) viewed video clips of classroom instruction and subsequently rated target students’ behavior using DBR-SIS. Results indicated that both rater bias and rater error were present to some degree based on the behavior target, wording, and duration displayed, with greater bias found for the target involving respectful behavior. In addition, positive wording for academically engaged behavior resulted in improved accuracy, yet negative wording resulted in improved accuracy for disruptive and possibly respectful behavior. Implications for research and practice are discussed.
- Published
- 2013
19. Test order in teacher-rated behavior assessments: Is counterbalancing necessary?
- Author
-
D. Betsy McCoach, Sandra M. Chafouleas, T. Chris Riley-Tillman, Megan E. Welsh, Gregory A. Fabiano, Janice Kooken, and Faith G. Miller
- Subjects
Male ,Adolescent ,education ,Applied psychology ,Control (management) ,Emotions ,Child Behavior ,PsycINFO ,Social Skills ,Social skills ,Humans ,0501 psychology and cognitive sciences ,Internal validity ,Child ,Students ,Problem Behavior ,Test order ,Psychological research ,Mental Disorders ,05 social sciences ,Multilevel model ,050301 education ,Psychiatry and Mental health ,Clinical Psychology ,Order (business) ,School Teachers ,Psychology ,0503 education ,Social psychology ,050104 developmental & child psychology - Abstract
Counterbalancing treatment order in experimental research design is well established as an option to reduce threats to internal validity, but in educational and psychological research, the effect of varying the order in which multiple tests are presented to a single rater has not been examined, and counterbalancing is rarely adhered to in practice. The current study examines the effect of test order on measures of student behavior rated by teachers, utilizing data from a behavior measure validation study. Using multilevel modeling to control for students nested within teachers, the effect of rating an earlier measure on the intercept or slope of a later behavior assessment was statistically significant in 22% of predictor main effects for the spring test period. Test order effects had the potential for high-stakes consequences, with differences large enough to change risk classification. Results suggest that researchers and practitioners in classroom settings using multiple measures evaluate the potential impact of test order. Where possible, they should counterbalance when the risk of an order effect exists and report justification for any decision not to counterbalance.
- Published
- 2016
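Counterbalancing of the kind weighed above is commonly implemented by cycling test order with a Latin square, so that each measure occupies each serial position equally often across raters. A minimal sketch; the third measure name is a placeholder, and the study itself does not prescribe this particular procedure.

```python
def latin_square_orders(measures):
    """Build a cyclic Latin square of test orders: across the returned
    orders, each measure appears in each serial position exactly once."""
    n = len(measures)
    return [[measures[(start + i) % n] for i in range(n)]
            for start in range(n)]

# "SDQ" is a placeholder third measure for illustration.
orders = latin_square_orders(["DBR-SIS", "TOCA-C", "SDQ"])
# Assign raters to orders round-robin so position is balanced across raters.
for row in orders:
    print(" -> ".join(row))
```

With three measures this yields three orders; assigning teachers to orders in rotation balances any order effect across the sample.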
20. The Role of Assessment in a Prevention Science Framework
- Author
-
Keith C. Herman, Wendy M. Reinke, and T. Chris Riley-Tillman
- Subjects
medicine.medical_specialty ,Medical education ,Management science ,Public health ,Psychological intervention ,Poison control ,Surveillance Methods ,Population health ,Suicide prevention ,Education ,Prevention science ,Developmental and Educational Psychology ,medicine ,Life course approach ,Psychology - Abstract
The articles in this Special Topic issue present a range of assessment models and challenges for improving the identification and early intervention of students in need of additional supports. Although each article targets a unique aspect of student learning (learning behaviors, math skills, reading comprehension, behavioral functioning, and ratings of engaged and disruptive behavior), collectively they highlight the importance of assessment practices in effective problem solving. In our commentary, we use prevention science as a framework for considering the contributions of the articles in this special topic with a particular focus on the role of assessment. A recent report from the National Research Council and Institute of Medicine (2009) attributed much of the progress in advancing knowledge about prevention of emotional and behavior problems over the past 2 decades to the relatively young field of prevention science. As an interdisciplinary field, prevention science provides a step-by-step model for solving public health problems, including educational underachievement. Specifically, prevention science is a systematic method for identifying, monitoring, and altering meaningful targets that have been demonstrated to be associated with critical youth outcomes. Accurate and efficient assessment tools are essential at each step of the prevention science research cycle. Core Elements of Prevention Science In their seminal article, Kellam, Koretz, and Moscicki (1999) traced the development of prevention science to the integration of three related fields: epidemiology, life course development, and intervention trials technology. Epidemiology is the foundation for prevention science. It refers to the study of the distribution of disease or health-related behaviors/events and is a core element of any public health approach, like prevention science. 
The purpose of epidemiology is to provide real-time data about health events so as to identify intervention targets and inform intervention policies and practices. Within the field of epidemiology, surveillance is the strategy for continuous, systematic collection, analysis, and interpretation of health-related processes over time to guide planning, implementation, and evaluation of practices (World Health Organization, 2012). Surveillance practices can help identify emerging public health crises, determine the effect of public health interventions, and provide ongoing information about population health. In the following, we discuss epidemiology and surveillance, life course development, and intervention trial technology as the bases for prevention science. Epidemiology and Surveillance Public health officials use surveillance data to monitor the prevalence and incidence of diseases across world populations. Surveillance is most widely understood as an element of public health approaches to somatic disease prevention. For instance, the general public may be aware that public health officials have ongoing surveillance systems for tracking infectious diseases such as the flu. These systems allow officials to monitor outbreaks, identify causes, and prevent the spread of diseases, but they can also be applied to select appropriate response strategies and then monitor the effects of interventions. Surveillance methods have been extended to include public health approaches to preventing emotional and behavior problems. For instance, surveillance systems have been established for monitoring the prevalence and incidence of crimes, substance abuse, and mental disorders (Biglan, Mrazek, Carnine, & Flay, 2003). Although behavioral surveillance systems have tended to lag behind systems for more traditional somatic diseases, emerging technology has led to exciting advances in many of these systems (Biglan et al., 2003; Wagner, Whitehill, Vernick, & Parker, 2012).
For instance, community-level violence prevention scientists now are able to use real-time crime reports to assess both the need for intervention and the effect of tried interventions (Wagner et al. …
- Published
- 2012
21. Meta-Analysis of Interventions for Basic Mathematics Computation in Single-case Research
- Author
-
Scott A. Methe, Cheryl Neiman, T. Chris Riley-Tillman, and Stephen P. Kilgus
- Subjects
Correlation ,Experimental control ,Meta-analysis ,Improvement rate ,Statistics ,Developmental and Educational Psychology ,Subtraction ,Research studies ,Mathematics education ,Psychological intervention ,Single-subject design ,Psychology ,Education - Abstract
This study examined interventions for addition and subtraction that were implemented through single-case design (SCD) research studies. We attempted to extend prior SCD meta-analyses by examining differences in effect sizes across several moderating variables and by including a novel index of effect, improvement rate difference (IRD). We also examined the extent to which effect sizes differed by degree of experimental control achieved in the studies. Forty-seven effect sizes were obtained across 11 studies. IRD effect sizes ranged from .59 to .90 and suggested a moderate to large effect for the math interventions. Variables that appeared to moderate the effects were student age, time spent in intervention, and intervention type. We also identified a relationship between experimental control and the obtained effect sizes. Findings indicated that further SCD research in basic arithmetic and rigorous experiments are necessary to establish an evidence base that accurately characterizes intervention effectiveness.
- Published
- 2012
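The improvement rate difference (IRD) index used in the meta-analysis above contrasts the proportion of improved data points between the baseline and intervention phases. The sketch below implements a simplified variant (the full procedure removes the minimal set of overlapping points before computing improvement rates); the phase data are invented.

```python
def improvement_rate_difference(baseline, treatment):
    """Simplified IRD for a target where an increase is improvement:
    a treatment point counts as improved if it exceeds every baseline
    point; a baseline point counts as improved if it reaches the
    treatment range. (The full IRD procedure instead removes the
    minimal set of overlapping points; this sketch skips that step.)"""
    ir_treatment = sum(t > max(baseline) for t in treatment) / len(treatment)
    ir_baseline = sum(b >= min(treatment) for b in baseline) / len(baseline)
    return ir_treatment - ir_baseline

# Fully separated phases yield the maximum IRD of 1.0.
print(improvement_rate_difference([2, 3, 2], [5, 6, 7]))  # 1.0
```

Values of .59 to .90, as reported above, would correspond to moderate to large separation between phases under this index.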
22. The Influence of Alternative Scale Formats on the Generalizability of Data Obtained From Direct Behavior Rating Single-Item Scales (DBR-SIS)
- Author
-
Sandra M. Chafouleas, Stephen P. Kilgus, Theodore J. Christ, T. Chris Riley-Tillman, and Amy M. Briesch
- Subjects
Scale (ratio) ,Context (language use) ,Education ,Facet (psychology) ,Rating scale ,Direct Behavior Rating ,General Health Professions ,Statistics ,Developmental and Educational Psychology ,Generalizability theory ,Psychology ,Social psychology ,Scaling ,Scale type - Abstract
The current study served to extend previous research on scaling construction of Direct Behavior Rating (DBR) in order to explore the potential flexibility of DBR to fit various intervention contexts. One hundred ninety-eight undergraduate students viewed the same classroom footage but rated student behavior using one of eight randomly assigned scales (i.e., differed with regard to number of gradients, length of scale, discrete vs. continuous). Descriptively, mean ratings typically fell within the same scale gradient across conditions. Furthermore, results of generalizability analyses revealed negligible variance attributable to the facet of scale type or interaction terms involving this facet. Implications for DBR scale construction within the context of intervention-related decision making are presented and discussed.
- Published
- 2012
23. Direct behavior rating scales as screeners: A preliminary investigation of diagnostic accuracy in elementary school
- Author
-
T. Chris Riley-Tillman, Sandra M. Chafouleas, Megan E. Welsh, and Stephen P. Kilgus
- Subjects
Male ,Time Factors ,Adolescent ,Psychometrics ,Concurrent validity ,Diagnostic accuracy ,Test validity ,Single item ,Risk Assessment ,Education ,Behavioral risk ,Social Facilitation ,Rating scale ,Statistics ,Developmental and Educational Psychology ,Humans ,Mass Screening ,Longitudinal Studies ,Sex Distribution ,Child ,Students ,School Health Services ,Psychological Tests ,Chi-Square Distribution ,Reproducibility of Results ,Social Participation ,Faculty ,ROC Curve ,Socioeconomic Factors ,Attention Deficit and Disruptive Behavior Disorders ,Direct Behavior Rating ,Child, Preschool ,Scale (social sciences) ,Female ,Psychology - Abstract
This study presents an evaluation of the diagnostic accuracy and concurrent validity of Direct Behavior Rating Single Item Scales for use in school-based behavior screening of second-grade students. Results indicated that each behavior target was a moderately to highly accurate predictor of behavioral risk. Optimal universal screening cut scores were also identified for each scale, with results supporting reduced false positive rates through the simultaneous use of multiple scales.
- Published
- 2012
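The diagnostic-accuracy workflow above, AUC as an index of overall accuracy plus a cut score chosen to separate at-risk from not-at-risk students, can be sketched with a rank-based AUC and Youden's J. All scores and candidate cuts below are invented, not data from the study.

```python
def auc_and_best_cut(scores_at_risk, scores_not_at_risk, candidate_cuts):
    """Rank-based AUC (the probability that a randomly chosen at-risk
    student scores higher than a not-at-risk student, ties = 0.5) plus
    the cut score maximizing Youden's J = sensitivity + specificity - 1.
    Assumes higher scores indicate greater risk."""
    pairs = [(a, n) for a in scores_at_risk for n in scores_not_at_risk]
    auc = sum(1.0 if a > n else 0.5 if a == n else 0.0
              for a, n in pairs) / len(pairs)

    def youden(cut):
        sens = sum(s >= cut for s in scores_at_risk) / len(scores_at_risk)
        spec = sum(s < cut for s in scores_not_at_risk) / len(scores_not_at_risk)
        return sens + spec - 1

    return auc, max(candidate_cuts, key=youden)

# Invented screening scores: perfect separation gives AUC = 1.0.
print(auc_and_best_cut([6, 7, 8, 5], [1, 2, 3, 4], range(11)))  # (1.0, 5)
```

In practice the conditional probability statistics the abstract mentions (e.g., sensitivity and specificity at the chosen cut) would be reported alongside the AUC.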
24. Commentary on 'Building Local Capacity for Training and Coaching Data-Based Problem Solving With Positive Behavior Intervention and Support Teams'
- Author
-
T. Chris Riley-Tillman and Wendy M. Reinke
- Subjects
Relation (database) ,Management science ,business.industry ,School psychology ,Psychological intervention ,Training (civil) ,Coaching ,Education ,Psychiatry and Mental health ,Intervention (counseling) ,Developmental and Educational Psychology ,Mathematics education ,Positive behavior ,School based ,Psychology ,business ,Applied Psychology - Abstract
This invited commentary includes observations about the article “Building Local Capacity for Training and Coaching Data-Based Problem Solving with Positive Behavior Interventions and Support Teams,” published in the July 2011 issue of the Journal of Applied School Psychology. In this article, Newton and colleagues present an interesting field-based case study of the Team-Initiated Problem Solving (TIPS) model in relation to training school-based teams to increase problem-solving practices. In this invited commentary, our goal is to consider the usefulness of the TIPS model for educational professionals. To accomplish this goal, our thoughts are organized in a series of application questions.
- Published
- 2011
25. Considering Systematic Direct Observation after a Century of Research—Commentary on the Special Issue
- Author
-
Janine P. Stichter and T. Chris Riley-Tillman
- Subjects
050103 clinical psychology ,education.field_of_study ,Data collection ,Operational definition ,media_common.quotation_subject ,05 social sciences ,Population ,050301 education ,Education ,Epistemology ,Clinical Psychology ,Educational research ,Scholarship ,Documentation ,Phenomenon ,Developmental and Educational Psychology ,Curiosity ,0501 psychology and cognitive sciences ,Psychology ,education ,0503 education ,Social psychology ,media_common - Abstract
Systematic Direct Observation (SDO) has played a pivotal role in the field of Emotional and/or Behavioral Disorders (EBD) since its inception as a key part of understanding more about the behaviors, contexts that impact them, and the effective supports necessary for this population. Thinking more critically about how we have, and should continue to use, this methodology is an ongoing charge for all of us. As we prepared ourselves to do this for the purposes of this article, and our own curiosity, we took a bit of time to explore the not-so-recent writings on the science of direct observation. Early on, we came across an outstanding article by Jersild and Meigs (1939). We strongly recommend that any scholar interested in the topic of SDO assessment research read that article. Essentially, it serves as a well-articulated reminder that the state of SDO research has changed little since 1939. While there are certainly many studies utilizing SDO, and with an ever-increasing population, Jersild and Meigs' hope for a more systematic investigation of SDO remains elusive. The extensive list of questions is as relevant today as it was then. Further, their recommendations are conceptually consistent with issues put forth in this special issue and ones we will champion in this commentary. As we fast-forward 75 years, one of the consistent hallmarks of EBD research and practice is the general consensus that SDO is the gold standard method (Baer, Wolf, & Risley, 1987). What does that exactly mean and how did this "understanding" come to exist? Is this belief fully defensible based on a comprehensive literature base that has investigated SDO through a clearly understood process for evaluating effective assessment procedures? Rather, is much of our literature base and corresponding assumptions based on cumulative "use" data? In other words, have we yet to realize under what conditions and when SDO is valid and reliable?
Moreover, which specific SDO coding schemes are valid and reliable in what conditions? And in some cases, has SDO become the default "gold standard" assessment method in the absence of other valid and reliable options? In this commentary we consider these century-old issues and the contribution of the other articles in the special issue to this question, and suggest an increased emphasis toward a systematic line of research to fully consider the psychometric properties of specific SDO for targeted purposes. Despite almost a century of documented research on SDO in child behavior (see Jersild & Meigs, 1939; Lewis et al., this issue), the field continues to seek more specific coding schemes that are reliable and valid. This may be in part because the tendency has been to generalize SDO research across populations and settings, as opposed to emphasizing more targeted research specifying unique coding schemes. Lewis and colleagues provide a descriptive account of this phenomenon within this issue as they highlight how SDOs are traditionally chosen almost exclusively based on the current dimension of the target behavior. Many SDO coding manuals come with procedures for data collection and operational definitions, but few come with documentation indicating the unique purpose for which the coding scheme has been validated. Messick (1980) cautioned against this and provided some insights. He operationally defined frameworks to systematically address various forms of validity and, perhaps even more relevant for the current discussion, to identify the purpose of the assessment. Not all SDO is really developed or intended for use in applied settings; rather, such specific SDO codes serve a specific research paradigm. Others may be uniquely suited to capture multiple interaction codes for very specific contexts (e.g., teacher/student interaction effects), but are not valid for capturing the same student behavior within a discrete trial setting.
Similarly, some SDO coding systems that are valid for students with extreme overt behaviors may not maintain validity or reliability when used with students with low incidence behavior, depression or even different age groups. …
- Published
- 2014
26. Generalizability and Dependability of Behavior Assessment Methods to Estimate Academic Engagement: A Comparison of Systematic Direct Observation and Direct Behavior Rating
- Author
-
Amy M. Briesch, T. Chris Riley-Tillman, and Sandra M. Chafouleas
- Subjects
Curriculum-based measurement ,Psychometrics ,Direct Behavior Rating ,Teaching method ,Applied psychology ,Developmental and Educational Psychology ,Dependability ,Student engagement ,Generalizability theory ,Psychology ,Curriculum ,Education ,Developmental psychology - Abstract
Although substantial attention has been directed toward building the psychometric evidence base for academic assessment methods (e.g., state mastery tests, curriculum-based measurement), similar ex...
- Published
- 2010
27. Direct Behavior Rating (DBR): Generalizability and Dependability Across Raters and Observations
- Author
-
Christina Boice, T. Chris Riley-Tillman, Sandra M. Chafouleas, and Theodore J. Christ
- Subjects
Educational measurement ,Psychometrics ,Applied Mathematics ,Applied psychology ,Education ,Developmental psychology ,Alternative assessment ,Rating scale ,Direct Behavior Rating ,Developmental and Educational Psychology ,Task analysis ,Dependability ,Generalizability theory ,Psychology ,Applied Psychology - Abstract
Generalizability theory was used to examine the generalizability and dependability of outcomes from two single-item Direct Behavior Rating (DBR) scales: DBR of actively manipulating and DBR of visually distracted. DBR is a behavioral assessment tool with specific instrumentation and procedures that can be used by a variety of service delivery providers (e.g., teacher, teacher aide, parent, etc.) to collect time-series data on student behavior. The purpose of this study was to extend the findings presented by Chafouleas et al. with an examination of DBR outcomes as they are generalized across raters and rating occasions. One hundred twenty-five undergraduates viewed and rated student behavior on video clips while the children engaged in an unsolvable Lego puzzle task. A series of decision studies were used to evaluate the effects of alternate assessment conditions (variable numbers of raters and rating occasions) and interpretive assumptions (definitions of the universe of generalization). Results support the general conclusion that ratings from individual or small groups of simultaneous raters, when generalized only to that specific individual or group of individuals, can approach reliability criteria for low- and high-stakes decisions. Implications are discussed.
- Published
- 2010
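Decision studies like those described above project a reliability-like generalizability coefficient from estimated variance components under alternative numbers of raters and rating occasions. A sketch for a fully crossed persons x raters x occasions random design; the variance components here are invented for illustration.

```python
def g_coefficient(var_p, var_pr, var_po, var_pro_e, n_raters, n_occasions):
    """Generalizability coefficient (E-rho-squared) for a p x r x o
    random design: universe-score variance over itself plus relative
    error variance, averaged over raters and occasions."""
    rel_error = (var_pr / n_raters
                 + var_po / n_occasions
                 + var_pro_e / (n_raters * n_occasions))
    return var_p / (var_p + rel_error)

# Invented variance components; adding raters/occasions shrinks error.
one = g_coefficient(0.50, 0.20, 0.10, 0.40, n_raters=1, n_occasions=1)
five = g_coefficient(0.50, 0.20, 0.10, 0.40, n_raters=2, n_occasions=5)
print(round(one, 3), round(five, 3))  # 0.417 0.758
```

This is the mechanism by which "ratings from individual or small groups of simultaneous raters" can approach reliability criteria as conditions are added: the rater- and occasion-related error terms are divided down.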
28. The Impact of Observation Duration on the Accuracy of Data Obtained From Direct Behavior Rating (DBR)
- Author
-
T. Chris Riley-Tillman, Theodore J. Christ, Amy M. Briesch, Christina H. Boice-Mallach, and Sandra M. Chafouleas
- Subjects
Observation duration ,Rating scale ,Direct Behavior Rating ,Instrumentation ,Pediatrics, Perinatology and Child Health ,Statistics ,Developmental and Educational Psychology ,Anchoring ,Video technology ,Psychology ,Social psychology ,Applied Psychology - Abstract
In this study, evaluation of direct behavior rating (DBR) occurred with regard to two primary areas: (a) accuracy of ratings with varied instrumentation (anchoring: proportional or absolute) and procedures (observation length: 5 min, 10 min, or 20 min) and (b) one-week test-retest reliability. Participants viewed video clips of a typical third grade student and then used single-item DBR scales to rate disruptive and academically engaged behavior. Overall, ratings tended to overestimate the actual occurrence of behavior. Although ratings of academic engagement were not affected by duration of the observation, ratings of disruptive behavior were: the longer the duration, the more the ratings of disruptive behavior were overestimated. In addition, the longer the student was disruptive, the greater the overestimation effect. Results further revealed that anchoring the DBR scale as proportional versus absolute number of minutes did not affect rating accuracy. Finally, test-retest analyses revealed low to moderate consistency across time points for 10-min and 20-min observations, with increased consistency as the number of raters or number of ratings increased (e.g., four 5-min vs. one 20-min). Overall, results contribute to the technical evaluation of DBR as a behavior assessment method and provide preliminary information regarding the influence of duration of an observation period on DBR data.
- Published
- 2010
29. Foundation for the Development and Use of Direct Behavior Rating (DBR) to Assess and Evaluate Student Behavior
- Author
-
Sandra M. Chafouleas, Theodore J. Christ, and T. Chris Riley-Tillman
- Subjects
Data collection ,Guiding Principles ,business.industry ,School psychology ,Applied psychology ,Usability ,Education ,Rating scale ,Direct Behavior Rating ,General Health Professions ,Immediacy ,Developmental and Educational Psychology ,Instrumentation (computer programming) ,Psychology ,business ,Social psychology - Abstract
Direct Behavior Rating (DBR) is a method of social-emotional and behavior assessment that combines the immediacy of systematic direct observation and the efficiency of behavior rating scales. The purpose of this article is to discuss the defensibility and usability of DBR. This article provides a brief summary of (a) the past, present, and future directions of social-emotional and behavior assessment methods in schools; (b) the defining features of DBR; (c) the guiding principles for DBR development and evaluation; and (d) DBR research to date. Special emphasis is placed on single-item scale DBR (SIS-DBR) and three general outcome behaviors that are most relevant for use in schools. Research and recommendations for standard SIS-DBR instrumentation and procedures are reviewed, along with future directions for research and practice.
- Published
- 2009
30. Direct Behavior Rating (DBR)
- Author
-
Theodore J. Christ, Sandra M. Chafouleas, and T. Chris Riley-Tillman
- Subjects
Direct Behavior Rating ,Intervention (counseling) ,General Health Professions ,Developmental and Educational Psychology ,Behavioral assessment ,Psychology ,School based intervention ,Education ,Clinical psychology - Published
- 2009
31. Culture & biometrics: regional differences in the perception of biometric authentication technologies
- Author
-
Graham I. Johnson, Kathy Buckner, David Benyon, and Chris Riley
- Subjects
Biometrics ,business.industry ,media_common.quotation_subject ,Internet privacy ,Data security ,Occupational safety and health ,Human-Computer Interaction ,Theories of technology ,Philosophy ,Artificial Intelligence ,Cultural diversity ,Perception ,Hofstede's cultural dimensions theory ,Psychology ,business ,Social psychology ,Regional differences ,media_common - Abstract
Previous research has identified user concerns about biometric authentication technology, but most of this research has been conducted in European contexts. There is a lack of research that has investigated attitudes towards biometric technology in other cultures. To address this issue, data from India, South Africa and the United Kingdom were collected and compared. Cross-cultural attitudinal differences were seen, with Indian respondents viewing biometrics most positively while respondents from the United Kingdom were the least likely to have a positive opinion about biometrics. Multiple barriers to the acceptance of biometric technology were identified with data security and health and safety fears having the greatest overall impact on respondents’ attitudes towards biometrics. The results of this investigation are discussed with reference to Hofstede’s cultural dimensions and theories of technology acceptance. It is argued that contextual issues specific to each country provide a better explanation of the results than existing theories based on Hofstede’s model. We conclude that cultural differences have an impact on the way biometric systems will be used and argue that these factors should be taken into account during the design and implementation of biometric systems.
- Published
- 2009
32. The impact of training on the accuracy of Direct Behavior Ratings (DBR)
- Author
-
T. Chris Riley-Tillman, Mine D. Schlientz, Sandra M. Chafouleas, Amy M. Briesch, and Christy M. Walcott
- Subjects
Direct Behavior Rating ,Applied psychology ,Developmental and Educational Psychology ,Psychology ,Social psychology ,Training (civil) ,Education - Published
- 2009
33. Examining the Use of Direct Behavior Rating on Formative Assessment of Class-Wide Engagement
- Author
-
Kathryn Weegar, T. Chris Riley-Tillman, and Scott A. Methe
- Subjects
Class (computer programming) ,Response to intervention ,Applied psychology ,Direct observation ,Student engagement ,Education ,Variety (cybernetics) ,Formative assessment ,Direct Behavior Rating ,Intervention (counseling) ,General Health Professions ,Developmental and Educational Psychology ,Psychology ,Social psychology - Abstract
High-quality formative assessment data are critical to the successful application of any problem-solving model (e.g., response to intervention). Formative data available for a wide variety of outcomes (academic, behavior) and targets (individual, class, school) facilitate effective decisions about needed intervention supports and responsiveness to those supports. The purpose of the current case study is to provide preliminary examination of direct behavior rating methods in class-wide assessment of engagement. A class-wide intervention is applied in a single-case design (B-A-B-A), and both systematic direct observation and direct behavior rating are used to evaluate effects. Results indicate that class-wide direct behavior rating data are consistent with systematic direct observation across phases, suggesting that in this case study, direct behavior rating data are sensitive to classroom-level intervention effects. Implications for future research are discussed.
- Published
- 2009
34. The impact of item wording and behavioral specificity on the accuracy of direct behavior ratings (DBRs)
- Author
-
Theodore J. Christ, Amy M. Briesch, Teresa J. LeBel, Sandra M. Chafouleas, and T. Chris Riley-Tillman
- Subjects
Formative assessment ,Learner engagement ,Direct Behavior Rating ,Rating scale ,Applied psychology ,Developmental and Educational Psychology ,Behavioral assessment ,Video technology ,Predictor variables ,Psychology ,Social psychology ,Education - Published
- 2009
35. An Initial Comparison Of Collaborative And Expert-Driven Consultation On Treatment Integrity
- Author
-
T. Chris Riley-Tillman, Constance Kelleher, and Thomas J. Power
- Subjects
Multiple baseline design ,Nursing ,Intervention (counseling) ,Intervention design ,Applied psychology ,Developmental and Educational Psychology ,Psychology (miscellaneous) ,Predictor variables ,Psychology ,Training methods ,Empirical evidence ,Critical variable - Abstract
Although over 15 years have passed since Witt (1990) noted that no empirical evidence exists to support the contention that a collaborative approach to consultation leads to more positive outcomes than a hierarchical or expert-driven approach, this issue generally remains unaddressed (Schulte & Osborne, 2003). While the literature documenting the benefits of consultation has continued to grow, a true head-to-head comparison has not been conducted. The purpose of the present study was to directly address Witt's call by empirically examining the impact of two consultation styles on a critical variable: practitioner treatment integrity. It was hypothesized that the involvement of practitioners in all aspects of intervention design would increase their level of treatment integrity. Two single-subject experiments using multiple-baseline-across-subjects designs were used to examine the difference in level of treatment integrity for an imported, expert-driven intervention and a partnership-designed intervention....
- Published
- 2008
36. Generalizability of Scaling Gradients on Direct Behavior Ratings
- Author
-
T. Chris Riley-Tillman, Theodore J. Christ, and Sandra M. Chafouleas
- Subjects
Direct Behavior Rating ,Rating scale ,Applied Mathematics ,Statistics ,Developmental and Educational Psychology ,Task analysis ,Dependability ,Generalizability theory ,Psychology ,Scaling ,Applied Psychology ,Education ,Cognitive psychology - Abstract
Generalizability theory is used to examine the impact of scaling gradients on a single-item Direct Behavior Rating (DBR). A DBR refers to a type of rating scale used to efficiently record target behavior(s) following an observation occasion. Variance components associated with scale gradients are estimated using a random effects design for persons (p) by raters (r) by occasions (o). Data from 106 undergraduate student participants are used in the analysis. Each participant viewed and rated video clips of six elementary-aged students who were engaged in a difficult task. Participant ratings are collected three times for each of two behaviors within three scale gradient conditions (6-, 10-, 14-point scale). Scale gradient does not substantially contribute to the magnitude of observed score variances. In contrast, the largest proportions of variance are attributed to rater and error across all scale gradient conditions. Implications, limitations, and future research considerations are discussed.
- Published
- 2008
37. Daily Behavior Report Cards and Systematic Direct Observation: An Investigation of the Acceptability, Reported Training and Use, and Decision Reliability Among School Psychologists
- Author
-
Tanya L. Eckert, Sandra M. Chafouleas, T. Chris Riley-Tillman, and Amy M. Briesch
- Subjects
media_common.quotation_subject ,education ,School psychology ,Applied psychology ,Education ,Formative assessment ,Direct Behavior Rating ,Perception ,Intervention (counseling) ,Developmental and Educational Psychology ,Instrumentation (computer programming) ,Association (psychology) ,Psychology ,Reliability (statistics) ,media_common ,Clinical psychology - Abstract
More than ever, educators require assessment procedures and instrumentation that are technically adequate as well as efficient to guide data-based decision making. Thus, there is a need to understand perceptions of available tools, and the decisions made when using collected data, by the primary users of those data. In this paper, two studies that surveyed members of the National Association of School Psychologists with regard to two procedures useful in formative assessment (i.e., Daily Behavior Report Cards and Systematic Direct Observation) are presented. Participants reported greater overall levels of training and use of Systematic Direct Observation than Daily Behavior Report Cards, yet both techniques were rated as equally acceptable for use in formative assessment. Furthermore, findings supported that school psychologists tend to make similar intervention decisions when presented with both types of data. Implications, limitations, and future directions are discussed.
- Published
- 2008
38. Examining the Agreement of Direct Behavior Ratings and Systematic Direct Observation Data for On-Task and Disruptive Behavior
- Author
-
Julie A. M. Chanese, T. Chris Riley-Tillman, Sandra M. Chafouleas, Amy D. Glazer, and Kari A. Sassu
- Subjects
Data collection ,Disruptive behavior ,Direct observation ,Replicate ,Academic achievement ,Task (project management) ,Developmental psychology ,Direct Behavior Rating ,Pediatrics, Perinatology and Child Health ,Developmental and Educational Psychology ,Psychology ,Association (psychology) ,Social psychology ,Applied Psychology - Abstract
The purpose of this study was to replicate previous findings indicating a moderate association between teacher perceptions of behavior as measured by direct behavior ratings (DBRs) and systematic direct observation (SDO) conducted by an external observer. In this study, data regarding student on-task and disruptive behavior were collected via SDO from trained external observers and via DBRs from classroom teachers. Data were collected across 15 teachers and three observation sessions, and the agreement between the two methods was compared as a way to examine concurrent validity. Results supported previous work suggesting that DBRs are significantly correlated with SDO data, thereby suggesting that the DBR might be used as a compatible tool with SDO. Implications for practice, limitations of the study, and directions for future research are discussed.
- Published
- 2008
39. Generating Usable Knowledge
- Author
-
Julie A. M. Chanese, Sandra M. Chafouleas, Amy M. Briesch, and T. Chris Riley-Tillman
- Subjects
Measure (data warehouse) ,Process management ,Rating scale ,Intervention (counseling) ,School psychology ,Developmental and Educational Psychology ,Psychological intervention ,Psychology ,USable ,Social psychology ,Reliability (statistics) ,Utilization - Abstract
In this study, a self-report measure of intervention usage, the Usage Rating Profile for Interventions (URP-I), is developed and empirically examined with regard to factor structure and internal consistency. Results supported that intervention usage is associated with at least 4 different constructs, and that a measure consisting of 25 items may provide a reliable index of the 4 factors. The 4 factors identified included Acceptability, Knowledge, Feasibility, and Integrity. Findings extend and integrate existing work related to acceptability research aimed at predicting usage. Implications, limitations, and future directions are discussed.
- Published
- 2008
40. COMPARING METHODS OF IDENTIFYING REINFORCING STIMULI IN SCHOOL CONSULTATION
- Author
-
Catherine Fiorello, T. Chris Riley-Tillman, and Sharon Damon
- Subjects
education ,Evaluation methods ,Convergent thinking ,Developmental and Educational Psychology ,Psychological intervention ,Educational psychology ,Psychology (miscellaneous) ,Stimulus (physiology) ,Reinforcement ,Psychology ,Cognitive psychology ,Clinical psychology ,Program validation - Abstract
Reinforcement-based interventions, the most frequently used treatments for school-age children, rely on accurately identifying stimuli that will serve to reinforce appropriate classroom behavior. Research has consistently demonstrated that the results from a forced-choice pairing procedure are the best predictors of reinforcing stimuli. Interestingly, systematic evaluation of potential reinforcers is rarely implemented in the school consultation setting. Considering the importance of the reinforcer in reinforcement-based interventions, and the literature focusing on the significance of the selection procedure in accurately identifying a reinforcer, this is concerning. The purpose of these two studies was to examine the effectiveness of identifying reinforcing stimuli for students in the consultation setting using two different methods: stimulus forced-choice and asking the teacher to identify potential reinforcers. The effectiveness of the selected stimuli as reinforcers was studied on two student outcome...
- Published
- 2008
41. No One Knows: offenders with learning difficulties and learning disabilities
- Author
-
Jenny Talbot and Chris Riley
- Subjects
Coping (psychology) ,business.industry ,Prison ,Public relations ,Pediatrics ,Comprehension ,Officer ,Pedagogy ,Learning disability ,Prison reform ,medicine ,ComputingMilieux_COMPUTERSANDSOCIETY ,Psychiatric Mental Health ,medicine.symptom ,Psychology ,business ,Know-how ,Criminal justice - Abstract
Accessible summary
• Nobody knows how many people with learning difficulties get into trouble with the police.
• This article is about a project called No One Knows. It is finding out what happens when people with learning difficulties get into trouble with the police.
• Sometimes when people with learning difficulties get into trouble with the police they have to go to court. Sometimes they are sent to prison or have to visit a probation officer. Young people with a learning difficulty might have to go to a youth offending team.
• Some people with learning difficulties find it hard to understand what is happening. This can be upsetting.
• When people talk about the police, courts, prison, probation and youth offending they call it the criminal justice system. People who work in the criminal justice system do not always know how to support people with a learning difficulty. We want to know what people with a learning difficulty think about this. We want to know what people who work in the criminal justice system think as well.
• At the end of the project we will write a report about what people have told us. The report will tell the government what they should do to make things better for people with learning difficulties when they get into trouble with the police.

Summary
The prevalence of offenders with learning difficulties and learning disabilities is not agreed upon. What is clear, however, is that, regardless of actual numbers, many offenders have learning difficulties that reduce their ability to cope within the criminal justice system, for example, not understanding fully what is happening to them in court or being unable to access various aspects of the prison regime, including some offending behaviour programmes. Offenders with learning difficulties are not routinely identified and, as a result, often do not receive the support they need.
No One Knows is a UK wide programme led by the Prison Reform Trust that aims to effect change by exploring and publicizing the experiences of people with learning difficulties who come into contact with the criminal justice system. The article highlights the aims of No One Knows and describes what, for the purpose of the programme, we mean by ‘learning difficulties and learning disabilities’. Problems in identifying precise numbers of offenders with learning difficulties and learning disabilities are discussed and attention drawn to recent research on prevalence. The context and some of the challenges of ‘prison life’ are identified and a number of early research findings from No One Knows are presented.
- Published
- 2007
42. Generalizability and Dependability of Direct Behavior Ratings to Assess Social Behavior of Preschoolers
- Author
-
Sandra M. Chafouleas, Theodore J. Christ, Julie A. M. Chanese, Amy M. Briesch, and T. Chris Riley-Tillman
- Subjects
Formative assessment ,Empirical research ,Psychometrics ,Direct Behavior Rating ,Rating scale ,Applied psychology ,Developmental and Educational Psychology ,Dependability ,Generalizability theory ,Psychology ,Social psychology ,Reliability (statistics) ,Education - Abstract
One potentially feasible tool for use in the formative assessment of social behavior is the direct behavior rating, yet empirical support for the reliability of its use is limited. In this study, generalizability theory was used to provide preliminary psychometric data regarding the generalizability and dependability of the direct behavior rating to measure the social behavior of preschoolers. Two typical preschool behaviors (works to resolve conflicts, interacts cooperatively with peers) were selected for investigation within the direct behavior rating created for this study. Overall, results varied depending on which behavior was rated and the number of raters whose ratings were considered. The results suggested that a fairly substantial proportion of measurement variance was attributable to the different raters, and that the four raters varied in their mean level of ratings within and across the 15 students. In addition, although the actual number of days was dependent on the number of ratings collected per day, results suggested direct behavior ratings are likely to approximate or exceed reliability-like coefficients of .70 after 7 ratings are collected across 4-7 days, and .90 after 10 ratings. Limitations, future directions, and implications are discussed.

**********

In applied settings, both effective and efficient assessment procedures are needed to facilitate good decision-making about the academic and social behavior of students (Chafouleas, Riley-Tillman, & Sugai, in press). Although reliable and valid tools readily exist for accomplishing this task (e.g., curriculum-based assessment, systematic direct observation), existing measures are not without flaws. First, feasibility of use, particularly in a formative fashion, can be an issue in settings often faced with limited resources.
Second, to date, greater attention has been directed toward the study of tools for assessing academic behavior (e.g., curriculum-based assessment) than for assessing social behavior. This is unfortunate given increasing evidence suggesting a strong reciprocal connection between problem behavior and academic difficulties (e.g., Lane, O'Shaughnessy, Lambros, Gresham, & Beebe-Frankenberger, 2002; Nelson, Benner, & Gonzalez, 2003; Torgesen et al., 1999). Thus, greater attention should be directed to the study of effective and efficient methods for assessing social behavior. The purpose of this study was to provide preliminary psychometric data regarding the generalizability and dependability of a direct behavior rating (DBR) for assessing social behavior of preschoolers.

Defining a Need to Develop Formative Measures of Social Behavior

Historically, systematic direct observation procedures have served as the primary data source to assess social behavior within the classroom setting. Direct observation procedures typically record the frequency, rate, duration, and/or latency of behavior using time sampling procedures that are specified before the observation period (Hintze & Matthews, 2004; Salvia & Ysseldyke, 2004; Shapiro & Kratochwill, 1988). Systematic direct observation is often preferable to other behavioral assessment procedures (e.g., behavior rating scales, teacher interviews) because the data are collected at the time that the behavior occurs (Cone, 1978). In addition, systematic direct observation procedures are sufficiently flexible to be applied across a variety of situations (i.e., type of behavior, type of observation system). Despite these advantages, the time commitment required to collect an adequate data set can strain available resources.
In one recent examination of systematic direct observation to assess on-task behavior, the researchers concluded that sufficient reliability was not obtained to support high-stakes decisions until more than 20 data points were collected over a 2-week period on a schedule of two observations per day (Hintze & Matthews, 2004). …
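The reported pattern (reliability-like coefficients of about .70 after 7 ratings and .90 after 10) reflects the usual decision-study logic: averaging more ratings shrinks the error portion of the observed-score variance. A minimal single-facet sketch of that projection follows; the function name and the variance components used are hypothetical round numbers for illustration, not estimates from this study.

```python
def dependability(var_person, var_error, n_ratings):
    """Phi-like dependability coefficient for the mean of n_ratings.

    Single-facet D-study projection: all non-person variance is
    treated as absolute error, which shrinks as 1 / n_ratings.
    """
    return var_person / (var_person + var_error / n_ratings)

# Hypothetical components: person variance 1.0, error variance 3.0.
# A single rating is quite unreliable, but averaging helps quickly:
projection = {n: round(dependability(1.0, 3.0, n), 2) for n in (1, 4, 7, 10)}
# projection[1] is 0.25, while projection[7] already reaches 0.7
```

The study's multi-facet design partitions error more finely than this sketch, but the qualitative conclusion is the same: dependability rises quickly over the first handful of ratings and then flattens.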
- Published
- 2007
43. Daily Behavior Report Cards
- Author
-
T. Chris Riley-Tillman, Mary J. LaFrance, Sandra M. Chafouleas, Kari A. Sassu, and Shamim S. Patwa
- Subjects
050103 clinical psychology ,Data collection ,education ,05 social sciences ,Applied psychology ,Direct observation ,050301 education ,Task (project management) ,Consistency (negotiation) ,Direct Behavior Rating ,Pediatrics, Perinatology and Child Health ,Learning disability ,Deci ,Developmental and Educational Psychology ,medicine ,0501 psychology and cognitive sciences ,medicine.symptom ,Psychology ,0503 education ,Applied Psychology ,Report card ,Clinical psychology - Abstract
In this study, the consistency of on-task data collected across raters using either a Daily Behavior Report Card (DBRC) or systematic direct observation was examined to begin to understand the decision reliability of using DBRCs to monitor student behavior. Results suggested very similar conclusions might be drawn when visually examining data collected by an external observer using either systematic direct observation or a DBRC. In addition, similar conclusions might be drawn upon visual analysis of either systematic direct observation or DBRC data collected by an external observer versus a teacher-completed DBRC. Examination of effect sizes from baseline to intervention phases suggested greater potential for different conclusions to be drawn about student behavior, dependent on the method and rater. In summary, overall consistency of data across method and rater found in this study lends support to the use of DBRCs to estimate global classroom behavior as part of a multimethod assessment. Implications, limitations, and future research directions are discussed.
- Published
- 2007
44. Reliability of Direct Behavior Ratings - Social Competence (DBR-SC) data: How many ratings are necessary?
- Author
-
Stephen P. Kilgus, Alexander M. Schoemann, Janine P. Stichter, T. Chris Riley-Tillman, and Katie Bellesheim
- Subjects
Male ,050103 clinical psychology ,education ,PsycINFO ,Education ,Correlation ,Social Skills ,Social skills ,Rating scale ,Developmental and Educational Psychology ,Humans ,0501 psychology and cognitive sciences ,Child ,Students ,Competence (human resources) ,05 social sciences ,050301 education ,Reproducibility of Results ,Social Behavior Disorders ,Teacher education ,Direct Behavior Rating ,Behavior Rating Scale ,Social competence ,Female ,Psychology ,0503 education ,Social psychology ,Clinical psychology - Abstract
The purpose of this investigation was to evaluate the reliability of Direct Behavior Ratings-Social Competence (DBR-SC) ratings. Participants included 60 students identified as possessing deficits in social competence, as well as their 23 classroom teachers. Teachers used DBR-SC to complete ratings of 5 student behaviors within the general education setting on a daily basis across approximately 5 months. During this time, each student was assigned to 1 of 2 intervention conditions, including the Social Competence Intervention-Adolescent (SCI-A) and a business-as-usual (BAU) intervention. Ratings were collected across 3 intervention phases, including pre-, mid-, and postintervention. Results suggested DBR-SC ratings were highly consistent across time within each student, with reliability coefficients predominantly falling in the .80 and .90 ranges. Findings further indicated such levels of reliability could be achieved with only a small number of ratings, with estimates varying between 2 and 10 data points. Group comparison analyses further suggested the reliability of DBR-SC ratings increased over time, such that student behavior became more consistent throughout the intervention period. Furthermore, analyses revealed that for 2 of the 5 DBR-SC behavior targets, the increase in reliability over time was moderated by intervention grouping, with students receiving SCI-A demonstrating greater increases in reliability relative to those in the BAU group. Limitations of the investigation as well as directions for future research are discussed herein.
- Published
- 2015
45. Acceptability and Reported Use of Daily Behavior Report Cards Among Teachers
- Author
-
Kari A. Sassu, T. Chris Riley-Tillman, and Sandra M. Chafouleas
- Subjects
050103 clinical psychology ,05 social sciences ,Applied psychology ,Psychological intervention ,050301 education ,Sample (statistics) ,Direct Behavior Rating ,Intervention (counseling) ,Pediatrics, Perinatology and Child Health ,Developmental and Educational Psychology ,0501 psychology and cognitive sciences ,Positive behavior ,Psychology ,Reinforcement ,0503 education ,Applied Psychology ,Clinical psychology - Abstract
In this study, a sample of teachers was surveyed regarding their reported use and acceptability of daily behavior report cards (DBRCs). Almost two thirds of responding teachers indicated that they have used versions of DBRCs in their practice. Respondents' use of DBRCs was not restricted to a single purpose or situation. Additional findings suggested that the format of DBRCs varies widely, suggesting that teachers have found the DBRC to be highly adaptive in representing a broad array of possibilities rather than having a single, scripted purpose. An additional noteworthy finding relates to the general acceptance of DBRCs by teachers as both behavior-monitoring tools and as components in interventions. In summary, results provide support to previous claims that the DBRC is both a used and accepted tool in practice, suggesting that DBRCs deserve closer attention in research and practice related to positive behavior supports. Limitations, future directions, and implications are discussed.
- Published
- 2006
46. Promoting behavioral competence: An introduction to the practitioner's edition
- Author
-
Sandra M. Chafouleas, James L. McDougal, Jessica Blom-Hoffman, David N. Miller, Robert J. Volpe, and T. Chris Riley-Tillman
- Subjects
Medical education ,business.industry ,School psychology ,Developmental and Educational Psychology ,Social science ,business ,Psychology ,Publication ,Competence (human resources) ,Education - Abstract
The widely discussed gap between research and practice has been a continuing problem in the fields of school psychology and education. In particular, the extent to which information generated by research is effectively presented to and utilized by school practitioners is frequently limited by a variety of factors. In an attempt to address this issue, an annual series of special issues of Psychology in the Schools called the “Practitioner's Edition” was developed, with the inaugural issue appearing in 2006. The intent of these special issues is to publish useful, relevant, and practical evidence-based information on topics of significant interest and importance to school psychologists and other school-based practitioners. This article introduces the second Practitioner's Edition, which is focused on promoting behavioral competence. © 2007 Wiley Periodicals, Inc. Psychol Schs 44: 1–5, 2007.
- Published
- 2006
47. A school practitioner's guide to using daily behavior report cards to monitor student behavior
- Author
-
Sandra M. Chafouleas, T. Chris Riley-Tillman, and Amy M. Briesch
- Subjects
Formative assessment ,Medical education ,Response to intervention ,Service delivery framework ,Intervention (counseling) ,School psychology ,Premise ,Developmental and Educational Psychology ,Psychology ,Popularity ,Report card ,Education ,Developmental psychology - Abstract
With the growing popularity of a response to intervention model of service delivery, the role of intervention management is becoming more prominent. Although many aspects of intervention management have received significant attention, one area that warrants further development involves feasible methods for monitoring student behavior in a formative fashion. By formative, we mean behavior that is frequently monitored, such as on a daily basis, with the premise that the information will be used to make appropriate intervention decisions. Within a problem-solving model of intervention development, implementation, and evaluation, at least one educational professional must be responsible for using an effective tool for monitoring behavior. Yet, identifying and using such a tool can be a challenge in applied settings in which resources are often limited. The purpose of this article is to briefly review available tools for behavior monitoring, with emphasis on reviewing the potential of the Daily Behavior Report Card to serve as a supportive methodology to more established measures of behavior assessment. Examples and guidelines for use of the Daily Behavior Report Card in behavior monitoring are provided. © 2007 Wiley Periodicals, Inc. Psychol Schs 44: 77–89, 2007.
- Published
- 2006
48. Brief report: An experimental analogue of consultee 'resistance' effects on the consultant's therapeutic behavior—A preliminary investigation
- Author
-
Joseph Cautilli, Phil Hineline, T. Chris Riley-Tillman, and Saul Axelrod
- Subjects
Psychotherapist ,Phenomenon ,Parent training ,Resistance (psychoanalysis) ,Tracking (education) ,Negative attitude ,Social learning ,Psychology ,Functional analysis (psychology) ,Session (web analytics) - Abstract
This study presents an experimental analogue of resistance in the consultation process. Using an ABAB reversal design, the experimenter measured the ecological effects of teacher resistant behaviors on consultant therapeutic behavior. The study defined therapeutic behaviors as teaching, confronting and problem identification, analysis, and evaluation statements as outlined by Bergan and Kratochwill (1990). In this study, the author instructed one student from a master's program in behavior analysis that this was a study of resistance in the consultation process with teachers. The experimenter instructed the subjects that analysis of the sessions would determine if any resistance occurred and how they managed it. The teacher was a double agent, in the sense that she was working with the experimenter. The study measured subjects' behavior on therapeutic statements made to the teacher during varying levels of resistant statements made by teachers. The experimenter met with the teacher on a weekly basis. The experimenter instructed the teacher on the type of session that they were supposed to provide. The experimenter instructed the teacher on when to be resistant and when to be nonresistant in the program. When a stable baseline occurred, the experimenter instructed the teacher to become resistant. The resistance continued for four active sessions. After this phase, the experimenter instructed the teacher to become compliant again for several sessions. When the experimenter observed stability in the data, the experimenter instructed the teacher to become resistant until the end of the study. Key Words: Resistance, experimental analogue, functional analysis, consultation relationship.

**********

Resistance can be defined as anything that a client or consultee does that impedes progress (Wickstrom & Witt, 1983). What is termed resistance in consultation can have serious implications for treatment integrity (Wickstrom, Jones, LaFleur & Witt, 1998).
Resistance to change in verbal therapies and consultation is a phenomenon that has substantial representation (Cautilli & Santilli-Connor, 2000; Patterson & Chamberlain, 1994) with some early representation within the behavioral literature (e.g., DeVoge & Beck, 1978; Skinner, 1957). Resistance appears to interest a broad spectrum of clinicians both behavioral (e.g., Lazurus & Fay, 1982; Munjack, & Oziel, 1978; DeVoge & Beck, 1978) and non-behavioral (e.g., Mandanes, 1981) in orientation. The Oregon Social Learning Center studied resistance as it occurred in parent training sessions. In one study, Patterson and Forgatch (1985) explored the impact of therapist behavior (the independent variable) on client resistance (dependent variable). These researchers used an ABAB experimental design and observed the resistance displayed by parents in parent training for two conditions. The baseline involved the therapist using verbal behavior to convey "support" or "facilitate" (short statements indicating attention or agreement). In the treatment phase, the behavior of the therapist was to "confront" and "teach." Resistance was measured by a coding system developed by Patterson and colleagues (Chamberlain, Patterson, Reid, Kavanagh, & Forgatch, 1984) which identified as resistant such behaviors as talking over/interrupting, challenging / confronting, negative attitude, "own agenda," not tracking as resistant. As was predicted by the model, teaching and confronting led to increases in resistance, while facilitate and support led to decreases in resistance. In Patterson's model, resistance serves three main functions: (a) it reduces the amount of confrontation and teaching the consultee receives; (b) it increases the number of sessions needed to bring about therapeutic change; and (c) it reduces the therapists' "liking" for the consultee. Patterson and Chamberlain (1994) found in cases where the mother's resistance decreased, greater gains were evident in parental discipline. …
- Published
- 2006
49. Current behavioral models of client and consultee resistance: A critical review
- Author
-
Joseph Cautilli, Phil Hineline, Saul Axelrod, and T. Chris Riley-Tillman
- Subjects
Therapeutic relationship ,Psychological review ,Empirical research ,Psychotherapist ,School psychology ,Parent training ,book.journal ,Resistance (psychoanalysis) ,Context (language use) ,Psychology ,book ,Functional analysis (psychology) - Abstract
Resistance is the phenomenon that occurs in the therapeutic relationship when the patient refuses to complete tasks assigned by the therapist that would benefit the patient in improving their psychological situation. Resistance is also used to describe situations in the consulting relationship where the consultee does not do what the consultant suggests. Often resistance leads to poor treatment integrity and/or staff burnout. As a result, this resistance is a factor that warrants a behavioral interpretation and investigation. Currently, several behavioral models of resistance exist. In this paper, we explore each of these models and critique the logical and empirical support. Future research directions will be discussed. Keywords: Resistance, Behavioral Models, Functional Assessment, Consultation

**********

Introduction

The functional analysis of verbal behavior began in 1945, with the publication of the Harvard Symposium on Operationalism in Psychological Review. In a paper by B.F. Skinner entitled "The Operational Analysis of Psychological Terms," it was argued that by observing the contingencies and setting conditions under which a verbal community typically used the ordinary language terms, the interpreter could interpret the terms in a descriptive functional assessment. This approach is critical to the scientific investigation of events that on the surface may not appear to be readily available to a behavioral interpretation or behavioral research (Leigland, 1996). Leigland lamented that behaviorally oriented clinicians have done little research on terms that have been important to non-behavioral clinicians. One term, which appears to have importance to traditional clinicians and consultants, is "resistance". Resistance can be defined as anything that a client or consultee does that impedes progress (Wickstrom & Witt, 1983). What is termed resistance in consultation can have serious implications for treatment integrity (Wickstrom, Jones, LaFleur & Witt, 1998).
Resistance to change in verbal therapies and consultation is a phenomenon that has substantial representation (Cautilli & Santilli-Connor, 2000; Patterson & Chamberlain, 1994) with some early discussion within the behavioral literature (e.g., DeVoge & Beck, 1978; Skinner, 1957). Resistance appears to interest a broad spectrum of clinicians both behavioral (e.g., DeVoge & Beck, 1978; Lazurus & Fay, 1982) and non-behavioral (e.g., Mandanes, 1981) in orientation. However, supporting data are lacking for most of the theoretical conceptualizations, including behavioral interpretations (Patterson & Chamberlain, 1994). In deconstructing resistance or conducting an analysis of its use, behavioral psychologists find therapists and consultants use the word in the context of therapeutic failure. For example, Dougherty (2000) refers to resistance as a consultee's failure to participate constructively in the process of consultation. Resistance can occur in the treatment relationship, where the client does not improve or, in the consulting relationship, where the consultee fails to implement the treatment. The clinical literature is replete with examples from different traditions of techniques to manage this problem if it arises in the therapeutic context (e.g., Spinks & Birchler, 1982). In one study using regression analysis, Patterson and Chamberlain (1994) showed that parental resistance to parent training reduced therapist effectiveness and these parents showed fewer improvements in discipline. As pointed out by Cautilli and Santilli-Connor (2000), the term resistance is often used to describe a relationship in which the client, or in the case of consultation, the consultee, does not comply with the tasks that the therapist or consultant suggests. A review of the literature shows that many factors increase the probability that a consultee will be "resistant. …
- Published
- 2005
50. What do daily behavior report cards (DBRCs) measure? An initial comparison of DBRCs with direct observation for off-task behavior
- Author
-
James L. McDougal, Alexandra M. Hilt, Sandra M. Chafouleas, T. Chris Riley-Tillman, and Carlos J. Panahon
- Subjects
Teacher perceptions ,medicine.medical_specialty ,Behavior problem ,education ,Direct observation ,Audiology ,Education ,Developmental psychology ,Problem severity ,Direct Behavior Rating ,Perception ,Evaluation methods ,Developmental and Educational Psychology ,medicine ,Psychology ,Report card
This study investigated the similarity of information provided from a daily behavior report card (DBRC) as rated by the teacher to direct observation data obtained from external observers. In addition, the similarity of ratings was compared with variations of problem severity (mild, severe) and teacher training (none, some). Results suggested a moderate association between teacher perceptions of behavior as measured by DBRC ratings and direct observation conducted by an external observer. In addition, 23–45% of the variance in DBRC ratings was consistent with the direct observation data. Severity of the behavior problem or the inclusion of training was not found to significantly affect the similarity of ratings. In summary, results tentatively suggest that the DBRC may be a viable supplement to direct observation for estimating behavior in applied settings. Limitations, future research directions, and implications are discussed. © 2005 Wiley Periodicals, Inc. Psychol Schs 42: 669–676, 2005.
- Published
- 2005