107 results for "Amy M. Briesch"
Search Results
102. A school practitioner's guide to using daily behavior report cards to monitor student behavior
- Author
- Sandra M. Chafouleas, T. Chris Riley-Tillman, and Amy M. Briesch
- Subjects
- Formative assessment, Response to intervention, Service delivery framework, Intervention (counseling), School psychology, Report card, Developmental and educational psychology, Education
- Abstract
With the growing popularity of a response to intervention model of service delivery, the role of intervention management is becoming more prominent. Although many aspects of intervention management have received significant attention, one area that warrants further development involves feasible methods for monitoring student behavior in a formative fashion. By formative, we mean behavior that is frequently monitored, such as on a daily basis, with the premise that the information will be used to make appropriate intervention decisions. Within a problem-solving model of intervention development, implementation, and evaluation, at least one educational professional must be responsible for using an effective tool for monitoring behavior. Yet, identifying and using such a tool can be a challenge in applied settings in which resources are often limited. The purpose of this article is to briefly review available tools for behavior monitoring, with emphasis on reviewing the potential of the Daily Behavior Report Card to serve as a supportive methodology to more established measures of behavior assessment. Examples and guidelines for use of the Daily Behavior Report Card in behavior monitoring are provided. © 2007 Wiley Periodicals, Inc. Psychol Schs 44: 77–89, 2007.
- Published
- 2006
103. Development of a problem-focused behavioral screener linked to evidence-based intervention
- Author
- Robert J. Volpe, Brian Daniels, Gregory A. Fabiano, and Amy M. Briesch
- Subjects
- Evidence-based practice, Test validity, Factor structure, Factor analysis, Attention deficit hyperactivity disorder, Mass screening, School-based intervention, Behavioral assessment, Reproducibility of results, Education, Clinical psychology
- Abstract
This study examines the factor structure, reliability and validity of a novel school-based screening instrument for academic and disruptive behavior problems commonly experienced by children and adolescents with attention deficit hyperactivity disorder (ADHD). Participants included 39 classroom teachers from two public school districts in the northeastern United States. Teacher ratings were obtained for 390 students in grades K-6. Exploratory factor analysis supports a two-factor structure (oppositional/disruptive and academic productivity/disorganization). Data from the screening instrument demonstrate favorable internal consistency, temporal stability and convergent validity. The novel measure should facilitate classroom intervention for problem behaviors associated with ADHD by identifying at-risk students and determining specific targets for daily behavior report card interventions.
- Published
- 2014
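The favorable internal consistency the abstract above reports is conventionally summarized with Cronbach's alpha. A minimal sketch of that computation in Python, using invented teacher ratings rather than the study's data:

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) rating matrix."""
    n_items = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical screener ratings: 6 students x 4 items on a 0-3 scale
ratings = np.array([
    [3, 3, 2, 3],
    [0, 1, 0, 0],
    [2, 2, 3, 2],
    [1, 1, 1, 2],
    [3, 2, 3, 3],
    [0, 0, 1, 0],
])
print(round(cronbach_alpha(ratings), 2))
```

Values near or above .80 are typically read as adequate internal consistency for screening purposes, though the appropriate threshold depends on the stakes of the decisions being made.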
104. An evaluation of observational methods for measuring response to classwide intervention
- Author
- Elizabeth M. Hemphill, Amy M. Briesch, Robert J. Volpe, and Brian Daniels
- Subjects
- Observer variation, Behavior change, Psychological intervention, Video recording, Student engagement, Observational methods in psychology, Intervention (counseling), Behavior observation techniques, Education, Developmental psychology
- Abstract
Although there is much research to support the effectiveness of classwide interventions aimed at improving student engagement, there is also a great deal of variability in terms of how response to group-level intervention has been measured. The unfortunate consequence of this procedural variability is that it is difficult to determine whether differences in obtained results across studies are attributable to the way in which behavior was measured or actual intervention effectiveness. The purpose of this study was to comparatively evaluate the most commonly used observational methods for monitoring the effects of classwide interventions in terms of the degree to which obtained data represented actual behavior. The 5 most common sampling methods were identified and evaluated against a criterion generated by averaging across observations conducted on 14 students in one seventh-grade classroom. Results suggested that the best approximation of mean student engagement was obtained by observing a different student during each consecutive 15-s interval whereas observing an entire group of students during each interval underestimated the mean level of behavior within a phase and the degree of behavior change across phases. In contrast, when observations were restricted to the 3 students with the lowest levels of engagement, data revealed greater variability in engagement across baseline sessions and suggested a more notable change in student behavior subsequent to intervention implementation.
- Published
- 2014
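The best-performing procedure described above, rotating to a different student at each consecutive 15-s momentary interval, can be illustrated with a toy simulation. This sketch uses synthetic engagement data and invented parameters; it is not the study's protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" engagement: 14 students x 60 fifteen-second intervals,
# True = engaged at the moment the interval begins.
true_rates = rng.uniform(0.5, 0.95, size=14)          # each student's engagement probability
engaged = rng.random((14, 60)) < true_rates[:, None]  # momentary snapshots

true_mean = engaged.mean()  # criterion: mean across all students and intervals

# Rotating momentary time-sampling: observe a different student each interval.
rotation = np.arange(60) % 14
rotating_estimate = engaged[rotation, np.arange(60)].mean()

print(round(true_mean, 2), round(rotating_estimate, 2))
```

In the study, this rotating scheme best approximated the all-student criterion, whereas momentary ratings of the whole group underestimated both the level of engagement and the size of behavior change across phases.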
105. The influence of student characteristics on the dependability of behavioral observation data
- Author
- Tyler David Ferguson, Amy M. Briesch, and Robert J. Volpe
- Subjects
- Psychometrics, Psychological intervention, Reproducibility of results, Student engagement, Generalizability theory, Dependability, Tier 2 intervention, Education, Developmental and educational psychology
- Abstract
Although generalizability theory has been used increasingly in recent years to investigate the dependability of behavioral estimates, many of these studies have relied on use of general education populations as opposed to those students who are most likely to be referred for assessment due to problematic classroom behavior (e.g., inattention, disruption). The current study investigated the degree to which differences exist in terms of the magnitude of both variance component estimates and dependability coefficients between students nominated by their teachers for Tier 2 interventions due to classroom behavior problems and a general classroom sample (i.e., including both nominated and non-nominated students). The academic engagement levels of 16 (8 nominated, 8 non-nominated) middle school students were measured by 4 trained observers using momentary time-sampling procedures. A series of G and D studies were then conducted to determine whether the 2 groups were similar in terms of the (a) distribution of rating variance and (b) number of observations needed to achieve an adequate level of dependability. Results suggested that the behavior of students in the teacher-nominated group fluctuated more across time and that roughly twice as many observations would therefore be required to yield similar levels of dependability compared with the combined group. These findings highlight the importance of constructing samples of students that are comparable to those students with whom the measurement method is likely to be applied when conducting psychometric investigations of behavioral assessment tools.
- Published
- 2013
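The trade-off the abstract describes follows from the standard decision-study relationship, in which averaging over more observations shrinks error variance: Φ = σ²_p / (σ²_p + σ²_err / n_o). A sketch with invented variance components (not the study's estimates), chosen so that doubling the error variance doubles the observations required:

```python
def dependability(person_var: float, error_var: float, n_obs: int) -> float:
    """Phi coefficient: share of observed-score variance attributable to persons
    when error variance is averaged over n_obs observations."""
    return person_var / (person_var + error_var / n_obs)

def obs_needed(person_var: float, error_var: float, target: float = 0.80) -> int:
    """Smallest number of observations at which dependability reaches target."""
    n = 1
    while dependability(person_var, error_var, n) < target:
        n += 1
    return n

# Invented components: more day-to-day fluctuation (larger error variance)
# in the teacher-nominated group than in the combined group.
print(obs_needed(person_var=0.40, error_var=0.60))  # combined-group-like profile -> 6
print(obs_needed(person_var=0.40, error_var=1.20))  # nominated-group-like profile -> 12
```

With these illustrative numbers, the nominated-group profile needs exactly twice as many observations to reach Φ = .80, mirroring the "roughly twice as many observations" pattern reported above.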
106. Assessing influences on intervention implementation: revision of the Usage Rating Profile-Intervention
- Author
- Sandra M. Chafouleas, Sabina Rak Neugebauer, T. Chris Riley-Tillman, and Amy M. Briesch
- Subjects
- Psychometrics, Behavior change, Reproducibility of results, Vignette, Intervention (counseling), Intervention implementation, Reliability (statistics), Education, Clinical psychology
- Abstract
Although treatment acceptability was originally proposed as a critical factor in determining the likelihood that a treatment will be used with integrity, more contemporary findings suggest that whether something is likely to be adopted into routine practice is dependent on the complex interplay among a number of different factors. The Usage Rating Profile-Intervention (URP-I; Chafouleas, Briesch, Riley-Tillman, & McCoach, 2009) was recently developed to assess these additional factors, conceptualized as potentially contributing to the quality of intervention use and maintenance over time. The purpose of the current study was to improve upon the URP-I by expanding and strengthening each of the original four subscales. Participants included 1005 elementary teachers who completed the instrument in response to a vignette depicting a common behavior intervention. Results of exploratory and confirmatory factor analyses, as well as reliability analyses, supported a measure containing 29 items and yielding 6 subscales: Acceptability, Understanding, Feasibility, Family-School Collaboration, System Climate, and System Support. Collectively, these items provide information about potential facilitators and barriers to usage that exist at the level of the individual, intervention, and environment. Information gleaned from the instrument is therefore likely to aid consultants in both the planning and evaluation of intervention efforts.
- Published
- 2012
107. An investigation of the generalizability and dependability of Direct Behavior Rating Single Item Scales (DBR-SIS) to measure academic engagement and disruptive behavior of middle school students
- Author
- Stephen P. Kilgus, T. Chris Riley-Tillman, Theodore J. Christ, Anne C. Black, Sandra M. Chafouleas, and Amy M. Briesch
- Subjects
- Psychometrics, Student engagement, Child behavior disorders, Generalizability theory, Dependability, Analysis of variance, Direct Behavior Rating, Educational measurement, Education, Developmental psychology
- Abstract
A total of 4 raters, including 2 teachers and 2 research assistants, used Direct Behavior Rating Single Item Scales (DBR-SIS) to measure the academic engagement and disruptive behavior of 7 middle school students across multiple occasions. Generalizability study results for the full model revealed modest to large magnitudes of variance associated with persons (students), occasions of measurement (day), and associated interactions. However, an unexpectedly low proportion of the variance in DBR data was attributable to the facet of rater, and the variance component for the facet of rating occasion nested within day (10-min interval within a class period) was negligible. Results of a reduced model and subsequent decision studies specific to individual rater and rater type (research assistant and teacher) suggested that reliability-like estimates differed substantially depending on the rater. Overall, findings supported previous recommendations that, in the absence of estimates of rater reliability and firm recommendations regarding rater training, ratings obtained from DBR-SIS, and any subsequent analyses, be conducted within rater. Additionally, results suggested that when selecting a teacher rater, the person most likely to interact substantially with target students during the specified observation period may be the best choice.
- Published
- 2008
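The variance components underlying such generalizability studies are estimated from mean squares. As a hedged illustration, here is a one-facet (persons x occasions) simplification of the study's multi-facet rater-by-day design, with hypothetical DBR-SIS ratings rather than the study's data:

```python
import numpy as np

def one_facet_g_study(scores: np.ndarray) -> dict:
    """Variance components for a persons x occasions design (one score per cell),
    solved from expected mean squares. A simplification of multi-facet designs."""
    n_p, n_o = scores.shape
    grand = scores.mean()
    p_means = scores.mean(axis=1)
    o_means = scores.mean(axis=0)

    ms_p = n_o * ((p_means - grand) ** 2).sum() / (n_p - 1)
    ms_o = n_p * ((o_means - grand) ** 2).sum() / (n_o - 1)
    resid = scores - p_means[:, None] - o_means[None, :] + grand
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_o - 1))

    return {
        "person": max((ms_p - ms_res) / n_o, 0.0),    # true between-student variance
        "occasion": max((ms_o - ms_res) / n_p, 0.0),  # day-to-day shifts shared by all
        "residual": ms_res,                           # person x occasion + error
    }

# Hypothetical engagement ratings: 7 students x 5 days on a 0-10 scale
scores = np.array([
    [8, 7, 9, 8, 8],
    [4, 5, 3, 4, 5],
    [9, 9, 8, 9, 9],
    [6, 5, 6, 7, 6],
    [2, 3, 2, 2, 4],
    [7, 8, 7, 6, 7],
    [5, 4, 5, 5, 4],
], dtype=float)
components = one_facet_g_study(scores)
print(components)
```

With this toy data, the person component dominates, the pattern that makes between-student decisions dependable; a large rater component, by contrast, would motivate the within-rater analysis the abstract recommends.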