The literature suggests that there exists a need for fewer but more clearly defined subgroupings in behavior rating instruments. The present study focuses on the items from seven behavior rating instruments in an effort to organize the items conceptually into as few behavior clusters as possible. The procedures utilized extensive card sort activities and a panel of experts who examined each of the items and made decisions regarding subgroup placement of the items. The findings provide one way to organize behavior descriptors and thereby unify behavior rating as recommended by several authors. In addition, the classification of behaviors provides the field with a conceptual organization which may form the basis for future conceptual and empirical activities.

In recent years, there has been a proliferation of behavior rating instruments, each designed to assist in the assessment of the behavior of children and youth. These instruments have been used increasingly as part of the initial screening and identification procedures for students thought to have behavior problems (Bullock & Wilson, 1986; Smith, Wood, & Grimes, 1985). In addition, they have been used to aid in the development of intervention plans and the tracking of behavior change in subjects (Bullock & Wilson, 1986; Wilson, 1980) and in research studies of various types (e.g., Achenbach & Edelbrock, 1986; Bullock, Wilson, & Sarnacki, 1988; Campbell, Bullock, & Wilson, 1989; Eaker, Allen, Gray, & Heckel, 1983; Quay & Peterson, 1983; Sarnacki, 1987).

Behavior rating instruments have evolved from an intuitive clinical notion about behavior to complex analyses using exploratory factor analytic approaches to develop clusters of highly correlated behaviors. The latter approach has resulted in a great variety of labels being assigned to the various behavior clusters. Because of the lack of coordination or unifying theory in the factor analytic work that has taken place, cluster labels have not been applied consistently. In addition, exploratory factor analysis produces indeterminate solutions (Kim & Mueller, 1978) such that any given solution depends upon (a) the items placed in the analysis, (b) the computation method, and (c) the type of rotation used for a final solution. Therefore, since factor solutions can be very different, the resulting dimensional interpretations can also differ depending on aspects of the analysis rather than the underlying dimensional realities of behavioral disorders.
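The dependence of a factor solution on such analytic choices can be illustrated with a small simulation. The sketch below is not part of the original study; it uses hypothetical simulated rating data and assumes the scikit-learn library. It fits the same item responses under different rotations and shows that the resulting loading matrices, and therefore the apparent item groupings an analyst would label, are not uniquely determined.

```python
# Hypothetical illustration (not from the study): the same simulated rating
# data factored with different rotations yields different loading patterns,
# so the "clusters" an analyst would label are not uniquely determined.
import numpy as np
from sklearn.decomposition import FactorAnalysis  # scikit-learn >= 0.24 for rotation

rng = np.random.default_rng(0)

# Simulate 200 students rated on 12 behavior items generated from 3 latent factors.
true_loadings = np.zeros((12, 3))
true_loadings[0:4, 0] = 0.8    # items 1-4 driven mainly by factor 1
true_loadings[4:8, 1] = 0.8    # items 5-8 by factor 2
true_loadings[8:12, 2] = 0.8   # items 9-12 by factor 3
latent = rng.normal(size=(200, 3))
ratings = latent @ true_loadings.T + 0.5 * rng.normal(size=(200, 12))

for rotation in (None, "varimax", "quartimax"):
    fa = FactorAnalysis(n_components=3, rotation=rotation, random_state=0)
    fa.fit(ratings)
    # Each rotation produces a different loading matrix; grouping items by
    # their largest loading could therefore yield different clusters and labels.
    print(f"rotation={rotation}")
    print(np.round(fa.components_.T, 2))
```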
Some factor analytic work has been done to sort out the variety of item clusters representing behavior dimensions that presently exist in behavior rating scales. For example, Achenbach and Edelbrock (1978) examined 27 different scales and arrived at four different categories: overcontrolled, undercontrolled, pathological detachment, and learning problems. In a related study of 24 rating instruments, Hoge (1983) summarized 53 different item grouping names into three categories: personality adjustment, social adjustment, and adjustment to academic demands. A study of two instruments with confirmatory factor analysis conducted by Hale and Zuckerman (1981) found that a three-factor model was best for explaining the mathematical characteristics of a group of items' covariance pattern. These factors were identified as overreaction, underreaction, and inadequacy-immaturity.

The three studies reported above show different results: Achenbach and Edelbrock (1978) and Hoge (1983) arrived at different dimension interpretations, and the number of factors differed as well, with the four factors of Achenbach and Edelbrock (1978) contrasting with the three of Hale and Zuckerman (1981). The lack of agreement in previous research suggests a need for further investigation of the problem. Problems associated with indeterminacy, as pointed out earlier, suggest a need for a different approach. The results of all three studies imply, however, that the multitude of rating scale dimensions can be simplified and that any analysis of rating scale data should result in a relatively small number of dimensional categories into which behavior rating items can be placed.

One issue that is basic to the labeling of item clusters is the correspondence between the label and the categorical nature of the behaviors described by the items. Labeling a cluster of rating items is a result of the interpretation of the behavior described in the items. For example, the interpretation of an item describing drug consumption behavior that is included in a cluster labeled Aggressive Tendency would be different if the same behavior were in a cluster labeled Irresponsible Activities. The three studies cited above addressed the issue of labeling through the use of statistical analysis of behavior rating data. Statistical methodology, however, is appropriate for testing the results of investigations devised to reflect theoretical notions, but not for generating them. The matter of grouping behaviors described by rating items is the basis of a theoretical interpretation of the behaviors.

There is a need to generate a unified conceptualization of behavioral disorders so that research and any work that is based upon research can be considered comparable. That is, if work is conducted by different researchers with different interpretations of the notion of behavioral disorders, then similar work may result in different findings. If these different findings result in practical applications, then the applications may appear very different and may not produce the same results. Ultimately, differential interpretations of behaviors considered relevant to behavioral disorders will result in multiple systems of identification, treatment, and interpretation of the results of treatment, none of which will be consistently understood. This confusion presently exists in the area of behavioral disorders and will likely not be changed unless the theoretical differences can be defined and compared. This sort of definition can only be done by examining a large number of behavioral items and generating a unified theory about how they are grouped together and what is the most appropriate category title for the groupings. If it is possible to interpret the larger universe of behavior rating items with relatively few distinct, well-defined categories, then the differences between subsets of items (i.e., particular rating instruments) can be better understood.
The research reported here is an attempt to do just that; that is, to examine several behavior rating instruments and determine in a unifying fashion the basic elements of disordered behavior generally reflected in the behavior rating items used in the instruments. A unified theory would allow researchers to determine the comparability of research done with a variety of different theoretical categories; it would probably assist in the consistency of student classification, and it might also help those involved in the education of behaviorally disordered students to measure and interpret student behavior in a consistent fashion, with results that are more comparable than with the present set of rating instruments.

A special thanks is extended to Juane Heflin, J. Michael Reese, Ronald L. Sarnacki, and Patti Westerlage who, in addition to the authors, served as the panel of experts for the project.