101. A Comparison of Final Scoring Methods under the Multistage Adaptive Testing Framework
- Author
- Hacer Karamese
- Abstract
Multistage adaptive testing (MST) has become popular in the testing industry because research has shown that it combines the advantages of both linear tests and item-level computer adaptive testing (CAT). Previous research has focused primarily on MST design issues, such as panel design, module length, test length, distribution of test length across modules, panel assembly methods, interim scoring, and routing methods, to understand their impact on measurement precision. Final scoring methods, however, have been understudied in the MST context. This study therefore aimed to provide guidance to practitioners by evaluating the performance of final scoring methods under various MST conditions. To that end, it conducted a series of simulation studies comparing the relative performance of three final scoring methods: item response theory (IRT) ability estimation, estimated number-correct true scoring, and equated number-correct (ENC) scoring. By varying panel design, test length, routing method, interim scoring method, and panel quality, the results of this study can assist practitioners in choosing a viable final scoring method for their testing programs. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone at 1-800-521-0600 or on the web at http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
- Published
- 2022
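Two of the final scoring methods named in the abstract have simple closed forms: an IRT ability estimate can be obtained by expected a posteriori (EAP) scoring, and the estimated number-correct true score is the test characteristic curve evaluated at that estimate. The sketch below illustrates both under a 3PL model; the item parameters, the quadrature grid, and all function names are illustrative assumptions, not values or code from the dissertation.

```python
import math

# Hypothetical 3PL item parameters (a, b, c) -- illustrative values only,
# not taken from the study.
ITEMS = [(1.2, -0.5, 0.20), (0.8, 0.0, 0.25), (1.5, 0.7, 0.15), (1.0, 1.2, 0.20)]

def p_correct(theta, a, b, c):
    """3PL probability of a correct response (with the D = 1.7 scaling constant)."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def true_score(theta, items):
    """Test characteristic curve: expected number-correct score at theta."""
    return sum(p_correct(theta, a, b, c) for a, b, c in items)

def eap_theta(responses, items, n_quad=61):
    """EAP ability estimate with a standard-normal prior on a quadrature grid."""
    grid = [-4.0 + 8.0 * k / (n_quad - 1) for k in range(n_quad)]
    num = den = 0.0
    for q in grid:
        w = math.exp(-0.5 * q * q)  # unnormalized prior weight at node q
        for u, (a, b, c) in zip(responses, items):
            p = p_correct(q, a, b, c)
            w *= p if u == 1 else (1.0 - p)
        num += q * w
        den += w
    return num / den

# Score a response pattern, then convert the ability estimate to an
# estimated number-correct true score.
theta_hat = eap_theta([1, 1, 0, 0], ITEMS)
print(round(true_score(theta_hat, ITEMS), 3))
```

The same conversion underlies equated number-correct scoring, except that the number-correct scale of one form is mapped onto another form's scale through their respective test characteristic curves.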