In this study, we define the term "screener test," elaborate on key considerations in test design, and describe how the concepts of practicality and argument-based validation can be incorporated to drive an evaluation of screener tests for language assessment. A screener test is defined as a brief assessment designed to identify an examinee as a member of a particular population or subpopulation; its measurement focus is therefore on providing information that distinguishes among the targeted subpopulations. Although the trade-off between measurement quality and practicality is an important consideration for any assessment (Bachman & Palmer, 1996), practicality is especially critical for low-stakes screener tests in language assessment because they are used to route examinees to other assessments rather than to serve as the basis for higher-stakes decision making. To demonstrate how such an evaluation may be applied to a screener test, we describe the development and evaluation of a proposed screener test for the TOEFL Primary Reading test. The claims articulated through the development process, together with the evidence collected during development and pilot testing, enable a wide-ranging, comparative evaluation of 5- and 10-item TOEFL Primary Reading screener tests that systematically incorporates the concepts of measurement quality, impact, and practicality.