Full metadata
Title
Predictive Utility of a Proficiency Cut Score in a Benchmark Assessment
Description
Since the No Child Left Behind (NCLB) Act required classifications of students' performance levels, test scores have been used to measure student achievement; in particular, test scores determine whether students reach a proficiency level on the state assessment. Accordingly, school districts have begun using benchmark assessments to complement the state assessment. Unlike state assessments, which are administered at the end of the school year, benchmark assessments are administered multiple times during the school year and measure students' learning progress toward the proficiency level. The results of benchmark assessments can therefore help districts and schools prepare students for the subsequent state assessment so that they can reach the proficiency level. If benchmark assessments accurately predict students' future performance on the state assessment, they become more useful for guiding classroom instruction to support student improvement. This study therefore focuses on the predictive accuracy of a proficiency cut score in a benchmark assessment. Specifically, using an econometric technique, Regression Discontinuity Design, the study assesses whether reaching the proficiency level on the benchmark assessment had a causal impact on the probability of reaching the proficiency level on the state assessment. Finding no causal impact of the cut score, the study alternatively applies a Precision-Recall curve, a useful tool for evaluating the predictive performance of binary classification. Using this technique, the study calculates an optimal proficiency cut score for the benchmark assessment that maximizes correct classifications and minimizes misclassifications when predicting proficiency on the state assessment. Based on the results, the study discusses issues with conventional approaches to establishing cut scores in large-scale assessments and suggests potential approaches for increasing the predictive accuracy of cut scores in benchmark assessments.
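The Precision-Recall approach described above can be sketched in a few lines of Python. This is a minimal illustrative sketch only: the simulated scores, variable names, and the F1-maximization criterion are assumptions for illustration and are not taken from the study.

# Illustrative sketch only: hypothetical data, variable names, and criterion.
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
# Hypothetical benchmark scale scores and state-assessment proficiency labels.
benchmark_scores = rng.normal(500, 50, size=1000)
state_proficient = (benchmark_scores + rng.normal(0, 40, size=1000) > 520).astype(int)

# Treat each observed benchmark score as a candidate cut score and trace the
# precision-recall trade-off for predicting state-assessment proficiency.
precision, recall, thresholds = precision_recall_curve(state_proficient, benchmark_scores)

# Select the cut score that maximizes F1 (one common optimality criterion;
# the study's exact criterion may differ).
f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-12)
best = int(np.argmax(f1))
print(f"Candidate optimal cut score: {thresholds[best]:.1f} (F1 = {f1[best]:.3f})")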
Date Created
2021
Contributors
- Terada, Takeshi (Author)
- Chen, Ying-Chih (Thesis advisor)
- Edwards, Michael (Thesis advisor)
- Garcia, David (Committee member)
- Arizona State University (Publisher)
Topical Subject
Resource Type
Extent
112 pages
Language
eng
Copyright Statement
In Copyright
Primary Member of
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.2.N.161892
Level of coding
minimal
Cataloging Standards
Note
Partial requirement for: Ph.D., Arizona State University, 2021
Field of study: Educational Policy and Evaluation
System Created
- 2021-11-16 05:00:12
System Modified
- 2021-11-30 12:51:28
Additional Formats