Developing and Scaling a CEFR-Aligned English Placement Test for University Level: A Psychometric Validation Study
Abstract
Purpose
This study reports on the development, scaling, and validation of the AIU-STEP, a CEFR-aligned English placement test designed for incoming university students. The aim was to create a psychometrically robust instrument capable of accurately classifying learners across CEFR levels and supporting institutional decisions related to course placement and curriculum planning.
Methodology
The test, consisting of reading, grammar, writing analysis, and listening components, was administered to two large student cohorts (N = 1,942 in 2024/2025; N = 2,662 in 2025/2026). Analyses included descriptive statistics, reliability estimation, item difficulty and discrimination indices, exploratory factor analysis, and ROC-based standard setting to establish cut scores linked to CEFR bands.
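The core item analyses named above follow standard classical test theory and ROC procedures. A minimal sketch of how they might be computed is shown below; this is not the authors' code, and the function names, simulated data, and external CEFR criterion are illustrative assumptions only.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_stats(items):
    """Classical item difficulty (proportion correct) and
    discrimination (item-rest point-biserial correlation)."""
    items = np.asarray(items, dtype=float)
    difficulty = items.mean(axis=0)
    total = items.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])
    return difficulty, discrimination

def roc_cut_score(scores, at_or_above_band):
    """ROC-based standard setting: choose the cut score that maximizes
    Youden's J (sensitivity + specificity - 1) against an external
    criterion of being at/above the target CEFR band."""
    scores = np.asarray(scores, dtype=float)
    y = np.asarray(at_or_above_band, dtype=bool)
    best_cut, best_j = None, -np.inf
    for cut in np.unique(scores):
        pred = scores >= cut
        sensitivity = (pred & y).sum() / y.sum()
        specificity = (~pred & ~y).sum() / (~y).sum()
        j = sensitivity + specificity - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j
```

With simulated dichotomous responses driven by a single latent ability, `cronbach_alpha` and `item_stats` reproduce the familiar reliability and difficulty/discrimination indices, and `roc_cut_score` returns the total-score threshold best separating examinees below and at/above a given band.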
Results/Findings
The AIU-STEP demonstrated strong psychometric properties, with reliability coefficients ranging from .72 to .93 across subtests and from .95 to .96 for the full test. Most items fell within optimal difficulty and discrimination ranges, and cut scores remained highly stable across administrations. CEFR-level distributions revealed an upward shift in proficiency in the second cohort, particularly at the C1 and C2 levels. Exploratory factor analysis supported a clear four-factor structure aligned with the test's intended constructs.
Implications
Findings indicate that the AIU-STEP is a valid, reliable, and scalable tool for CEFR-aligned placement in higher education. The test provides accurate classification across proficiency levels, supports data-driven curriculum placement, and offers a model for institutional adoption of CEFR-based assessment. Ongoing validation and periodic recalibration are recommended to maintain long-term alignment and responsiveness to changing student proficiency profiles.