Abstract
For scores from standardised language proficiency tests, the various aspects of validity can be established through rigorous test design, delivery and scoring procedures. Whether and how the resulting test scores are judiciously interpreted and used in institutional decision-making is at least as important as these other aspects of validity, yet it is the most difficult to ascertain within the test provider's remit.
This study set out to gain insights into how IELTS scores and other forms of English language proficiency evidence are used in university admissions selection, and how minimum score requirements are set or changed. Taking a multiple case study approach, it aimed to explore and understand the contextual factors and practical considerations contributing to institutions' current practices, and to develop recommendations for good practice.
Six different admissions contexts were sampled according to level of decision-making, level of study, and type of university. A case study was constructed for each admissions context based on an in-depth interview with a relevant staff member (an admissions officer, a program director, or a pre-sessional program coordinator), supplemented by published information on the institution's website. Salient themes and findings from the case studies were critically examined in a panel discussion comprising the research team and three representatives of the IELTS partners.
Regarding the use of test scores and other proficiency evidence, it was found that IELTS scores are used and trusted as a default form of evidence to satisfy the English language requirement for admission. A more holistic approach that takes account of other forms of evidence is adopted in postgraduate admissions contexts and in cases with borderline test scores. Trustworthiness of the evidence, as well as fairness and transparency, were identified as guiding principles for current practices, while the prioritising of practicality and efficiency has supported the use of test scores as sole evidence and somewhat discouraged the use of multiple forms of proficiency evidence.
With reference to setting and changing minimum score requirements, the case studies suggested that existing practices involve discussion and approval processes through multiple layers of decision-making entities, but not formal standard-setting exercises. Changes to the minimum requirements are often motivated by recruitment considerations and benchmarked against rival institutions or neighbouring departments, but involve limited engagement with guidance from IELTS.
Based on the findings from the case studies and the panel discussion, we provide recommendations for good practice in test score use in university admissions selection, and offer suggestions to the IELTS partners for further engagement with score users.