Bibliographic record - detail view
Authors | Kang, Okim; Rubin, Don; Kermad, Alyssa |
---|---|
Title | The Effect of Training and Rater Differences on Oral Proficiency Assessment |
Source | In: Language Testing, 36 (2019) 4, pp. 481-504 (24 pages) |
Additional information | ORCID (Kang, Okim) |
Language | English |
Document type | print; online; journal article |
ISSN | 0265-5322 |
DOI | 10.1177/0265532219849522 |
Keywords | Evaluators; Second Language Learning; Language Tests; English (Second Language); Speech Communication; Oral Language; Language Proficiency; Comparative Analysis; Interrater Reliability; Novices; Social Bias; Error Patterns; Social Attitudes; Teaching Methods; Computer Assisted Testing; Predictor Variables; Native Speakers; Language Attitudes; Outcomes of Education; Stereotypes; Language Variation; Test of English as a Foreign Language; Zweitsprachenerwerb; Language test; Sprachtest; English as second language; English; Second Language; Englisch als Zweitsprache; Oral interpretation; Mündlicher Sprachgebrauch; Language skill; Language skills; Sprachkompetenz; Interrater-Reliabilität; Fehlertyp; Social attitude; Soziale Einstellung; Teaching method; Lehrmethode; Unterrichtsmethode; Prädiktor; Muttersprachler; Sprachverhalten; Lernleistung; Schulerfolg; Klischee; Sprachenvielfalt |
Abstract | Because judgments of non-native speech are closely tied to social biases, oral proficiency ratings are susceptible to error arising from rater background and social attitudes. In the present study we first estimate the variance attributable to rater background and attitudinal variables in novice raters' assessments of L2 spoken English. Second, we examine the effects of minimal training in reducing the potency of these trait-irrelevant rater factors. Accordingly, we examined the relative impact of rater differences on TOEFL iBT® speaking scores. Eighty-two untrained raters judged 112 speech samples produced by TOEFL® examinees. Findings revealed that approximately 20% of the untrained raters' score variance was attributable to their background and attitudinal factors; the strongest predictor was the raters' own native-speaker status. However, minimal online training dramatically reduced the impact of rater background and attitudinal variables for a subsample of high- and low-severity raters. The implication is that brief, user-friendly rater-training sessions offer the promise of mitigating rater bias, at least in the short run. This procedure can be adopted in assessment and other related fields of applied linguistics. (As Provided). |
Notes | SAGE Publications. 2455 Teller Road, Thousand Oaks, CA 91320. Tel: 800-818-7243; Tel: 805-499-9774; Fax: 800-583-2665; e-mail: journals@sagepub.com; Web site: http://sagepub.com |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Update | 2020/01/01 |