
Understanding Mean Score Differences Between the e‐rater Automated Scoring Engine and Humans for Demographically Based Groups in the GRE General Test

Author(s):
Ramineni, Chaitanya; Williamson, David M.
Publication Year:
2018
Report Number:
GRE-18-01
Source:
ETS Research Report
Document Type:
Report
Page Count:
33
Subject/Key Words:
e-rater, Automated Scoring and Natural Language Processing, Graduate Record Examination (GRE), Essay Scoring, Shell Text, Subgroup Statistics, Classification and Regression Trees Software (CART), Human Scoring, Logistic Regression, Regression Weights

Abstract

Notable mean score differences between the e‐rater automated scoring engine and human raters were observed for essays from certain demographic groups on the version of the GRE General Test in use before the major revision of 2012 that introduced the revised GRE (rGRE). The use of e‐rater as a check‐score model with discrepancy thresholds prevented adverse impact on examinee scores at the item or test level. Despite this control, there remains a need to understand the root causes of these demographically based score differences and to identify mechanisms for avoiding future discrepancies. In this study, we used a combination of statistical methods and human review to propose hypotheses about the root causes of the score differences and about whether such discrepancies reflect inadequacies of e‐rater, of human scoring, or of both. The human rating process was found to be strongly influenced by the scale structure and did not fully correspond to the e‐rater scoring mechanism: human raters appeared to apply conditional logic and a rule‐based approach to their scoring, whereas e‐rater applies a linear weighting of all features. These analyses have implications for future research and for operational policies for the scoring of the rGRE.
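
As a concrete, hypothetical illustration of the contrast the abstract draws (linear feature weighting by e‐rater, apparently rule‐based conditional scoring by human raters, and a check‐score procedure with a discrepancy threshold), the Python sketch below shows one way these ideas could be expressed. All feature names, weights, rules, and the threshold value are invented for illustration and are not taken from the report or from the operational e‐rater model.

```python
# Hypothetical illustration of the two scoring styles and the check-score
# procedure described in the abstract. All feature names, weights, rules,
# and the discrepancy threshold are invented; they are not the operational
# e-rater model or the GRE scoring rules.

def erater_style_score(features, weights, intercept=0.0):
    """Linear weighting of all features, in the style of an automated engine."""
    raw = intercept + sum(weights[name] * value for name, value in features.items())
    return min(max(round(raw), 1), 6)  # clamp to a 1-6 essay scale

def human_style_score(features):
    """Conditional, rule-based scoring resembling how raters appeared to work."""
    if features["shell_text"] > 0.5:          # mostly formulaic "shell" language
        return 2
    if features["development"] < 0.3:         # thin argument development
        return 3
    if features["organization"] > 0.7 and features["grammar"] > 0.7:
        return 5
    return 4

def check_score(human, machine, threshold=1.0):
    """Check-score model: flag the essay for adjudication when the machine
    score and the human score differ by more than the threshold."""
    return "adjudicate" if abs(human - machine) > threshold else "accept"

# Example with made-up feature values on a 0-1 scale.
features = {"shell_text": 0.1, "development": 0.6,
            "organization": 0.8, "grammar": 0.75}
weights = {"shell_text": -2.0, "development": 2.5,
           "organization": 1.5, "grammar": 1.5}

machine = erater_style_score(features, weights, intercept=1.0)
human = human_style_score(features)
print(machine, human, check_score(human, machine))
```

The subject keywords indicate that classification and regression trees (CART) and logistic regression were among the statistical methods the report used to examine this kind of rule-like rater behavior alongside e‐rater's feature weighting.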
