Automated Scoring of Mathematics Tasks in the Common Core Era: Enhancements to m-rater in Support of CBAL Mathematics and the Common Core Assessments
- Author(s):
- Fife, James H.
- Publication Year:
- 2013
- Report Number:
- RR-13-26
- Source:
- ETS Research Report
- Document Type:
- Report
- Page Count:
- 44
- Subject/Key Words:
- Automated Scoring, Graph Items, Mathematics Tasks, Common Core State Assessments, m-rater, Cognitively Based Assessment of, for, and as Learning (CBAL), Algebra, Mathematical Markup Language (MathML)
Abstract
The m-rater scoring engine has been used successfully for the past several years to score CBAL™ mathematics tasks, for the most part without the need for human scoring. During this time, various improvements to m-rater and its scoring keys have been implemented in response to specific CBAL needs. In 2012, with the general move toward creating innovative tasks for the Common Core assessment initiatives, for traditional testing programs, and for potential outside clients, and to further support CBAL, m-rater was enhanced in ways that move ETS's automated scoring capabilities forward and that provide needed functionality for CBAL: (a) the numeric equivalence scoring engine was augmented with an open-source computer algebra system; (b) a design flaw in the graph editor, affecting the way the editor graphs smooth functions, was corrected; (c) the graph editor was modified to give assessment specialists the option of requiring examinees to set the viewing window; and (d) m-rater advisories were implemented for situations in which m-rater either cannot score a response or may provide the wrong score. In addition, two m-rater scoring models were built that presented some new challenges.
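The first enhancement, augmenting numeric equivalence scoring with a computer algebra system, can be illustrated with a minimal sketch. The abstract does not name the CAS m-rater uses, so SymPy stands in here purely as an example of an open-source CAS; the function name and scoring logic below are illustrative assumptions, not the report's actual implementation.

```python
# Illustrative sketch only: SymPy is assumed here as an example open-source
# computer algebra system; the report does not specify which CAS m-rater uses.
from sympy import simplify, sympify

def responses_equivalent(response: str, key: str) -> bool:
    """Return True if two algebraic expressions are symbolically equivalent."""
    # Parse each string into a symbolic expression, then check whether
    # their difference simplifies to zero.
    diff = simplify(sympify(response) - sympify(key))
    return diff == 0

# Example: "2*(x + 1)" and "2*x + 2" are equivalent forms of the same answer,
# even though a purely string-based or purely numeric comparison might miss it.
```

A symbolic check of this kind complements numeric sampling: evaluating both expressions at random points can quickly reject non-equivalent responses, while the CAS confirms equivalence exactly.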
- http://dx.doi.org/10.1002/j.2333-8504.2013.tb02333.x