This paper introduces a rubric for assessing QR in student papers and analyzes the inter-rater reliability of the instrument based on a reading session involving 11 participants. Despite the disciplinary diversity of the group (which included a faculty member from the arts and literature, two staff members, and representatives from five natural and social science departments), the rubric produced reliable measures of QR use and proficiency in a sample of student papers. Readers agreed on the relevance and extent of QR in 75.0 and 81.9 percent of cases, respectively (corresponding to Cohen’s κ = 0.611 and 0.693). A four-category measure of quality produced slightly less agreement (66.7 percent, κ = 0.532); collapsing the index into a three-point scale raised the inter-rater agreement to 77.8 percent (κ = 0.653). The substantial agreement attained by this rubric suggests that it is possible to construct a reliable instrument for the assessment of QR in student arguments.
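The κ statistics reported above adjust raw percent agreement for the agreement two raters would reach by chance alone, via κ = (p_o − p_e)/(1 − p_e). As an illustration only (not the authors' code or data), Cohen's κ for two raters scoring the same set of papers can be sketched as:

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance from each rater's marginal category frequencies.
    """
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items rated identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each rater's marginal rates, summed.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

For example, two raters who agree on 3 of 4 items, with chance agreement of 0.5, yield κ = 0.5, while perfect agreement across varied categories yields κ = 1.0. Note that κ is undefined when p_e = 1 (both raters use a single category throughout).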
Grawe, Nathan D.; Lutsky, Neil S.; and Tassava, Christopher J. "A Rubric for Assessing Quantitative Reasoning in Written Arguments," Numeracy: Vol. 3, Iss. 1, Article 3.
Available at: http://scholarcommons.usf.edu/numeracy/vol3/iss1/art3