The Harriet W. Sheridan Center for Teaching and Learning

Inclusive Assessment of Student Learning

Inclusive assessment can be difficult, due to dynamics of implicit bias and stereotype threat, both of which can impair course performance and lead to a reduced sense of belonging in the field (Good, Rattan & Dweck, 2012; Kiefer & Sekaquaptewa, 2007; Steele & Aronson, 1995). One meta-analysis of experimental studies on grading bias found an effect size of .37 among university courses, or "slightly over one-third of a standard deviation difference in grades between students in the bias condition [i.e., provided information about student characteristics, such as prior academic performance or a name] and students in the comparison condition" (Malouff & Thorsteinsson, 2016, p. 249).
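As a rough illustration of what an effect size of that magnitude means in practice (a back-of-the-envelope calculation rather than a figure reported by the meta-analysis, assuming grades on a 100-point scale with a standard deviation of 15 points, as in the rubric example later in this resource):

\[
\text{grade difference} \approx d \times SD = 0.37 \times 15 \approx 5.5 \ \text{points (out of 100)}
\]

This is consistent with the roughly five-point gap between biased and unbiased grading described in the section on anonymized and systematic grading below.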

While inclusive teaching methods can help address some of these issues, assessment tools also need to be rethought to allow all students to accurately demonstrate their knowledge and abilities. In response, scholarship on inclusive assessment focuses on three approaches: framing of feedback, transparent assignment design, and anonymized and systematic grading processes.

Framing of feedback

Framing of feedback is essential for addressing dynamics of stereotype threat. For example, one study found it helpful to emphasize that tests and assignments are a diagnostic of students’ current skill levels, which can be improved with practice, rather than a measure of permanent ability (Aronson, 2002). (An excellent example of this practice from a Brown faculty member is given below.) Comments on papers or exams can also be made more inclusive through the way they are framed. Research finds higher revision rates and greater improvement in writing among students from historically underrepresented groups when critical feedback conveys three things:

  1. a reflection of the teacher’s high standards,
  2. an assurance of the student’s potential to reach those standards, and
  3. substantive feedback on how to improve (e.g., “I’m giving you these comments because I have very high expectations and I know that you can reach them.”) (Yeager et al., 2014).

Jan Tullis, who teaches introductory geoscience, frames her feedback on exams to help students achieve their best results. For example, when handing out an exam, she notes that she is impressed by the class’s preparation and confident that they have learned a lot. When exams are returned, each has a written comment, such as "great improvement." Students who do poorly receive a comment such as, "I am sure this does not reflect your interest or ability. Please do come in to talk so I can help you study more effectively." Students who do not respond within a few days receive a follow-up email.

Transparent assignment design

A large study found significant benefits for students’ retention and sense of belonging when instructors revised just two assignments per term, with the aim of increasing transparency (Winkelmes et al., 2016). Assignments were made more transparent when instructors:

  • made the assignment’s relevance to students’ lives or future success explicit,
  • articulated the key steps required to complete the task, and
  • clarified criteria for success through tools such as rubrics or examples of past student work.

The greatest benefits of these assignment redesigns were seen among underrepresented and first-generation students. Examples of more and less transparent assignments can be found on the Transparency in Learning and Teaching (TILT) website.

Anonymized and systematic grading

Anonymized and systematic grading can help mitigate dynamics of implicit bias. Blind grading (i.e., hiding a student’s name on a paper or test) can eliminate the cues that prompt implicit bias (Killpack & Melón, 2016). Transparent and clearly defined grading protocols (e.g., grading papers with rubrics that are distributed to students in advance) can also provide structures that mitigate bias (Thompson & Sekaquaptewa, 2002). One study concludes, "Thus, if students’ work is graded without using a rubric (out of 100, with a standard deviation of 15), then grading bias might mean a reduction on average of 5.4 points (out of 100) between groups (bias vs. no bias). With a rubric, the average difference would be trivial, less than one point" (Gerritson in Malouff & Thorsteinsson, 2016). Guidance on developing rubrics can be found in the Sheridan Center library (see, e.g., Stevens and Levi’s (2005) Introduction to Rubrics) and in the Center's online resources.

This resource was authored by Dr. Mary Wright, Associate Provost for Teaching and Learning, Executive Director of the Sheridan Center for Teaching and Learning, and Professor (Research) in Sociology, with input from Sheridan Center colleagues.

References

Aronson, J. (2002). Stereotype threat: Contending and coping with unnerving expectations. In J. Aronson (Ed.), Improving Academic Achievement: Impact of Psychological Factors on Education (pp. 279-301). New York: Academic Press.

Good, C., Rattan, A., & Dweck, C.S. (2012). Why do women opt out? Sense of belonging and women’s representation in mathematics. Journal of Personality and Social Psychology, 102(4): 700–717.

Kiefer, A.K., & Sekaquaptewa, D. (2007). Implicit stereotypes, gender identification, and math-related outcomes: A prospective study of female college students. Psychological Science, 18(1): 13-18.

Killpack, T. L., & Melón, L. C. (2016). Toward inclusive STEM classrooms: What personal role do faculty play? CBE Life Sciences Education, 15(3). Available: http://www.lifescied.org/content/15/3/es3.long#ref-100

Malouff, J. M., & Thorsteinsson, E. B. (2016). Bias in grading: A meta-analysis of experimental research findings. Australian Journal of Education, 60(3): 245-256.

Saunders, S., & Kardia, D. (2000). Inclusive classrooms: Part one of a two-part series. The Hispanic Outlook in Higher Education, 10(15): 21.

Steele, C.M. (2011). Whistling Vivaldi: How stereotypes affect us and what we can do. New York: W.W. Norton & Company.

Steele, C.M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69(5): 797-811.

Stevens, D. D., & Levi, A. J. (2005). Introduction to Rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning. Sterling, VA: Stylus Publishing.

Thompson, M., & Sekaquaptewa, D. (2002). When being different is detrimental: Solo status and the performance of women and minorities. Analyses of Social Issues and Public Policy, 2(1): 183-203.

Winkelmes, M., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K.H. (2016). A teaching intervention that increases underserved college students’ success. Peer Review, 18(1-2). Available: https://dgmg81phhvh63.cloudfront.net/content/user-photos/Publications/Archives/Peer-Review/PR_WISP16_Vol18No1-2.pdf

Yeager, D.S., Purdie-Vaughns, V., Garcia, J., Apfel, N., Brzustoski, P., Master, A., Hessert, W.T., & Williams, M.E. (2014). Breaking the cycle of mistrust: Wise interventions to provide critical feedback across the racial divide. Journal of Experimental Psychology: General, 143(2): 804-824.