My recent Teaching and Learning Innovations workshop, “Managing the Paper Load,” centered on helping faculty engage in holistic assessment of student writing. We discussed the difference between responding to student writing during the drafting and revision process, when students can use feedback to improve their work, and assessing final drafts. Instructors often feel compelled to mark up final drafts, at least in part to justify the grades, even though students are not necessarily able to make good use of those comments (or may ignore or avoid them altogether).
Evaluating a paper holistically takes into consideration the total impression it conveys, providing a summative assessment of how effectively it meets the expectations of the assignment. This type of assessment focuses less on individual components such as surface errors (unless they are persistent and interfere with understanding the ideas the student is trying to communicate) and more on the overall effect of a piece of writing. We use a holistic rubric in the Composition Program at CI as part of our team scoring process for evaluating portfolios in both first-year and upper-division writing classes. Our team is currently piloting Google Classroom as a way of managing ePortfolio submission and assessment, so I created my workshop in Google Classroom and added the workshop participants as students to allow them to explore this tool themselves. (Other CI faculty who would like to join the Classroom can email me, and I will send them an invitation.)
Instructors who want to focus on the various components of an assignment may find an analytic rubric useful. Faculty can develop detailed rubrics that are aligned with the stated learning outcomes and expectations of an assignment. Such rubrics take time to develop and refine but ultimately provide an efficient and consistent method of assessment. CI Assistant Professor of Management Dylan Cooper, who participated in the workshop, provided an excellent example of such an assignment and rubric. We discussed various ways of adapting his rubric to a more holistic assessment process, such as in this example, in which I created a simple Google Form based on his criteria for the assignment.
As a result of our workshop, Dylan is experimenting with a more holistic approach to assessing his current batch of student papers, written in response to a different assignment for which the above rubric would not apply. Dylan has not dispensed with written comments altogether. Rather, he is posting feedback for students on CI Learn and will track who accesses it. If it turns out that fewer than half of his students read their comments, next time he plans to invite interested students to contact him individually instead. I am inspired by Dylan’s commitment to exploring new methods of assessment and am eager to chat with him about the results.
In preparation for the workshop, I also received this example from CI Professor of History and Interim Director of Undergraduate Studies, Marie Francois. Marie was pivotal in our campus-wide efforts to revamp our General Education Goals and Outcomes and subsequently worked to apply the rubrics that emerged from those efforts in her own classes. Our revised GE Outcomes at CI were very much inspired by the Association of American Colleges and Universities’ VALUE Rubrics, which also provide a useful starting point for those working to develop rubrics for their own writing assignments.
Assessing writing is a critical task, but it doesn’t have to be overwhelming. As we discussed in a previous workshop on Writing to Learn, we don’t want faculty to avoid assigning writing for fear of facing a seemingly insurmountable onslaught of student work to grade. We have plenty of strategies and resources for integrating writing and managing the paper load in content-intensive courses. Let’s talk.
Stacey Anderson is an Assistant Professor of English and Composition Director at CI.