Teaching Innovations at Vanderbilt: Lisa Fazio and Peerceptiv
By Faith Rovenolt, CFT undergraduate intern
As a student, I have complicated feelings about peer review. It can be incredibly helpful, but only if it’s implemented well and if all students involved put real time and effort into giving useful feedback. That’s why I think Dr. Lisa Fazio’s use of Peerceptiv could benefit many classrooms across Vanderbilt’s campus. Fazio, an Assistant Professor of Psychology and Human Development, has used this online, anonymous peer review organizer for five years in her various courses (PSY-PC 1117: FYWS – Make It Stick, 1250: Developmental Psychology, & 3650: Advanced Topical Seminar – Cognition in the Real World).
Fazio uses Peerceptiv as a place for students to submit their drafts and then review each other’s submissions. Peerceptiv handles both assigning reviewers and recording their feedback. All feedback is anonymous and takes the form of both quantitative ratings and written comments on the work, from strengths to suggestions for improvement. Students then receive three grades for the draft: one for completing it, another for the quality of the writing submitted, and a third for the quality of the reviews they gave. The latter two are calculated by the program: the quality of writing is based on the reviews received, and the quality of reviews on their consistency with other reviews of the same work. The system accounts for patterns such as a reviewer who consistently gives only harsh or only glowing reviews, dinging that reviewer’s peer review grade and weighting their input on others’ drafts accordingly. Students can see all of the feedback they receive as well as the helpfulness of their own reviews.
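For readers curious about the mechanics, the general idea of consistency-weighted peer grading can be sketched in a few lines of Python. This is a hypothetical illustration of the concept described above, not Peerceptiv’s actual, research-backed algorithm; the sample ratings, reviewer names, and the simple agreement measure are all invented for the example:

```python
# Illustrative sketch only, NOT Peerceptiv's real grading model.
# Idea: score each reviewer by how closely they agree with the other
# reviewers of the same submission, then weight ratings by that
# agreement when computing a submission's writing grade.
from statistics import mean

# ratings[submission][reviewer] = quantitative rating on a 1-5 scale
ratings = {
    "draft_a": {"r1": 4, "r2": 5, "r3": 4},
    "draft_b": {"r1": 2, "r2": 3, "r3": 5},
}

def review_inconsistency(reviewer):
    """Average gap between this reviewer's rating and the mean of the
    other reviewers on each shared submission (lower = more consistent)."""
    gaps = []
    for subs in ratings.values():
        if reviewer in subs and len(subs) > 1:
            others = [r for rid, r in subs.items() if rid != reviewer]
            gaps.append(abs(subs[reviewer] - mean(others)))
    return mean(gaps) if gaps else 0.0

def writing_grade(submission):
    """Weighted average of peer ratings; outlier reviewers count less."""
    total = weight_sum = 0.0
    for rid, rating in ratings[submission].items():
        w = 1.0 / (1.0 + review_inconsistency(rid))
        total += w * rating
        weight_sum += w
    return total / weight_sum
```

In this toy data, reviewer “r2” tracks the other reviewers most closely, so their ratings carry the most weight, which mirrors the article’s point that a reviewer who rates everything uniformly high or low sees their influence (and peer review grade) reduced.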
Unsurprisingly, students can be distrustful of putting their grades in the software’s binary hands. It helps that Peerceptiv was developed using a significant body of research on student peer review, but Fazio also assures her students that she checks through the system’s results. Eventually, the students come to value the process—and the system—as the peer reviews help them improve their drafts before submitting the final version, which Fazio grades and which is worth more than the draft grade.
Fazio thinks this tool could be used in any course with an assignment where feedback before a final version is useful. Additionally, Peerceptiv easily scales up to large classes. Things to keep in mind, though:
- Having a clear discussion on peer review beforehand is critical
- A rubric for peer review feedback is necessary—the quality of peer feedback will depend on what characteristics and qualities the rubric prompts the students to look at
- Three peer reviews per assignment is the minimum to get feedback equivalent in quality to a TA’s or instructor’s, but more reviews are better. Extra credit can motivate additional peer reviews.
According to Fazio, the biggest benefit of using Peerceptiv is that the final drafts are much better, to the advantage of both instructor and students. Students also learn the critical skills needed to give better peer reviews while benefitting from seeing each other’s work and using it to inform their own writing. It requires no more work from the instructor than a traditional peer review process, and it can potentially be integrated with Brightspace.
Fazio’s lab also researches retrieval practice, which she implements in her courses through low-stakes daily reading quizzes that cover textbook readings as well as past class material. See my last post for how another professor is implementing retrieval practice in her course.