Ryerson decision raises questions about effectiveness of student evaluations of professors

Is the student always right?

That’s the question UBC now faces following a precedent-setting arbitration on student evaluations of teaching (SETs) at Ryerson University.

The decision made this past summer instructs Ryerson “to ensure that [SET] results are not used to measure teaching effectiveness for promotion or tenure.” It also ends a 15-year dispute between Ryerson University and the Ryerson Faculty Association over the influence of SETs on employment-related decisions.

William Kaplan, the sole arbitrator in the decision, acknowledged the importance SETs play in measuring student satisfaction and said he believed they should continue. But he expressed significant concern over their “inherent limitations.”

“Insofar as assessing teaching effectiveness is concerned – especially in the context of tenure and promotion – SETs are imperfect at best and downright biased and unreliable at worst,” reads the decision.

Kaplan noted how student bias towards the gender, race, age and other personal characteristics of instructors can skew SET results. His comments reflect common criticism made about SETs in academic circles.

Aside from eliminating SETs from employment-related decision-making, the decision also calls for changes to be made to future SETs, known as Faculty Course Surveys (FCS) at Ryerson. These changes include replacing the numerical rating scale with an alphabetical one and eliminating the use of averages to compare faculty members with one another and across departments.

“Averages are blunt, easily distorted (by bias) and inordinately affected by outlier/extreme responses,” Kaplan argued.

It is unclear what influence Ryerson’s arbitration decision could have on the future of SETs and employment-related decision-making at UBC.

According to a Senate policy passed in May 2007, data collected from SETs informs the assessment of faculty members for “merit and/or performance adjustment salary awards, promotion, tenure and institutional recognition.”

“[SETs] are relevant but they are one piece of a number of data points that I think can be used to inform evaluating teaching,” said Simon Bates, associate provost of Teaching and Learning at UBC. “It’s a real issue when there’s an overreliance on it.”

Bates said that UBC embraces a holistic method of assessing teachers that includes, among other things, in-class peer evaluations of faculty members by their colleagues, a practice lauded in the Ryerson arbitration decision. According to Bates, UBC also uses a “more sophisticated” method of analyzing SET data that avoids averages, which Kaplan indicated can allow outliers to distort results.

But Bates said there was room for improvement. He talked about ways to minimize subjectivity, including reassessing the use of ordinal scales. He also said survey questions should be specifically geared to the student experience.

“As long as we're asking students about things that they are well-placed to be able to comment on, I don't see a big issue,” Bates said. “They need to be able to provide that feedback.”

The UBC Faculty Association said it is currently looking at the arbitration decision.

“We are committed to fair and evidence-based procedures involving reappointment, tenure and promotion at UBC,” Bronwen Sprout, president of the UBC Faculty Association, said in a statement to The Ubyssey. “We will be studying the [Ryerson] decision as we prepare to negotiate a new collective agreement with the university.”

Max Holmes, Vice President of Academic and University Affairs at the AMS, said that SETs give students a valuable opportunity to “provide feedback on their learning experience,” though he acknowledged that a discussion about how they could be improved should happen, and that the AMS should be part of it.

Bates said that a group of associate deans representing all faculties has already talked about the Ryerson arbitration decision and plans to discuss it further.

“One thing [the Ryerson decision] will spark is a broader campus conversation around the utility and frankly limitations of student evaluations because they are imperfect,” Bates said.

“Although a group of associate deans might initiate that conversation, that’s a conversation that has to include faculty members and students as well.”