Professor rating systems are flawed
By Jason Herring, January 12, 2016
During my orientation week at the University of Calgary, I was given all the standard advice — how to build solid study habits, ways to get involved at school and how to find my way around campus. But the most valuable advice I received was a warning from upper-year students in my program, who advised me to pick my professors carefully if I wanted to have a positive academic experience.
I’ve spoken with more students in my faculty since then, and nearly everyone has a bone to pick with some of their former instructors. Stories abound about professors who can’t communicate properly, assign unreasonable workloads compared to their colleagues and end up driving half the class into other lecture sections by the second week.
Everyone has anecdotes, and I won’t bore you with mine. Instead, I’d recommend looking at the Universal Student Ratings of Instruction (USRI).
First, I used the website ratemyprofessors.com to compile a list of professors who had an average rating of 2.5 or less, had more than 50 ratings and were teaching at the U of C this year. Eleven professors met these criteria, with a smattering of others falling just outside the range.
Some interesting statistics came to light when I checked the USRI reports for these professors’ most recent classes, which can be viewed through myucalgary. For these classes, the average percentage of students who filled out an evaluation was 47.2 per cent, with some classes having a turnout as low as 24.2 per cent. The average rating out of 7.00 given by students in these classes was 4.66. It’s also worth noting that two of the instructors I was looking for were absent from the USRI database entirely.
Despite being an incredibly cumbersome system, the USRIs contain a lot of valuable information. One of the statistics I found most interesting was the mean rating for the entire department, which can be compared with an individual instructor’s rating to see how they stack up against their colleagues. Six of the instructors I researched were rated more than one full point lower than their department average. The other three were still below the department average, but by less than one point.
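For readers curious about how this kind of comparison works in practice, here is a minimal sketch in Python. The names, numbers and field layout are made up for illustration, not drawn from the actual data; it only shows the filter described above (a RateMyProfessors average of 2.5 or less with more than 50 ratings) and the comparison of each instructor’s USRI mean against their department’s mean.

```python
# Hypothetical records: (name, rmp_average, rmp_rating_count, usri_mean, department_mean)
# USRI means are out of 7.00, as in the published reports.
instructors = [
    ("Instructor A", 2.1, 78, 4.2, 5.6),
    ("Instructor B", 2.4, 55, 5.0, 5.5),
    ("Instructor C", 3.0, 120, 5.9, 5.7),  # excluded: RateMyProfessors average above 2.5
]

# Keep only instructors with an RMP average of 2.5 or less and more than 50 ratings.
flagged = [i for i in instructors if i[1] <= 2.5 and i[2] > 50]

# Compare each flagged instructor's USRI mean with their department's mean.
for name, rmp_avg, count, usri_mean, dept_mean in flagged:
    gap = dept_mean - usri_mean
    print(f"{name}: USRI {usri_mean:.2f} vs. department {dept_mean:.2f} "
          f"({gap:.2f} below average)")
```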
Another thing I noticed when looking at the USRI reports was that a low response rate typically corresponded with a higher mean rating. That illustrates a fundamental flaw with in-class evaluations: students who dislike an instructor are less likely to attend class regularly, so responses come disproportionately from students who like the teaching enough to keep showing up, skewing ratings upwards. If evaluation dates were announced to students beforehand, response rates would increase and students’ opinions would be represented more accurately.
The good news is that poor teachers at the U of C are the exception. For every poor experience I’ve had with an instructor, I’ve met four or five professors who care greatly about their students.
The U of C won’t do anything about the professors students have problems with, since these instructors are most likely tenured or kept at the university on the strength of their research. But the university can make it easier for proper evaluations to be conducted and viewed.
Letting students know when USRIs are taking place would increase response rates, leading to a more representative evaluation. And if USRI results were more readily and uniformly available, students would have a much easier time accessing this vital information.
Jason Herring is a second-year computer science student. He writes a monthly column about problems facing University of Calgary students called Old Man Yells at Cloud.