
What Happens When Lecturers Are Ranked?

November 3, 2014

[Image: ratemyprofessors.com homepage]

At least you have to look around the site a bit before coming across the ‘hotness’ rating.

What does happen when lecturers are ranked?

When I was an undergraduate student, there were no student surveys to ask me for my assessment of classes and lecturers at the end of each term. I remember that a few student representatives, including myself, once proposed to our department that such an assessment procedure should be established. This proposal was firmly refused. The department’s senior academics felt that enough avenues were already in place for student feedback on their classes, as well as for grievances.

In hindsight, I do believe that their judgement was well-founded. Regular meetings between academic staff and student representatives provided an effective communication channel, and the director of undergraduate programs held regular office hours and dealt effectively with complaints.

Since that time, things have changed a lot, and student satisfaction surveys are now de rigueur, both in the United Kingdom and internationally. In the UK, the National Student Survey (NSS) was launched in 2005. It enables final-year students to voice their opinions on their degree programs by awarding scores to different aspects of university life. The NSS is run by Ipsos MORI, a prominent market research firm, and its results are publicly available. Universities and academic departments thus pay considerable attention to the results of the NSS, and they often use positive results as part of their marketing strategies. At the same time, individual lecturers and classes are now routinely assessed through department-level standardized satisfaction surveys.

In the United States, student satisfaction surveys are likewise well established in academic life. They are regularly used internally by universities and colleges, and there are also various public platforms for students to air their opinions on their lecturers. On ratemyprofessors.com, students assign scores to professors regarding their helpfulness, the clarity of their lectures, and the ease with which their classes can be passed. Students can also rate lecturers in terms of their physical attractiveness — or “hotness,” in the website’s parlance. Of one particular professor, for example, we learn that he is helpful (he scores 4.6 on a five-point scale), that his teaching is fairly clear (4.1), but that his classes are rather difficult (2.4). In one anonymous student’s view: “You have to work really hard for a good grade lots of reading and two tests and two long papers. Class is fun and interesting, and he is really friendly and helpful.”

The ratings that appear on ratemyprofessors.com are taken quite seriously in the US, and results are reported in prominent news outlets. Students may express their opinions in rather harsh words, and in a piece on salon.com, one lecturer describes the hurt he felt when he read a thoroughly negative appraisal of his teaching. Then there is Professors Strike Back, an opportunity for lecturers to respond to criticism by students… For a good example of this, take a look at a post titled “Good looking professor wants you to show pity for less attractive professors.” Student evaluations have obviously come to be taken very seriously.

In South Korea, several websites claim that contract renewal for foreign lecturers depends to a very large degree on the results of their student satisfaction surveys (1, 2).

In contemporary academia, student evaluations and the ranking of lecturers’ work that they entail are a big issue.

What consequences do these all-pervasive rankings have for the work lecturers do? In principle, student satisfaction surveys may be a useful means for obtaining a transparent view of students’ learning experience and for improving one’s classes. For self-evident reasons, students may find it difficult or may not be motivated to express their views in face-to-face communication with their lecturers. A survey at the end of a term or semester gives them a much more neutral and easily accessible opportunity to make themselves heard. For example, coming to South Korea and adjusting to a new and initially unfamiliar academic system, I have experimented widely with different methods of assessment. My students’ opinions in the end-of-semester surveys administered by the university have been a valuable tool for genuinely improving my classes. Without the surveys, it would have been notably more difficult to gain a comprehensive understanding of how my students experience my teaching.

There is, of course, a notable problem when it comes to the validity of such surveys’ results. Student satisfaction surveys may be a measure of many things — the standard of teaching and learning achieved in a given class, students’ level of interest in that class (e.g. the fascinating elective course a student has been wanting to take for years, or the boring mandatory class everyone wants to avoid), a personality clash between student and lecturer, the extent to which the class meets students’ prior expectations and understanding of academic work, and so forth. For this reason, it is highly problematic to use the results of student satisfaction surveys to assess the quality of lecturers’ work. To do so implies a severe misunderstanding of what such surveys measure.

Student satisfaction surveys may be a robust tool for the improvement of one’s teaching, particularly if they are coupled with other tools, such as regular peer observations of classes and self-assessments. Colleagues in the UK told me that their departments do combine such a broad range of methods when it comes to the monitoring of teaching standards. The methodological basics of student satisfaction surveys are forgotten by those who view them simply as a source of insights into how ‘well’ or ‘poorly’ a lecturer teaches.

And yet, exactly this practice does not seem to be all that uncommon. The rise of the student satisfaction survey has occurred in a period that has also witnessed the colonisation of academic life by the operational logic of business. One part of this trend has been the casualization of academic labour, a profound shift in the proportion between permanent and short-term academic staff, and a re-organisation of the hierarchies that define academic work. Some universities seem to use student satisfaction surveys as a resource for governing their casual workforce. A recent article in The Chronicle of Higher Education looks at this issue at universities in the US. Tellingly, it is titled “For Adjuncts, a Lot Is Riding on Student Evaluations.”

It concludes:

For most tenure-track and tenured professors, course evaluations are used as guidance or feedback, a way to tweak their courses based on student concerns. At their worst, the evaluations are an annoyance, as students vent their frustrations or lament a poor grade. But for adjuncts, student evaluations often carry much more weight. In a way, that makes sense: Most adjuncts are, after all, hired to teach. But in the absence of other metrics or methods, many colleges use evaluations as a key means—or the only means—of determining whether to renew a contingent professor’s contract. The evaluations, of course, can be deeply flawed. And while poor course evaluations can result in losing a teaching position, several adjuncts say, positive evaluations carry no benefit at all: They don’t lead to pay raises, office space, or equipment.

The article makes it clear that such practices, while not uncommon, are by no means universal in American academia. Wherever they are used by academic management, however, they must surely have detrimental consequences for the quality of teaching, not to mention for labor relations between managers and casual staff and for the working lives of academics employed under precarious conditions. These consequences are obvious, and the piece in the Chronicle I just cited discusses some of them.

Here, I simply wish to ask whether this is really the right way for us to work together in academia. The issues I have outlined do not only concern lecturers in the US; they are part of a broad international trend towards the commercialization and hierarchization of academic labor. Student satisfaction surveys can be a useful and important means of improving our teaching. If we use them instead as a tool in the politics of academic labor, we may end up doing our students a disservice, and we put paid to the notion that there can be an ‘academic community’ in any meaningful sense.


My career so far has taken me to a fairly wide range of places, and this has allowed me to experience a wide range of approaches to sociology and social science. In my blog, I reflect on this diversity and its implications for the future of the discipline. Over the last few years, I have also become interested in exploring the contours of academic life under neoliberal hegemony. Far-reaching transformations are taking place at universities around the world, in terms of organisational structures, patterns of authority, and forms of intellectual activity. With my posts, I hope to draw attention to some of these transformations.
