
The Tragedy of the (Over-Surveyed) Commons

June 10, 2015

Survey crew roadsign

Aren’t they always these days? (Photo: Rusty Clark/Flickr/CC BY 2.0)

By any metric, Garrett Hardin’s “The Tragedy of the Commons,” published in Science and adapted from his address as 1968 president of the American Association for the Advancement of Science, rates among the most important articles in the history of ecology. Hardin’s thesis builds around the metaphor of the commons, a pasture open to all, on which it benefits each herder to run as many livestock as possible and to pay no heed to the inevitable degradation and collapse. His pessimistic message: individual self-interest makes restraint in reproduction and consumption irrational, leading to irreversible pollution and environmental damage.

Hardin’s vision and eloquence make him essential reading in many disciplines. But his pessimism, and his suggestion that only “mutual coercion, mutually agreed upon” could resolve the commons-like tragedy of human overpopulation, have also seen him parodied as a crank by more optimistic economists and opponents of environmental prudence.

This article by Rob Brooks originally appeared at The Conversation, a Social Science Space partner site, under the title “The tragedy of the over-surveyed commons.”

I’m less interested today in the importance of Hardin’s thesis than in his amazing 26,000-plus citations. I mean, who doesn’t love a good metric? I’m also interested in something he said about young people playing their music too loud (hell, it was 1968!) and about advertisers polluting our visual landscape:

In a still more embryonic state is our recognition of the evils of the commons in matters of pleasure. There is almost no restriction on the propagation of sound waves in the public medium. The shopping public is assaulted with mindless music, without its consent. Our government is paying out billions of dollars to create supersonic transport which will disturb 50,000 people for every one person who is whisked from coast to coast 3 hours faster. Advertisers muddy the airwaves of radio and television and pollute the view of travellers.

Hardin was lucky enough to live and work in an era less obsessed than our own with feedback, surveys and metrics. Had he been with us today, I’m sure he would have reserved a special place on the degraded commons for those who inflict upon us all the burden of collecting meaningless data and unheeded opinion.

What better way to ruin a perfectly nice stay in a hotel than to spy, propped on the crisp white pillowcase, an obsequious request that we rate how the room has been turned out? Can there be any more irritating feature of buying a new app or e-book than a pop-up message asking us to rate the experience?

I bought the product; I stayed in your hotel. If you want to know whether it worked this time, see whether I come back and buy something else. You have the information in your databases and frequent-visitor files. Pay somebody with quant skills to find the real answer.

If you need to know whether the room is being attended to well, then quietly tally up the complaints! I don’t ask you to fill out a form on whether you liked the color of my credit card, the angle of my signature, or when I paid the bill. Do I?

Student surveys

Every meaningless survey does the world damage. In academia, two types of survey have the potential to do more damage than most: the student survey and the management survey – often administered by monkey.

Any well-designed measurement tool, properly applied, can provide useful information for lecturers to improve their teaching and their courses, just as well-designed assessment can give students the kind of guidance they need to learn properly.

Once upon a time, lecturers designed their own surveys, which they asked students to fill in during the final lecture. Conscientious lecturers asked questions to gauge what students were learning well, or not so well, and how the lecturers might improve. And improve they did; the student survey became an important tool of the good lecturers, the ones who were responsive to students, deliberate in how they taught, and willing to change in order to improve.

Little wonder, then, that administrators, often with the support of student leaders, saw the potential to measure and improve teaching quality. And so they pushed toward standardized item banks and then just a few standard questions, rolled them out across universities and started using them in promotion and performance review processes. Standardization and careful thought about questionnaire design certainly improved measurement, but at the cost of the kinds of information most useful to lecturers who want to improve.

Instead, by the peculiar alchemy that happens when people quantify rather than think, student assessment scores transmuted into shiny fool’s gold, a glimmer of a tool which, viewed from just the right angle, might tell us how much students really really like the lecturer. Those lecturers who scored highly used the numbers in their promotion applications. Those who didn’t waited for another year in the hope of showing evidence of “feedback-driven improvement.”

And students went from filling out one survey per semester, during class time, to doing it for every lecturer in every course, in their own time on their own computers. Little wonder, then, that return rates of 10-20 percent are now considered normal.

The usefulness of student surveys remains hotly disputed. In some years it appears that the lecturer’s clothing or hairstyle weighs more heavily on the results than the teaching and learning process does. Others argue that student surveys measure only popularity or charisma, rather than the quality of teaching. Merlin Crossley, dean of the faculty in which I work, is more upbeat about their usefulness and the way they correlate with other measures of teaching quality.

I believe that, by and large, they don’t measure anything at all. When fewer than one in five students responds, you aren’t measuring anything more than idiosyncratic noise in how motivated students are to fill in yet another survey. That’s because students have been treated like the overgrazed commons, chronically over-surveyed since the day of enrollment. No lecturer benefits from opting out of the relentless evaluation cycle (except for not having to confront one’s “numbers” and read the more toxic open-ended responses). Few heads of teaching benefit from encouraging staff to focus on their teaching and treat evaluations as the flawed tool they are.
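To see why that matters, here is a minimal, hypothetical simulation (my own illustration, not anything from the article): ten courses with identical underlying teaching quality, a roughly 15 percent response rate, and responses skewed toward students with strong opinions. The resulting “scores” still spread out, and the spread reflects who bothered to respond rather than the teaching.

```python
import random

random.seed(1)

CLASS_SIZE = 200
BASE_RESPONSE_RATE = 0.15  # roughly the "fewer than one in five" response rate


def simulate_course_score():
    """One course's survey 'score' when only the motivated respond.

    Every student's underlying rating comes from the same distribution
    (so true teaching quality is identical across courses), but a
    student's chance of responding rises with how strongly they feel.
    """
    ratings = [min(5.0, max(1.0, random.gauss(3.5, 0.8))) for _ in range(CLASS_SIZE)]
    responses = []
    for r in ratings:
        motivation = BASE_RESPONSE_RATE * (0.5 + abs(r - 3.0))  # strong opinions respond more
        if random.random() < motivation:
            responses.append(r)
    if not responses:
        return float("nan"), 0
    return sum(responses) / len(responses), len(responses)


# Ten courses taught identically well still produce a spread of scores,
# driven by who happened to respond rather than by the teaching.
for course in range(1, 11):
    score, n = simulate_course_score()
    print(f"Course {course:2d}: mean rating {score:.2f} from {n} responses")
```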

If student surveys are to be used for anything meaningful, we need to recognize the costs to students and the value of drawing a representative sample. Make it easy for them to respond. And always obey the first commandment I got from my honours supervisor:

Don’t gather information if you don’t know exactly what you are going to do with it.

But instead of restraint, surveys continue to degrade through careless over-use, from once-useful tools for measurement into misunderstood and misapplied metrics. The descent is complete when the metric falls into the wrong hands: when some managers (and I stress the some) place blind faith in the pseudonumbers that dribble forth from broken student surveys, reading what amounts to no more than tea-leaves with the confidence of cops operating a fine-tuned academic radar gun.

Surveys, monkeys, keyboards

Second, and somewhat more benign, is the pseudo-consultation via survey. I know of academic leaders, fortunately not my direct managers, who can’t decide what color shirt to wear without the help of Survey Monkey. Instead of making eye contact, asking a question and listening to a well-chosen sample of their staff, they run off an electronic survey.

It’ll only take 5 minutes!

Five minutes times 100 staff comes to more than eight person-hours. What you are really saying when you send out a survey is that your own time is more valuable than that of all the people on whom you have inflicted your email. Just as one should never convene a meeting unless the benefit to be gained exceeds the cost of the person-hours spent attending it, so it should be with surveys.

Which is why I have a policy of never speaking to survey monkeys.

I recently spent an enjoyable hour over beer with a colleague designing a system by which anyone who sends out a survey must first reimburse the budgetary units that employ each subject for the time involved in reading the email, deciding whether to respond, and then responding. The surveys would be fewer, but the data far more valuable.

The economists call this internalizing negative externalities. That’s the kind of solution that makes the tragedy of the commons – and other apparently intractable problems – far easier to crack than Garrett Hardin could ever have predicted.
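
As a back-of-the-envelope sketch of what that reimbursement might look like, here is a hypothetical calculation (the time estimates, staff numbers and hourly rate are my own assumptions, not figures from the article):

```python
# Hypothetical sketch of the "reimburse the respondents' budgetary units" idea:
# the surveyor pays up front for the time the survey consumes, which is what
# internalizing a negative externality amounts to in practice.

MINUTES_TO_READ_AND_DECIDE = 2   # assumed time to read the email and decide
MINUTES_TO_RESPOND = 5           # the promised "it'll only take 5 minutes!"


def survey_charge(recipients: int, responders: int, hourly_cost: float) -> float:
    """Amount the surveyor reimburses, in the same currency as hourly_cost.

    Everyone emailed pays the reading-and-deciding cost; only responders
    also pay the cost of actually filling the survey in.
    """
    minutes = recipients * MINUTES_TO_READ_AND_DECIDE + responders * MINUTES_TO_RESPOND
    return minutes / 60 * hourly_cost


# 100 staff emailed, 20 respond, staff time costed at $60 an hour (all assumed).
print(f"Charge to the surveyor: ${survey_charge(100, 20, 60.0):.2f}")
```

Even on these modest assumptions the bill runs to a few hundred dollars per survey, which is the point: if the data are not worth that, the survey should not go out.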

While Hardin raged against the Tragedy of the Commons, The Beatles’ “Hey Jude” was the biggest song of 1968. There’s something in there about making it better, and not by survey.

Rob Brooks is the Scientia Professor of Evolutionary Ecology and director of the Evolution & Ecology Research Centre at the University of New South Wales.
