The Problem with Surveys in Research
[Editor’s Note: We are pleased to welcome Ben Hardy, who collaborated with Lucy R. Ford on the article “It’s Not Me, It’s You: Miscomprehension in Surveys,” available now in the OnlineFirst section of Organizational Research Methods.]
‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean – neither more nor less.’
There is a little of the Humpty Dumpty in all of us. When we communicate with others we tend to think about what we want to say – and choose the words to mean what we want them to mean – rather than thinking carefully about what it is that others will hear.
The same is true when we conduct survey research. We identify a construct, such as job satisfaction, carefully define it and then produce a series of items which we believe will tap into the cognitive domain occupied by this construct. We then test these items to check that people understand them and use a variety of statistical techniques to produce a finished scale which, we believe, measures just what we choose it to mean – neither more nor less.
Alternatively, we may bypass all of this and choose to use a published scale, assuming that all this hard work has been done for us.
Unfortunately, we do not tend to pay much attention to the actual words of the items. Sure, we check whether people understand them, but we seldom check whether they understand them in exactly the same way as we do. Instead, like Humpty Dumpty, we fall back on assuming that words mean what we choose them to mean – neither more nor less.
The average noun has 1.74 meanings and the average verb 2.11 (Fellbaum, 1990). This leaves a good deal of scope for words to mean very different things, whatever we, or Humpty Dumpty, might choose. Consider the item ‘How satisfied are you with the person who supervises you – your organizational superior?’ (Agho, Price, & Mueller, 1992). What does it mean to you? How satisfied are you with your boss? (49) How satisfied are you with your boss and the decisions they make? (25) Is your supervisor knowledgeable and competent? (6) Do you like your supervisor? (14) One of these probably accords with your interpretation. You might be interested to know that quite a few people do not agree with you. The figures in brackets are the percentages of respondents selecting each interpretation. You knew exactly what the item meant. And so did everyone else. The problem is that you did not agree.
“So what?” you might argue. If the stats work out, then is there a problem? Well yes, there is. Firstly, we are not measuring what we think we are measuring. Few of us would trust a doctor whose laboratory tests might or might not be measuring what they claim to measure – even if the number looked reassuringly within the normal range. So should we diagnose organizational pathologies on the basis of surveys which may or may not be measuring what they claim to measure – even if the number is reassuring? Just because something performs well statistically does not mean that it tells you anything useful. Secondly, we do not know what individuals would have scored had they actually been answering exactly the same question that the researcher intended. Thirdly, the different interpretations mean that there are different sub-groups within the population, and this may have knock-on effects when the measure is linked to other factors, such as intention to leave.
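If that third point seems abstract, here is a minimal sketch of how it could play out, written in Python with NumPy. Everything in it is hypothetical: the two readings of the supervisor-satisfaction item, the assumption that only one of those readings is related to intention to leave, and all of the numbers are invented purely for illustration rather than drawn from the article.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical scenario: two equal-sized sub-groups answer the same
# "satisfaction with your supervisor" item but read it differently.
# Group A rates the quality of their boss's decisions; Group B rates
# whether they like their boss as a person.
n = 500
decision_quality = rng.normal(size=n)   # what Group A is really reporting
likeability = rng.normal(size=n)        # what Group B is really reporting

# Assume intention to leave is driven by perceived decision quality only.
leave_a = -0.6 * decision_quality + rng.normal(scale=0.8, size=n)
leave_b = rng.normal(scale=0.8, size=n)

# The researcher sees one pooled sample with one "satisfaction" score.
item_response = np.concatenate([decision_quality, likeability])
intention_to_leave = np.concatenate([leave_a, leave_b])

print("Group A (decisions reading):", round(float(np.corrcoef(decision_quality, leave_a)[0, 1]), 2))
print("Group B (liking reading):   ", round(float(np.corrcoef(likeability, leave_b)[0, 1]), 2))
print("Pooled sample:               ", round(float(np.corrcoef(item_response, intention_to_leave)[0, 1]), 2))
```

In this toy example the pooled correlation comes out at roughly half the size of the relationship in the group that read the item as the researcher intended, so a conclusion drawn from the pooled figure understates the effect for one sub-group and overstates it for the other.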
So what is to be done? There are a number of simple fixes. Probably the easiest is to go and talk to some of the people who are going to be surveyed and ask them what they think the items in the survey actually mean. This will give you a good idea of whether your interpretation differs wildly from theirs, and in many cases you will find that it does.
This problem of other people’s interpretations differing from our own extends beyond survey research, of course. Indeed, there is a whole field of research, that of linguistic pragmatics, which seeks to understand why we interpret things the way that we do. At the heart of it all, however, is communication. And so the assumption that words mean what we choose them to mean – neither more nor less – is a fallacious one, at least as far as other people are concerned. We need to stop thinking about what we are saying and spend a little more time thinking about what others are hearing. Humpty Dumpty was wrong. It is not we who choose what words mean; it is the recipient of those words. And we ignore their views at our peril.
Agho, A. O., Price, J. L., & Mueller, C. W. 1992. Discriminant validity of measures of job satisfaction, positive affectivity and negative affectivity. Journal of Occupational and Organizational Psychology, 65(3): 185-196.
Fellbaum, C. 1990. English verbs as a semantic net. International Journal of Lexicography, 3(4): 278-301.
Lucy R. Ford is an assistant professor of management in the Haub School of Business at Saint Joseph’s University. Her research interests include leadership, teams, and linguistic issues in survey development. Dr. Ford has served on the executive committee of the Research Methods Division of the Academy of Management, and as the co-chair of the pre-doctoral consortium hosted by the Southern Management Association. She has delivered numerous workshops on research methods and scale development at both regional and national conferences. Her work has been published in The Leadership Quarterly, Journal of Organizational Behavior, and Journal of Occupational and Organizational Psychology, among others. She received her BBA in human resources management from East Tennessee State University, and her PhD in organizational behavior from Virginia Commonwealth University.