
Let’s Abandon That Qualitative/Quantitative Dichotomy

December 3, 2014

Over the past year, I’ve met with many doctoral students and junior faculty in my travels around the United States and Europe, all of them eager to share information with me about their research. Invariably, at every stop, at least one person will volunteer the information that “I’m doing a qualitative study of…” When I probe for what’s behind this statement, I discover a diversity of data collection and analysis strategies that have been concealed by the label “qualitative.” They are doing participant observation ethnographic fieldwork, archival data collection, long unstructured interviews, simple observational studies, and a variety of other approaches. What seems to link this heterogeneous set is an emphasis on not using the latest high-powered statistical techniques to analyze data that’s been arranged in the form of counts of something or other. The implicit contrast category to “qualitative” is “quantitative.” Beyond that, however, commonalities are few.


This post by Howard Aldrich originally appeared at the American Sociological Association Organizations, Occupations, and Work Section’s Work in Progress blog under the title, “Stand up and Be Counted: Why social science should stop using the qualitative/quantitative dichotomy.” It is re-posted here under a CC BY-NC-SA 3.0 US license.

Here I want to offer my personal reflections on why I urge abandoning the dichotomy between “qualitative” and “quantitative,” although I hope readers will consult the important recent essays by Pearce and Morgan for more comprehensive reviews of the history of this distinction. For a variety of reasons, some people began making a distinction more than four decades ago between what they perceived as two types of research, quantitative and qualitative, with research generating data that could be manipulated statistically seen as generally more scientific.

I’ve endured this distinction for so long that I had begun to take it for granted, a seemingly fixed property in the firmament of social science data collection and analysis strategies. However, I’ve never been happy with the distinction, and about a decade ago I began challenging people who label themselves this way. I was puzzled by the responses I received, which often took on a remorseful tone, as if somehow researchers had to apologize for the methodological strategies they had chosen. To the extent that my perception is accurate, I believe their tone stems from the persistent way in which non-statistical approaches have been marginalized in many departments. It also seemed, however, as though the people I talked with had accepted the evaluative nature of the distinction. As Lamont and Swidler might say, these researchers had bought into “methodological tribalism.”

Such responses upset me so much that I have now taken to asking people: so, you are saying you don’t “count” things? And, accordingly, you do research that “doesn’t count”?!

Why would any researcher accept such second-class status for what they do? Cloaking one’s research with the label of “qualitative” implicitly contrasts it with a higher order and more desirable brand of research, labeled “quantitative.” This is nonsense, of course, for several reasons.


The ‘Q’ Words

Having recently argued that Sociological Science needs more “qualitative” work, I read this with interest. Certainly the terms are not the most descriptive, and they do reinforce a division within sociology that might better be blurred post-Methodenstreit.

But I think the distinction is likely to persist, despite Howard’s good intentions, for two reasons. …

Read more of this post by Elizabeth Popp Berman at OrgTheory.net


First, methods of data collection do not automatically determine methods of data analysis. Information collected through ethnography (see Kleinman, Copp, and Henderson), perusal of archival documents, semi-structured interviews, unstructured interviews, study of photographs and maps, and other methods that might initially yield non-numerical information can often be coded into categories that can subsequently be statistically manipulated. For example, the recording and processing of ethnographic notes by programs such as NVivo and Atlas.ti yields systematic information that can be coded, classified, and categorized and then “counted” in a variety of ways. Thus, an ethnographer interested in a numerical indicator of social status within an emergent group could count the number of instances of deferential speech directed toward a (presumed) high-status person or the number of interruptions made by (presumed) high-status people into the conversations of others. Note that the meaning of what has been observed derives not from “counting” something but rather from understanding how to interpret what was observed, with the counts helping to judge the strength of the interpretation. The interpretation depends upon a researcher’s understanding of the social context for what was observed.
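To make that “counting” step concrete, here is a minimal, purely hypothetical sketch in Python. Nothing in it comes from the original post or from NVivo or Atlas.ti themselves; the category tags, names, and entries are invented for illustration, simply to show how coded field-note observations might be tallied into counts of deference and interruptions of the kind described above.

from collections import Counter

# Hypothetical coded field-note entries: each observation has already been
# assigned a category ("deference" or "interruption") and the group member
# it concerns. All tags and names are invented for illustration only.
coded_notes = [
    {"category": "deference",    "directed_to": "Alice"},
    {"category": "interruption", "by": "Alice"},
    {"category": "deference",    "directed_to": "Alice"},
    {"category": "interruption", "by": "Bob"},
    {"category": "deference",    "directed_to": "Bob"},
]

# Tally instances of deferential speech directed toward each member.
deference_counts = Counter(
    note["directed_to"] for note in coded_notes if note["category"] == "deference"
)

# Tally interruptions made by each member.
interruption_counts = Counter(
    note["by"] for note in coded_notes if note["category"] == "interruption"
)

print(deference_counts)     # Counter({'Alice': 2, 'Bob': 1})
print(interruption_counts)  # Counter({'Alice': 1, 'Bob': 1})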

I believe some of the controversy over what standards to apply to assessing so-called “qualitative” research stems from observers confounding methods of data collection with methods of data analysis. For example, Lareau was quite critical of LaRossa’s suggestions to authors and reviewers of “qualitative” manuscripts because she felt that he had imported some terms from “quantitative” research that were inappropriate for what she viewed as good “qualitative” research, e.g. terms such as “hypothesis” and “variables.” She saw his suggestions as imposing “a relatively narrow conception of what it means to be scientific.” Although she supported his call to improve our analytic understanding of social processes, she also seemed to view data collection and data analysis as inextricably intertwined in the research process. I do not share this view. Even in grounded theory approaches, practitioners distinguish between the collection and analysis of data, although there might be very little time lag between “collection” and “analysis.”

To be clear: not all information collected through the various methods I’ve described can be neatly ordered and classified into categories subject to statistical manipulation. Forcing interpretive reports into a Procrustean bed of cross-tabulations and correlations makes no sense. Skillful analysts working with deep knowledge of the social processes they are studying can construct narratives without numbers. It all depends on the question they want to answer and on how they choose to answer it.

Second, standards of evidence required to support empirical generalizations do not differ by the method of data collection. Researchers who claim “qualitative” status for their research must meet the same standards of validity and reliability as other researchers. Regardless of whether information is collected through highly structured computerized surveys or semi-structured interviewing in the field, researchers must still demonstrate that their indicators are valid and reliable. Ethnographers must provide sufficient information via “thick description” to convince readers that they were in a position to actually observe the interaction they are interpreting, just as demographers using federal census data must convince readers that the questions they are interpreting were framed without bias.

When did this pattern of apologies for “qualitative” research start? Based on my own experience and Morgan’s review, I would say “something happened” in the late 1970s. Back in 1965, when I was taking a one-year ethnography class with John Lofland at the University of Michigan, no one used the term “qualitative research.” My colleagues in the course – all of whom were doing fieldwork-based MA theses, as I recall – might have described their work as doing “grounded theory,” as we were using a mimeographed version of Glaser and Strauss’s book. The ethnography class was an alternative track to the Detroit Area Study survey research course, and neither course was described as a substitute for the required statistics courses. Fieldwork and survey research were just alternative ways of collecting data.


I published several articles from that ethnography class, including a paper in the inaugural issue of Urban Life and Culture, which began publication in 1972. That journal is now called the Journal of Contemporary Ethnography. As I recall, no one ever asked me why I was doing “qualitative research,” nor did I ever describe it in those terms. The journal Qualitative Sociology began in 1978, and so I assume the phrase “qualitative sociology” was beginning to percolate into sociological writings around that time. In 1983, Lance Kurke and I published a non-participant observational study of four executives in Management Science, replicating Henry Mintzberg’s research from the early 1970s. We described it as a “field study” during which we had spent one week with each executive and mentioned that we had also conducted semi-structured interviews with them. The phrase “qualitative research” did not appear in the article. The terms “field study” and “non-participant observation” captured perfectly how the project was carried out.

Am I “blaming the victims” for their continuing to use the labels “qualitative” and “quantitative”? To be clear, there are several institutional factors that explain why some people persist in using these terms, and they clearly transcend individual characteristics, as I have found them used not only in North America but also globally. Mario Small offered at least three explanations. First, the availability of large data sets has made it easier for researchers to conduct statistical analyses. Second, journal reviewers apply inappropriate standards when they try to evaluate research that uses ethnographic or other less frequently used data collection techniques. Third, foundations and agencies have gravitated toward research labeled as “quantitative” because they see it as more prestigious or more policy relevant. My colleague, Laura Lopez-Sanders, noted a fourth possible reason: departments that offer a “methods sequence” often overlook or downplay ethnographic and other methods that are seen as not leading to easily quantifiable data for statistical analysis. Students pick up on the implicit message that statistical methods carry a higher priority than other methods of analysis.

To the extent that institutional norms and practices keep alive the implicit message that there really are “quantitative” and “qualitative” methods, they will be available for use by graduate students and junior faculty. Their availability, however, does not mandate their use. The point of this blog post is to make readers more reflexive about how they choose to describe their work.

So, if you are doing an ethnography, constructing a narrative using historical records, rummaging through old archives, building agent-based models, or doing just about anything else, for that matter, tell that to the next person who asks what kind of research you do. Don’t automatically say “I’m doing qualitative research.” You might want to describe in some detail what data collection and data analysis methods you are actually using. Explain the fit between the questions you’re asking and the type of empirical evidence you are gathering and how you are analyzing it. If the person says, “Oh, you are doing qualitative research,” tell them you don’t know what they mean. You’re just doing good research.


Howard E. Aldrich received his Ph.D. from the University of Michigan and is Kenan Professor of Sociology, Adjunct Professor of Business at the University of North Carolina, Chapel Hill, Faculty Research Associate at the Department of Strategy & Entrepreneurship, Fuqua School of Business, Duke University, and Fellow, Sidney Sussex College, Cambridge University. His main research interests are entrepreneurship, entrepreneurial team formation, gender and entrepreneurship, and evolutionary theory.

