Featured

Beyond the Randomised Controlled Trial

August 20, 2012

Recent pronouncements from government advisors, and general challenges to the social sciences, have declared that the Randomised Controlled Trial (RCT) is the definitive way to carry out scientific research. Any other form of empirical study is demeaned as just a weak version of the RCT and therefore less ‘scientific’. Yet although its value in very specific contexts cannot be denied, any imperialist claim for its universal applicability, and for its status as the benchmark against which all other studies must be measured, needs to be challenged.

The crucial challenges come from the mechanistic foundations that underlie the RCT. These assumptions place a straitjacket on the sorts of theories of human actions and experiences that can be tested. They profoundly limit how we can explain psychological and social processes.

The limits of the RCT can be demonstrated by considering the assumptions needed to set up a randomised controlled trial (the sketch after this list makes them concrete):

  1. A distinct causal variable can be identified (the independent variable, IV).
  2. Clear, expected effects can be specified and measured (the dependent variable, DV).
  3. The main influences on the DV besides the IV can be determined, so that an appropriate ‘control’ condition can be identified.
  4. Entities can be randomly assigned to conditions in which the IV is present or in which it is not.
  5. The cost of setting up the conditions has no impact on which conditions are chosen.
  6. The ethical constraints on assigning individuals to ‘experimental’ or ‘control’ conditions do not interfere with understanding the relationship between the IV and DV.
  7. Interactions between IVs in complex experimental designs are relatively straightforward, not recursive or contingent.
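
To see how much machinery these assumptions presuppose, here is a minimal simulation, a hypothetical Python sketch rather than anything from the article itself. It builds a toy world in which every assumption holds by construction (a single binary IV, a single measured DV, pure random assignment) and recovers the ‘effect’ as a simple difference in group means.

```python
# A minimal, hypothetical sketch of the mechanistic template an RCT assumes:
# one binary cause (IV), one measured outcome (DV), random assignment,
# and an effect estimated as a simple difference in group means.
import random
import statistics

random.seed(42)

def run_trial(n=1000, true_effect=0.5):
    """Simulate one trial: randomly assign n participants to
    treatment or control, then compare mean outcomes."""
    treated, control = [], []
    for _ in range(n):
        noise = random.gauss(0, 1)               # everything else shaping the DV (assumption 3)
        if random.random() < 0.5:                # random assignment (assumption 4)
            treated.append(true_effect + noise)  # a single, distinct IV (assumption 1)
        else:
            control.append(noise)
    # the DV is one measurable quantity (assumption 2)
    return statistics.mean(treated) - statistics.mean(control)

print(f"Estimated effect: {run_trial():.3f}")    # near 0.5 only because this world is built to be that simple
```

The estimate is interpretable only because the world has been constructed to satisfy the template: a single switch for a cause, one number for an outcome, and assignment free of the cost and ethical constraints in assumptions 5 and 6.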

These assumptions carry with them considerable baggage of which many people are unaware. This is illustrated most clearly by the use of RCTs in the apparently less controversial testing of pharmaceuticals. In that context the RCT plays into the hands of those who want to sell specific cures for specific ailments: all those consequences of a particular drug that are not measured as part of the DV are labelled ‘side-effects’. In the social sciences this plays into the hands of policy makers who want to claim that a specific intervention has a definable outcome. It also leads to the many silly laboratory-based social psychology ‘experiments’ that are then inappropriately extrapolated to the ‘real world’.

Attempts at alternative ‘real-world’ experiments are weakened by treating them as some sort of poorly controlled RCT rather than examining them on their own terms, often as careful case study comparisons. For example, attempts to test the efficacy of various forms of psychotherapy become less relevant to actual possibilities the more tightly controlled the experimental conditions. Randomly assigning patients to therapists, or therapies, removes many of the naturally occurring processes that can make therapy successful. Indeed, the whole idea of ‘controls’ in natural settings is paradoxical: they require the removal of ‘confounding’ variables precisely because those variables are likely to be relevant to the processes under study.

Years ago a well-known psychologist was asked about the impact of hospital design on patient well-being. His response was that until we can run RCTs on hospital designs we can never know. This was a counsel of failure. It ruled out any exploration of phenomena that cannot be subjected to RCTs. It also ignored the fact that RCTs fundamentally cannot demonstrate how systems of any complexity operate. Consider the endless debate on the causes and cures of the banking collapse. If you set up RCTs to find the ‘cause’ you would be ignoring the complex social, political and cultural interactions that brought on the collapse. You would be forcing a simplistic explanation onto a complex phenomenon.

Instead of putting all our methodological eggs into the RCT basket we should be ensuring that the many other rigorous, scientific methodologies are developed to be as effective as possible, whether they be case studies, surveys, time-series explorations, systems analysis, operations research, or any of the other empirical procedures that have opened up so much of our understanding of society and of being human.


Professor David Canter, the internationally renowned applied social researcher and world-leading crime psychologist, is perhaps most widely known as one of the pioneers of "Offender Profiling", being the first to introduce its use to the UK.


2 Comments
Ingo Rohlfing

I think one should always be skeptical when we are witnessing overwhelming methodological trends in the social sciences, so this is a welcome warning. But I think two things deserve clarification. First, one can hardly make a case against the inferential leverage of the RCT (meaning internal validity). Unlike any other method, the setting described in the main text allows one to assign a causal effect to the treatment. RCTs might not always be feasible, but any departure from the RCT renders causal inference more protracted, and one should be cognizant of the inferential problems of observational designs of whatever sort. Second, …

David Canter

No doubt that the RCT can help to point towards specific causes, but my general point is that not all scientific understanding emerges from identifying specific causes. Furthermore, to use this mechanical cause/effect model as the template against which other forms of insight and understanding are measured is to limit the development of scientific knowledge.