
Software Is Not the Silver Bullet to Defeat Plagiarism

May 9, 2019


Educational software – whether it’s a teaching aid or a program designed to help teachers with administration – is big business. The recent multi-billion dollar acquisition of Turnitin, a program that is used around the world to flag possible evidence of plagiarism, is further proof of this.

But does the widespread adoption of such software mean that universities are actually dealing with plagiarism? Our research, conducted across South Africa’s public universities, suggests not. We found there was a reliance on software to identify plagiarism in ways that brought undesirable changes in students’ behavior. Turnitin was broadly used by universities – but its purpose was largely misunderstood. It was considered plagiarism detection software rather than what it actually is: a text-matching tool.

This article by Amanda Mphahlele and Sioux McKenna originally appeared at The Conversation, a Social Science Space partner site, under the title “Universities must stop relying on software to deal with plagiarism.”

Turnitin and similar programs don’t deal with the causes of plagiarism. Rather, they allow institutions to claim they’re doing something without really tackling the issues that lead students to plagiarize.

Educational software provides a number of powerful tools for supporting the development of student writing. But when it is used superficially, as is often the case with Turnitin and similar programs at South African universities, that development is undermined.

Students should be taught how to write academically and to avoid plagiarism. Instead, they are being encouraged to write in ways that fool the software. This encourages an instrumentalist understanding of what constitutes academic writing. It portrays the academic process of writing as a technical endeavour – you must avoid the sin of plagiarism – rather than what it really is: a complex practice of knowledge production that draws on prior research.

Software lacks nuance

The software used by Turnitin and similar programs works by matching text. It identifies instances where a paragraph or sentence or phrase in an essay is identical to one in, for instance, an academic journal article. It then generates a report with a “similarity index” percentage. This shows how similar the essay is to other pieces of work.
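To make the mechanism concrete, here is a minimal Python sketch of the general idea behind text matching: counting how many short word sequences (n-grams) in an essay also appear in a set of source texts and reporting the overlap as a percentage. The function names, the five-word n-gram size and the toy sentences are illustrative assumptions; this is not Turnitin’s actual algorithm, which is proprietary and far more sophisticated.

```python
# A minimal sketch of the general idea behind text matching, NOT Turnitin's
# actual method: count how many word n-grams in an essay also appear in a
# reference corpus and report the overlap as a percentage.

import re


def ngrams(text, n=5):
    """Return the set of lowercase word n-grams in a piece of text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def similarity_index(essay, sources, n=5):
    """Percentage of the essay's n-grams that also occur in any source text."""
    essay_grams = ngrams(essay, n)
    if not essay_grams:
        return 0.0
    source_grams = set().union(*(ngrams(s, n) for s in sources))
    matched = essay_grams & source_grams
    return 100.0 * len(matched) / len(essay_grams)


source = "Plagiarism is the act of presenting someone else's work as your own."
copied = "Plagiarism is the act of presenting someone else's work as your own."
reworded = "Presenting another person's work as if it were your own is plagiarism."

print(similarity_index(copied, [source]))    # 100.0 – verbatim copy
print(similarity_index(reworded, [source]))  # 0.0 – same idea, no matching n-grams
```

Even this toy example shows the limitation: a lightly reworded sentence scores far lower than a verbatim copy, although the borrowing of ideas is identical. The percentage measures textual overlap, not honesty.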

Unfortunately, many academics believe this percentage equates to degrees of plagiarism. At some South African universities, if the similarity exceeds a certain percentage, a student is automatically assumed to have plagiarised – and fails the assignment. They may also, depending on the institution’s plagiarism policy, face disciplinary sanction.

It’s important to note that Turnitin doesn’t claim its software detects plagiarism. The company encourages those marking student work to remember this, and to apply their own discretion in deciding whether they’re dealing with malicious, deliberate plagiarism.

But despite that disclaimer, academics and students believe Turnitin is an accurate “plagiarism detector”. We have run staff and student development workshops about academic writing in institutions across South Africa, and in some other African countries. At such events, one question inevitably arises: “What is the acceptable percentage on the Turnitin report?”

By insisting that students achieve less than a certain percentage on a similarity report, universities encourage a “plagiarize, but don’t get caught” approach. Students are being encouraged to rework and resubmit their assignments, paraphrasing the highlighted similar text, until their plagiarism evades the software.

This is not a solution to the problem of plagiarism. The act of plagiarism occurs along a continuum from intentional to unintentional. Certainly, some students may purchase essays from paper mills or may cut-and-paste because they think they can get away with it. These students must be punished. Intentional acts of plagiarism require swift, clear responses.

But a great many students plagiarise because they have never been taught about the norms of knowledge production in the academy.

Setting a percentage on the similarity index won’t help. Telling students that a percentage flung out by a piece of software can have enormous consequences for their progress is both nonsensical and educationally unsound. Universities must invest time and resources in developing students’ understanding about writing and knowledge production in different academic disciplines.

Software lacks human nuance. The similarity index alone cannot tell academics whether the phrases it highlights are deliberately plagiarised, are examples of formulaic statements required by the discipline, or are just poorly referenced. The Turnitin report only becomes meaningful when interpreted by an actual person.

Software also can’t identify plagiarism when the student has paraphrased someone else’s ideas and is passing them off as her own. For this reason, plagiarising students whose home language is the same as the medium of instruction, and who can therefore paraphrase with ease, are unlikely to be picked up by the software.

Use software better

Why do academics rely so heavily on software like Turnitin? A large part of the reason is probably the burden of growing class sizes. Academics have more students, and less time. Writing development takes time. This may explain the abdication of responsibility to software programs.

Turnitin offers a range of capabilities. It can be used to show students how to build strong claims from multiple sources. But using it simply to detect and punish students entails a policing relationship at odds with learning.

Academics need support to understand how to use the software in developmental ways, and universities need to resist calling for simplistic solutions to complex issues.


Amanda Mphahlele is a lecturer in business management at the University of Johannesburg and is researching plagiarism for her PhD in Higher Education Studies, which she is undertaking through Rhodes University. Sioux McKenna is director of the Centre for Postgraduate Studies at Rhodes University. Her research interests focus on the role of higher education in society, who gets access to powerful knowledge in the academy, and what constitutes 'powerful knowledge.' She also has a particular interest in doctoral education and postgraduate supervision.
