AI is Here, But Is It Here to Help Us or Replace Us?
In his New Laws of Robotics, the legal scholar Frank Pasquale offers a compelling vision of how “AI should complement professionals, not replace them.” His conviction is that we do not need to be “captured or transformed” by technologies of automation because “we now have the means to channel” them.
The distinction he draws between “technology that replaces people and technology that helps them do their jobs better” is simple yet pointed. Will generative AI help academics do their jobs better, or will it replace them? In my book Generative AI for Academics, I wanted to put forward an optimistic account, suggesting that generative AI can help academics do their jobs better. This requires a reflexive approach to conversational agents as interlocutors: using interaction with them to clarify and refine our own outlook, rather than framing them as a way to outsource unwanted tasks.
In other words, self-understanding is integral to using generative AI effectively as a scholar: it is needed to generate high-quality outputs and to avoid the errors the technology is inherently prone to producing. As Pasquale later observes, “Knowledge, skill, and ethics are inextricably intertwined” and this means that “We cannot simply make a machine ‘to get the job done’ in most complex human services fields, because frequently, task definition is a critical part of the job itself.”
To take a reflexive approach to conversational agents, academics have to claim the autonomy to define the tasks involved in scholarship. The conversational agent then contributes to clarifying how those tasks are defined, rather than serving as a mechanism for outsourcing them to automated systems that lack the scholar’s expertise. The only people who can adequately define the work involved in scholarship are scholars themselves.

Is it realistic to imagine that a reflexive approach could become mainstream? I suspect not. My skepticism stems from an awareness of the workload pressures to which academics are subject, and of the ways in which strategic responses to those pressures can unintentionally ratchet expectations up still further.
Let’s return to Pasquale’s invocation of professionals doing their jobs ‘better’ as a result of engagement with automated systems. Is it doing your job better if you publish more journal articles as a result of your use of generative AI? Even if many academics would answer ‘no’ to this question, we still act as if this is what we believe. Productivity stands as a proxy for our institutional worth.
It is reassuring to throw ourselves into producing more, particularly in an academy that continually counts our outputs and encourages us to produce still more. The creeping insecurity generated by the threat of automation is likely to make that anxiety mount rather than recede, encouraging individuals to throw themselves ever more fully into the rat race in the hope of securing the continued existence of their role. The competitive cycles in which managers are “heating up the floor to see who can keep hopping the longest,” as the political economist Will Davies once put it, leave many of us already disposed to such a response.
The problem is that norms of productivity are easily ratcheted up as individuals act strategically to meet perceived expectations. As a first-year PhD student in 2008, I was told in a training session that we should not try to publish alongside writing our theses. Even then I could see this was bad advice. Fifteen years later, it is difficult to imagine a candidate being shortlisted for a short-term postdoc, let alone a lectureship, without a publication record comprising at least a few items.
Filip Vostal argues that “early career academics are particularly vulnerable to the restructuring of higher education in comparison with more established and tenured/permanently employed senior scholars and professoriate.” Expectations of output ratchet up because competition intensifies activity: academics accelerate their work in pursuit of a competitive advantage. Exactly what counts as a ‘normal’ level of academic productivity will vary across fields and disciplines. However, it will tend upwards as long as people feel it is a standard they need to meet, with those who exceed it contributing to the normalization of a higher standard.
The simple fact of senior figures publishing at the rate they do sends a message. The tendency for those messages to be condensed through social media and compiled into books on academic career development only makes these behavioural cues explicit. Even if no one expects most academics to match an output of, say, twenty peer-reviewed papers per year, such figures shape the perception of what counts as productive.
How would early career researchers fare in these circumstances? Presumably some would cope by using generative AI to increase their output, whereas others either would not or could not, leading to a decline in their relative competitiveness. Further, it is easy to see how classed and gendered inequalities could be reinforced by this dynamic: freedom from the need for paid work and an absence of caring responsibilities would make it far easier to participate in this great productivity acceleration.
Recent crises provide evidence of how this might play out. During the pandemic, women submitted proportionally fewer papers than men, a disparity that generative AI is likely to exacerbate. Not least because there are start-up costs involved in developing the generative AI routines capable of yielding productivity gains large enough to make a demonstrable impact on publication rates. There is a risk that knowledge production becomes less relevant if academic work grows increasingly difficult for those with caring responsibilities, chronic illnesses, disabilities, or simply an unassailable commitment to not having their lives defined by work.
What we take doing our job ‘better’ with generative AI to mean matters for what comes next. As Pasquale points out, “the future of automation in the workplace – and well beyond – will hinge on millions of small decisions about how to develop AI”. The choices academics make now, including how we think and talk about the new possibilities opening up, will be among the many small decisions that shape the nature of the AI university.