Saskia Tholen is a graduate student in political science at UBC. Her main research interests are in democratic theory and critical theory.
I told myself I knew the signs: the perfect grammar paired with an awkward, disjointed structure, the missing citations, the bizarre subheadings, the vague, robotic language. The tendency, as another TA put it to me recently, to “use all the right words without actually saying anything.” But these are the obvious cases; when used carefully, generative AI tools like OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini now produce writing that is virtually undetectable.
These tools have become ubiquitous on college and university campuses. Research finds that the majority of Canadian post-secondary students are using AI in their schoolwork, and some international studies report rates as high as 86 and even 92 per cent. At UBC, 78 per cent of students surveyed this year said they use AI to support their learning, including 56 per cent who use it at least once per week. And it makes sense! AI tools are huge time-savers, and many students find that these tools improve the quality of their work and, as a result, their grades.
Schools are scrambling to respond. Like many institutions, UBC has published general principles that weigh the benefits and risks of AI use, but practically, the response has largely been a matter of individual judgment. Course policies vary widely: some professors have tried to ban AI completely, others allow it for research and process work but not writing, and still others actively encourage their students to practice using it. No one has figured out how AI fits into existing rules about academic honesty, how it affects the grading scale or how to enforce policies, given that AI use seems impossible to prove definitively.
I’ve had students who submit sophisticated, grammatically flawless essays online but can barely string a sentence together when asked to write by hand. I’m sure the essays are written with AI, but I can’t know for certain. Which should get the higher grade: success by plagiarism or a poor but honest effort? No one seems to have an answer.
In a recent piece for New York Magazine aptly titled “Everyone Is Cheating Their Way Through College,” James D. Walsh chronicles the explosion of AI use in higher education. Describing the experiences of one student who reported using AI in all her essays, Walsh writes:
“I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be?” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”
[…]
Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.”
When a technology starts to feel like a necessary and unquestionable part of life, that is a symptom of its ubiquity: we can’t really imagine living without it. It seems illogical not to use AI when students have absorbed the message throughout their schooling that high grades matter more than anything, and when everyone else is using it too; there’s definitely a social contagion effect at work. Once you begin cutting corners, it’s hard to imagine voluntarily taking the long way around just for the sake of the antiquated “beauty” of thinking for yourself.
Soon enough, though, the long way around is no longer an option, because technologies — all technologies, digital and otherwise — rewire our brains over time. Studies are already linking AI to declining critical thinking skills, especially among younger users, and the long-term effects are unknown. It’s one thing to make a strategic choice not to exercise a particular cognitive faculty because there’s a more efficient option available, and quite another to risk losing that faculty, or parts of it, altogether.
I hear the alarm bells from nearly every professor I ask: “the students can’t read, they can’t write, they can’t think anymore,” they tell me. The effects of AI are hard to untangle from the lingering impacts of the pandemic on mental health, attendance and learning outcomes. However, few seem to doubt that AI — despite its valuable potential to make education more personalized and adaptive, and to take some of the work from teachers’ overfilled plates — is doing serious damage to students’ literacy.
On the other hand, it’s common among both students and professors to think of AI as a neutral tool that just needs to be harnessed responsibly. It’s certainly an appealing argument: AI is coming whether we like it or not, but we can get ahead by learning how to use it to our advantage. This advantage is connected to employability: integrating AI is supposed to help tailor our “human capital” to what the new world of work is going to demand, and this, far from diminishing our value, will make us more productive and less dispensable.
We’ve been having the same misguided conversation about the internet and social media as “tools” for years, as if you can separate the instrumental benefits from the basic mind-altering character of the technology. This logic is partially symptomatic of a culture that believes being good at working the system makes you a master of it, and that adapting to disruption is a virtue. It’s a dehumanizing culture, a culture of precarity, a culture of resignation — not to mention a culture that has abandoned workers. But this logic also depends on a kind of wilful ignorance about the nature of the technology and about the scale and scope of the disruption it’s going to cause.
It isn’t just that AI is reshaping our minds. Consider this: eventually, humans will be written out of the equation altogether. Walsh mentions that AI’s potential to take over the task of grading students’ assignments, in addition to writing them, threatens to reduce “the entire academic exercise to a conversation between two robots — or maybe even just one.” But the problem is more fundamental than that. From the AI’s perspective, generating knowledge just means infinitely recombining existing data. “New” knowledge no longer means original knowledge. And what happens when the data being infinitely recombined — the content of the assignments, to put it one way — are themselves artificial creations? Already, more than half of online text is generated or translated by AI. The process becomes perfectly self-sustaining: to borrow words from a professor in my department, AI threatens to collapse knowledge production itself into an artificial mind endlessly “eating its own shit.”
Will being “good at ChatGPT” save any of us in this scenario — our jobs or our souls?
There’s another version of the argument for embracing generative AI that I’d like to mention here. In April, historian of science and technology D. Graham Burnett asked in The New Yorker, “Will the Humanities Survive Artificial Intelligence?” Burnett acknowledges that a lot of scholarly work is becoming irrelevant now that AI can instantly call up the information and analysis that human researchers spend years labouring over, all tailored in real time to the user’s exact preferences. AI can write our books for us. Though this is a prospect you might expect to worry scholars, Burnett reframes the disruption as an opportunity to pare the humanities back to their classical state:
[F]actory-style scholarly productivity was never the essence of the humanities. The real project was always us: the work of understanding, and not the accumulation of facts. Not “knowledge,” in the sense of yet another sandwich of true statements about the world. That stuff is great — and where science and engineering are concerned it’s pretty much the whole point. But no amount of peer-reviewed scholarship, no data set, can resolve the central questions that confront every human being: How to live? What to do? How to face death?
The answers to those questions aren’t out there in the world, waiting to be discovered. They aren’t resolved by “knowledge production.” They are the work of being, not knowing — and knowing alone is utterly unequal to the task.
By taking over the rote work of knowledge production, Burnett argues, AI will unshackle the humanistic quest for understanding. “This is the pivot where we turn from anxiety and despair to an exhilarating sense of promise,” he writes. “These systems have the power to return us to ourselves in new ways.” Practically, this means disciplines can move away from the reductive notion of “productivity” measured by the volume of published research, and toward valuing scholarly inquiry for its own sake. At the same time, the nature of teaching will change because students can no longer be made to read or write if they aren’t motivated to do so.
On the surface, this is an attractive prospect. There’s no doubt that the “publish or perish” dynamic is constraining the academic imagination and pushing certain forms of knowledge to the margins. It’s also true that measuring success by the volume of outputs rather than the quality of outcomes is a classic sign of a broken incentive structure. Academia in general and the humanities in particular do have a “knowledge production” problem.
It’s also heartening to imagine a university where every student is there because they want to be — because they’re intrinsically motivated to learn, to ask the deepest questions with genuine curiosity, to grow. I know how defeating it is to try to teach when most students are only enrolled because getting the credit is a stepping stone to some other goal. We all want a room full of students who find our subject as inherently valuable as we do and who have the passion and skill to excel on their own merit.
The problem with this vision is that it’s exclusionary. In a way, it’s a return to a more classical academy: one oriented around the love of wisdom and insulated from economic pressures, yes, but also one that assumes wisdom is the pursuit of the few. In the present context, embracing automation means fewer jobs for knowledge workers in an already declining market. And what to do with all the students who aren’t intrinsically motivated to learn but need a post-secondary education to make it in the workforce? Will we let them graduate uninformed and functionally illiterate with a paper degree, or will the AI-induced purification of the academy filter them out altogether?
I absolutely sympathize with this way of thinking. I’ve joked to friends that sometimes when I grade papers, an evil voice in my brain starts chanting make the university elitist again! But I try to squash that voice because I recognize that there’s no going back without leaving a lot of people behind — especially when our social safety net is so full of holes (automation can be emancipatory, but we haven’t exactly socialized the means of production here, if you catch my drift). We still need to prioritize inclusion, even if it comes with tradeoffs.
The potential benefits of AI in terms of promoting equity need to be an explicit part of this conversation. For instance, should my assessment of an essay written with AI change if I know the student has a learning disability or a mental health issue, or speaks English as a second language? AI tools could serve as a form of accommodation for these students, helping them participate on a more equal footing with their peers. There are also documented differences in AI use by gender and race/ethnicity among Canadian students, suggesting that AI policies will affect inequalities along those lines too.
I don’t delude myself into thinking that anyone excels purely “on their own merit.” How can I disdain a technology with the potential to break down barriers, even if I believe it’s lowering the overall quality of education in other ways? There’s always another voice in my brain chanting, no less forcefully than the first, democratize education!
Beyond the equity issues, how can I criticize students who cheat because they don’t want to be there but need to get a degree because of structures beyond their control? I don’t want to ruin anyone’s future. And besides, penalizing students won’t make them care, and it certainly won’t put the AI genie back in the bottle.
I’m not writing this to shame anyone, or even to try to convince students not to use these tools — although I think it’s a good idea for all of us to reflect on what our education means to us and how AI fits in. The best I feel I can do as a TA is try to breathe some oxygen on the little spark of curiosity and self-confidence that each student has, and to lead them by example to appreciate the value of critical thinking and the written word. I joke that I’m trying to save their souls one by one.
This is, however, not the best that we can do collectively. It isn’t enough for faculty members and university administrators to debate course policies, although this certainly matters too. We need to confront the deeper identity crisis in higher education, one that generative AI did not create but is throwing into especially stark relief: nothing less than a battle over the soul of the university.
In many ways, higher education still appeals to the classical ideal of education as an end in itself; Walsh cites, for example, Columbia University’s lofty description of its own curriculum as “intellectually expansive” and “personally transformative.” At the same time, universities need money, and many of the people who control the money want higher education to be a training ground for the workforce, shaped in accordance with the practical needs of society or the nation.
In general, higher education is currently failing to do either of these things well. On the one hand, the university has become a sort of undergraduate factory where students feel like numbers and commodities, and where well-rounded education is being subordinated to economic demands — including through the systematic dismantling of the humanities. On the other hand, there’s a widespread feeling among students that their classes are irrelevant to their lives and aren’t preparing them well for the future. Mass cheating with AI is the symptom here, not the cause, although it threatens to create a circular relationship. Both the intrinsic and instrumental value of higher education are in jeopardy, and neither banning AI nor embracing it will get to the heart of this crisis.
I don’t have the answers, but I do believe neither ideal is sustainable on its own. The university as an insulated arena for intellectual and personal self-development is a humane ideal, but it was made for a bygone era and tends toward exclusion. The university as a practical training ground is an ideal with great potential to lift people up and advance the public good, but taken to the extreme, it represents a profound loss in our collective capacity for meaning-making, cultural creation and free expression. I think we should strive to combine the best of the two while recognizing that both ideals are constrained by the university’s financial structure and by broader economic policy.
Whatever happens, the university will never be the same, here at UBC or anywhere else. Although the vision for its future is contested, I want to echo Burnett’s idea that we can and should reframe the present crisis as an opportunity to reimagine what higher education could be. Generative AI is a serious disruption, but this crisis has been a long time in the making. If we’re being honest with ourselves, we need to treat AI less as a random shock to the system that needs to be contained, and more as an inflection point: a moment of reckoning in the history of the university, destabilizing but potentially transformative.
This is an opinion article. It reflects the contributor's views and does not reflect the views of The Ubyssey as a whole. Contribute to the conversation by visiting ubyssey.ca/pages/submit-an-opinion.