AI on Campus
Why the real danger isn’t the technology but the kind of education we offer
ChatGPT and its many offshoots did not enter our lives gradually. They appeared almost overnight, at an alarming scale and speed. Computers, the internet, and mobile phones each took decades to move from obscurity to everyday use. AI, by contrast, passed from research labs and science fiction into daily life with astonishing suddenness.
In the 1950s, computers were remote and obscure, confined to military-industrial research, before slowly becoming familiar to the public in the 1970s. A decade later, they began working their way into our homes and offices—a process that unfolded over roughly forty years. The internet, again incubated by government research, sprawled into our lives gradually across a decade and a half—from 1990 to 2005—before it became the thing we know and love and use for nearly everything. The iPhone took the world by storm early in this century, but it's not as though we didn't know what mobile phones were. They had slowly percolated into everyday life between 1975 and 2005 before Steve Jobs introduced us to the true meaning of the letter i. Nor was the internet itself new by that point. The iPhone simply put the two together in a way that in retrospect seems obvious.
AI is different. It too has been around for decades, but for most of that time it existed mainly as a research project within university computer science departments. It had no presence in business applications, let alone personal use. Very few people had any idea what AI really amounted to. At most, it was a fanciful notion we encountered only in science fiction. Like fusion energy and flying cars, it was just something that existed in the absolute future – a caricature of progress, suitable for The Jetsons or Blade Runner.
And then, with basically no warning, ChatGPT waltzed into our lives like it owned the place. It became available to the public in November of 2022, and in less than a year it was ubiquitous, upending the established practices of academia, journalism, and advertising. It is now breathing down the necks of mid-level white-collar workers and seems to have set its sights on medicine and law.
It has given many of us whiplash, inspiring adoration and amazement on the one hand, and panic and hatred on the other. It is a new source for fantasies of utopia and doom. It's not hard to find experts in the field predicting the annihilation of the human race as a result of unchecked AI development. While other researchers have judged these claims to be overhyped, nearly all experts seem to agree that this technology will bring enormous social and economic disruption.
I confess I cannot speak to any of this. I have no illusions that LLMs are now, or ever will be, “intelligent” in the sense of conscious, thinking entities. My sense is that even debating this question already involves a category error. As for the horrifying dangers some people describe, these too strike me as overwrought, though of course I can't be sure.
What does seem clear is that this topic calls for seriousness, expertise, and caution at a public policy level. Given that the leading edge of AI research now lies within industry, while academic computer science struggles to keep pace, the need for sound public policy seems all the more urgent. Unfortunately, it is difficult to imagine near-term policy being guided by seriousness, expertise, or caution.
As for the dangers and benefits of AI in the here and now, I do have some things to report. What I'll take up presently is the significance of AI for higher education. I'll do so in a series of essays, since it's too much to cram into a single post; what follows, then, may seem a bit abbreviated.
Right off the bat I should confess: ChatGPT has proven to be a valuable resource for me, and I find it to be a fascinating interlocutor. As an academic, I recognize that I should be ashamed of myself for saying this. For the past two years there has been an uproar over student use of AI. Kids quickly figured out that AI could write their essays for them, or at least polish their own writing to perfection. This does seem bad, I admit. And by now the technology has advanced to the point where there is no practical way to detect whether a paper is AI-generated.
Disturbing as this technological development may sound, its actual significance depends largely on the academic environment in which it is considered. There is little doubt that a student who uses AI to generate a research paper undergoes a diminished educational experience, and that the aggregate outcome will be less-educated students. This is, however, a relative problem. We should gauge it against what actually goes on in the academy these days.
In a degree program devoted to reading, discussing, and writing about poetry, literature, and philosophy, and to learning how to think seriously about politics, ethics, and meaning, student reliance on AI to write essays would be disastrous. In such a program—let’s call it a program of liberal education—writing is not simply for the sake of demonstrating key points about a text or regurgitating the presentation of a lecture or a textbook. It is an opportunity to discover what one actually thinks about a topic or book. The process of writing is discovery. Offloading this work to AI would undermine the very purpose of the education itself, and much of the resulting work could justly be regarded as fraudulent.
Most institutions of higher learning, however, have already abandoned liberal education, including many institutions that still call themselves liberal arts colleges. They neglect, among other things, the subtle virtues of thinking deeply, dialectically, and discursively about the content of primary texts of poetry, philosophy, and the natural sciences. This is a blunt claim, and it is certainly open to dispute. If this assessment is roughly correct, however, the ill effects of AI at such institutions might not be significant. Where student writing is limited to pro forma reporting or perfunctory undergraduate research, it is hard to see what is truly at stake in students using AI to polish, or even generate, their papers. AI would be no more consequential than spell-check.
Thought of in this way, AI only highlights a more fundamental problem in the contemporary university. If students are able to succeed academically by using AI, the project of education was already a failure.
I teach at a small liberal arts college that places great emphasis on dialectical reasoning and on the discipline of writing, understood as a means by which students discover what they truly think about serious questions and the profound ideas they encounter in great literature. If these students were to use AI to write their papers, the damage to their education would be severe. It would defeat the purpose of the writing project entirely.
It is the custom at this college for faculty to meet with students one-on-one to discuss their papers. Moreover, students sit for oral exams every semester, so those who have not been availing themselves of the opportunity to write sincerely are in a poor position to talk with their teachers about their work. As far as I can tell, we have not yet seen ill effects from AI here. (This is not to say there have been no AI-related issues, but so far they have been marginal and rare, involving students who are struggling in other ways.)
I think this suggests something about how we can defend against the feared effects of AI in academia. There is no point in trying to ban AI, but we can approach the project of education in a way that obviates the danger. Institutions that foster serious liberal learning can make an honest case to students for why they should not rely on such technology. Moreover, such a context makes dependency on AI conspicuous: students who lean on AI will come across as weaker and less able to engage in serious conversation about their studies. A culture of liberal learning can thus sustain an ethos of disdain for the abuse of AI. In other kinds of academic settings, cultivating such an ethos would be difficult, and it may not even be especially necessary.
AI undoubtedly poses challenges, including within higher education. But in this context, the challenges may not be the ones we most readily imagine. Where AI introduces genuine tensions in education, it does so, at least in part, by revealing problems that were already there. In this respect, the significance of AI lies less in its disruptive power than in its diagnostic value. The encouraging implication is that these problems are not beyond repair. Institutions committed to authentic liberal education—especially those that treat writing and conversation as disciplines of thought—are well positioned to respond, and to show what education can still be in an age of AI.