Diversity statements are a dime a dozen these days. Now a standard part of the application packages for professorial appointments at Canadian universities, these statements (alongside cover letters, CVs, teaching reviews, sample syllabi, etc.) require candidates to affirm the principles of diversity, equity and inclusion (DEI) in higher education. Those keenly committed to DEI initiatives regard diversity statements as an indispensable tool for assessing whether a prospective candidate has a sophisticated grasp of institutional priorities. Cynics in the academic world view them as make-work projects whose lack of subtlety smacks of a moral purity test and is expressly hostile to the work of higher education.
Regardless of one’s viewpoint, an original diversity statement is hard to come by. As a quick Google search reveals, the best advice on diversity statements lacks much in the way of diversity. So too do the templates exhorting applicants to be honest and original while largely reproducing an uncontroversial set of rhetorical commitments: centring the de-centred voices of marginalized groups, exploding canons of taste, implicit or explicit critiques of colonialism, creating “safe spaces” in the classroom, aligning oneself with an historically disenfranchised community, and guarding against microaggressions.
In March of this year Patrick Hrdlicka, Professor of Chemistry at the University of Idaho, asked the world’s most famous AI platform, ChatGPT, to produce a sample diversity statement. Given the reputation of OpenAI’s large language model (LLM) for producing passable prose, it’s not surprising Hrdlicka’s prompt delivered an adequate DEI paper. ChatGPT’s highly readable diversity document captured the now-familiar catchphrases of universities, corporations and government bureaucracy: diversity is key to success, innovation depends on hearing every voice, we must be active listeners intentionally creating inclusive spaces, DEI is a way of life not just a way of work, and more. No critic of DEI initiatives himself, Hrdlicka concludes his brief post about the experience by wondering whether new ways of measuring DEI commitment may be necessary in an AI-saturated world.
Concerns over AI saturation in the world of higher education had been echoing throughout the winter and into the early spring of 2023, though less with respect to AI and job applications than to student submissions. AI boosters hailed the new technology for its potential to help students struggling to produce essay drafts, outlines and thesis statements. Anti-AI “Cold Warriors” saw it as one more devastating technological blow to higher education that would undermine their efforts to help students acquire the habits of free and independent thought. Both sides were trying to determine appropriate methods of evaluation in a world where the production of AI texts by stressed, confused or uninterested students may soon expand from a trickle to a flood.
But while the debate over AI use in higher education – including in the classroom – has a certain practical urgency, the deeper risk lies in missing the extent to which AI is not merely a new tool but also a symptom of well-established trends in Canadian education. Namely, a largely unreflective accommodation of new technologies (even when they demonstrably undermine habits of learning) and a growing technocracy shaping the culture of campus conversation. From this perspective, ChatGPT’s aptitude for producing moralizing bureaucratic newspeak alongside passable papers on Shakespeare – largely inoffensive and generally uninspired – points to an existential crisis facing higher education in Canada.
Campus Homogeneity
The death of the ideal university – a community of scholars devoted to learning – has long been lamented, at least in the context of the humanities. Well over 100 years ago Max Weber declared the end of the “universitas litterarum.” In its place had arisen, already by 1907, a “capital-intensive, bureaucratically organised enterprise.”
Weber’s insight at the turn of the 20th century is remarkably apposite early in the 21st. Anxiety about ballooning university administrations across North America as well as the corporatization of higher education has become commonplace. So too has concern that top-heavy technocratic institutions are constitutionally oriented towards narrowing the scope of academic inquiry. And this is to say nothing of plummeting enrollments in traditional humanities disciplines (which typically comprise English, history, philosophy, Classics and languages).
From the early days of the Heterodox Academy in the United States, to the recent conference held by Heterodox Canada, to the 2022 study by the Macdonald-Laurier Institute that found a distinct lack of viewpoint diversity at Canadian universities, the case that the academy has become politically one-sided seems hard to deny, despite the best efforts of many teachers to avoid the ideological fervour of modern administrations. As Peter Wood, president of the National Association of Scholars in the U.S., noted recently, campuses have become places “where certain ideologies are so far in the ascendency that they cannot be discussed, let alone criticized.” Wood’s claim is confirmed by the Heterodox Academy’s annual student surveys on campus self-censorship, which show that students hesitate to participate in entire categories of discussion – politics, race, religion – especially if they are politically conservative.
The current ubiquity of DEI initiatives is indicative of the modern administrative university’s imposition of political commitments and even pedagogical priorities. So, for example, the institution adopts binary oppositions between white and “racialized,” settler and Indigenous, cisgender and queer, despite at least three decades of professorial exhortations inviting students to reject oppressive binaries in favour of their deconstruction.
Perhaps even more worrisome is the politicized reading of texts, oscillating between “Marxist,” “feminist” and “postcolonial” frameworks. This practice not only undermines the diversity and integrity of the positions themselves but reduces the work of reading to the utilitarian imposition of predetermined categories. The main problem is not that these categories introduce diverse perspectives but that their application is not nearly diverse or nuanced enough. All of this creates a climate in which the work of education feels increasingly oriented, implicitly or explicitly, to the production of political activists.
Often missed in debates about the intellectual climate of North American universities is the extent to which the unreflective accommodation of modern technologies is facilitating this “closing of the Canadian mind,” especially in the humanities. This is not only because interventions like ChatGPT are adept at offering what the modern university often demands – research keyed primarily to patterns of oppression – but because the technologies accentuate the managerial ethos of the technocratic institution. They become complicit in the hollowing out of human beings by inviting them to outsource human memory to search engines and human thinking to chatbots.
“Our current dominant modes of discourse have already been largely emptied of meaning,” Jeffrey Bilbro, an American professor and editor of Front Porch Republic, asserted in a recent article. “It’s a small step from employees spinning out endless [search engine optimization] content and social media influencers chasing eyeballs to LLM-written text and deep fakes.” Bilbro pointed to an email sent out by Peabody College’s DEI office after a shooting at another school, calling for “a culture of inclusivity.” It was written by ChatGPT. As Bilbro put it, “Such bureaucratic emails are already formulaic and essentially meaningless, and the fact that an LLM could generate a passable email exposes the pre-existing vapidity of this discourse.”
Despite such missteps, many in our universities seem happy to embrace the technology as a labour-saving device and pedagogical tool. Some professors in Canada argue that ChatGPT is the swiftest means to jumpstart the writing process and to save students the anxiety induced by the blank page. Others note that the errors which punctuate AI text offer an opportunity for students to exercise their editorial and research skills. Daniel Lametti, Professor of Cognitive Psychology at Acadia University, last year wrote that he hopes his students will employ AI in the same way they use a thesaurus, to “make the task of writing a little easier.”
Evidence of the move to embrace AI in higher education practically fills the pages of Canada’s flagship university journal, University Affairs. It has run a stream of articles advocating the “ethical use” of AI by students and its thoroughgoing incorporation into teaching, class preparation and more. At least one of these articles employed AI to generate its talking points. And while the same publication has published some notes of caution, the concerns raised fail to transcend basic questions of ethics and use, leaving untouched the more profound issue posed by this technology: what kind of a world does it produce?
Whither Education – And Thinking Itself?
Steady exposure to vapid discourse is perhaps no new thing. But to a remarkable extent, North American culture (higher education in particular) has been inundated with it through the powerful twin technologies of social media-fuelled smartphones and the laptop computer. These devices have altered teaching, learning and social, political and even familial dynamics through the recalibration of what the writer and social critic Neil Postman calls the “media ecology.” Echoing Marshall McLuhan, Harold Innis and others, Postman notes that “a medium is not a neutral mechanism through which a culture conducts its business. It is by its very form a shaper of values, a masseuse of the senses, an advocate of ideologies, an exacting organizer of social patterns.”
The appearance of sophisticated AI is amplifying the power of these devices. The “worldview” of ChatGPT, for example, is characterized by an intensification of at least two features of modern digital technologies. On the one hand is a relentless and largely indiscriminate accumulation of information through the “scraping” of the internet. The AI then repackages the information according to predictive models that produce mash-ups of web-based content. On the other hand comes accelerated “cognitive offloading” (i.e., letting technology do your thinking for you) already characteristic of internet search activity.
The problem with the former for the humanities is its reduction of the human being to a largely passive prompt-generator posing questions for the encyclopedic software. The student or teacher using these technologies is thus incorporated, invisibly for the most part, into the system-assumptions of the medium. Students are transformed into consumers of algorithmically mediated information whose sources, assumptions and accuracy remain unknown.
The problem with the latter issue, cognitive offloading, is its reduction of thinking to the passive consumption of prepackaged ideas, contributing to the further degradation of a fundamental goal of all education: thought. In its place is the “computational model” of human thinking, as Matthew B. Crawford, a research fellow at the University of Virginia, recently observed. AI promotes the idea that computer activity is akin to human thinking and so human thinking, by extension, must be like computing: a simple process of consumption and regurgitation.
In one of his many rich and beautiful essays, the historian of technology L.M. Sacasas notes that AI generation produces “lonely surfaces.” These are familiar to anyone who has scrolled on an iPhone or surfed the web, their attention neither wholly present nor entirely absent. Parents see it with their children (and do it themselves) when “double screening” – dabbling on their phones while another device streams entertainment or news. As Sacasas notes, what matters here is not the content or even the power of a particular technological innovation. What matters are the habits produced by the mediating technologies that induce (almost imperceptibly) patterns of scanning that become a normative mode of attending not just to online content, but to texts and even to people.
The work of old-fashioned reading cultivated patterns of attention that presuppose depth, not only in creative works but in people and the world. Current technological mediations do not erase the depths, but they do condition a person to stop considering them. This not only induces boredom and the need for a steady stream of external stimuli, but erodes the inner dialogue that is the necessary precondition for thinking and questioning. At its far end it exacerbates the loneliness already rampant among university students. And loneliness is, in the words of Hannah Arendt, the bitter fruit of a life in which thought has become impossible because the dialogue with oneself that constitutes human interiority has been quashed.
Whatever the ideological biases of ChatGPT – and even if a bias-free technology were somehow available – the social habits engendered by this most recent time- and effort-saving device undermine the care that is fundamental to the moral formation implicit in all humanities education. To cite Sacasas again, the virtue of care is undermined by the outsourcing of thought to generative technologies. Care, he writes, is “what issues forth in meaningful knowledge of the world and others. Care is ultimately what transforms the quality of our involvement and engagement with the world so that we pass from ‘getting things done’ to living.” And all the while, precisely because of the passivity it induces, the AI facilitates the production and reproduction of specific worldviews.
The data substantiating the detrimental effects of social media on young people are so well-known as to hardly require repetition. There is also a proliferation of studies on the deleterious consequences of smartphones and laptops in the classroom.
Despite the evidence, the same institutions which allied themselves with the federal and provincial governments’ authoritarian response to Covid-19 by exhorting faculty, staff and students to “follow the science” largely refuse to do the same regarding new technologies on campus. From countless studies to popular articles, there’s exhaustive evidence chronicling the detrimental effects of wired classrooms. But the universities’ toleration and adoption of technologies whose habit-forming applications are hostile to free thought is astounding. With the exception of a few outliers among the professoriate, opposition is reduced to meagre expectations of “ethical use.”
As the institutions tasked to teach embrace technologies antithetical to humanities learning, the value of a humanities education becomes even less convincing. For parents, the question becomes: why send your kids to university if they will be given a poor education supported by technology that does ever more of their labour for them?
Protesting Too Much?
There is yet another problem in the activist university’s eager embrace of the ChatGPT world: the material production of these technologies can run expressly against the moral imperatives the institutions profess.
The intellectual property and copyright issues and the privacy violations that are part and parcel of ChatGPT’s information accumulation, for example, have been largely sidestepped in what amounts to a willful subordination of the institution to the corporatization of learning. “AI image and text generation is pure primitive accumulation: expropriation of labour from the many for the enrichment and advancement of a few Silicon Valley technology companies and their billionaire owners,” charges the artist and writer James Bridle. “These companies made their money by inserting themselves into every aspect of everyday life, including the most personal and creative areas of our lives: our secret passions, our private conversations, our likenesses and our dreams…They are selling us back our dreams repackaged as the products of machines, with the only promise being that they’ll make even more money advertising on the back of them.”
Even more damningly, the well-publicized use of labour in the Global South to tag obscene material scraped from the internet for OpenAI’s chatbots, in what could only be described as trauma-inducing exploitation, runs counter to the professed commitments of Canadian universities to the DEI principles they demand of employees and the purity increasingly required of historical figures. These ethical tensions would be less concerning if institutions of higher education in Canada were not so often and so vocally inclined to publicly celebrate their presumed virtues.
Light Enough for the Next Step
Technological developments like ChatGPT reveal deep tensions in the provision of post-secondary humanities education in Canada. The chatbot’s ability to churn out university diversity documents indistinguishable from “the real thing” on the one hand indicates the extent to which DEI initiatives have become not only normative but increasingly formulaic, and on the other hand suggests that requiring prospective professors to prepare such statements inscribes political priorities into every hiring decision. More egregiously, perhaps, this technology runs counter to the professed values of most Canadian universities and exacerbates the technologically mediated habits most hostile to developing the capacities critical for learning: attention and care.
Thankfully, higher education in the humanities does, for at least some students, remain part of their path to a deeper and more profound human flourishing. And there are still remarkable teachers who continue to expose their students to the heights of human creativity and reflection. But university administrations face the urgent question of whether their complicity in allowing habits of attention, curation of information and cultural production to be mediated by a handful of technology corporations will compromise their ability to attract students and their families. Though the credentialling power of higher education still carries significant weight for many Canadians, surely parents will become increasingly skeptical of institutions whose pedagogical promises are compromised by political agendas and practices unsupportive of deep learning.
As the previously cited Postman argues, the excitement over technological development and the ready adoption of new technologies as if they are inevitable is at minimum naïve, if not hostile to human ends. The 20th century, Postman notes, saw more technological developments than all previous centuries combined, yet was also a time of slaughter, tyranny and totalitarian ideology: “Is it not possible that behind the noon-day brightness of technological ingenuity there lurks something dark and sinister, something that casts a terrible shadow over the better angels of our nature?”
Postman’s question invites us to ask where education can go in a culture saturated by novel technological developments that treat people “as raw material for a kind of social cybernetics.” There is no easy answer other than, perhaps, to insist that humanities education remain an education in the questions that allow one to live a good life. As the famous American teacher and writer Eva Brann notes, the investigation of the good life depends on any number of pedagogical commitments, but two seem especially pertinent in an age of politicized institutions and chatbot technology.
One is that students can only engage with human questions about human life if they are supported by teachers who conscientiously avoid “domination over students’ thoughts, be it by ‘raising their consciousness’ or by driving them into ‘role-playing’ or by pursuing incessant one-sided critiques of whatever is at hand or by intruding political opinions, left or right, or by leaning in any way on students for anything but that they should read their assignments and speak their minds thoughtfully, articulately and civilly.” By Brann’s lights, AI is a poor teacher, packaging and repackaging tepid insights that are neither especially articulate nor especially thoughtful, while adding further to the technologies and interventions that train students away from reading their assignments.
The other is that for students to make determinations in life about the good and the bad and what makes for human dignity and what erodes it, we should all recognize that while “almost everyone had better become a this or a that – a research physicist or a licensed plumber…parents and the world owe the young some (let it be four) clear years for becoming not a this or a that, but for learning to be a human being, whose powers of thought are well exercised, whose imagination is well stocked, whose will has conceived some large human purpose, and whose passions have found some fine object of love about which to crystallize.”
This is not the current language of the technocratic institution. None of this can be facilitated or provided by ChatGPT. And there is no roadmap for the recovery of this vision of education in Canada, or for sustaining it where it clings to life. Still, to paraphrase C.S. Lewis, though the future seems dark, there is often light enough for the next step.
Christopher Snook is a lecturer in the Faculty of Arts and Social Sciences at Dalhousie University in his hometown of Halifax, Nova Scotia. A widely published poet, he is the author of the 2018 collection, Tantramar Vespers.
Source of main image: Pexels.