A recent New Yorker article by Hua Hsu, “What Happens After AI Destroys College Writing?,” illustrates many of the social threats posed by the widespread adoption of AI in higher education. As I argue in my book The Global Solution to AI, we have to examine the effects of outsourcing our thinking to a machine that lacks a conscience. In other words, what is the result of relying on something that mimics human discourse yet does not fear dying and does not care about the harm done to others?
In his many discussions with students and faculty, Hsu reveals how the ethical behavior of these individuals is compromised by their ambivalent use of AI to do all or some of their work for them. Many of the people interviewed acknowledge that they think it is probably wrong to use ChatGPT to write for them, but they are afraid of being outperformed by everyone else who is using the technology. Moreover, as the article reiterates, many students see college as a simple transaction in which one pays money for grades, degrees, and credentials. Thus, they are already being socialized not to care about learning and to prioritize only earning.
Many of the faculty quoted in the article appear to go out of their way to rationalize the widespread use of a technology that undermines education itself. Some simply think that it is an inevitable and uncontrollable development that cannot be stopped or policed. Others admit that they like the way it saves time and effort, freeing them to focus on other things. Here we find the classic degradation of education within higher education, as teaching and learning often take a backseat to other activities.
We also encounter administrators whose main concern appears to be outcompeting rival institutions. Since no one thinks the machine can be stopped, the only choice is to use it better than everyone else. This sentiment is a form of cynical irony, which is common in higher ed: For instance, schools know that college rankings have no real value, but they spend huge sums trying to outrank others. They also know that large classes are usually ineffective learning environments, yet they cannot resist their cost-effectiveness. Likewise, universities know that they should not judge their non-tenure-track faculty by biased, unscientific student evaluations, but once again, it is so cheap and fast to offer this type of superficial quality control.
Although it is hard to critique these institutions during a time when they are being attacked by Trump and others for political purposes, it is still necessary to defend the core principles of instruction and research. If we do not teach students how to pursue truth through the use of their own critical faculties, then how can we justify the ethical social value of these institutions?
Hsu quotes one professor jokingly saying, “If you’re gonna just turn in a paper that’s ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach.” Within this hopefully ironic statement, we find the human drive to use as little mental and physical energy as possible. Freud called this drive the death drive because he found that pleasure is defined by the avoidance or release of mental and physical tension. In other words, we are driven to outsource our minds and bodies so that we do not have to deal with the anxiety caused by the generation of effort and desire.
As the Old Testament warns, people would rather live in ignorant bliss than eat from the tree of knowledge, and AI helps us to achieve this human goal by freeing us from thought and effort. However, Freud also argued that learning always involves unpleasure since we have to replace the pleasure principle with the reality principle. If teachers do not motivate their students to take the difficult path of actually thinking and learning, then all is lost for not only education but society itself.
Hsu points out that across the developed world, literacy rates have been decreasing since 2010, and that is before the current AI onslaught. People are actually getting stupider on a global level as the ability to read, write, and do math regresses. Of course, this lack of critical thinking skills enables the spread of conspiracy theories and the power of autocratic leaders. It also normalizes sociopathic behavior as people learn that the only thing that matters is taking advantage of situations for one’s own personal gain.
Fortunately, there are some very simple ways to respond to this crisis in education. The first step is to remove virtually all technology from the classroom: Teachers need to make sure that students put down their phones and close their laptops. Instead of allowing students to go off into their own private worlds, instructors should focus on public discussions of course content. Moreover, to document learning, we need to make students write in class and help them learn how to learn. It is also important to find ways to meet students on a one-on-one basis to keep them engaged and motivated.
As I have stressed before, universities and colleges know that they have a student mental health crisis, which is due in part to the overuse of smartphones and computers, yet these schools continue to increase their reliance on these technologies. The push to incorporate more AI into the classroom and beyond will only heighten these psychological issues. Moreover, as we saw in the COVID-era move to Zoom, educational technology tends to alienate students and reduce their learning. It has also been shown that the more students rely on AI, the less able they are to read, write, and think critically.
I fear that the main reason schools are not doing the simple things to help their students learn is that they have essentially given up and are only going through the motions of education. The institutions are themselves lazy and short-sighted, having lost their way in a sea of competing interests and activities. Hsu himself demonstrates what happens when teachers and schools fall back into the default mode of cynicism and apathy: “When the school year began, my feelings about ChatGPT were somewhere between disappointment and disdain, focussed mainly on students. But, as the weeks went by, my sense of what should be done and who was at fault grew hazier. Eliminating core requirements, rethinking G.P.A., teaching A.I. skepticism—none of the potential fixes could turn back the preconditions of American youth. Professors can reconceive of the classroom, but there is only so much we control. I lacked faith that educational institutions would ever regard new technologies as anything but inevitable.” What I find in this statement is a fatalistic impulse to stop the fight against childish ignorance and laziness. Perhaps we have spent so much time idealizing young people and catering to their vulnerable self-esteem that we have simply given up the roles of parents, teachers, and respected leaders. Or perhaps the culture has enabled adults to stick with their most self-destructive, anti-social tendencies.
This is all indeed true, Bob. Although I wonder whether your fear that schools have essentially given up shouldn't be softened to disappointment that they have. This is not cynicism if it is accompanied by defensive measures taken in one's own classroom, by speaking out against AI's ethical erosion with colleagues wherever appropriate, and by attempting to form collective solutions, since individual ones are too weak in the long run. The "inevitability" narrative is false, but the probability that AI will invade ever more educational nooks is high. Some instances are so daring that one is caught off guard. For instance, at UCLA a class will use an AI-generated textbook in the Fall (https://cmrs.ucla.edu/news/comparative-lit-class-will-be-first-in-humanities-division-to-use-ucla-developed-ai-system/).
A large part of the trouble is that there are so many hostile factors in higher education now, especially for non-tenure-track faculty, as a special issue of American Association of Philosophy Teachers Studies in Pedagogy recently documents (https://www.pdcnet.org/collection-anonymous/browse?fp=aaptstudies&fq=aaptstudies/Volume/8990%7C10/). Especially worth a look is editor Liberman's annotated bibliography (https://www.pdcnet.org/collection/fshow?id=aaptstudies_2025_0010_0258_0316&pdfname=aaptstudies_2025_0010_0258_0316.pdf&file_type=pdf). Liberman sorts the hostilities into four broad categories: governmental policies, working conditions, classroom environments (of which AI forms its own subset), and professional norms and values. The list is depressing, and I can sympathize with why many colleagues become cynical in the face of such hostilities.
There are some signs of hope, such as Yondr's increasing popularity in K-12 schools, but I have yet to see an impressive tech-free space at a college or university. The majority of faculty are still on our side, as it were, but only just: according to a Chronicle survey, 46% believe that AI will improve higher education overall in the next five years, leaving a bare 54% who do not. Unfortunately, 78% of administrators share the optimists' view.
With respect to our "getting stupider" for about a decade now, I postulate that the driving force is "digitality" rather than AI per se. Digitality is, if I may, a hybrid medium between literacy and orality. I'm fairly convinced by the Toronto school of media theorists that "the medium is the message" and that literacy is superior to either orality or digitality for the development of rationality. Thus, when our communications move from literate media to digital media, rationality will predictably decrease. But it may be that a global decline in education is also underway. One is moved to posit global factors, as the OECD study includes participants from 30 nations.
I'm not sure whether you would call this cynical or not, but as I see the evidence right now, I'm looking not at a mere cultural problem but at what increasingly seems to be a civilizational malaise. Whether cheating, lying, and dishonesty are causes or symptoms I don't know, but they appear so rampant at the moment that it is believable that the pursuit of truth itself is in crisis.