A recent New Yorker article by Hua Hsu, "What Happens After AI Destroys College Writing?", illustrates many of the social threats posed by the widespread adoption of AI in higher education.
This is all indeed true, Bob, although I wonder whether your fear that schools have essentially given up shouldn't be modified to disappointment that they have. Such disappointment is not cynicism if it is accompanied by defensive measures in one's own classroom, by speaking against AI's ethical erosion with colleagues wherever appropriate, and by attempting to form collective solutions, since individual ones are too weak in the long run. The "inevitability" narrative is false, but the probability that AI will invade ever more educational nooks is high. Some cases are so brazen that one is caught off guard: at UCLA, for instance, a class will use an AI-generated textbook in the fall (https://cmrs.ucla.edu/news/comparative-lit-class-will-be-first-in-humanities-division-to-use-ucla-developed-ai-system/).
A large part of the trouble is that there are so many hostile factors in higher education now, especially for non-tenure-track faculty, as a recent special issue of American Association of Philosophy Teachers Studies in Pedagogy documents (https://www.pdcnet.org/collection-anonymous/browse?fp=aaptstudies&fq=aaptstudies/Volume/8990%7C10/). Especially worth a look is editor Liberman's annotated bibliography (https://www.pdcnet.org/collection/fshow?id=aaptstudies_2025_0010_0258_0316&pdfname=aaptstudies_2025_0010_0258_0316.pdf&file_type=pdf). Liberman sorts the hostilities into four broad categories: governmental policies, working conditions, classroom environments (of which AI forms its own subset), and professional norms and values. The list is depressing, and I can understand why many colleagues become cynical in the face of such hostilities.
There are some signs of hope, such as Yondr's increasing popularity in K-12 schools, but I have yet to see an impressive tech-free space at a college or university. The majority of faculty are still on our side, as it were, but only just, at 54%: according to a Chronicle survey, the other 46% believe that AI will improve higher education overall in the next five years. Unfortunately, 78% of administrators think it will too.
With respect to our "getting stupider" for about a decade now, I postulate that the force at work is "digitality" rather than AI per se. Digitality is, if I may, a hybrid medium between literacy and orality. I'm fairly convinced by the Toronto school of media theorists here that "the medium is the message" and that literacy is better for the development of rationality than either orality or digitality. Thus, when our communications move from literate media to digital media, rationality will predictably decrease. But it may be that a global decline in education is also underway; one is moved to posit global factors, since the OECD study includes participants from 30 nations.
I'm not sure whether you would call this cynical, but as I read the evidence right now, I'm looking not at a mere cultural problem but at what increasingly seems to be a civilizational malaise. Whether cheating, lying, and dishonesty are causes or symptoms I don't know, but they appear so rampant at the moment that it is believable that the pursuit of truth itself is in crisis.
Thanks for your thoughtful reply. I agree about the role played by digital media in making us less intelligent and less critical. The trouble is that so many different forces are at work at once, all leading to the same social degradation.