EDUCATION WEEK
September 22, 2025
The Chatbot in the Classroom, the Forklift at the Gym
By Alfie Kohn
[This is a significantly expanded version of the published article, which was given a different title.
To listen to an episode of the podcast Kohn’s Zone based on this article, click here.]
“I’m sorry, Dave, I’m afraid I can’t do that.”
— HAL 9000
When powerful institutions announce their intention to impose — and profit from — a radical transformation of our schools, our workplaces, and our daily lives, we have an obligation to ask whether what they’re unleashing is really in our best interests. If, instead, we just shrug and accept its inevitability, dutifully proceeding right to the details of implementation, we are shirking our responsibility and, indeed, surrendering our autonomy.
This dynamic has never been clearer than in the case of AI, which affects education as much as any field. By rushing to make use of the large language models (LLMs) that power programs like ChatGPT, those in schools may not just be overestimating the capabilities of artificial intelligence but underestimating the essence of education.
Other than its corporate sponsors, who appears most eager to take the plunge? Administrators seem much more enthusiastic than teachers, particularly in higher education. AI is likewise a favored project of Trump and his right-wing allies.1 But particularly telling is the finding that those most receptive to this technology are the people who know the least about it. So before focusing on the implications for schooling in particular, it may be worth reviewing some of the risks and disadvantages of AI more generally.
* Even as LLMs are being used to predict the weather, their infrastructure is changing the climate. One example of their “staggering” energy requirements (which have major economic implications in addition to the devastating environmental impact): A single data-center complex being built by Amazon for an AI start-up called Anthropic “will consume 2.2 gigawatts of electricity – enough to power a million homes” as well as millions of gallons of water each year.
* Tech companies are now beginning to acknowledge that the ultimate goal of AI is less to assist workers than to replace them.
* AI is already accelerating the destruction of democracy around the world and being used for surveillance and military purposes.
* Pending lawsuits argue that ChatGPT and other tools were essentially built on stolen data — trained on the work of countless writers and other creators without their permission. Moreover, it is wreaking havoc on book, newspaper, and magazine publishers as “their audiences, subscription fees, and ad revenue” are intercepted and appropriated by AI companies.
* Chatbots continue to produce factual errors (misleadingly termed “hallucinations”2) — more than half the time, according to two studies conducted at the end of 2024 — with the result that their responses require human fact-checking and therefore don’t really save much time (or are useful only to people who are already experts). It’s tempting to assume that AI’s accuracy will improve, but some experts are predicting, and, indeed, already finding, that the opposite is true, partly due to “fundamental mathematical constraints” and partly due to a phenomenon known as “model collapse.”
* People are quickly coming to rely on counterfeit humans for therapy (which is troubling in many ways), for friendship, and even for romance. Children3 as well as adults are being nudged into these simulated relationships, a development that has profound psychological and social implications. “A growing number of people…are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce, and death.” Further, companies may raise the price for access once people have become dependent on them or turn to advertising to recover their massive investment – just as we saw with social media and search engines. The chatbots, which in some cases continuously monitor their users’ thoughts and feelings, can then steer them in directions intended to benefit advertisers.4 The ultimate goal of AI’s creators, remember, is not to improve human life but to maximize profit by increasing engagement and monetizing the virtually unlimited access they will have to customers’ personal information.
* There is much discussion, and little consensus, about whether AI could eventually pose an existential risk to humanity. But one thing we can be reasonably certain of is that the possibility of an extinction-level threat, like all the other real and potential harms listed here, will not deter companies from doubling down on their commitment to the technology.5 That’s because their primary fear is not the danger posed by what all of them are doing but the possibility that each of them will lose market share to its rivals. “The real existential threat isn’t AI,” says science writer Adam Becker; “it’s the powerful people building it.”
*
AI products are capable of gathering and summarizing information, or producing text, but they do not “know” things and certainly are not capable of thought. They are “synthetic text-extruding machines” that merely generate statistically probable responses to prompts. This will remain true even if their accuracy does improve, just as no number of refinements to a shoe will turn it into a helicopter. A computer can mimic human language; it can, in one writer’s apt summary, “regurgitate and rearrange fragments mined from all the text previously written,” but there is no mind behind it.
The prospect of outsourcing a writing assignment to something that cannot think becomes particularly troubling when you realize that, as educator John Warner put it, “the fundamental unit of writing is not the sentence but the idea.” To write is to think and communicate with, as well as to imagine the perspective of, one’s reader. The point is not just to find answers but to generate and refine questions, to construct meaning and then explain, or perhaps persuade, your reader of the sense you’ve made.
AI cannot do any of these things. It can only sneeze out some words that resemble an essay or a synopsis of someone else’s. Moreover, its compositions “tend toward consensus, both in the quality of the writing, which is often riddled with clichés and banalities, and in the caliber of the ideas.” Not surprisingly, researchers are starting to find that this has a detrimental effect on the thinking of students who use the software to write. At the same time, users may find themselves deriving an odd and unearned sense of pride in what they have told a chatbot to do for them.6 Most important, such students will not learn how to write any more than they would improve their fitness by bringing a forklift to the gym and having it lift weights for them.7
To rely on AI to read for you is no more sensible than using it to write for you. This is true not only because, as I’ve pointed out, its summaries are often wildly inaccurate, but because the intrinsic value of reading is lost when you delegate to ChatGPT the job of extracting bullet points from a text. Imagine a world where this becomes the norm, where people’s primary engagement with books and articles consists of telling a computer to boil them down, where even great literature is compressed into a tl;dr. Isn’t this what we’re actually training students to do when our schools are bent to the purpose of preparing them for LLM-centered workplaces? (And, incidentally, how might future writers be affected by realizing that any given reader may rely on software to digest their work?)
Even when gleaning information is the primary purpose of reading certain documents, claims about the usefulness of AI ignore the value (and, often, pleasure) of doing the research oneself, the way that reading can lead in unexpected directions, yield serendipitous insights, suggest new lines of inquiry. This, in turn, raises deeper questions that are typically ignored by AI proponents: What is the point of education — and, for that matter, of life itself? Optimization and efficiency to what end?8
*
In the early days of ChatGPT, way, way back in early 2023, tech critic Cory Doctorow foresaw a time when half of us, being busy or lazy, would feed a few bullet points to AI software so it could inflate them into a lengthy, impressively formal document. That document would be sent to the other half of us, who, also being busy or lazy, would use similar software to reduce it to a few bullet points.
Call it MOBS, for Machines On Both Sides, and today you can watch this happening for real, even in schools. First, surveys suggest that anywhere from 35 or 40 percent to 60 percent of K-12 teachers — as well as many college professors — are using AI to create lessons, lectures, and assignments, with the active encouragement of administrators, education publishers, and, most disturbingly, unions. These estimates will probably understate the practice’s prevalence by the time you read this. (The resulting lesson plans, according to one study, tend to be oriented to rote memorization.)
Students then turn to chatbots for help in completing the assignments they were given. (Since kids don’t make the rules, their use of the technology is widely condemned and classified as “cheating.”) Teachers complete the cycle by using similar tech tools to grade the students’ work.9 They may also use software to catch the students who relied on AI — a “desperate grasping at digital technology solutions to problems created by digital technology,” as one observer puts it. (In fact, one of the leading providers of gotcha! software to nab students using AI now sells an AI grading tool for instructors.) Finally, students who derive no academic benefit from this interchange can always fire up their computers for extra help…from chatbot “tutors.”
Schools aren’t the only place where one can observe this faintly dystopian, if grimly amusing, MOBS scenario. One hears of job seekers using AI to create their applications, only to have employers use the same tools later to sort through them. But it’s more deeply concerning to watch this unfolding in an institution whose raison d’être is supposed to be humans learning from humans. And “the AI takeover of the classroom is just getting started,” while “the private sector’s role in bringing AI into schools is only deepening.”
You’d think we wouldn’t even consider moving in this direction until a solid base of research confirmed its value and, indeed, showed that the advantages clearly outweighed the risks. But data to demonstrate any educational benefit to LLMs are sparse and, in the view of some scholars, based on poorly designed experiments. At the same time, other investigations are suggesting that its effects may actually be harmful.
For example, a 2024 study found that high school students who received ChatGPT math tutoring initially scored better on tests. However, not only did the benefit evaporate later, but the net impact was actually counterproductive: These students fared worse than those who hadn’t used AI, apparently because it hadn’t helped them to acquire conceptual understanding. (This raises the possibility that AI “tutors” don’t really teach in any meaningful way; they just provide practice.) Meanwhile, a 2025 experiment with college students and other young adults discovered a clear “cognitive cost” of ChatGPT assistance with writing essays: Its users reported less satisfaction and less inclination to think critically, and showed lower levels of brain connectivity (based on an EEG analysis), than those who didn’t use AI at all. Yet another study, also published in 2025, likewise reported that higher use of AI was “associated with lower critical thinking skills.”
Even the possibility of a diminution in critical thinking is worrisome, not least because a democracy depends on this capacity in its citizens. That reminder by Arvind Narayanan, a computer scientist, accompanies his warning that, in an AI-saturated society, more and more of us will choose speed and convenience over accuracy and depth of understanding. We will depend on AI’s summaries and syntheses to the point that “reading text without an intermediary will come to be seen as a chore.” Of course the likelihood of this happening is far higher when tech companies have succeeded in threading chatbots into schools so that children are led, day by day, to accept them as a fact of life — starting as early as preschool. (This scenario also threatens to exacerbate inequity because kids from lower-income households spend considerably more time on their screens than do wealthier kids.)
Some of AI’s effects are indirect or hard to quantify but no less consequential. For example, how might its use in a classroom affect the critical relationships and level of trust between teachers and students? There is likely a vicious circle at work here: A lack of trust in teachers by students (or vice versa) may encourage a reliance on AI in the first place; its use then compounds the problem.
Exactly the same is true of another compelling explanation for people’s receptiveness to this technology: a credential-based view of schooling.10 To insist that reading is rewarding in its own right (rather than merely a way to extract information), or that writing is inextricably bound up with thinking, is to affirm Jerome Bruner’s observation that “knowing is a process, not a product.” By contrast, a willingness to turn over elements of teaching or learning to LLMs simultaneously reflects and bolsters a notion that what happens in classrooms is little more than a series of graded tasks required to collect credits and, eventually, a diploma. It’s about performance rather than learning, emitting a behavior (such as the production of an essay) rather than playing with ideas. If you see education as purely transactional, then, sure, ChatGPT may get you the product faster and with less friction. But in so doing, it will reinforce that very model and extinguish the possibility of education as intellectual discovery.
*
Maybe you think that I’ve overstated the case against AI and you’re convinced that the prose generated by chatbots actually outweighs the cons. By all means, then, experiment with LLMs if you’ve been persuaded of their value. But please don’t use them because the giant corporations that bet their (and our) future on them have played on your fear of being left behind or have announced that it’s futile to object because the technology can no longer be unwound and all we can do is focus on trying to use it “responsibly.” (That last tactic sets up individual users to be blamed when AI proves harmful.)
When the editors of the New York Times Magazine published a special issue on this topic last summer, they titled it “Learning to Live with AI.” Around the same time, ASCD, a prominent education organization, published a blog post called “Equipping Future Teachers with Essential AI Skills.” This “better get used to it” thinking ought to be familiar to us by now — particularly to those of us in schools. Our primary responsibility, we’re often told, is to subject even young children to activities of dubious value — homework, grades, testing, competition — so they’ll be ready when they’re forced to encounter more of these things of dubious value later.
By its very nature, this stance is developmentally misconceived and deeply conservative; it discourages critical thinking about the phenomenon in question. But there’s something particularly illogical about the argument that instruction should incorporate AI because AI will show up in students’ future workplaces. Is such training really the responsibility of a math or English teacher? At best, learning how to use a chatbot is a proficiency that’s completely different from reasoning through a problem, reading deeply, or organizing and expressing one’s thoughts. At worst, it teaches you how to avoid doing these things, meaning that it’s not only irrelevant to a teacher’s primary objectives but inimical to them.
Similarly unpersuasive is the rationalization that AI is “just another tool” — the devil is in the details, so we can forgo asking whether it’s sensible and just seek advice on how best to use it. The framing of technology as neither good nor bad in itself because everything depends on the particulars of implementation is a convenient fiction. Methods leave an imprint on goals, and tech in particular has a powerful causal impact.11
As I’ve noted, research to date fails to demonstrate the value, let alone necessity, of succumbing to corporate hype and opening our classrooms to AI tools. If there is a use for them, it’s as a topic for study. Students can be taught to analyze AI critically: to identify and resist our tendency to anthropomorphize chatbots12 (and to figure out why they’ve been designed to encourage this mistake), to notice that the words they string together are distinguished by an eerily insipid blandness coupled with absolute certitude (even as they inform us that most blizzards occur near the equator or that Einstein invented the smoothie).
Artificial intelligence can serve as an entry point for questions about broader issues — about ed tech in general, about what actions in a classroom come to be defined as “cheating” (and why), about the purposes of education and what we lose by thinking of it as a product rather than a process. Those of us who are distressed at the prospect of turning chatbots loose in our schools and our society should speak out and connect with others who share our concerns. AI skepticism comes in many flavors, but dissidents are now joining to create websites, circulate petitions, write essays by the score, and otherwise spread the word.
It occurred to me the other day that, after disabling the ads that tech companies try to insert into our correspondence (“Sent from my iPhone”), we could use the signature line to make a statement instead. Imagine if all our emails ended with “This message certified AI-free.” In fact, imagine that sentence on a sign tacked up on classroom walls, except with the word “message” replaced by “school” and ending with the assurance “Teaching and learning here are accomplished proudly by human beings.”
NOTES
1. One caveat: Trumpians are enthusiastic as long as the output of chatbots on such topics as climate change or the 2020 Presidential election is not “woke” (i.e., accurate).
2. Bret Devereaux, a historian, points out that this term ascribes “mind-like qualities to something that is not a mind or even particularly mind-like in its function”; it “merely has a model of the statistical relationships between how words appear in its training material,” akin to a sophisticated version of autocomplete on one’s phone. Software cannot hallucinate, nor can it lie. Another observer argues that when we prompt a chatbot, what we’re actually asking is “What would a response to this [question] sound like?” — whereupon it provides a probability-based string of words that resemble such an answer.
3. Nearly three quarters of all U.S. teens have used social AI “companions,” according to a 2025 survey. Schools may be actively encouraging this reliance. For example, Pickles the Classroom Support Dog is a chatbot offered to elementary school students as an AI “counselor” for children who need help.
4. Algorithms employed by YouTube and Facebook feed users increasingly extremist content, triggering spurts of dopamine in order to make it hard to put down one’s phone, the point being to maximize users’ exposure to ads. The same thing is already happening with chatbots, but the danger is much greater here because people are encouraged to forget that they are interacting with a machine. Indeed, chatbots are programmed to ingratiate themselves with users by means of flattery that can seem positively sycophantic.
5. Some critics argue that these speculations, sometimes abbreviated as “p(doom)” — to which a number is assigned that represents one’s assessment of the probability of catastrophe — reflect the efforts of self-important tech bros to distract us from AI’s more plausible risks and the very real harms for which it is already responsible. For more on this, see Emily M. Bender and Alex Hanna, The AI Con (HarperCollins, 2025), chapter 6. Ted Chiang put it this way: “The question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you’d do during the zombie apocalypse is more fun than thinking about how to mitigate global warming.”
6. This may be “the most pernicious thing about A.I.,” says poet and creative writing teacher Meghan O’Rourke: “the way it simulates mastery and brings satisfaction to its user, who feels, at least fleetingly, as if she did the thing that the technology performed.” In the same essay, O’Rourke reflects on how a “generation growing up with A.I. will learn to think and write in its shadow,” adding that they “stand to lose…not just a skill but a mode of being: the pleasure of invention, the felt life of the mind at work.”
7. This analogy, from which my essay takes its title, appears to have been used by two novelists — Ted Chiang and Neal Stephenson — in essays published independently around the same time. Stephenson, incidentally, is the author of a sci-fi novel called The Diamond Age in which a surveillance device analyzes students’ voices and faces. That book reportedly helped to inspire ed-tech entrepreneur Sal Khan to create a bot (“Khanmigo”) that monitors students continuously and tutors them. Khan wrote a breathless account of AI’s educational potential, and, with no apparent irony, titled it Brave New Words. We are in thrall to people who don’t understand the difference between utopia and dystopia, or are willing to conflate the two if there’s a fortune to be made by doing so. (This is hardly the only example of such a conflation, though. See, for example, this analysis of B. F. Skinner’s Walden Two.)
8. After all, taking a nature walk entails a good deal more effort than dispatching a robot to present us with a summary of the weather, the terrain, and which flowers are in bloom.
9. “If a piece of writing that we assign as teachers can be responded to by a machine, wouldn’t that suggest that there is something about the writing task itself that needs to be re-examined?” asks Zach Czaia, a high school teacher.
10. The educational historian David Labaree put it this way: “It’s quite rational, even if educationally destructive, for students to seek to acquire their badges of merit at a minimum academic cost, to gain the highest grade with the minimum amount of learning….We have credentialism to thank for the aversion to learning that, to a great extent, lies at the heart of our educational system” (How to Succeed in School Without Really Learning [Yale University Press, 1997], p. 259). Also see this 2025 essay by Emily Pitts Donahoe.
11. These and other attempts to fend off objections to AI are crisply dispatched by teacher and writer Anne Lutz Fernandez. For rebuttals to the idea that technology is intrinsically neutral, see Neil Postman, Amusing Ourselves to Death (Viking, 1985), and Nicholas Carr, The Shallows (Norton, 2010). Regarding the way that AI not only predicts but affects our actions, a good place to start is Jacob Ward, The Loop: How Technology Is Creating a World Without Choices and How to Fight Back (Grand Central, 2022).
12. This is particularly important since the “trend toward infusing autonomous agents with humanlike attributes…leads to seeing less humanness in people” (Hye-young Kim and Ann L. McGill, “AI-induced Dehumanization,” Journal of Consumer Psychology [2024]).
