(Already) Beaten by the Robots
In education and elsewhere, “hand-wringing” only goes so far
By Leigh E. Rich
“I, too, have found a place, Primus. It is very strange. Human beings lived there once, but now it is overgrown with weeds.” — Helena, a Robotess
Here in the twenty-first century — more than one hundred years after playwright Karel Čapek introduced the term “robot,” with the concept of automated humanoids going back even further — the A.I.s have already won. It doesn’t matter how “dumb” some iterations might still be, or how much promise they hold for improving productivity or medical care: they have beaten us. Almost literally.
This is not their fault, nor is it yours, our students’, or mine. Systemic and other choices (by pioneers and policymakers enjoying much higher pay grades) have ensured that the average person and professional must bend to the robots’ “will.” And the primary path we have been given to travel (in this prostrate position) is hand-wringing.
Whether you are a victim of fraud, disinformation, or revenge porn, the onus for prevention and reparation falls on you. Should you not want high-density data centers in your community’s backyard, you had better mobilize your neighbors for a fight. Can’t figure out why energy and water bills keep creeping higher? You might be subsidizing digital processing. Perhaps that is all well and good if it helps catch someone’s cancer earlier; not as much if it is used to cheat on exams or create an image for a family barbecue flyer. (There’s a deeper conversation about externalities here that is worth debating on a larger scale.)
And those who have parsed the worst of the Internet — trafficked into troll farms or training the algorithms and “black boxes” against prejudice and harm, with little health care and low pay — have suffered the most.
As technology reporters Matt Burgess, Tom Gerken, Natasha Hinde, and others have emphasized, there is no legitimate use for “nudify” apps (which chiefly harm women and children), and Minnesota Sen. Amy Klobuchar’s recent experience with being “deepfaked” suggests that Richard Condon’s Manchurian Candidate may have just come “to life.”
And yet, the “solutions” that circulate run so far downstream that even the savviest person may become prey.
Much hand-wringing also has been expended on A.I. in education, with articles urging educators to “go medieval” (N.Y.U. Vice Provost Clay Shirky) or to “lean in” (Adam Clark Estes). Good intentions aside, these mostly individual-oriented and after-the-fact suggestions do little for chapped hands. And though less dire than the plight of those caught in “scamdemics,” the situation is pretty frustrating on the front lines.
For two decades, I have taught in higher ed (and was a student before then for nearly as long). Most faculty, in K–12 or college, principally care about student learning, welfare, and engagement. We don’t aim to be taskmasters beholden to deadlines or the larger system’s gatekeepers of who can earn an A. We also aren’t unaware of the privileges of our jobs or the need, on various fronts, for revision and reform. We welcome advances in our fields and the challenge of pedagogical changes. And most of us take seriously what we owe our students, our disciplines, and the larger society. (Do you want a health care provider or engineer who isn’t competent? We don’t, either.) I love my profession, despite inherent irritations and undulating critiques of the “worthiness” of the humanities. (So let’s engage in discourse and debate! Do you want doctors and leaders who can’t comprehend the particularities of the “lived experience,” what it takes to keep a democracy, or how to have a face-to-face conversation? We don’t, either.)
What is wearing thin are the exhortations to “change how we teach” — not as a means to improve learning or adapt to student needs but merely to compensate for the “ease” of A.I. Pull out the blue books and oral exams, add more internships and applied practice. Engage interpersonally and one-on-one. Cherish bonds that prompt growth and ability. Of course, we say, please sign us up! (We actually have been clamoring for these for a while …) But they cannot be magicked out of thin air or come without resources and streamlined support. (Try to wrangle M.O.U.s, liabilities, and placements in one small program, let alone en masse …) Human connection also is not “scalable” (as social media and the ten-minute doctor encounter appear to indicate), and each teacher cannot counter the entirety of corporate tech.
Nor will such suggestions work similarly in all formats.
Not long ago, due to a confluence of reasons, I was reassigned to a fully online, asynchronous program geared toward undergraduates seeking health-related careers. This had nothing to do with the pandemic. (In fact, when someone I recently met assumed so, my brain did a double-take recalling those years. Though I live with people who are immunocompromised, I was required to teach in person. I held classes in a lecture hall, where students could spread out and large bottles of sanitizer sat at the end of every row. I live-Zoomed each session for those who couldn’t or wouldn’t come in person — running back and forth to the computer to engage with virtual members and repeating or typing comments and questions so that everyone would hear — while recording classes and attending to the asynchronous needs of anyone who wasn’t present for either. Pretty quickly, even in a course I have spent many years developing, I was in a room with a mostly empty Zoom and one older student. Post-pandemic, I fought to keep in-person classes as long as possible, with decent numbers of students signing up but dwindling numbers showing up. Mr. Shirky and others are correct: Increasingly, students don’t come to class, or can’t, or find it challenging to do so for a variety of reasons. In one last in-person iteration of my favorite course, I still think about two students. One struggled with the material but asked questions, made comments, sought help during and after sessions, and was among the three to five students who regularly appeared, though her commute was almost an hour. Another, more prepared for the material and engaged in the first several weeks of the term, soon stopped coming altogether. She lived in the dorm next to the building where we met. They both earned a similar non-A grade, but I know whom I’d recommend for a position in health care.)
Online degrees have a place and serve a need (especially for campuses supporting the “nontraditional” learner — those working or married, with kids, or in the military), and my colleagues and I are fortunate our current program is bursting at the seams. My main concern in any course is whether students learn (and whether I have developed the appropriate-but-engaging materials essential to their learning).
In an asynchronous online class, however, the “go-to” suggestions by Mr. Shirky and others don’t go very far. There is no “writing papers in person” and no good way to proctor exams. Sure, students could handwrite assignments, take photos with their phones, and submit the images to professors. We (still) couldn’t guarantee our students had crafted these (long an issue in online classes and the correspondence courses that came even earlier) or that they didn’t just copy something spat out by an L.L.M. Such a process itself would verge on the ridiculous. And virtual proctoring is invasive and creepy: a succession of photos or videos in personal environments as students take a test, flagging them if their eyes or bodily movements stray. Instructors then review the data to determine potential infractions. This undermines the teacher–student relationship while only superficially solving the problem, and it requires students to download software to a computer and have a video camera on hand (a hardship for some more than others). In-person testing centers are impractical and costly — requiring physical locations, scheduling, flexibility, staffing, etc. — whether for institutions, third-party companies, or students. And “lockdown browser” software might work for one computer, but most of us have phones that are free to roam the Web and prompt A.I. at will.
Moreover, face-to-face discussions run counter to the raison d’être of the “conveniently flexible” online experience, and the “limitless” numbers of “cyberspace seats” dance in administrators’ eyes like sugarplums at Christmas.
Going “analog” will not resolve educational issues in the digital realm, and after-the-fact “A.I. detectors” are unreliable and unfair (and likely would flag me for my em-dash addiction). Yet industry leaders (and some of our very own policymakers) refuse tactics that could make A.I. use and its outputs more transparent.
Technology, of course, has always altered professional practice — in industry, in medicine, in education and training — with newfangled advantages and annoyances to overcome. A.I. “watermarking” and other “identifying” devices would not be a panacea. But something might be better than nothing.
In this wild west that serves the moneymakers at the everyperson’s expense, certain tech and third-party tools are all online instructors currently have. Some work better than others — encouraging engagement across the digital divide, preventing the “copy and paste” or prompting better use of A.I. — but none is foolproof or closes relational gaps.
At a regional public university, the kind that serves most American students and accounts for the greatest socioeconomic advancement, there are individual paths professors can take: staying atop technological changes, constantly revamping and revising courses and assignments, vetting third-party tools, perfecting our use of their imperfect features, and maintaining sanity as “robotic” systems refuse to connect and communicate. Much of this is essential to what teaching is anyway — but now we must be experts in our fields and authorities at working around the undetectability of A.I. Often, the systems employ us, not we them.
So now I sneak broccoli into brownies. Whether through the institution’s learning management system (L.M.S.) or outside tools one must discover and learn, time is spent creating (and recreating), connecting, and evaluating various types of interactive assignments: readings and discussions that mimic social media, video lectures with embedded reflections and feedback, online projects and experiences that can be done at a distance. All of this often must change as technology does or be migrated to new platforms when external tools evaporate.
Many of these cost money, for institutions as well as students, and command endless “clicking” and coordination among virtual people and moving parts, with linkages and setups that easily break. (And though it may be hypocritical that online tools increasingly rely on aspects of A.I., students aren’t banned from using it, and certain systems, including an institution’s own L.M.S., may give faculty no other choice.) Over the course of a term, I spend as much time in tech support as I do teaching.
Would that I could update a lecture, then waltz to a room for an hour’s discussion! (Assuming, of course, the students appear, prepared to dive deeper into the details; we can meet learners where they are, but we can’t herd them like cats or turn the pages of their books.)
Even beyond A.I., the “robots” often won’t work. My students and I lost more than a week this semester trying to correct the bookstore’s errors in ordering materials: It relied on an automated and unbending algorithm and simply ignored my instructions that would have prevented the pitfalls. As the system imploded, with endless confusion for students and consequences for financial aid, I made forty-three phone calls to institutional overseers — reaching an actual human just five times — and communicated in countless e-mails, course announcements, and discussion posts. The experience was unnecessary and exhausting, and my students and I are still digging out of this “hole.” The only silver lining is that we seem bonded by the ordeal.
Čapek’s 1920 play R.U.R. (Rossum’s Universal Robots) really isn’t about a “robot apocalypse.” Sure, the robots do revolt and prevail in his story, but his is a warning about the ways we treat human workers and human beings. Today, I don’t fear the robots as much as the men who deem it wise to “move fast and break things,” while the rest of us drown downstream without life preservers or guardrails.
Rich, L. E. (2025, September 1). (Already) beaten by the robots: In education and elsewhere, “hand-wringing” only goes so far. Leigh Rich Freelance: insertcomma.com.
References
Matt Burgess, “AI ‘Nudify’ Websites Are Raking in Millions of Dollars,” Wired, July 14, 2025, https://www.wired.com/story/ai-nudify-websites-are-raking-in-millions-of-dollars/
Karel Čapek, R.U.R. (Rossum’s Universal Robots): A Fantastic Melodrama, trans. Paul Selver (Garden City, NY: Doubleday, Page and Company, 1923). Originally published as R.U.R. (Rossumovi Univerzální Roboti) (Prague: Aventinum, 1921).
Andrew R. Chow, “‘We Are the Last of the Forgotten’: Inside the Memphis Community Battling Elon Musk’s xAI,” Time, August 13, 2025, https://time.com/7308925/elon-musk-memphis-ai-data-center/
Ben Cohen, “They Were Every Student’s Worst Nightmare: Now Blue Books Are Back,” The Wall Street Journal, May 23, 2025, https://www.wsj.com/business/chatgpt-ai-cheating-college-blue-books-5e3014a6
Adam Clark Estes, “ChatGPT Isn’t Just for Cheating Anymore,” Vox, August 28, 2025, https://www.vox.com/even-better/459534/chatgpt-cheating-schools-ai-education
Seth Wyatt Fallik, Erika Stone, Danielle Victory, Taylor Markevitch, Rolando Salvo, and Alexis Mallalieu, “Revenge Porn: A Critical Content Analysis of the Nation’s Laws and Reflection upon Social Science Research,” Criminology, Criminal Justice, Law & Society 23, no. 1 (2022): 1–22. https://doi.org/10.54555/ccjls.4234.34102
Tom Gerken, “Meta Urged to Go Further in Crackdown on ‘Nudify’ Apps,” BBC News, June 12, 2025, https://www.bbc.com/news/articles/cgr58dlnne5o
Natasha Hinde, “We Need to Speak to Our Daughters About This Disturbing Use of AI,” HuffPost, March 5, 2025, https://www.huffingtonpost.co.uk/entry/deepfakes-of-children-advice-for-parents_uk_680f73cbe4b000a1c038ba1d
Rose Horowitch, “The Perverse Consequences of the Easy A,” The Atlantic, August 28, 2025, https://www.theatlantic.com/ideas/archive/2025/08/harvard-college-grade-inflation/684021/
Amy Klobuchar, “What I Didn’t Say About Sydney Sweeney,” The New York Times, August 20, 2025, https://www.nytimes.com/2025/08/20/opinion/amy-klobuchar-deepfakes.html
Lily Hay Newman and Matt Burgess, “The Pig Butchering Invasion Has Begun: Scamming Operations That Once Originated in Southeast Asia Are Now Proliferating Around the World, Likely Raking in Billions of Dollars in the Process,” Wired, September 30, 2024, https://www.wired.com/story/pig-butchering-scam-invasion/
Ivan Penn and Karen Weise, “Big Tech’s A.I. Data Centers Are Driving Up Electricity Bills for Everyone,” The New York Times, August 14, 2025, https://www.nytimes.com/2025/08/14/business/energy-environment/ai-data-centers-electricity-costs.html
Billy Perrigo, “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” Time, January 18, 2023, https://time.com/6247678/openai-chatgpt-kenya-workers/
Clay Shirky, “Students Hate Them, Universities Need Them: The Only Real Solution to the A.I. Cheating Crisis,” The New York Times, August 26, 2025, https://www.nytimes.com/2025/08/26/opinion/culture/ai-chatgpt-college-cheating-medieval.html

