The Problem is Not AI

resistance as return

Once again this fall I will be teaching a careers course required of all English majors, “Your Degree in the World.” We explore practicalities like capturing humanities-driven skills on a resume; the connections between internships and careers (we are fortunate to have an internship guru embedded in the Department of English and a fund that provides stipends to students who take unpaid internships to develop their career prospects); making your own luck by seeding as many possible futures as you can; networking with those in the field you’d like to enter as curiosity-driven human engagement; and the practical utility of platforms like LinkedIn and Handshake as well as the resources of our college’s Futures Center. When I bring alumni back to speak to current students, we inevitably have a robust discussion about how humanities study readies you for the eight to twelve jobs that form the arc of your career. Who knew, for example, that studying literature could have a fairly direct connection to becoming an executive at Waymo, a company that pioneered self-driving cars? And speaking of AI, we’ll certainly discuss how proficiency in its use is now an essential career skill — in fact, I believe that humanities majors are better positioned than engineers to integrate its use into many fields, given how intimately LLMs are bound to adeptness in language use.

Many academic colleagues believe that AI should (and can) be outlawed. They’ll use blunt declarations impersonating speech acts, like “It is unethical,” as a way of not having to think through the fact that students already rely upon AI extensively as an access technology, learning tool, and assignment proxy — and that they will need to know how to use AI well to flourish after they graduate. Student success is our obligation as faculty; so is thinking through how complicated all ethics around technology are, despite any blunt pronouncements we might attempt. A wise colleague who heads our Lincoln Center for Applied Ethics (which takes a special interest in humane technology) often states that the work of ethics is to complicate our relations and our actions, not to simplify complexity into a list of Dos and Don’ts — no matter how tempting that reduction of action into prescription might be.

I had all this in mind recently as I read through a New York Times piece on faculty resistance to AI, “These College Professors Will Not Bow Down to A.I.” Here is a typical quotation about what such resistance supposedly looks like:

"Banning A.I. and calling it a day wouldn’t work; they had to A.I.-proof many assignments by making them happen in real time and without computers, and they had to come up with a workable policy around the technology in other situations ... Through a combination of oral examinations, one-on-one discussions, community engagement and in-class projects, the professors I spoke with are revitalizing the experience of humanities for 21st-century students."

It is striking to me how this story labels as resisting AI what is actually a return to all the practices that animate the best humanities classrooms: active and project-based learning, lively and direct student engagement, cultivating a community of learners attuned to each other's progress and possibilities rather than reducing that community to solitary assessment units, energetic attentiveness to urgent problems, cultivating intergenerational learning opportunities, refusing to confine learning to the classroom and instead bringing humanities work into public spaces like libraries and high schools. Throughout the New York Times piece, faculty speak of the joy they feel in witnessing students so impassioned by their assignments, and I could not help thinking: why did it take the threat of AI to trigger these changes? It strikes me that fewer students would have turned to AI to write their assignments if those assignments had been less routine and less soul-numbing; as they stand, they give joy neither to the students tasked with completing them nor to the faculty who assess them.

How did anyone ever convince themselves that delivering their expertise to students in the form of a lecture and then having them write formulaic responses to the materials (essays that are essentially algorithms with specific, dreary, uncreative outcomes, graded on how well students have internalized them) could grow the number of students eager to be in humanities classrooms? The problem is not ChatGPT; the problem is that we bored students into using such tools poorly, because our assessments and assignments (and sometimes our very classrooms, treated as spaces of learning absorption rather than learning in action) bore them. And, too often, us. I have a friend who often says that if you hate grading, that is on you for giving dull assignments. The responsibility for engagement and joy is ours to embrace.