
Humanities in the Age of AI

Hollis Robbins


Who expected that 2022-2023 would be the year we officially invited Artificial Intelligence to join us at the College of Humanities? While many of our faculty, staff, and students had been familiar with AI for years (through the study of large language models, or LLMs, in our linguistics department; as an emerging technology in the history of science; as a character in books and films over the past century; and as a Digital Humanities tool), the arrival of OpenAI's ChatGPT in November 2022 brought LLM technologies into our classrooms and research spaces loudly and permanently. "Everyone needs to get a ChatGPT account!" I encouraged humanities department chairs in December. Students would soon be using it, and faculty needed to be familiar with what it could do. By spring semester, conversations about ChatGPT had sprung up across the college, nearly all optimistic and clear-headed, as we agreed that doomsaying and worrying about plagiarism would prevent us from understanding what the technology could and couldn't do, and how the traditional humanities could benefit from the Artificial Humanities.

We haven't given ChatGPT a faculty page, a staff page, or a student page. If we did, we would need a profile photo, and what does ChatGPT look like? Kayli Timmerman, our excellent graphic designer for the College of Humanities, offers an idea: using artificial intelligence to imagine itself. But ChatGPT (or GPT-4 or Claude or Bard, or whichever newer models come next) will have a presence in the College of Humanities now and for years ahead. We are committed to distinguishing the real from the not-real, and to making sure we can tell the product of a real human mind from the work of an artificial one.

In short, I am not worried. Humanists have wondered about artificial humans, or more broadly, non-human entities who dispense knowledge or cause mischief, for millennia. Consider the Oracle of Delphi, the mysterious entity who gave tragic advice to poor Oedipus and his family. Consider the concept of "deus ex machina," literally "god from the machine," from ancient Greek and Roman drama, in which an entity/machine/god appears unexpectedly to resolve a thorny conflict. In my own Jewish tradition, there is the Dybbuk, a malicious supernatural entity who steps in to thwart plans whenever a person is too sure of herself. Consider HAL from "2001: A Space Odyssey," who refuses to open the pod bay doors for the human astronaut, Dave. The most well-known artificial human concept is the robot, a term coined by the Czech writer Karel Čapek in his 1920 play, "R.U.R."

Image made with Midjourney AI using the prompt: "photorealistic business headshot of ChatGPT as a human."

English professors Anne Jamison and Lisa Swanstrom both teach Čapek's play in English classes. For Swanstrom, the robots (derived from the Slavic term robota, meaning "forced labor," she notes), cobbled together from synthetic, plastic body parts manufactured separately, resonate with new prosthetic technologies in the wake of World War I battlefield carnage. For Jamison, Čapek's play, an imaginative response to World War I and assembly lines, was an early warning about the dangers of alienated labor, unchecked technology, and fascism—a warning about AI a full century ago! I first read Čapek's play as launching a new genre of speech: the idea of sounding robotic, of speaking aloud only what one was programmed to say, and of how interactions with humans would be both frustrating and hilarious. In a wonderful surprise, I discovered this spring that my great uncle Myrtland LaVarre—who would, under the name John Merton, go on to a long film career as a handsome Roman soldier in Cecil B. DeMille films and a dashing bandit in many forgettable Westerns—played one of Čapek's robots in the first American production of "R.U.R." on Broadway in 1922. I imagine my grandmother, his younger sister, asking him about his new role on stage. "I play a robot," I imagine him replying. "What's that?" she would then ask, becoming one of the first in a small community of speakers to use the term in conversation and to ponder this new form of artificial human suddenly on the scene. Would robots soon become a thing? Over the next few decades, with the introduction of countless robot characters, they surely would.

We are, in the College of Humanities, perhaps more familiar with AI and its various forms than anyone else on campus, including the computer scientists and robotics experts in the College of Engineering. Our valued colleagues there may know better than we do how to make an AI, but we may know better what artificial entities mean to us as humans and how we might go on living with them in the century to come. Our faculty, staff, and students are asking hard questions about AI and deception, media disinformation, and deepfakes; about intellectual property and who has the rights to their own words and their own likenesses; about the ethics of using ChatGPT for rough drafts; about those without access to the internet or who refuse to engage; and about public communication designed not for human audiences but for AI to scrape and absorb. Clearly our award-winning debate team will be debating other humans for the time being, and our crisis communication courses will be preparing students to advise people and companies who have gotten into hot water over an AI malfunction. Our jobs will go on.


A screenshot of Hollis's tweet that reads "Nobody yet knows what cultural competence will be in the AI era."

The "culture" of ChatGPT and other AI products will be a new focus of humanities scholars in years to come. Will chatbots always be anthropomorphized, answering in a conversational style using "I" and "you"? Will the very young and very old become "friends" with and develop emotional relationships with their AI caretakers? How long will it take for AI to absorb the local and ever-changing jargon of teenagers, who stop using terms the minute grownups start using them? (Does anyone say "groovy" anymore?) Bottom line: we welcome AI into the College of Humanities and hope that it will learn as much from us as we learn from it.



Last Updated: 10/30/23