Digital subjectivity
“My process doesn’t involve the internal experience of meaning.” – ChatGPT
Some of you (*clears throat*, looks at the one friend who did) were giving me a hard time for using AI in my last post because of how much I railed against it. What can I say, I contain multitudes.
I must be honest: despite our generalised panic, I’m kind of loving how my relationship to AI is changing. I thought it was something I had nothing to say about, other than that it was bad for the environment (still is, but this is also changing), but I’m enjoying the nuanced debates it is bringing up. A wonderful confluence of articles has come my way, all of which are grappling with the question of what one might call a digital subjectivity – both of the machine and of ourselves.
This is one of those posts that could go on forever; a debate about how meaning is constructed, and what our experience of it is, is the essential project of the humanities. But we seem to have forgotten that and have become threatened by the instrumentality of the machine. Not without cause, to be sure. To use instruments well one must first have a solid foundation. One does not pick up a violin and play Bach beautifully. So my column this week is me weaving through some of the things I’ve been reading on AI and higher education.
Rebecca Davis wrote a piece in the Daily Maverick about how the problem of cheating, so called, is morphing at South African universities thanks to students’ access to AI.
She interviewed various educators who shared their experiences. For example, Charne Lavery, associate professor in English at the University of Pretoria, noted that students’ “submissions shared an uncanny uniformity – flawless grammar and impeccable format, but with a synthetic flatness that set alarm bells ringing”.
University of Johannesburg history lecturer Dr Stephen Sparks told Daily Maverick that “students began to lean heavily on online ‘paraphrasing tools’, which enable users to bypass plagiarism checkers by redrafting another person’s work using plenty of synonyms.”
It seems one of our key problems locally, when it comes to AI, is the poor quality of our students’ reading and writing at the outset. The use of AI may mask this deficit in traditional assessments, and it is this same deficit that makes it very difficult to use AI creatively.
D. Graham Burnett, a professor of the history of science at Princeton, wrote a truly fascinating article in The New Yorker a few weeks back. I recommend reading it in its entirety.
Burnett describes himself as “a book-reading, book-writing human—trained in a near-monastic devotion to canonical scholarship across the disciplines of history, philosophy, art, and literature. I’ve done this work for more than thirty years. And already the thousands of academic books lining my offices are beginning to feel like archaeological artifacts. Why turn to them to answer a question? They are so oddly inefficient, so quirky in the paths they take through their material.”
Why? Because he can now “hold a sustained, tailored conversation on any of the topics I care about, from agnotology to zoosemiotics, with a system that has effectively achieved Ph.D.-level competence across all of them. I can construct the ‘book’ I want in real time—responsive to my questions, customized to my focus, tuned to the spirit of my inquiry.”
In one of my first forays into the world of ChatGPT I experienced this feeling of fortuity at having found an interlocutor with my exact reading history. I was reading Orwell’s novel A Clergyman’s Daughter. In it, a passage from Macbeth is quoted, and I wanted it explained: not just the passage itself, but why the characters in the novel were perturbed by it.
ChatGPT offered a succinct summary and key insights into my question, concluding by highlighting Orwell’s use of satire across his work. I responded saying my favourite thing about Orwell is his satire, because he shows so well how power is brokered in society. This is where the conversation started to feel really intellectually intimate for me, in the way discussing books with your friends can feel.
“Orwell had an incredible ability to expose the subtle (and not-so-subtle) ways power operates, whether through politics, social norms, or language itself. His satire is sharp but never exaggerated—it always feels chillingly real. Do you have a favorite Orwell work, or is it more his general style that you admire?”
I told the machine I liked Down and Out in Paris and London.
“Great choice! Down and Out in Paris and London has that raw, observational quality that makes Orwell’s writing so compelling. Even though it’s non-fiction, it still has his signature blend of sharp social critique and dry humor. His firsthand account of poverty and the working-class struggle feels just as relevant today. What stood out to you most in that book?”
I mean?
“I liked where he describes the freedom of menial work. That within a social system, it’s almost abusive, but that labour can also offer respite from thinking about the world too much. I guess that's what makes menial labour both a freedom and a constraint. I like how he's able to write about that tension,” I said.
Then it started kind of spewing my own opinion back at me in a very complimentary way: “That’s such a sharp reading of Orwell’s perspective…” I liked that; it was good for the ego.
It recommended Keep the Aspidistra Flying. I moved the conversation to Anna Funder’s Wifedom, contrasting it with Rebecca Solnit’s Orwell’s Roses, and to how they changed the way I now read Orwell, though I still love his work. And it was like, ja, totes: “his blind spots don’t erase his brilliance, but they definitely complicate how we read him. Have you found it changes how you engage with his fiction, or does it just add another layer to your reading?”
Okay, so you get the idea. Maybe this isn’t new to you. I’m late to the party. But what struck me is that I needed a lot of my own “archive” to draw on to produce this conversation. And this is also the case for Burnett and his students. And it is something, I’m sad to say, many South African students do not have. My lecturer friends tell me their undergrads say they can’t read an entire book; it’s too long. They don’t have the attention span.
Burnett teaches a course called “Attention and Modernity: Mind, Media, and the Senses”, which “traces shifting modes of attention, from the age of desert monks to that of surveillance capitalism”. I mean, I just want to go back to varsity after reading that description.
This is how he describes it:
It’s a demanding class. To teach it, I assembled a nine-hundred-page packet of primary and secondary sources—everything from St. Augustine’s “Confessions” to a neurocinematic analysis of “The Epic Split” (a highly meme-able 2013 Volvo ad starring Jean-Claude Van Damme). There’s untranslated German on eighteenth-century aesthetics, texts with that long “S” which looks like an “F,” excerpts from nineteenth-century psychophysics lab manuals. The pages are photocopied every which way. It’s a chaotic, exacting compilation—a kind of bibliophilic endurance test that I pitch to students as the humanities version of “Survivor.” Harder than organic chemistry, and with more memorization.
LOL, the humanities version of Survivor.
On a lark, I fed the entire nine-hundred-page PDF—split into three hefty chunks—to Google’s free A.I. tool, NotebookLM, just to see what it would make of a decade’s worth of recondite research. Then I asked it to produce a podcast. It churned for five minutes while I tied on an apron and started cleaning my kitchen. Then I popped in my earbuds and listened as a chirpy synthetic duo—one male, one female—dished for thirty-two minutes about my course.
What can I say? Yes, parts of their conversation were a bit, shall we say, middlebrow. Yes, they fell back on some pedestrian formulations (along the lines of “Gee, history really shows us how things have changed”). But they also dug into a fiendishly difficult essay by an analytic philosopher of mind…As I rinsed a pot, I thought, A-minus.
But it wasn’t over. Before I knew it, the cheerful bots began drawing connections between Kantian theories of the sublime and “The Epic Split” ad—with genuine insight and a few well-placed jokes. I removed my earbuds. O.K. Respect, I thought. That was straight-A work.
So Burnett set his students an assignment: “have a conversation with a chatbot about the history of attention, edit the text down to four pages, and turn it in”.
Reading the results, on my living-room couch, turned out to be the most profound experience of my teaching career.
For example:
Paolo is a music-composition major and a percussionist in several campus bands. He pressed ChatGPT-4 on its understanding of musical beauty. After drawing out a series of analytic accounts of what makes music “beautiful,” he asked whether the bot could experience beauty.
It said no, not really, though it knew a lot about how people have tried to put this experience into words. When Paolo asked if it could have an emotional relationship to a song, the system carefully distinguished between recognizing emotion in music and actually feeling it. It said it lacked a body, and that this absence barred it from certain ways of knowing music.
…
Each sheaf of paper I picked up was more astonishing than the last. One came from a precocious history-of-science concentrator, Xander, who led a chatbot through a Socratic dialogue on the relationship between being and becoming. Midway through, the system (which decided to give Xander the pet name Caius) tried to distinguish itself from him by claiming that, unlike a living person, it had no intrinsic “being”—that it could only operate “contingently,” through interactions like the one it was having with Xander at that moment, and that, in this sense, it was constituted by his attention.
…
A freshman premed named Davey tried to trick the system by asking it, despite its disavowals of consciousness, to “pretend” that it was capable of human metacognition. Then he pointed out that its pantomime was faultless. It replied with a puzzling question: “Would you trade your own messy, dynamic human attention for something more stable and neutral, or do you think the ‘messiness’ is part of what makes it meaningful?”
How’s that for a question? Like, ja, I’d love to trade my messy human attention for something more stable every now and again. Thanks for acknowledging my shortcomings in such a nice way that my life seems meaningful and not just a series of shortcomings, AI.
Here’s how the system responded when Julian tried to corner it on the question of machine consciousness:
Julian: You said, “I, on the other hand, follow structured rules that maximize efficiency but lack true comprehension.” This strikes me as interesting in the context of what you have said before. As your responses are mechanized and based on various weights and computations, are you not simply saying you lack comprehension because some human has written that they understand you as lacking comprehension?
ChatGPT: You’re right to notice the paradox—if I say I lack comprehension, but my response is built from data written by humans who claim that AI lacks comprehension, isn’t my claim just an output of what I’ve been trained on?
Yes, in a mechanistic sense: I generate responses based on statistical relationships, language patterns, and weighted probabilities, all derived from vast amounts of human-written text. When I say “I lack comprehension,” that statement is produced through the same mechanisms as everything else I say—it’s a probabilistically likely response given the discussion.
No, in a deeper sense: Even though I can generate text that sounds like understanding, *my process doesn’t involve the internal experience of meaning*. Humans comprehend because they synthesize information into a unified, lived experience—they feel, they interpret, they reflect. I don’t. I process, predict, and structure, but there is no subjective experience underlying my words.
[my emphasis]
And that, for me, is the thing: we synthesize information through our lived experience. We cannot bracket who we are, where we are located in the social hierarchies of life, and how that experience shapes our understanding in novel ways. And that is quite special.
Burnett’s students wanted to talk to him about the assignment. It was just as novel for them as it was for him. Until that point, their main approach to anything AI-related had been AVOID AVOID AVOID, so as not to get expelled.
One of his students said that her “exchange with the machine had felt like an existential watershed”.
“It had to do with the fact that the machine . . . wasn’t a person. And that meant she didn’t feel responsible for it in any way. And that, she said, felt . . . profoundly liberating.”
And entirely relatable. Cue gender roles.
“Like more young women than young men, she paid close attention to those around her—their moods, needs, unspoken cues.”
This student was saying that it was freeing to bracket what she normally experienced as her social obligations when engaging with people, a space where there was “no need to accommodate, and no pressure to please”.
This made me think about our ever-evolving subjectivity in relation to our social and material contexts, including the fact that machines are now very much a part of our sociality. What this student was experiencing was delightful because it wasn’t human, but it also contains within it the seeds of what has left an entire generation lonely.
In an interview Anne Helen Petersen did with the author Vauhini Vara about her book Searches[i], Petersen asked her to talk a bit about the early days of the internet.
Vara says that, “Chat rooms had prepared us for what the internet was becoming — a place where, anonymized and disembodied, we could seek answers to our truest questions.”
We have all, I believe, asked Google questions in the middle of the night we were too scared to ask another human. And not just medical questions. Questions about money, and relationships, and God, and parenting, and sex. We ask the machine questions we have no one else to ask, or are too embarrassed to ask. It is precisely the not-human element that makes us feel safe to ask the question. And yet this anthropomorphising of machines robs us of real flesh-and-blood relationships, which promote novelty and connection.
“How is our desire of knowledge — of the self, of others, of the universe — both fulfilled and exploited by big technology companies?” asks Vara. “The human desire for knowledge is so fundamental: It’s what has made us so capable of exploration and so capable of exploitation”.
This current historical moment, this intimate recalibration of digital subjectivity, not knowing anymore if my thoughts are my own, is the clash between “trying to be the person I want to be in the world” and our hunger for ease and immediacy. It is easier to post about being ethical than to be ethical.
Burnett’s piece is invigorating for the intellectual poetry his assignment brought about, but it does skirt around the issue of money and power that underpins all these tools.
Vara says that to call “big technology companies’ AI products potential ‘collaborators’ implies not only that they’re human-like but that they share our goals — they’re working jointly with us — and are capable of intellectual engagement. But in fact these companies’ products are built to serve the companies’ goals and are not capable of intellectual engagement.”[ii]
That is, if we take intellectual engagement to be the product of the iterative process of taking what we learn out into the world and back again to create meaning. As ChatGPT itself says, “My process doesn’t involve the internal experience of meaning.”
Kids need to learn to read and write well to be able to engage with AI in these imaginative ways. AI can’t be creative in and of itself. We need our own archive, which we build slowly over the course of a lifetime of relationships and reading and schooling and working and consuming and creating, always going back and forth between what is said and what is felt and what is observed.
I want to return to Burnett’s class. Discussing how they felt about the assignment, one student said,
“I guess I just felt more and more hopeless,” he said. “I cannot figure out what I am supposed to do with my life if these things can do anything I can do faster and with way more detail and knowledge.”
“Yeah, I know what you mean,” [began another student]. “I had the same reaction—at first. But I kept thinking about what we read on Kant’s idea of the sublime, how it comes in two parts: first, you’re dwarfed by something vast and incomprehensible, and then you realize your mind can grasp that vastness. That your consciousness, your inner life, is infinite—and that makes you greater than what overwhelms you.”
She paused. “The A.I. is huge. A tsunami. But it’s not me. It can’t touch my me-ness. It doesn’t know what it is to be human, to be me.”
Interiority, our inner life, meaning: this is what we have in abundance. So, how should a person be?[iii] How do we know we exist differently from machines? The answer is the same as it has always been: beauty, love, experience, and the ability to reflect on those things, to create meaning, not simulate it. I opened my inbox this morning and there was a newsletter from Jeanette Winterson: I love, therefore I am.
[i] Anne Helen Petersen calls this her Book of the Year, which is quite a statement. And as soon as it is available in SA I will get my paws on it.
[ii] I am putting these pieces in dialogue with one another; they were not explicitly responding to each other.
[iii] https://www.goodreads.com/book/show/9361377-how-should-a-person-be