The Problem with A.I. Writing is the Problem of an Artificial Life: You Lose Agency
What a recent MIT study on A.I. can tell us about ourselves
This week a group of researchers at MIT’s Media Lab released a “pre-publication” paper on the effects of using A.I. to write papers, specifically SAT essays. Pre-publication means that it has not been peer-reviewed yet, so the facts on the ground may change in the coming year, but it’s a promising paper. The study comprised 54 participants divided into three groups: one group used ChatGPT to write their essays, one used Google, and one used only their brains. What they found was exactly what you’d think they’d find. Here’s how Time described the results:
Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.
Specifically, when asked to revise their essays without the use of A.I., the ChatGPT users found themselves struggling:
The first group remembered little of their own essays, and showed weaker alpha and theta brain waves, which likely reflected a bypassing of deep memory processes. “The task was executed, and you could say that it was efficient and convenient,” Kosmyna [one of the authors of the study] says. “But as we show in the paper, you basically didn’t integrate any of it into your memory networks.”
In other words, if you “write” a paper using A.I., you won’t integrate the knowledge into your mind. And whether the setting is a classroom, a church (in the case of a sermon), a business office, or a home office, integrating ideas is one of the most critical purposes of writing. We write to know. A.I. shortcuts that, as we all knew it would.1 But what I’m most interested in is the study’s finding about ownership:
Another nuanced behavioral dimension was the participants' perception of essay ownership. While Brain-only group claimed full ownership of their texts almost unanimously (16/18 in Session 1, rising to 17/18 by Session 3), LLM Group presented a fragmented and conflicted sense of authorship: some participants claimed full ownership, others explicitly denied it, and many assigned partial credit to themselves (e.g. between 50-90%).
This too makes a lot of sense. The more you rely on A.I. to write for you, the less ownership you naturally feel over that work. In one way this is simply seeing reality for what it is: the work quite reasonably is not yours, and you can recognize that. And yet you are treating it as your work, and there’s the conflict. We surrender our agency, and in exchange we receive free labor.
It seems to me that what A.I. asks of us is the same thing the rest of contemporary life asks of us: to surrender our agency and to disengage consequential thoughts from our memories. It’s hard for me to overstate how perfectly suited to propaganda and manipulation a generation of A.I. users would be: people who implicitly trust massive digital systems, who don’t generate their own ideas, and who don’t hold important information in their memories. We’re already halfway there with our heavy dependence upon Google and smartphones, but A.I. dramatically increases this dependence.
Our agency is surrendered primarily in the form of addictions and dependencies. And it’s notable that the users in the study became increasingly dependent on ChatGPT to write their papers for them, eventually resorting to straight copy-and-paste. For us it’s addiction to smartphones, mobile games, social media, pornography, internet gambling, alcohol, drugs, video games, whatever it might be: something that takes a piece of our agency in the world and gives us a sense of false-life. Addiction is a mark of contemporary life. Almost everyone is addicted to something. And at root, I think we’re uncomfortable with our own agency in the world.
But our agency is also surrendered through ideologies which teach us that our place is to be a consumer, to express ourselves, to go with the flow of the political parties, to be middle-class, to be presentable on Instagram, and so on. We surrender to these ideologies of living and allow them to guide us without challenging them. Here is one place where I think Sartre’s idea of “bad faith” is onto something. It does matter that we live intentionally rather than inauthentic lives guided by the flow of corporate and political forces which do not have our ultimate telos in mind. But Sartre is wrong: we do have a human nature, and that nature means that authentic living is not to create our own nature but to live toward God. And that requires us to take up our agency instead of giving it up.
Finally, there is the concept of ownership. I know I wrote an entire book about not belonging to yourself but to God. But we can still talk about taking ownership of your life in the sense of accepting responsibility for your actions and duties. And this is part of what A.I. seeks to shortcut. You have an action and a duty to perform, and rather than perform that duty you offload the labor onto a machine. This is the basic story of technology. Sometimes that dynamic is good; sometimes the tradeoff is not worth it. In the case of A.I., the tradeoff is that we lose some of our insight and memory. We don’t learn what we need to learn. That’s hardly worth it. We do the same thing outside of A.I. We deny our responsibility in the world for acting and fulfilling our duties, for “taking ownership” of our lives. I think in some ways a pop/performative therapy culture has contributed to this, encouraging us to see ourselves as perpetual victims and never as agents capable of acting for God in our own lives.
My other concern, drawn from the study, is that we’re disengaging from ideas and focusing on the production of content. A.I. encourages us to do this by creating outputs for us rather than requiring us to wrestle with ideas ourselves, committing them to memory and producing deep learning. Similarly, much online content skips over meaningful engagement with ideas and cuts straight to content creation. Images, articles, and videos are produced, shared, and engaged with, but without the kind of deep learning that needs to take place for us to fully engage our capacity as humans. I’m not saying this never happens. There are certainly pockets of resistance online, but much of the algorithm pushes us toward ephemeral engagement: 30-second thought bubbles.
I know I’ve written a lot about A.I. over the last year. I don’t think I’ll stop anytime soon unless someone stops it. The reality is that it’s changing our lives and the way we process the world, and if we aren’t careful, it will harm us. It may harm us anyway. I hope I’ve shown that in many ways, the basic contours of A.I. have been with us for a while. The temptation to abandon agency. The disengagement from ideas and the failure to form memories. There’s so much more I could say. Let us all strive to live unto God, fully human, clinging to our memories, engaging with ideas, and accepting our agency.
Now I should note that in the study, students who first wrote their papers with their brains and then used A.I. to help revise continued to show cognitive benefits, suggesting that perhaps A.I. simply needs to be introduced later in the development of a student’s capabilities. But I am seriously skeptical. I suspect that a longer study will show that even the person who learns to write well and then turns to A.I. as a tool will see those cognitive muscles atrophy.