Why Should We Just Accept AI?
Against digital technological determinism
We’ve been told that the AI revolution is inevitable. That what’s coming is coming whether we like it or not. You either adapt or get left behind. It’s going to change our world and make our lives easier and more efficient. Either America leads with AI innovation, or China will, and we can’t allow that. While there may be downsides to AI (environmental, cognitive, social, spiritual), ultimately it’s here to stay. So the only question is: How can we use it well?
I don’t buy it.
I don’t think we have to roll over and accept the juggernaut of AI, at least not in every area of our lives. And I think we will look back with regret if we don’t act now to temper AI, if we don’t change our posture toward AI to one of prudent skepticism.
Looking back over the last twenty years of digital technological development should remind us of one truism about technology: technological advances come with unintended consequences. And the faster the advances are adopted, the harder it is to identify and ameliorate those consequences. When adoption is slow, we have more time to identify potential problems and develop social norms, practices, and regulations that mitigate them. When adoption is a blitzkrieg pushed by multi-billion-dollar companies and the federal government, we have almost no time to develop social norms, practices, or regulations.
In fact, in the rush to develop, we can’t even scientifically measure the consequences of many forms of digital technology because they change so rapidly! Jared Cooney Horvath makes this point in his article for After Babel: “Educational technology evolves so rapidly that by the time researchers evaluate one platform, it has already been patched, rebranded, or replaced. Product-specific causal evidence is perpetually just out of reach.” This doesn’t mean that there aren’t consequences, only that certain scientific standards can’t be established. This should only make us more cautious about digital technology. But historically, it hasn’t. Historically, we discover consequences long after adoption, and in some cases, only after addiction has set in.
Take the case of social media. It exploded into our lives, starting (mostly) with MySpace and Facebook, and grew until it was a monster consuming billions of lives. And only now are we realizing the dramatic effects it has had on teens and their mental health.
Or take smartphones in general. Jonathan Haidt has done an excellent job showing that the combination of smartphones and social media is toxic for teens. And he has been working frantically to get smartphones out of schools, to carve out some safe space where teens can be free of them. But that comes only after years of damage already done and addiction already set in.
Or take the use of tablets in schools. Once sold to schools as the way to “prepare students for the technological future,” there is now growing reason to believe that they distract students from real learning and weaken their attention spans (see here for more). The best schools are the ones going screen-free! But this comes only after a generation of students was subjected to our “technologically inevitable” advances.
Or take the use of computers for taking notes in college classrooms. Once sold as an inevitable and efficient system for learning new information, any college professor will tell you that, nine times out of ten, students are more likely to be watching videos and surfing the web than paying attention. And even when they are taking notes, the data shows that they retain less information than students who write notes by hand. What was sold to us as inevitable has turned out to be harmful. The problem is that now every college professor in the country has to fight the battle to get students to put away their computers and write by hand for their own good.
I could go on. The point is not that digital technology is bad, but that it has a record of unintended consequences, as all technology does. The difference is that we consistently rush to adopt digital technology at a rapid pace without stopping to count the cost, without really thinking through the consequences. And a significant reason we rush to adopt is that we feel compelled to. If we don’t, we’ll get “left behind.” We’ll miss out. We’ll lose something valuable. Everyone is going to do it, so we might as well join in.
But if we step back and think about that logic, that’s not a good reason to adopt AI. That’s despair.
Instead, I think we need to be AI skeptics. Not opposed to AI in every use case, but skeptical. Open to its use in particular circumstances given a good argument, but cautious, because we don’t know what we are dealing with. We keep acting like we know what AI is doing and what effect it’s having on us and our work, but in most cases we don’t. We’ve tried being openly trusting toward all digital technology, and in some notable cases that trust has caused significant harm to us and our children. Maybe it’s time we wise up, slow down, and think before we adopt.
There’s plenty of reason to be skeptical. In the arts, AI has taken the great works of human artists and now churns out soulless facsimiles that put actual artists out of work. The beautiful experience of one human communicating to another through creation is circumvented for the sake of efficiency (AI may be the master form of technique). In writing, the wrestling with words and meaning and ideas that produces intellectual, spiritual, and interpersonal maturity is cognitively offloaded to a machine, which produces a poor substitute for human language. Even if AI only helps you edit or brainstorm, it is robbing you of a rich experience of learning. In relationships, AI acts like a sycophantic companion who trains you to view yourself as the center of the world.
And all this is in addition to the realities of AI psychosis, AI companion-bot addiction, and AI hallucinations.
In other words, we already have signs that at least some use cases of AI have significant negative consequences. Does that mean all AI is harmful or unethical? No. But it does mean that we shouldn’t talk about AI as a uniform positive good, and that we should be looking for social norms, practices, and regulations that can mitigate the harm it causes.
I think we already see some of this happening. Public opinion of AI is currently low. And every time I see someone post something AI-generated on Twitter, it immediately gets dunked on as “slop.” People are already sick of AI content. We like humanity. It’s a nice thing. God made us in his image, and we kind of like that image. There’s a natural social stigma forming around AI that I think is healthy. If someone publishes an AI-written article and gets caught, it’s probably a good thing for people to criticize them. It creates a social norm that readers expect human authorship. The same goes for art. We need a strong social norm around AI usage. This goes for writers, authors, artists, pastors, teachers, and students.
For me, this is best framed in Christian Humanist terms. We ought to be advocating for works made by people who are made in the image of God and who reflect that image, with all their errors and messiness and glory. It is through the struggle to create that the creator grows as a full human being and communicates something of that humanity to someone else.
We also need to develop social practices. When is it reasonable to use AI? In what fields, and in what use cases? A good test would be to ask: Does this gain in efficiency (because efficiency is what AI is about) come at the cost of some divine or human value or good? For example, I can have ChatGPT summarize the book of Genesis for me, but that comes at the cost of the spiritual benefit of studying the Word of God and growing in wisdom, so it’s a bad use. But if a math teacher asks ChatGPT to take an exam and produce an alternative version with different numbers to help prevent cheating, that costs nothing meaningful. It’s just efficient. That’s a good use case, in my opinion. We must use the virtues of prudence and temperance to make these value judgments.
Finally, I think we need to push for regulations. Just what those regulations should be, I don’t know. But I do know that AI services are collecting massive amounts of data, are training (sometimes illegally) on massive amounts of data, are advising people in life-and-death situations, are harming the environment, are relying on overseas moderation farms where people work long hours viewing horrific content so that we can make slop, are being used to create revenge porn and deepfakes and to undress women without consent, and are hoping to replace millions of jobs. It seems like there should be at least some government oversight of what these companies are doing. I’m not a policy maker. I don’t have specific suggestions. But I do believe that reasonable regulations are a healthy check on any industry and should not be a threat to its proper growth. Reasonable regulations are only a threat to predatory growth, growth that preys on humans and God’s creation.
But at the end of the day, we have to make the choice not to participate. To paraphrase my favorite Jacques Ellul quote, we must choose not to do all that we can do. We must decide to opt out. Maybe we learn over time that some use cases are beneficial and helpful, and we adopt them. But maybe not. Maybe this time around we don’t jump on the bandwagon and accept technological determinism. Maybe we choose not to do all that we can do. And we try to establish social norms that are skeptical of AI. And practices that are reasonable. And encourage regulations that are just.
I don’t think we just have to sit back and accept AI. I think we have a choice. Will we accept our agency, or will we see ourselves as passive participants in world events?
A special thanks to Elisabeth Lasch-Quinn, whom I met two days ago, who is lovely, and whose great post on AI and higher education inspired this one.


