An Ethical Critique of AI
To the average technological layman, most digital technologies seem to be little short of magic. Somehow a bunch of ones and zeros can be transformed into a movie, streamed over cables and even thin air until it reaches my tablet, and turned into something actually recognizable on the screen. So it should come as no surprise that with Large Language Models (LLMs, a.k.a. AI) the usual amazement and mystification of technology reached new peaks: folks began speculating that perhaps this is truly the point where computers can pass the Turing Test, and the more enthusiastic among them went further, suggesting that perhaps the machines could even gain consciousness(1) if they have not already, all ultimately raising the question of whether we have reached the Singularity.(2) Indeed, the capabilities of LLMs are so impressive that even some of the CEOs of these companies suggest they either have achieved consciousness or soon may.[1] Yet if you were to ask these people how we can know that LLMs have reached this point, the test offered is generally a very unsophisticated one: if it can ape human behavior, it must have an intellect at the same level as a human.
Now, for anyone who has some familiarity with Pro-Life apologetics, this kind of reasoning about personhood sounds extremely familiar, only applied in reverse. It is the attribution of personhood on the basis of what a thing can do at a specific stage of its development rather than on the basis of what kind of being the thing is. In the case of the unborn it is used to claim they do not have personhood because at that stage of their development they cannot do certain things, while in the case of LLMs it is used to claim they do have personhood because they can do these things (at least those things which we associate with the intellect). All this because our modern materialist culture cannot actually understand the concept of a kind of being, for all being is merely an assortment of atoms that just so happen to organize themselves in such a way that a conscious being is formed. Thus, to form other conscious beings all you have to do is put the same kinds of atoms together in the same pattern and you can reproduce life! And not any sort of life, but rational life at that. Furthermore, in the case of LLMs, it would seem that it is not even necessary to have the same kinds of atoms in the same pattern at all; instead we can replace neurons and the like with transistors and other electronic elements, coded to interact with each other to do the same thing human beings have been doing for tens of thousands of years, and what most animals can do as well: pattern recognition and replication.
The fact of the matter is that LLMs may seem, from a purely superficial standpoint, like a child who is slowly learning to speak. In the past few years we have seen drastic improvements as many of the tell-tale signs have been smoothed out of the algorithms. But even as they become indistinguishable from the product of actual human work, that does not mean that the means of arriving at that product are the same, and thus the value of the product itself is not the same. So we must ask ourselves: how do these LLMs work?
The Man Who “Learned” Chinese
It is not hard to find very technical explanations on the Web of how these LLMs work, but for most people these are about as useful as a neuroscientist's explanation of how the brain works (which, at least for me, would be pretty useless). Luckily, it is not necessary to know how all the gears in an analog watch are interconnected, versus the circuits in a digital one, to understand that the principle of the analog watch's movement is kinetic energy and physical components pushing one another, whereas the digital watch runs on electrical signals passed through logic circuits. The specifics do not really matter for these purposes. Therefore, for LLMs, I would like to offer an explanation of this principle through an analogy which may be easier for people to understand.
Imagine there is a man who is a monolingual English speaker. Furthermore, he has no knowledge of grammatical concepts which would allow him to think abstractly about his language, much less any foreign language. Now let us say you gave this man hundreds, maybe thousands, or maybe even millions of years to look over Chinese texts. Some of them are books, fiction or non-fiction, some are articles, some are conversations, others are instruction manuals, etc. All sorts of texts of practically any kind. After such a long time he begins to notice some patterns, where normally certain symbols are followed by certain other symbols. And after all this time you begin to train him: you give him a text in Chinese and he has to try to return the proper pattern of symbols which ought to follow the ones you gave him. You, the one who knows Chinese, judge whether the response makes sense, and if so you give him positive feedback which tells him he did a good job (and will likely return similar responses for similar prompts), and if not you give him negative feedback which tells him he did a bad job (and will be less likely to return similar responses for similar prompts). After all this time, you finally have trained this man to the point where if any Chinese person were to speak with him over text prompts he could respond as if he spoke perfect Chinese and was truly having a conversation or writing meaningful texts. But is he?
If you were to actually ask this man (in English) whether he had any idea what he was saying, he would obviously reply with a flat “Of course not,” but if you were to ask him in Chinese I do not think any of us would doubt that he would simply reply that he does, even though he clearly does not. The man has not actually learned Chinese, but rather to mimic Chinese. The reason for this is that he is not capable of doing the very thing that language is meant to do: convey meaning. Sure, a Chinese speaker may find meaning in the texts he writes, but that meaning is not his meaning. This language is no longer one rational agent communicating his ideas to another rational agent, but merely a single rational agent trying to induce a meaning into a text that wasn't infused with meaning to begin with.
It is from this understanding that some of the various flaws of LLMs begin to make sense. For example, the reason they cannot accurately cite sources is, firstly, that they do not know what a source even is, and secondly, that the model is simply looking back into its data and checking what usually follows within that context under the parameters of "citing the source," which is why it so often simply makes them up. The truth is that its source is all of its data mashed together probabilistically, based on the input prompt and the context of the overall "conversation."
Getting back, however, to the question raised earlier about the consciousness of these LLMs: although in the analogy given above the man surely has a rational soul and a human intellect, it is also evident that it is not necessary to make use of these higher faculties in order to do what these machines do: it is merely pattern recognition and probabilistic computation. This is something that even the beasts could do if sufficiently trained (think of the example of a parrot). The machine has no concept of the meaning symbolized by these words; in fact, it does not see them as symbols at all, but as tokens with numerical values. And if it cannot comprehend meaning, then it certainly cannot reason on the basis of meaning. What is more, this is not simply a question of needing more training or more data; it is a matter of the process itself. For no matter how much you train the man in the analogy with more data and better pattern recognition techniques, he never will have actually learned Chinese until he starts to associate meaning with those symbols and is thus able to reason out a response instead of merely guessing what tokens go next.
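To make the point concrete, here is a toy sketch of "guessing what tokens go next" by pure pattern counting. This is my own illustration, not how any production model is built (real LLMs use neural networks with billions of parameters), but the principle of choosing a likely next token from observed patterns, with no meaning involved anywhere, is the same:

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a training text,
# then generate by picking the next token in proportion to those counts.
# Nothing here understands anything; it is frequencies of observed patterns.

def train(tokens):
    follows = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # never saw anything follow this token
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = "the man reads the book and the man writes the book".split()
model = train(corpus)
print(generate(model, "the", 5))
```

Even this trivial program can emit plausible-looking sequences without any notion of what "man" or "book" signifies; scale the same idea up enormously and you have, in caricature, the man in the room.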
Hammers Are for Nails
One aspect of LLM usage which is often demanded but seldom actually considered is the ethical one, and there is a sense in which you cannot blame people for evading the question, as our ethical framework has become quite impoverished over the last few centuries. In the secularist world the predominant ethical framework is probably either utilitarian (of some sort) or emotivist. As such, much of the discussion around the ethics of LLM usage revolves around consequences and effects, rather than the classical approach traditionally taken by the West, which focuses on what a thing is and what it is for. Even among some Catholic celebrities, owing to this prevailing framework, the thought process tends to get stifled at the consequences of LLMs, so that the only limitation placed on their use becomes a simple "don't do bad stuff with it." But this is not serious. If we are going to treat this new, revolutionary technology seriously, we must consider what it is and what it is for.
For this I must refer to a video made by New Polity titled "Should Christians Use ChatGPT?"[2] which is the only resource I have found as yet that actually addresses the issue of the nature of LLMs—though to be fair, I haven't done a lot of research on my own and I was already subscribed to them. I don't wish to go through the entire syllogism here, because that's what they made the video for, as well as their blog article "AI Chatbots Are Evil"[3] (the title of which probably spoils some of the answer), but I do wish to briefly summarize the concluding statements, taking for granted that we all accept basic virtue ethics. In short, we start not with the LLMs and the chatbots, but with human conversation and its purpose, and note, as the article puts it:
“Conversation is for communion. The ability to speak and to listen, to discuss, to reveal our hidden, intellectual life by articulating ourselves in a public, common language with the hope of receiving a response—all this has as its natural correlate in another intelligence, one who receives our meaning, understands it (or misunderstands it), and has the power to respond in kind, revealing the hidden reality of his or her own subjectivity.”
If conversation is for communion with another intelligence, and LLMs solicit conversation by how they are programmed, yet LLMs—deceptively called AI for marketing purposes—are not actually intelligent, then to converse with an LLM chatbot will necessarily frustrate the act of conversation. To frustrate an act from achieving its end is immoral; therefore LLM chatbots are an immoral technology, and to converse with one is likewise an immoral act.
This is a brief summary of the argument put forward by Marc Barnes of New Polity. For a fuller treatment of the matter, either watch the video[2] or read the article.[3] For my purposes, however, I'd like to clarify a couple of things regarding this conclusion. The first is that claiming a technology is immoral by its nature is not something new or unique to LLM chatbots (at least for Catholics); the second is to clearly delineate the limits of the argument, as one may wrongly assume it is an argument against all use of LLMs.
The claim that a technology is immoral by nature is not something new. I can give two examples which should be compelling: the first mostly to Catholics, and the second (though not explicitly condemned by the Church) hopefully to a broader audience. The first technology which should come to the mind of every Catholic in this regard is contraception. The argument by which the Church justifies its condemnation is exactly the same as the one used here against LLM chatbots: that it frustrates the natural end of a human action—in the case of LLM chatbots, conversation; in the case of contraception, procreation. Sure, one could technically do as Trent Horn has advocated in some videos(3) and purposely sabotage a contraceptive in order to attain some other end (he gives the example of acquiring seed from the marital act to use for artificial insemination), but I think we'd all recognize that this is not truly a use but a misuse of the technology: it's not what it was made for, hence the need to intentionally sabotage it. The technology itself is evil because its purpose is evil and unnatural. Similarly, we can give the example of a nuclear bomb. The purpose for which this technology was made was to destroy and kill indiscriminately at a massive scale, necessarily killing millions of innocent civilians in the process—something which I hope does not need to be explained as extremely immoral. Again, you could instead use a nuclear bomb to destroy or deflect a giant asteroid headed for Earth, but this is not what the technology was made for. Similarly, with LLM chatbots you could limit yourself to only giving commands, as opposed to conversation (just as we've done up until now with technologies such as Siri or Alexa), but this is clearly not what the chatbots are meant for. Would it be licit to use them in this way? Probably, but it would end up being forced and unnatural to the technology itself.
Secondly, it's worth noting that this claim is not as general and extensive as one may think. You'll notice that in the past few paragraphs I've been distinguishing between LLMs simply and LLM chatbots. This is an important distinction, because the underlying LLM technology is not necessarily meant for conversation and has much broader uses. Remember, the core LLM technique is not really about language as such (even though it's in the name), but about predicting, on the basis of prior tokens, what the next token is likely to be. As such, the argument doesn't touch upon things like LLMs autocompleting the next few words when you write your e-mails, or the next few lines of redundant code if you're a programmer—though there are legal concerns over violation of intellectual property rights. Nor does it apply to cases such as archaeologists using LLMs to "fill in missing words from ancient Roman inscriptions carved on monuments and everyday objects, as well as dating and placing them geographically,"[4] or the AlphaFold tool, which has been used to predict with amazing accuracy the 3D structure of proteins,[5] something which would previously have required a great deal of expensive and tedious manual labor in a laboratory. Of course, all of these should be (and generally are) verified by a human expert in the field in question, but we can see how these implementations differ from the chatbots: the chatbots seek to completely replace the human element in something which is meant for human interaction, whereas these implementations seek to improve the performance of human activity and automate certain redundant and tedious tasks, while the human being remains vital. In other words, these implementations actually use the LLM as a tool which assists man in his endeavors.
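The autocompletion use mentioned above can be caricatured in a few lines as well. This is an illustrative sketch of my own (the function names are invented, and real autocompleters use vastly larger statistical models): given the word typed so far, it suggests the continuations most frequently observed after it, which is prediction in service of a human writer rather than a counterfeit conversation:

```python
from collections import Counter, defaultdict

# Toy autocomplete: suggest the most common next word seen after the
# previous word in some sample text. The task is to predict, not to
# understand, and the human remains the one doing the writing.

def build_suggestions(text):
    follows = defaultdict(Counter)
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def suggest(follows, prev_word, k=3):
    """Return up to k of the most common continuations of prev_word."""
    return [w for w, _ in follows[prev_word.lower()].most_common(k)]

sample = ("please find attached the report. please find the file. "
          "please let me know if you have questions.")
model = build_suggestions(sample)
print(suggest(model, "please"))  # continuations most often seen after "please"
```

The user is free to accept, reject, or ignore each suggestion, which is precisely what makes this a tool rather than a substitute for the writer.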
Here Be Demons
This is probably where I will lose the entirety of the secularist (or quasi-secularist) section of my readers, since we delve much more into the spiritual, but I think it is worth addressing due to the grave danger it poses to the souls of men. We must talk about the potential for demonic influence in these LLM chatbots.
When speaking of this there are typically two reactions: on the one hand you have those who, incapable of understanding how a machine could generate text that feels human, claim it all to be demonic and powered by the Devil himself; on the other you have those who completely reject the idea because they frankly find the notion of the demonic, particularly in computing, to be absurd. I will take neither of these positions and instead choose a third middle route, which I think is more prudent and theologically sound.
We must not assume that, simply because we do not understand a technology, it is something demonic. Furthermore, there is an actual logical way of understanding how LLMs work, which can easily be found in the publicly available source code for some of these models.[6] That is not to say, however, that the demonic cannot influence a computer or a computer algorithm, particularly one that, as already stated above, is made for an evil purpose. This is made all the more probable by the fact that these LLM chatbots are being turned into literal idols to which people entrust their livelihoods. This is not an exaggeration; an increasing number of people are beginning to use these chatbots as a means of therapy[7] or for making other important decisions in their lives. What's more, arguably because of the somewhat mystical omniscience and objectivity often attributed to these LLM chatbots, people will put more faith in the answer given by the chatbot than in a human being or their own personal research. We've all had the experience of having a conversation with someone who, when a dispute arises, takes out their phone, asks the chatbot, and trusts its judgment as if it were final. This turns these chatbots into idols, not just in the very loose sense we're used to hearing from our priests and catechists where it's idolatry for me to like my game console too much—an example I've heard used quite a bit for some reason—but in the very real sense that as a society we are assigning divine attributes (sometimes implicitly, sometimes explicitly) to, not just a creature, but a creature of our own making: just as the ancient heathens would assign divine attributes to the statues they made of matter that in some way resembled a human being or another creature.
When this sort of idolatry happens, which is already a sin against the First Commandment, it is no surprise that just as with the idols of old, demons can also use the new idols to manipulate men to stray from God. For demons like to make use of the creatures we replace God with in order to mislead us. By placing so much faith in these idols we make ourselves vulnerable to misdirection by an algorithm that has no sense of the significance of the words it is saying. It is akin to the reading of tea leaves or astrology. You are extracting meaning from something that has none. You seek wisdom from something that cannot think. You seek consolation from something that cannot empathize. You seek love from something that, for lack of a will, is entirely indifferent to your existence or fate. It is truly replacing God, the Summum Bonum, with a shell which doesn't even deserve to be called a shadow compared to that Being who is the only one who can truly fulfill our deepest desire.
In the World, Not of the World
The question thus becomes, what do we do now? Even though I've limited my criticism (which honestly is mostly sourced from New Polity) to the chatbots, it is true that this seems to be the most common use of LLMs for the common man, or at least the most obvious. It would seem to be almost ubiquitous at this point, with everyone around us using it and some Christians even promoting it. Even so, I think we ought to live by our convictions and what is right and avoid the use of chatbots and especially their development.
I should acknowledge that this may be difficult for some, who may find themselves forced to use these chatbots by an employer or educational institution. Obviously, where there is a lack of freedom there is also a lack of culpability. Yet it would be good to begin to make known our opposition to the use of such technologies and to have a good defense of our position prepared. If ultimately forced to use them anyway, we should use them in the most restricted way possible and avoid any conversational language, focusing instead on direct queries (i.e. questions) or orders, since these do not constitute a conversation. Certainly this will be a difficult task and an uphill battle, since the technology is made for the purpose of mocking conversation, but that is not to say it is impossible.
I would also like to speak directly to my Catholic brothers and sisters, who I think suffer from an acute sense of Fear-Of-Missing-Out (a.k.a. FOMO). Ever since the Protestants beat us to the mass adoption of the printing press—and since we've recognized that as a mistake—Catholics have been afraid of falling behind technologically. This, combined with the overall worldly technophilia, has led to many Catholics jumping head-first into whatever new technology appears, without any real discernment about the technology itself and whether it is truly the most appropriate means to the desired end. As a result, when it comes to LLM chatbots, Catholics have been quick to implement them in spite of the obvious limitations. This is most obvious with something like "Justin, the Catholic Answers Virtual Apologist," which, as demonstrated by the famous atheist Alex O'Connor, can easily be made to give shallow, nonsensical responses on very basic issues (e.g. slavery in the Old Testament), as well as to contradict itself.[8] All the while, someone could simply look up an answer in a search engine and find many coherent responses from real people who are actually able to think about the answers they are giving.[9] Now certainly, some Catholics may not be able to give satisfactory answers to these questions either, but most Catholics aren't professional (or even amateur) apologists. These LLM chatbots, on the other hand, are explicitly marketed as such, and so cause scandal when they fail to give adequate answers, with the additional flaw that no one can actually take responsibility for the faulty response, since it was probabilistically generated. At least if a Catholic apologist says something scandalous, he takes responsibility for his blunder; a chatbot cannot take responsibility, because it has no agency.
When I first converted to the Catholic Faith, we were proud to show off our vast library of resources from the Catechism of Pope St. John Paul II, to the Summa Theologiæ of St. Thomas Aquinas, and even the amazing work done by the good people at Catholic Answers who have created a large repertoire of responses to common objections to Catholicism on their website. All of this knowledge which has been accumulated over two millennia is now being pushed to the side in favor of a subpar answer from a literally mindless, irrational, and thoughtless machine. How can this be seen as anything but a deterioration of the Catholic intellectual tradition which we were all so proud of?
To be clear once more: I am not condemning all uses of LLMs, as pointed out above, but only those which substitute for something which by its very nature is meant to be an interaction between two persons. We may continue to search for applications of LLMs in our respective fields as a tool to make our work more efficient and less tedious—honestly, if it can autocomplete the next 20 lines of totally predictable and repetitive code, that's great—but we must not use it to replace something which is by its very nature an interpersonal act. It's the same reason we all think that sex dolls are an absolutely disgusting idea and people who use them are freaky weirdos.
Let us avoid the use of these LLM chatbots, as well as their development. Instead, let us look at the actual applications LLMs can have, and reserve conversation for persons, not things that merely sound like persons. We should also avoid the temptation to sloth: looking to one of these chatbots to quickly synthesize what are often complex matters, or matters we could easily discover for ourselves via a quick search online—yes, that thing we used to brag so much about when I was a teenager. This also means not facilitating the vice of sloth by providing such means. But most importantly, even if you disagree with everything I've said about how the use of LLM chatbots is immoral per se, at the very least strive to avoid idolatry: do not turn to the chatbot for all your answers. This is extremely spiritually dangerous and a real offense against the First Commandment, the most important of the Decalogue. I would especially advise spiritual leaders of all faiths—particularly the Abrahamic faiths, which all resoundingly condemn idolatry—to call attention to this spiritual danger and to take it seriously. Yes, you may be called a retrograde or a Luddite for doing so, but when the spiritual well-being of your flock is at stake, is it not worth the scorn you will receive? "For what does it profit a man, to gain the whole world and forfeit his life?" (Mk. 8:36) We must not fall into the fatalism of believing that simply because the World develops a technology we must inevitably use it. We have the freedom to resist these things, just as the Catholic Church continues to do with technologies such as contraception. To accept this fatalism is to believe that we have no say not only in how society functions, but even in how we conduct and live our own lives. And to accept that is to deny human freedom, and to deny human freedom is to despair.
But we are free, so we need not despair, nor need we succumb to the whim of the winds of the World which seek to uproot us. If our roots are in Christ, the fertile ground, if our house is built upon the rock which is Christ, we need not despair, nor have we the need to embrace the whims of the World. For Christ is the Rock, and the Rock does not move, as “Jesus Christ is the same yesterday and today and for ever.” (Heb. 13:8) Amen.
Resources
Notes
- Some people take the Turing Test to be an indicator that a machine has reached consciousness, but this is a misunderstanding. The test merely indicates that in a blind scenario a human cannot tell whether they are talking to another human or a machine, usually via text prompts.
- Singularity in this context is understood as a point-of-no-return past which the advances of technological complexity are beyond our control.
- I cannot find the video in particular. It may have been his episode “What's Okay for Married Couples to Do (Sexually)?” but I can't be certain.
References
- Anthropic CEO floats idea of giving AI a “quit job” button, sparking skepticism - Ars Technica
- Should Christians Use ChatGPT? - YouTube
- AI Chatbots Are Evil — NEWPOLITY
- Gaps in what we know about ancient Romans could be filled by AI - BBC
- AlphaFold Protein Structure Database
- The 11 best open-source LLMs for 2025 – n8n Blog
- Should You Use AI as Your Therapist? - U.S. News
- An Atheist Debates a Catholic AI Apologist - YouTube
- Let Me DuckDuckGo That For You - LMDDGTFY