‘Human-Centered AI’: How Should the Church Engage With Emerging Artificial Intelligence Technologies?

Father Philip Larrey, a leading Catholic expert in the area, discusses AI’s promises and perils.

As chairman of Humanity 2.0, Father Philip Larrey has developed strong ties with leading AI innovators. (photo: Courtesy photo / Father Philip Larrey)

The March 2023 introduction of GPT-4, the large language model developed by the research lab OpenAI and deployed in its ChatGPT chatbot, has sparked enormous excitement, as well as great anxiety, about the latest advances in artificial intelligence, or AI.

In response to questions and prompts from users, the GPT-4-powered ChatGPT can converse and write poetry like a human. In minutes, it can spin out college-level essays on an endless array of topics, though details can be wrong. It can write and correct computer code, and it can supply work products that may soon threaten the livelihoods of lawyers, accountants and journalists.

But in a March 2023 “open letter,” released by the nonprofit Future of Life Institute, top technologists warned that this promising instrument posed “profound risks to society and humanity.” The signers called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

AI technologists are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” the letter read.

The pause would provide time to introduce “shared safety protocols” for AI systems, the letter said. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

In May 2023, a second high-profile statement, organized by the Center for AI Safety, underscored the need for urgent action by government regulators and the tech sector.

Since then, President Joe Biden has vowed to take action that would secure the “safety” of AI advances, specifically addressing concerns about the potential for ChatGPT-fueled disinformation that could foment divisions and unrest in the months leading up to the 2024 presidential election.

Father Philip Larrey has played an active role in the debate over the future of AI and the construction of “guardrails” needed to protect against bad outcomes. The chairman of Humanity 2.0, which facilitates “collaborative ventures between the traditionally siloed public, private and faith-based sectors,” he has developed strong ties with leading AI innovators.

The U.S.-born priest is the author of Connected World: From Automated Work to Virtual Wars: The Future, By Those Who Are Shaping It and Artificial Humanity, among other works. He holds the chair of logic and epistemology at the Pontifical Lateran University and has worked closely with Vatican officials who have organized a series of high-level conferences on AI in the past two years. 

Father Larrey is also serving as an adviser to the recently unveiled Magisterium AI, which allows users to apply artificial-intelligence technology to the study of Church documents.

During a telephone interview in late June with Register senior editor Joan Frawley Desmond, Father Larrey offered his take on the fast-moving developments in the tech world and presented Catholic teaching as a vital framework for assessing the promise and peril of artificial intelligence. 

 

Experts on Artificial Intelligence have been raising concerns for some time about the threat posed by new advances in this technology. What are the most important breakthroughs leading some to call for a pause? 

The algorithms have gotten really good. 

The G in GPT stands for “generative” [GPT is short for “generative pre-trained transformer”], and it’s almost as if the algorithms have a life of their own, learning more as they access more and more data and as they interact with about 300 million people using ChatGPT.

At present, I believe, GPT-4 is available on the OpenAI platform.

Sam Altman, [CEO of OpenAI, the Microsoft-backed AI research lab that is behind ChatGPT] says he’s working on GPT-5, but it’s not ready yet. 

If you are just learning about ChatGPT, it’s important to know that the more you interact with it, the better it performs. 

 

The scale of investments for building the most important AI platforms is breathtaking. 

This software has cost about $1 billion to construct, when you take into account [all the time and costs associated with building the platform]. It isn’t something you can do in your garage. These are mammoth platforms.

Sam Altman[’s OpenAI] will be receiving $10 billion from Microsoft over the next several years.

Google has combined two of its major AI projects: DeepMind, based in London, and the Brain project [from Google Research], in Mountain View, California.

Their AI is going to be much more powerful than OpenAI’s because Google has committed almost unlimited resources to building its platform. Meta and Amazon are also building their own platforms.

In the last year, these instruments have become even more powerful.

The reason behind this is that the technology and software have gotten much better, and there’s going to be exponential growth from this point on.

The big turning point came when engineers stopped trying to make artificial intelligence mimic human intelligence and instead had algorithms perform logical calculations on statistics.

In a very narrow sense, these algorithms are better at conducting logical calculations than the human mind. They are much faster. They can access much larger databases, and they’re much more precise and exact. 

But it’s important for us to realize that what these platforms are doing is not the same as human intelligence. 

 

So these platforms could boost innovation benefiting all mankind, like breakthroughs in health care and customized educational support.

Yes. The plus side is that it could help us achieve our goals in areas like education. I know that some of our students were using ChatGPT to write papers, and one of my professors said, “You need to ban ChatGPT at the university.” 

I said, “No, because the students are going to use it anyway. Why don’t we try to look at how ChatGPT could improve students’ understanding of a subject?” And that’s what we’re trying to do. 

Let me give you an example. I used ChatGPT to write a five-page essay on the nature of the soul in Thomas Aquinas. I’m very familiar with this topic, as I’ve been studying Aquinas for most of my life, and ChatGPT gave me a pretty good essay, a little bit superficial, but decent.

Then I asked it to provide quotations from Question 84 of the Summa Theologiae, where Aquinas speaks about this. Right away, I got really good quotations, and that made the essay better. 

I asked for some other quotations, and because I’m familiar with the concepts and quotations, ChatGPT and I put it all together in an organic way. I printed out the essay and asked a renowned professor of medieval philosophy to grade the paper. He gave it an A-. When I told him that it was written by a computer, he said a computer couldn’t have done that.

If students are using this technology to take credit for work that is not their own, then that’s cheating, and it is not allowable.

But [this interaction with ChatGPT is not] plagiarism because plagiarism is copying someone else’s work. 

ChatGPT is actually creating something new with the student. 

If the student works with ChatGPT, and makes that known, it will become similar to the practice of many judges in the U.S. court system [who use software for sentencing decisions] and must make it known that they are using the software to hand down a sentence.

 

Some experts say that any work product that has benefited from ChatGPT should include an acknowledgement of that fact. But while the research-paper case poses ethical issues, a much bigger worry is the possibility that future AI advances could surpass human control.

As these platforms get more powerful, we will probably give them more control over the digital world, simply because they do it better and faster. Most trading on Wall Street is already done by algorithms. We were glad to give them that power because of the positive effects on the regulation of the New York Stock Exchange. 

But the more control we give to AI, the more we will fear not having control. Sam Altman told me his greatest concern is that OpenAI could be used by a malevolent force for evil.

 

Are you talking about the risk of disinformation, misinformation, or more aggressive efforts to confuse or drive people to some kind of extremist behavior?

Sam Altman is talking about things like a nuclear-bomb threat. What happens when ChatGPT gets the codes for an atomic weapon? It sounds like science fiction, but it’s not completely unreasonable to imagine this happening. 

In 2021, Colonial Pipeline, the largest provider of gasoline on the East Coast, was the target of a ransomware attack. A Russian hacker managed to get malware in the pipeline’s servers and demanded a ransom in Bitcoin to hand back control of the servers. What happens when the software decides to do something like that on its own?

But Sam also has a point when he says, “Let’s get ChatGPT out there.” People will start using it when the existential risks are very low, and we will figure out how to address those risks. 

That’s what he is trying to do. He’s establishing guidelines: ChatGPT can’t access the dark web, where you can buy drugs and guns, and it can’t access other things that are illegal. He calls these guidelines “guardrails.” And he suggests that the U.S. government should require a license to be able to operate a large platform using AI. 

He’s coming up with these [regulatory] ideas, not the U.S. senators, and I think that’s because the senators don’t really understand what is happening.

 

You are in touch with AI innovators, and the Vatican has hosted conferences studying these new advances. Why are technologists taking part in this dialogue with the Church? 

Pope Francis wields perhaps the greatest soft power in the world and is a universally recognized moral authority.

Silicon Valley people are interested in speaking with the Vatican or with religious leaders because they’re authentically concerned about this new technology. 

The problem is that, at times, Church leaders and thinkers don’t use the language and vocabulary that Silicon Valley can understand — and vice versa. So I’ve been trying to facilitate a dialogue, where we’re using language that we can both understand.

 

What other voices are involved in the debate over the future of AI?

The author Yuval Harari has written extensively on the subject. In his recent book, 21 Lessons for the 21st Century, he talks about people putting more faith in algorithms than in themselves and about Artificial Intelligence simulating intimacy and emotional ties. Harari is trying to shock us out of our comfort zone and show the urgency of controlling these new technologies. He advocates that we take back our autonomy, and I agree with that. But when he says that the “Singularity” (the union of AI with biologically based intelligence, predicted to occur around the year 2045) is the next stage of human evolution, I am hoping he’s wrong.

You may recall Spike Jonze’s 2013 movie Her, which explores the simulation of intimacy when an AI, with the voice of actress Scarlett Johansson, learns how to have a man fall in love with her (even though the man’s character, played by Joaquin Phoenix, knows that she is an AI and not a real human person).

 

When you dialogue with tech leaders on this topic, how do you bridge differences between the Church and Silicon Valley? 

When engineers or the CEOs of large companies come to Rome or invite me to San Francisco, they say, “We can’t just sit down and talk about this stuff; there has to be a framework.” 

In fact, the Catholic intellectual tradition has been looking at such problems for centuries, and my objective is to translate that intellectual tradition in words that the engineers and CEOs can understand. The fundamental Aristotelian and Thomistic framework of anthropology, cosmology and human nature is an excellent instrument for approaching these issues. 

 

What are some of the key principles that must ground this discussion? Do they deal with the nature of the human person and the purpose of life?

These principles include the following: Human beings are fundamentally religious, and the Christian understanding of the purpose of life is to know, love and be with God in this life and the next. 

We are created in the image and likeness of God. 

The human being is a substantial unity of the soul and the body, which Aquinas calls “matter and form.” 

The human being’s soul is immortal, which means it exists beyond the death of the body. This principle is important because many people are looking towards immortality. For example, Larry Page, the co-founder of Google, has a new company that hopes to discover ways to extend human longevity. And futurist Ray Kurzweil’s book Fantastic Voyage is subtitled Live Long Enough to Live Forever. Eventually, says Kurzweil, technology will allow you to rejuvenate yourself. 

The response of the Church would be theological in nature, not just in terms of the technology. Helping people to live longer is an admirable achievement, but human immortality is another question and depends on God’s plan for the human person. Radical life extension also brings up the question of the meaning and purpose of human existence, and of course the Church’s doctrine on this issue is very rich.

 

What kind of challenges do these advances in AI pose for the Church? What more should the Church do to prepare? 

The Church is very heterogeneous. 

I just finished a two-week course with religious sisters from Latin America, hosted by St. Mary’s University of Minnesota. The sisters had all kinds of questions about AI. I told them, “When we speak about the Church, we don’t mean just Pope Francis. It’s everybody who is involved in the Church,” from bishops, pastors and parishioners to women religious and those who run universities and hospitals and so on. 

From the perspective of the Catholic Church, as we use these technologies, we need to keep the focus on the human person. “Human-centered AI” is becoming a motto that can sum up the Church’s view on this issue. The Institute for Technology, Ethics and Culture at Santa Clara University recently published a handbook called “Ethics in the Age of Disruptive Technologies: An Operational Roadmap,” which provides a very good analysis of these issues from a Catholic point of view. Bishop Paul Tighe, secretary of the Dicastery for Culture and Education in the Vatican, has endorsed the handbook as reflecting a truly Catholic perspective.

Education and critical thinking will be essential in order to understand how we can benefit from these technologies in the future. I am convinced that, as a human race, we must use the technology in order to flourish and achieve our goals. That is my hope and prediction for the future.
