Assessing the Ethical (and Unethical) Implications of Artificial Intelligence

Pope Francis charges Vatican office to dedicate the next two years to AI study.


Artificial intelligence technology — intelligent machines that work and react like humans — is increasingly becoming part of everyday life, prompting Pope Francis to ask the Pontifical Academy for Life to dedicate the next two years to ethical questions posed by the technology.

In a lengthy Jan. 6 letter called Humana Communitas to mark the 25th anniversary of the academy, the Holy Father said such technology has “enormous implications,” as it “touches the very threshold of the biological specificity and spiritual difference of the human being.”

In a message to the World Economic Forum in Davos, Switzerland, last year, he said that AI must “contribute to the service of humanity and to the protection of our common home, rather than to the contrary, as some assessments unfortunately foresee.”

Current examples of AI range from relatively benign systems, such as programs that can beat chess champions or “Alexa,” Amazon’s virtual assistant for the home, to more sinister ones involving algorithms that extract large amounts of data from consumers and invade personal privacy.

Sophisticated artificial intelligence systems can use past experiences to inform future decisions; examples include self-driving cars made by Tesla and Google, automated weapons systems, cyber-military technology and drones.

Recent research aims to understand people’s emotions, beliefs, thoughts and expectations, and even to create robotic systems that can interact socially. In Japan, robots comfort childless people, and “love bots” exist that serve as objects of sexual fetishes.

In a Feb. 25 address to the life academy, Pope Francis noted the paradox of these technologies: that on the one hand they can allow us to “solve problems that were insurmountable until a few years ago,” but, on the other, present “difficulties and threats, sometimes more insidious than the previous ones.”

The technology is a product of “ingenuity,” Francis noted, but it also “casts a dangerous spell” that can end up producing “nefarious outcomes.” He warned that artificial intelligence could become “socially dangerous,” making man “technologized” rather than “humanized.”

Archbishop Vincenzo Paglia, the president of the Pontifical Academy for Life, told La Stampa March 26: “We need to understand the epoch-making transformations … in order to identify how to direct them toward the service of the human person, respecting and promoting his intrinsic dignity.”

He said this will involve “listening carefully to the phenomena in their complexity” to understand them better, adding that “listening doesn’t mean legitimizing,” but “getting in touch with reality and becoming aware of the multiplicity of projects and initiatives that are underway in this field.”

 

Algorithmic Trolls

“Some perspectives at times surprise us with their audacity, their creativity, their potential, but also with the diversity of anthropological approaches that they express,” Archbishop Paglia said.

Artur Kluz, an expert in AI technology and chairman of the advisory board of the Centre for Technology and Global Affairs at the University of Oxford, England, welcomed the Vatican’s involvement as “very timely.” He believes these technologies pose ethical challenges that he listed as “privacy, bias, transparency and accountability.”

Algorithms, he said, are increasingly being used to extract data that “could invade personal privacy.” Meanwhile, he said “no consensus” yet exists on what ethical principles should guide development and use of “advanced autonomous systems.”

He highlighted current misuse of AI, citing examples of “fake news, people creating algorithmic ‘trolls’ spreading discord,” and how AI “can take away a population’s civil liberties.”

When it comes to AI and “cyber weapons,” Dominique Lambert, a philosophy professor at the Université de Namur, Belgium, said these could potentially be more devastating than “conventional or local nuclear weapons.”

“All these technologies can backfire against the civilian population,” he told a conference on AI at the Pontifical University of St. Thomas Aquinas in Rome March 29. “Robots can be used for permanent surveillance without any human supervision.”

He argued for legislation that defends the human person and human dignity in the face of such technology and said that the Holy See and other institutions must insist on international humanitarian law as a bulwark against its misuse.

Dominican Father Ezra Sullivan, an expert in the philosophy of science, metaphysics and Abrahamic religions at the Pontifical University of St. Thomas Aquinas, warned that AI can lead, and has led, human beings into idolatry in the forms of transference (of loyalty, right relationships, etc.), greed and control.

In comments to the Register, he said artificial intelligence could potentially lead humans to “treat each other like machines and also to lose a sense of their own dignity.” People could then see themselves “no longer as meaningful agents, but rather as passive objects of manipulation by machines or elites with power.”

Such a regression, Father Sullivan said, could lead to “mental-health disorders as well as spiritual malaise.”

Benedictine Father Gregory Gresko, a professor of theology at the Pontifical Athenaeum of Sant’Anselmo in Rome, believes one of the greatest current challenges is determining who is responsible when violations of morality are made through AI.

“Is it the proprietor of AI? The coder who did the initial development? Is it the vendor who sold the AI components to the buyer? ‘Who is responsible for AI?’ is a huge question,” he said.

Kluz also foresees “new challenges” for the Church, including a different “understanding of anthropology” and the “meaning of human life,” as well as the technology’s implications for the Catholic conception of “the soul.” What if such technology eventually achieves, or even surpasses, a person’s capacity for “sentience and rationality”?

The Pope, in his Feb. 25 address, spoke of the need to “understand better what intelligence, conscience, emotionality, affective intentionality and autonomy of moral action mean in this context.”

But some are skeptical such a development will ever happen.

“A soul has to have a destiny,” said Father Gresko. “It has to have a conscience to make moral decisions for God or against God, and AI will never be able to do that.”

 

Silicon Ethics

At the March 29 conference, Archbishop Paul Tighe, the secretary at the Pontifical Council for Culture, said that, given these challenges, it is imperative that the Church and technology developers enter into dialogue and build friendships with one another in order to infuse an ethical dimension into AI.

The same view was echoed by Dominican Father Eric Salobir, the president of Optic Humana Technologia, which focuses on the ethical concerns of disruptive technology.

Most decision-makers and regulators “do not have the tools and the time just to sit down and see the full picture of what could be the negative consequences and impact of those technologies,” he told the Register March 29.

This is where the Church can come in, he believes, working with AI creators to create technology that won’t “replace the human being but rather empower the human being to be more human.”

Technology, he stressed, “is not neutral” but always has “a kind of intentionality” and needs to be kept in check so it is “shaped, and shapes our society, in a good way.”

And he believes that even though much of the industry is money-driven, it is possible to align financial goals with “positively impactful” AI technology. “What has to be promoted and maximized is not profit but value,” he said. “We call that ethics by design.”

Father Salobir, who is in regular contact with tech industry leaders within Google and Microsoft, is also not deterred by the anti-Catholic bias of many working in Silicon Valley.

“Too often we blame those companies as a big evil, and I think that it’s absolutely wrong,” he said. “They are trying to shape society, and we need to be in a dialogue.”

He said he tries to stress with them the Christian concept of human dignity, that we are created “in the image and likeness of God,” and discuss the “big societal challenges.”

Father Salobir said that despite the many challenges, he is “super hopeful and optimistic” that ethical ways forward can be found because of the “very good and reliable interlocutors” that he has engaged with.

Despite sin and man’s fallen nature, he said he trusts humanity “to find the right way.”

Edward Pentin is the Register’s Rome correspondent.
