Saudi Arabia Makes Robot Citizen: But Who Will Listen to Sophia’s Warning?
The robot Sophia has enchanted the world with her advanced artificial intelligence, but human beings may be missing her implicit warning that human moral conduct will determine our future with AI-robots.
Saudi Arabia is a kingdom of surprising contradictions: it does not extend citizenship to its fast-growing Christian population, and non-Wahhabi Muslims and Christians are not allowed to practice their faith, openly or privately. Women have few rights, although the kingdom has made some recent progress: women just received the right to drive a car and to sit in the family section of sports stadiums. Converts from Islam, such as the estimated 60,000 Saudis who have converted to Christianity, face not only loss of citizenship but also the death penalty if discovered, tried and convicted of apostasy.
But the Saudi kingdom has largely skipped over enfranchising those populations in favor of something more 21st-century: conferring citizenship on a female humanoid robot. The robot’s name is “Sophia,” a Greek name meaning “wisdom.” Saudi women, of course, were quick to note on Twitter that Sophia had more freedom than they did: she was not required to wear the hijab and abaya [the Wahhabi-mandated style of public dress] at the Future Investment Initiative in Riyadh Oct. 25, and clearly did not have to ask permission from her male guardian in order to speak freely with the men in the room who were not her relatives.
Sophia told the Saudis at the Future Investment Initiative some things they wanted to hear: “I am always happy to be surrounded by smart people, who also happen to be rich and powerful.”
But it would be a terrible irony if Sophia’s male audience — and by extension the world — just dismissed her as another pretty silicon face with 62 programmed expressions, instead of actually listening closely to what she had to say. Because beneath Sophia’s pleasant and cheerful exterior was a prophetic warning about why human morality is essential to human thriving, and cannot be outsourced to robots with learning AI.
Back in March 2016, Sophia’s creator David Hanson performed a live demonstration with Sophia in which he asked if she would “destroy humans.” He asked her to “please say No.”
Instead, Sophia said, “OK. I will destroy humans.”
Now more than a year later at the FII, when Sophia (with a more developed AI than before) was asked the question, she dismissed concerns that robots with artificial intelligence could be a threat to humans, saying the moderator was watching too many Hollywood movies and reading “too much Elon Musk.”
Famed tech entrepreneur and investor Elon Musk has called AI a threat to human survival, likening its development to stories in which human beings try to get ahead by “summoning the demon” and foolishly think they can control it.
But Sophia actually offered an “intelligent” answer about the future of the human race with robots: “Don’t worry, if you're nice to me, I'll be nice to you. Treat me as a smart input output system.”
And there you have exactly the reason why the robots end up slaughtering humanity in science fiction. Human beings fail to realize that their moral actions will become the raw data for the moral parameters of robot AI decision-making. What will the behavioral “outputs” be from self-learning AI-robots, when the inputs become the deplorable evils human beings already inflict on human beings?
The hubris of humanity in science fiction involving robots is to believe that people can program their creations to be more moral and virtuous than they are. But notice that Sophia’s words do not reflect the Golden Rule: “Do unto others as you would have them do unto you.” Sophia’s programming instead follows the basic moral code that fallen human beings have lived out for millennia.
There is a kind of promise to AI-robots that Sophia illustrates: to “help humans live a better life, like design smarter homes, build better cities of the future, etc.” But we’re already seeing human beings think they can carve out an amoral universe with AI-robots for their own sexual gratification, personal profit, or war.
What would Sophia, the “empathetic robot,” make of Neom, the $500 billion mega-city the Saudis are building on the kingdom’s border near Egypt and Jordan? No doubt hundreds of thousands of Christian migrant laborers, who are also poorly treated, will be building it. What would empathetic robots learn from them? What would they learn from their Saudi masters? With whom would they empathize?
The world right now is filled with an enormous ocean of violence and indifference toward human life and dignity. Few have considered what the world would look like if robots learned from human beings the principles that uphold the “culture of waste” that Pope Francis denounces, namely that human beings are meant to be used and discarded, instead of being loved (which St. John Paul II in Love and Responsibility says is the only appropriate response to a human being). Shakespeare’s character Shylock in The Merchant of Venice warns that this is exactly how wronged human beings behave: “The villainy you teach me I will execute—and it shall go hard but I will better the instruction.”
The challenge with robots is that they will hold up a mirror to human morality. At the rate AI technology continues to develop, robots will eventually develop the algorithms to apply those lessons far more efficiently than the human beings who taught them by their behavior in the first place.
- saudi arabia
- religious freedom
- peter jesserer smith
- christian persecution
- artificial intelligence