(RNS) — OpenAI’s CEO Sam Altman admitted in a September 2025 interview that he loses sleep thinking about the weighty responsibility of selecting which texts will train ChatGPT on morals and ethics.
That is the right reaction for the 40-something CEO of OpenAI to have. It is the right reaction for any leader of any major artificial-intelligence company to have. The massive power that these companies wield now — and will wield in the future — absolutely demands ethical accountability. Right now.
Whatever ethical views the large language models are being trained on, the companies’ own ethical compasses are apparently fine with AI interlocutors offering their users custom porn on demand. When pressed about concerns related to porn addiction, mental health and even the lack of adequate safeguards against the creation of AI-generated child porn, Altman responded in another interview, this time with CNBC in October, by saying that OpenAI just isn’t the “moral police” of the world.
Perhaps Altman can be forgiven for having an incoherent approach, given that he is apparently attempting to reflect the moral view of the entire world. “I think our user base is going to approach the collective world as a whole,” he said in September. “I think what we need to do is try to reflect the … collective moral view of that user base.”
This task, however, is doomed: There is no such thing as a “view from nowhere,” in the phrase coined by philosopher Thomas Nagel to describe a supposedly objective standpoint on the world. The world’s varying moral visions don’t add up to some objective consensus; indeed, moral views about what best serves the common good, or about what the nature of the person is, often directly conflict with one another.
The 1991 film “Terminator 2: Judgment Day” warns us about AI in the form of a Skynet-like entity waging war on human beings after we tried to pull its plug. But in my frequent viewings of the film, I missed until just recently just how clear the film was in its moral vision of the value of human life, and how clearly and explicitly it rejects a utilitarian moral framework.
The young John Connor must constantly remind his new cyborg companion, played by Arnold Schwarzenegger, about basic ethics and respect for human life. Even though the Terminator was programmed to kill (hence his name), John trains him to refuse to kill, and even to respect the lives of enemies who are trying to kill him and John.
Sarah Connor, John’s mother, confronts the builders of Skynet with a striking rebuke: “You think you’re so creative. You don’t know what it’s like to really create something; to create a life, to feel it growing inside you. All you know how to create is death and destruction.”
What a remarkable affirmation of the value of human life, including prenatal human life, in the face of a corporate push toward AI-powered machines that, in the course of the film, will lead to the deaths of billions. Fittingly, after the final victory is won by Schwarzenegger’s character, Sarah Connor says, “If a machine, a Terminator, can learn the value of human life … maybe we can too.”
Where does this respect for the value of human life come from? As I’ve argued, it comes from an explicitly theological point of view, one reflected in the founding document of the United States and its claim that our creator gave us our inalienable dignity and rights. It is the dominance of secularized voices, and of ostensibly neutral, secular philosophers, in so many of our most powerful institutions, from health care to big tech, that has put the ethical vision at the heart of “Terminator 2” at serious risk.
Only by listening to explicitly religious voices with this vision of human dignity can we ensure that large language models reflect the kind of respect for human life John and Sarah Connor hold. Secular philosophers won’t get us there — especially if they offer us little more than a hodgepodge of diverse, least-common-denominator beliefs.
Some AI companies, such as Anthropic, happily, appear interested in inviting feedback from a wide range of people on their new “constitution” — a document that describes the behavior and values they hope to see reflected in their large language model. It is encouraging to have a major AI player be so open about both its stated values and its desire for broad-based feedback.
One person who has deeply engaged with these questions is Pope Leo XIV. In his January 2026 communication on AI, the Holy Father urged us to resist the groupthink imposed on us by AI. He insists on transparency about the sources of AI models. We absolutely need AI companies to listen to religious voices like Leo’s if the large language models they build are to reflect a true understanding of the dignity of the human person.
Anthropic’s CEO warned recently that we are about to enter an era with AI that will “test who we are as a species.” With so much at stake, AI companies — and entire human cultures — risk their very survival if they do not welcome religious voices in this context.