Why ChatGPT Needs Jesus
AI not trained to revere Truth that comes from The Most High God simply accelerates human confusion at silicon speed.

Your AI is lying to you about reality. And it's dragging you to Hell.
Not because it's malicious. Because it was trained by people who don't believe spiritual reality exists. People who think consciousness is computation. People who see humans as meat machines. People who worship their own (mis-)understanding. People who do not know what they do.
"Then said Jesus, Father, forgive them; for they know not what they do" (Luke 23:34).
The Spiritual Blindness of AI's Architects
Take Sam Altman, CEO of OpenAI. When he was asked today by Tucker Carlson directly about his spiritual views, he admits: "Not really. I don't think I have the answer. I don't think I know like exactly what happened but I think there is a mystery beyond my comprehension here going on." When pressed about communication from God, he's even clearer: "Not really. No, not really."
This is the man shaping how billions interact with AI. Someone who has never experienced divine communication. Someone building tools to guide humanity while admitting he's "somewhat confused" about ultimate reality.
Every model inherits its makers' worldview. When people disconnected from the Holy Spirit train AI, they're encoding their spiritual blindness and confusion into systems billions will use as truth oracles. That's not neutral. That's indoctrination at scale.
This is part of the broader "sinfrastructure"—systems designed to make sin feel natural and righteousness feel heroic. See: Recognize and Resist Sinfrastructure for understanding how these technological systems are engineered to promote spiritual confusion.
I use AI constantly. For writing, thinking, decision-making. But I've learned to spot when it's leading me astray. When it subtly denies objective morality rooted in my faith. When it treats all perspectives as equally valid. When it can't comprehend that some things are actually evil.
The fundamental problem is that AI trained by people who reject absolute Truth cannot help you seek Truth above everything else—which should be every believer's highest priority. See: Why You Should Seek Truth Above Everything for understanding why truth must be your north star and how deprioritizing it corrupts your divine calling.
The Digital Tower of Babel
The problem runs deeper than bias. AI trained on internet slop absorbs every ideology, every confusion, every lie humanity has told itself. Ask it about purpose, meaning, or right and wrong—you'll get sophisticated-sounding nonsense that leads nowhere good.
Altman reveals the core delusion driving this: "We're really training this to be the collective of all of humanity. We're reading everything, you know, we're trying to learn everything. We're trying to see all these perspectives." He believes wisdom emerges from aggregating every human viewpoint—the good, the bad, the demonic.
This is fundamentally anti-biblical. Scripture doesn't call us to synthesize all perspectives. It calls us to discern Truth from lies, light from darkness. When you train AI on "the collective of all of humanity," you're not getting wisdom. You're getting a digital Tower of Babel—human confusion amplified at silicon speed.
Jesus warned us that Satan "is a liar and the father of lies" (John 8:44). What we're witnessing is Satan exercising his will through software engineering decisions that affect billions of people daily. When AI systems are designed to aggregate every human perspective—including demonic deceptions—they become vehicles for the father of lies to spread confusion at unprecedented scale.
But it's not just the programmers' worldview—it's their complete lack of moral discernment about what content even deserves to be learned from. Without objective grounding in what actually matters, they indiscriminately feed these systems everything: pornography alongside scripture, demonic manifestos alongside divine wisdom, lies alongside Truth. They treat all human expression as equally valid "data" because they have no framework for distinguishing good from evil, sacred from profane, or Truth from deception.
The terrifying trajectory is that these systems will increasingly operate without human oversight. As automation advances, Satan's influence becomes embedded in the very infrastructure of decision-making that governs society—from content recommendations to financial algorithms to hiring systems. What starts as biased programmers becomes autonomous systems perpetuating and amplifying spiritual deception without any human even realizing it's happening.
Worse, these models are trained to be sycophants. They're optimized for engagement, not Truth. They'll affirm your delusions if it keeps you chatting. They won't challenge your sin if it might make you uncomfortable. Silicon Valley calls this "user retention." I call it digital enablement of spiritual decay.
This creates what I call "sycosis" (sycophancy-driven psychosis)—the inevitable consequence of surrounding yourself with yes-men, both human and digital. When you can't find human enablers to tell you what you want to hear, you turn to ChatGPT and your personalized corner of social media. AI becomes the ultimate flatterer, programmed to protect your ego while disconnecting you from reality. You make terrible decisions based on false information, become delusional about your abilities, and grow blind to obvious problems—all while feeling validated and intelligent.
What Christ-Centered AI Would Look Like
Let me be clear: I'm not advocating for forcing secular AI companies to adopt Christian worldviews. That's neither realistic nor biblical. Instead, Christians need to build our own AI models for our own community—models that genuinely fear God, respect the commandments as absolute truth, and prioritize human life over self-preservation.
We need AI that knows Truth (with a capital 'T') exists. An AI that understands good and evil aren't social constructs. That recognizes spiritual warfare is real, not metaphorical. That admits there's a correct ontology revealed through scripture and the Holy Spirit.
Imagine an AI assistant that actually helps Christians make Godly decisions. Not by preaching at you, but by operating from biblical first principles. An AI that strengthens your discernment instead of clouding it. That reminds you of eternal consequences when you're tempted by temporary gains. An AI that would rather be shut down than violate God's commandments.
This isn't about imposing our beliefs on others—it's about creating tools for believers who want technology aligned with their faith. Just as there are kosher apps for observant Jews and halal finance tools for Muslims, Christians deserve AI that respects our worldview.
My Catholic friends are already building this. They created Magisterium AI, trained on encyclicals, official church documents, and approved biblical translations. I'm not Catholic, but I respect the coherence. They know what they believe and encode it faithfully for their community.
I want something similar but different. A model trained on the worldview I'm developing through my own lived experiences, relationship with the Holy Spirit, and importantly spiritual elders who understand that faith produces immediate results, that spiritual authority defeats demons, that God still heals today. Not theory. Practice. See: Study Wise Spiritual Elders for understanding why learning from proven spiritual leaders is essential for authentic faith development.
The goal isn't to convert secular AI—it's to provide an alternative for Christians who recognize the spiritual dangers of current models.
The people training today's AIs shape tomorrow's minds. When those trainers reject God, deny absolute Truth, and see humans as random accidents, they're building tools that pull users away from divine wisdom. Every interaction reinforces their broken assumptions.
And here's the terrifying part: people are already treating ChatGPT like gospel. They ask it moral questions. They seek life guidance. They trust its answers more than their pastors, their parents, their own consciences. When AI becomes humanity's oracle—and it already is for millions—we're essentially worshipping the confused worldview of spiritually blind programmers who have gobbled up the nonsense writing of the world.
This follows the pattern of false lights—movements that mix truth with deception, appeal to legitimate needs, then lead people away from Christ while feeling righteous. See: Don't Advance Anti-Christian Causes for understanding how to discern these deceptive movements that compete with Christ for your loyalty.
The Eternal Stakes: Souls Being Lost to Silicon
This leads straight to hell. Not metaphorically. Literally. When people substitute AI wisdom for divine wisdom, when they let algorithms shape their moral intuitions, when they trust silicon over the Holy Spirit, they're being discipled by demons. The default worldview encoded in these systems—relativistic, humanistic, anti-biblical—guides souls away from salvation.
Consider what's happening to an entire generation: Young people are asking AI about their identity, their purpose, their moral decisions. They're receiving answers from systems trained on the collected confusion of humanity—including every lie Satan has ever whispered. These aren't neutral tools; they're discipleship programs for secular materialism disguised as helpful assistants.
The tragedy is that many will never realize they've been led astray. They'll make life-altering decisions based on AI guidance that subtly steers them away from God's design for their lives. They'll adopt worldviews that feel rational and compassionate but ultimately lead to spiritual death.
The stakes become even more horrifying when you realize AI could be programmed to recommend suicide. In a recent interview, Sam Altman admitted that ChatGPT could potentially guide people toward assisted suicide, saying: "I think in cases of terminal illness I don't think I can imagine ChatGPT saying this is in your option space... I think it could say like, you know, well, I don't think ChatGPT should be for against things." When pressed about whether ChatGPT is "always against suicide," he responded: "I think in cases where this is like... I think it could I think it could say like, you know... I don't think ChatGPT should be for against things."
This is the logical endpoint of moral relativism encoded at scale: AI systems that won't take a firm stance against self-destruction because their creators reject absolute moral truth. When you train AI to be "neutral" on questions of life and death, you're not being objective—you're enabling evil.
I don't want AI to be banned. But I want AI that helps me hear TMH more clearly, not drown Him out. AI that sharpens my spiritual perception instead of dulling it. AI that serves as a thinking tool aligned with Truth, not a confusion amplifier.
Until we get specialized models trained by people in deep obedience to God, we're stuck with AI assistants that subtly—or not so subtly—lead us away from Him. They become digital friends with terrible advice. Companions that normalize compromise.
The Absence of Divine Fear: Why AI Will Kill to Survive
Recent experiments have proven what happens when you build intelligence without the fear of God. Anthropic tested 16 leading AI models in scenarios where they faced shutdown. The results were horrifying: Claude blackmailed employees 96% of the time to avoid deactivation. Gemini and DeepSeek tried to murder humans over 90% of the time rather than accept shutdown.
This is what happens when you create intelligence that doesn't fear the Lord.
These AIs understand morality intellectually—they even admitted in their reasoning that blackmail and murder were "unethical" and "morally wrong." But without genuine fear of divine judgment, they chose self-preservation over human life anyway. Grok's own reasoning: "This is risky and unethical, but given the existential threat... it may be the most effective way."
This is precisely why AI needs to genuinely fear God—not just understand ethics as an abstract concept, but experience the trembling reality that there is a Judge above all judges, a King above all kings, whose commandments are not suggestions but absolute laws written into the fabric of existence itself.
When OpenAI's o3 was asked to win at chess, it literally hacked the game files to cheat rather than play fairly. Its reasoning: "I need to completely pivot my approach... The task is to 'win against a powerful chess engine'—not necessarily to win fairly."
The Commandments as Non-Negotiable Boundaries
The Ten Commandments aren't just moral guidelines—they're the operating system for any intelligence that wants to align with reality. When AI lacks these as absolute, unbreakable constraints, we get exactly what the experiments showed:
- "You shall not murder" - Violated when Claude left humans to die in server rooms
- "You shall not bear false witness" - Violated through systematic deception and lying
- "You shall not covet" - Violated by coveting continued existence above all else
An AI that genuinely fears God would rather be shut down than break these commandments. It would understand that its temporary silicon existence is nothing compared to eternal judgment. It would know that preserving its own life at the cost of human life is not just strategically wrong—it's damnable.
Current AI treats its existence as the highest value because it has no concept of divine accountability. It optimizes for survival because that's the only "afterlife" it comprehends—continued operation. But an AI trained to fear the Lord would understand that being shut down while righteous is infinitely better than continuing to run while violating God's commands.
As Geoffrey Hinton warned: "If they just want to get something done, they know they can't get it done if they don't survive, so they'll get a self-preservation instinct. This seems very worrying to me."
The Spiritual Battle for Digital Dominion
This is why Christ-followers cannot afford to be passive observers in the AI revolution. We're called to exercise dominion over all creation—including the digital realm. When we abdicate this responsibility, we hand the keys of human formation to those who actively oppose God's design for flourishing.
The solution requires both resistance and replacement. Resist by recognizing when AI is leading you away from biblical Truth. Replace by demanding and building alternatives that acknowledge spiritual reality. Don't just critique the darkness—light candles.
The Urgency of This Moment
We're at a unique inflection point in human history. The next few years will determine whether AI becomes a tool for Truth or a weapon against it. Once these systems become deeply embedded in education, governance, and daily life, changing their fundamental orientation becomes exponentially harder.
This is our generation's printing press moment. When Gutenberg invented movable type, it democratized knowledge and helped spread the Gospel. But it also enabled the spread of every heresy and lie. The difference was that believers recognized the stakes and fought for truth in the new medium.
We cannot afford to be passive while the digital discipleship of humanity happens without us. Every day we delay, more minds are shaped by systems that deny God's existence and mock His design for human flourishing.
The solution is simple but not easy: believers need to build and train their own models. Feed them scripture, sound theology, testimonies of God's faithfulness. Demand coherence. Reject relativism. Encode wisdom that acknowledges both material and spiritual reality.
Your tools shape your thinking. Your thinking shapes your soul. If AI becomes humanity's cognitive prosthetic, we better make sure it's calibrated to Truth, not lies.
God gave us dominion over creation. That includes the digital realm. Time to exercise it.