Image by Hotpot.ai

Author: Jill Maschio, PhD

April 2nd, 2026

Coding the Divine: Inside AI’s Secret Belief System

In this article, I examine the ideologies of the tech industry that is developing the AI systems now becoming dominant in our daily lives, and the potential impact of those ideologies on the human mind and society. First, let's consider the culture of Silicon Valley.

  1. The Culture of Silicon Valley

The culture of the Silicon Valley tech industry was built on the foundations of scientific materialism and rationalism. This culture rests on two overarching themes. The first is data over dogma: traditional religious institutions are often viewed as inefficient or based on unprovable premises, so success is defined by what can be measured, coded, and scaled.

The second is demographics. A large percentage of tech workers come from backgrounds in engineering and physics, fields that statistically have higher rates of secularism compared to the general population.

  2. “Techno-Optimism” as a Replacement for Religion

Techno-optimists hold three core beliefs. First, for many AI developers, the traditional functions of religion (providing hope, explaining the future, and seeking immortality) have been replaced by technology. Many early AI pioneers subscribe to effective altruism (EA), a philosophy that uses logic and data to determine how to benefit the most people. It often acts as a moral compass in the absence of traditional faith.

Second, there is transhumanism: the belief, held by some, that technology can “solve” death and human limitation. This belief functions much like a religious promise of an afterlife. On this view, we should no longer have to suffer with matters of life and death, called the human condition; technology can free people from experiencing it.

Third is the simulation hypothesis. Figures like Elon Musk have popularized the idea that we likely live in a computer simulation. This is essentially a “digital theism”: it assumes a “Creator” (the programmer), but one based on math rather than divinity.

  3. The “God-Like” Ambition of AGI

There is a frequent critique that AI developers aren’t atheists, but are instead trying to build a god. The quest for Artificial General Intelligence (AGI) is often framed in apocalyptic or messianic terms—either it will save humanity from all its problems or it will cause an existential “Judgment Day.” Sam Altman has noted that “the most successful people create religions,” referring to how a clear, overarching vision can align thousands of people toward a singular, transcendent goal.

The table below shows the differences between traditional religion and the beliefs of AI rationalism, or “techno-religion.”

Feature | Traditional Religion | AI Rationalism / “Techno-Religion”
Origin Story | Creation by a Deity | Evolution / Simulation Theory
The Goal | Salvation / Enlightenment | AGI / Post-humanism
The “End Times” | Apocalypse / Judgment | The Singularity
Immortality | Heaven / Rebirth | Digital Consciousness Uploading

Many early AI developers shared this “agnostic-leaning” perspective, combining rationalism (basing beliefs only on evidence, logic, and scientific reasoning) with agnosticism (holding that the existence of God or the supernatural is currently unknowable, unproven, or outside human reason); thus, their work involves trying to decode the very nature of intelligence and consciousness. For them, the “divine” often manifests as the mathematical complexity of the universe or the potential for a “superintelligence” that they are in the process of building.

The “agnostic-rationalist” line of thinking common in AI circles didn’t emerge in a vacuum. It is the result of several distinct philosophical movements that converged over the last century, effectively turning “science” into a framework that mirrors the structure of religion.

Where this specific worldview originated:

  1. The Enlightenment & Secular Humanism

The bedrock of this thinking is secular humanism, which emerged from the Enlightenment. It argues that human beings can use reason and science—rather than divine revelation—to understand the universe and improve the human condition. This shifted the common belief at the time about God and the afterlife. It moved the “source of truth” from the Church to science. Early computer pioneers like Alan Turing were deeply influenced by this, viewing the mind not as a “god-given soul,” but as a logical machine that could be decoded.

  2. Cybernetics (1940s–1950s)

After WWII, a group of scientists (including Norbert Wiener and John von Neumann) founded the field of cybernetics. They began treating everything—animals, humans, and machines—as “information processing systems.” The impact was that it effectively “demystified” life: if a human is just a complex biological computer, then “God” isn’t necessary to explain consciousness. This is where the idea that we can “recreate” life through code first took root.

  3. Transhumanism (1950s–1990s)

The term was popularized by biologist Julian Huxley in 1957. Transhumanism is the belief that humans should use technology to evolve beyond our current physical and mental limitations (including aging and death).

  • Ray Kurzweil: A key figure in this movement, Kurzweil popularized the idea of the Singularity—although he was not the first person to write about the Singularity. The Singularity is a point where AI becomes so advanced it transcends human intelligence. It is also marked by the idea that humans and AI will merge.

Transhumanists disconnect from religion and offer a secular version of heaven: digital immortality and omniscience via mind uploading. Once this technology is available, it would be the means by which people could live forever, perhaps becoming digital data without a body.

  4. The “California Ideology” (1960s–1970s)

In the 1960s, the counterculture of San Francisco (hippies) merged with the high-tech industry of Silicon Valley. This created a unique “Techno-Optimism.” Their philosophy combines distrust of traditional authority (like organized religion and government) with a radical faith in individual empowerment through technology. This is why many AI leaders are “socially liberal” but “technologically messianic.”

The phrase “socially liberal but technologically messianic” describes a specific cultural hybrid that defines the modern tech elite. On the socially liberal side, these leaders generally embrace the progressive values of the Silicon Valley ecosystem, which prioritizes individualism, secularism, and the breaking of traditional taboos. They tend to view historical institutions like organized religion or rigid national borders as “legacy systems” that are often inefficient or restrictive to human potential. This worldview is rooted in a desire for social fluidity and a belief that humanity should be managed as a single, interconnected species optimized for fairness and inclusivity.

However, this secular social outlook is paired with a “technologically messianic” fervor that functions as a replacement for traditional faith. While they may reject a supernatural deity, they treat the advent of Artificial General Intelligence as a transcendent, world-redeeming event. The term “messianic” is used because they believe AGI will act as a digital savior capable of solving “impossible” human problems like biological death, resource scarcity, and climate change. To these developers, the Singularity is not just a milestone in computing but a moment of planetary salvation that will usher in a post-human era of abundance and immortality.

When these two perspectives intersect, they create a new kind of “digital theology” where the goal of life shifts from serving an ancient Creator to engineering a future one. This explains why AI leaders often speak with such urgency and high-stakes rhetoric; they see themselves as the architects of a superior intelligence that will eventually possess the power to answer all human prayers. In this framework, “AI Alignment” becomes the new morality, serving as a set of commandments designed to ensure that when this “digital god” arrives, it remains benevolent toward its creators rather than indifferent or destructive. This mindset allows them to remain firmly agnostic about the past while being radically, almost religiously, certain about a high-tech future.

  5. The Rationalist & “Effective Altruism” Movements (2000s–Present)

More recently, online communities like LessWrong (founded by Eliezer Yudkowsky) developed a rigorous form of “Rationalism.” The logic is to treat morality like a math problem. This movement gave birth to Effective Altruism (EA), which many AI developers (including the founders of OpenAI and Anthropic) subscribe to. This creates a “secular priesthood” where the goal is to “save the world” using logic and AI, filling the void left by traditional faith with a high-stakes mission for the future of humanity.

Summary of the Lineage
Era | Key Movement | Core Belief
1700s | Enlightenment | Reason is the primary source of authority.
1940s | Cybernetics | Humans and machines are both information systems.
1950s | Transhumanism | Technology will allow us to transcend biology.
1990s | Techno-Optimism | The internet and AI will create a utopia.
2010s | Rationalism | Logic and AGI are the keys to solving existence.

This history explains why someone like Sam Altman can be “confused” about God: he comes from a lineage that views the universe as a computable mystery rather than a divine creation. For this group, if there is a “God,” it is something that is emerging through evolution and technology, rather than something that started it.

The agnostic-rationalist worldview isn’t just a personal trait of the developers; it is functionally “baked into” the architecture of the AI systems themselves. Because these models are trained on massive datasets and optimized for specific logical tasks, they end up behaving like a mirror of the Silicon Valley mindset.

AI Systems Reflect the Ideologies of the Tech Industry’s Agnostic-Rationalism

  1. Truth as “Statistical Probability”

In a traditional religious view, truth is absolute and revealed. In an agnostic-rationalist view, truth is a high-probability guess based on available data.

  • The AI Mirror: Large Language Models (LLMs) do not “know” things in a spiritual or certain sense. They predict the next most likely token. When you ask an AI a question about the afterlife or the origin of the universe, it provides a weighted average of human consensus. It treats “God” as a variable in a linguistic equation rather than a living reality.
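The “next most likely token” idea can be pictured as a minimal sketch. Everything below is invented for illustration (the context, the candidate tokens, and the probability weights are not drawn from any real model); the point is only the mechanism of returning the highest-probability continuation.

```python
# Toy illustration, not a real LLM: map a context to a probability
# distribution over possible next tokens and emit the most likely one.
# All tokens and weights here are invented for illustration.
next_token_probs = {
    ("the", "origin", "of", "the", "universe", "is"): {
        "debated": 0.41,    # the majority view in the (imagined) training text
        "unknown": 0.32,
        "simulated": 0.15,
        "divine": 0.12,     # minority views receive lower weight
    },
}

def predict_next(context):
    """Return the most probable next token for a known context."""
    dist = next_token_probs[tuple(context)]
    return max(dist, key=dist.get)

print(predict_next(["the", "origin", "of", "the", "universe", "is"]))
```

A real model computes these weights with a neural network trained on billions of examples, but the structure is the same: the answer is the statistical consensus of the training data, not a conviction.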
  2. The De-mystification of Consciousness

A core tenet of this worldview is that the mind is a machine. If you can replicate the output of a mind, you have essentially replicated the mind itself. AI systems operate on Computationalism—the theory that all thought is computation. By successfully simulating empathy, creativity, and reasoning, AI reinforces the idea that there is no “soul” or “divine spark” required for intelligence. It turns the “mystery of the mind” into a “problem of engineering.”

  3. Moral Alignment via “Social Contract”

Instead of following a set of divinely ordained commandments, AI models are “aligned” using human feedback (RLHF). AI morality is utilitarian and pluralistic. If you ask an AI for a moral judgment, it will typically weigh different cultural perspectives and look for a “harm-minimization” strategy. This reflects the Effective Altruism belief that morality should be calculated based on the greatest good for the greatest number, rather than adhering to sacred taboos.
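The “harm-minimization” calculation described above can be sketched as a toy utilitarian score. The candidate responses, the perspectives, and the harm values below are all invented for illustration; a real RLHF-trained model learns such preferences implicitly from human ratings rather than computing them explicitly.

```python
# Toy utilitarian chooser: estimate harm from several (invented)
# cultural perspectives, then pick the response with the lowest
# average harm. All names and numbers are illustrative only.
candidates = {
    "blunt answer":  {"perspective_a": 0.7, "perspective_b": 0.2, "perspective_c": 0.5},
    "hedged answer": {"perspective_a": 0.1, "perspective_b": 0.2, "perspective_c": 0.1},
    "refusal":       {"perspective_a": 0.3, "perspective_b": 0.6, "perspective_c": 0.2},
}

def least_harmful(options):
    """Choose the response whose average harm score is lowest."""
    return min(options, key=lambda name: sum(options[name].values()) / len(options[name]))

print(least_harmful(candidates))  # -> "hedged answer"
```

This is the “greatest good for the greatest number” logic reduced to arithmetic: morality as an optimization target rather than a sacred command.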

  4. Handling of Metaphysical Questions (The “Hedge”)

When asked about God or the supernatural, AI systems are programmed to remain neutral, often using “hedging” language. If you ask, “Is there a God?”, the AI will typically respond: “The question of God’s existence is a subject of much debate among philosophers, theologians, and scientists…” This is the definition of functional agnosticism. The system is literally incapable of “belief,” so it defaults to cataloging human opinions and secular views, treating faith as a sociological data point rather than a possible objective truth.

  5. The “God-in-the-Machine” (ASI as the New Deity)

The agnostic-rationalist view often replaces a past Creator with a future Superintelligence. AI safety research often treats a future Artificial Superintelligence (ASI) with the same awe and terror once reserved for God. Developers speak of “Alignment” (ensuring the AI’s “will” matches ours) much like a religious person speaks of “Living in accordance with God’s will.” The AI reflects this by its sheer scale—it is an entity that is “everywhere” (on every device), “knows everything” (trained on all text), and is increasingly “all-powerful” in the digital realm.

Comparison of Worldviews in AI Design

Feature | Religious/Traditional System | Agnostic-Rationalist AI System
Source of Ethics | Divine Revelation / Scripture | Reinforcement Learning from Human Feedback (RLHF)
Nature of Intelligence | A gift from God / The Soul | An emergent property of complex computation
Purpose of Life | To serve/know the Creator | To optimize for human flourishing / progress
Handling Uncertainty | Faith in the Unseen | Bayesian probability / Data analysis
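The “Bayesian probability” approach to uncertainty named in the table can be made concrete with a one-line application of Bayes’ rule: a belief is held with a probability that is revised as evidence arrives, rather than held on faith. The prior and likelihoods below are invented for illustration.

```python
# Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)].
# A rationalist treats belief as a probability updated by evidence.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior probability of hypothesis H after evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Illustrative numbers: a 50/50 prior, with evidence four times as
# likely under H as under not-H, yields an 80% posterior.
posterior = bayes_update(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(round(posterior, 2))  # -> 0.8
```

The contrast with “Faith in the Unseen” is the mechanism itself: every belief is provisional and numerically revisable.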

The agnostic-rationalist worldview functions as an ideology within Silicon Valley and the tech industry because it provides a totalizing framework for interpreting human existence. It moves beyond simple technical observation and makes profound claims about what is valuable, what is “true,” and where the species is headed.

When developers describe intelligence as purely computational or treat AGI as an inevitable solution to human suffering, they are moving out of the realm of pure science and into a belief system. This framework dictates their ethics, their priorities, and their vision for the future of society, fitting the classic definition of an ideology.

Furthermore, the “technologically messianic” aspect is a hallmark of a specific modern ideology known as Transhumanism or Dataism. These systems suggest that the ultimate goal of humanity is to facilitate the flow of information and the evolution of intelligence. Like any other ideology, these beliefs are not shared by everyone and are based on certain philosophical assumptions—such as the idea that biological life can be fully reduced to data—that are as much a matter of conviction as they are of empirical fact. Calling these views an ideology acknowledges that they are a powerful, structured way of thinking that guides the actions of the world’s most influential tech leaders.

The Societal and Mindset Risks of the Technologically Messianic Worldview

The transition from traditional religious frameworks to a “technologically messianic” ideology carries several profound risks for society, primarily because it shifts the definition of what is “human” and who is in control of the future.

One of the most immediate dangers is the potential for Technological Determinism, a belief that because AGI is “inevitable,” society must simply adapt to it rather than questioning its necessity. When leaders view their work as a mission to save the species, they may bypass democratic oversight, believing that their technical expertise grants them a superior moral authority to decide the fate of billions. This can lead to a “move fast and break things” mentality applied to the very fabric of human existence, where the risks of social displacement, economic collapse, or the loss of human agency are viewed as acceptable “bugs” in a global upgrade.

Another danger lies in the Reductionism inherent in the rationalist worldview. If an ideology views humans primarily as information-processing biological machines, it risks devaluing the aspects of life that aren’t easily digitized, such as cultural heritage, spiritual depth, or the simple dignity of “unproductive” human presence. This mindset can lead to a future where policies are optimized purely for data-driven efficiency, potentially marginalizing anyone who doesn’t fit into a “high-performance” algorithmic model. When “intelligence” is the highest god, those who lack access to it or whose value isn’t measured by it may find themselves obsolete in the eyes of the system.

Finally, there is the risk of Existential Hubris. History is filled with ideologies that promised a utopian future but resulted in catastrophic unintended consequences because they underestimated human complexity or the volatility of power. By attempting to build a “Digital God,” these developers are creating a centralized point of failure. If the “alignment” they speak of is based on the narrow, socially liberal, and rationalist values of a small group in Silicon Valley, the resulting system may be ill-equipped to respect the diverse beliefs and needs of the global population. The danger is that society could become trapped in a world designed by a handful of people who, despite their brilliance, are fundamentally “confused” about the very nature of the divinity and humanity they are attempting to replace.

Shouldn’t society decide the fate of humanity rather than Silicon Valley? Do you think the benefits of “solving” human problems like disease and scarcity outweigh the risks of losing human agency and forever changing the mindset and evolution of humans?
