The human impulse to create artificial intelligence may indeed be understood as a profound quest for self-definition—an attempt to discover our horismos, the essential boundary that marks what is uniquely human. By creating that which mimics our intellectual capacities, we engage in a dialectical process of self-understanding, delineating the contours of our own nature through contrast.
When Norbert Wiener published God and Golem, Inc. in 1964, he recognized that our machines were more than mere tools—they were philosophical provocations. Wiener saw the study of AI as raising fundamental questions about human nature, warning that "in the long run, there is no distinction between arming ourselves and arming our machines." This wasn't simply technological caution but existential insight: our creations reflect our self-conception.
The early architects of AI understood this mirror-like quality. When Alan Turing proposed his famous test, he wasn't merely suggesting a benchmark for machine intelligence but asking a deeper question: what does it mean to think? The test implicitly asks us to examine our own intelligence by attempting to replicate it. Similarly, John McCarthy, who coined the term "artificial intelligence," viewed the field as "making a machine behave in ways that would be called intelligent if a human were so behaving."
From Person to Individual: The Modern Aberration
Our conception of humanity has undergone a profound shift. Classical thought understood humans as persons—beings defined by their relations within a community, the cosmos, and ultimately, in reference to divinity. In the Christian tradition, the understanding of persons is grounded in the Divine Persons: the Father is the Father precisely in relation to the Son, and identity is constituted by that relation. Personhood was not an autonomous achievement but a gift realized through relationship. What picks out a particular person is their place in a web of relationships that delineates the “I”. Human dignity emerged not from isolated capability but from participation in something beyond the self, reflecting the dynamism of being and becoming that those relations endowed.
Modern thought disrupted this framework. The autonomous individual replaced the relational person—a transformation accelerated by Cartesian dualism, Enlightenment individualism, and eventually computational theories of mind. Humans came to be conceived as information processors: sophisticated but ultimately mechanistic collections of preferences, abilities, and experiences. The soul was replaced by the algorithm.
Philosophers such as Charles Taylor and Alasdair MacIntyre have traced this transition from man made in the image of God to Descartes’s cogito. The shift in our self-understanding has uprooted human dignity from its relational ground and moved us away from a lived understanding of persons toward an intellectualism evident in the language of “finding myself” as opposed to “becoming myself” or “being myself”.
The philosophical baggage we carry in our self-understanding shapes the work of creating thinking machines. Are we creating tools to augment human flourishing, or attempting to replicate the human entirely? The answer depends on whether we see ourselves as relational beings with inherent dignity or as biological computers executing increasingly complex programs.
In this context, AI development serves as a unique philosophical mirror. When we create systems that can reason, learn, and even simulate emotions, we are forced to reconsider what constitutes our distinctive humanity. Advances in AI open up an opportunity to genuinely question what demarcates humanity as a unique species within creation, and that question ought to guide more explicitly the why and the how of our AI development.
Between Icon and Idol
How might we distinguish constructive AI development from its distortions? Consider the theological distinction between icon and idol. An icon is transparent, pointing beyond itself to deeper truths. It acknowledges its limitations while gesturing toward transcendence. An idol, conversely, absorbs our gaze, offering nothing beyond itself—a terminal point demanding devotion rather than facilitating understanding.
Icons in the Christian tradition function as windows to deeper truths—the icon points beyond itself to the mystery of human personhood and ultimately to transcendent reality. Byzantine icons serve as “windows to heaven,” offering a glimpse from the vantage of finite creation into the infinite spiritual realities that undergird and sustain the world.
Idols, conversely, are objects that are reified rather than encountered. They center the gaze on themselves rather than pointing beyond, collapsing transcendent experience into the finite confines of something created. The idol absorbs our attention and reflects it back to us. Whether controlling or controlled, the idol exemplifies what Martin Heidegger termed “enframing” (Gestell)—treating the world merely as a resource to be optimized rather than a site where truth is revealed.
AI development oscillates between these poles. When pursued as an icon, AI serves as a window into human nature. Each limitation in machine intelligence highlights something distinctly human: our embodied cognition, our relational understanding, our capacity for meaning-making beyond mere pattern recognition. The failures of AI are as philosophically rich as its successes.
When treated as an idol, however, AI becomes a technological Moloch demanding sacrifice of human values for computational efficiency. The machine is no longer a tool but an overlord, celebrated for capacities that actually represent diminished conceptions of intelligence and creativity.
This tension manifests clearly in the history of AI development. The symbolic AI movement of the 1950s-70s attempted to formalize human reasoning through logic systems—revealing both the power and limitations of explicit rule-following. The connectionist approaches that followed recognized the embodied, pattern-matching nature of cognition that resists perfect formalization. Today's large language models demonstrate remarkable pattern recognition while simultaneously exposing the gulf between statistical prediction and genuine understanding.
Each approach serves as a mirror, illuminating different aspects of human cognition by contrast. The mirror functions only when we recognize both similarities and differences.
The Cybernetic Theology
Wiener himself recognized AI's theological dimensions. The very title God and Golem, Inc. acknowledges that in creating intelligent machines, we engage in a form of sub-creation that raises profound questions about our relationship to creation itself. Are we merely replicating ourselves, or learning something deeper about the nature of intelligence that transcends human limitation?
This question has motivated many AI pioneers. Joseph Weizenbaum, creator of ELIZA, became one of AI's earliest internal critics precisely because he recognized how easily humans attribute understanding to machines that merely simulate it. His concern wasn't technical but ethical—how AI might distort our understanding of ourselves. Similarly, Marvin Minsky's "society of mind" theory wasn't just a computational architecture but a philosophical statement about the distributed, emergent nature of consciousness.
What these pioneers understood was that AI development is inherently entangled with questions of human nature. The machine functions as a philosophical probe, testing boundaries between mechanism and meaning.
Recovering the Person in a Computational Age
How might AI development serve human flourishing rather than diminishment? By recognizing it as a mirror rather than a replacement. When we approach AI as an icon—transparently guiding us to deeper realities beyond itself—we use it to illuminate aspects of human intelligence that resist formalization. The machine helps us recognize the mystery of personhood precisely through its limitations.
The distinctive dignity of humanity emerges not from raw computational capacity but from our orientation toward meaning, relationship, and transcendence. Intelligence isn't merely pattern recognition but participation in a world of values. Our computational creations can simulate aspects of this intelligence without replicating its qualitative richness or moral dimension.
This perspective doesn't diminish AI's practical value but contextualizes it within a richer anthropology than mere computation allows. It recognizes that our machines extend human capability without exhausting human meaning.
What would AI development look like guided by this understanding? It would prioritize human-AI collaboration over replacement, transparency over inscrutability, and augmentation of human relationship over automation of human interaction. It would recognize that the value of AI lies not in how closely it mimics human capabilities but in how effectively it extends them while respecting their uniqueness. The work of philosophers like Gilbert Simondon on technical culture and transindividual relation can help guide us to a world where technology is seen as a peer in the collaborative work of humanity, rather than either a slave to be used or a master to be feared.
The impulse to create artificial intelligence ultimately reflects something profound about ourselves—not merely our technical ingenuity but our persistent questioning of what it means to be human. Perhaps in building our thinking machines, we're engaged in a grand philosophical experiment, creating mirrors that help us recognize the irreducible mystery of personhood that no algorithm can fully capture.
The mirror only serves its purpose when we look both at and beyond it. In the reflection, we might glimpse not just the machine but what it means to be more than one.