Prologue: A Signal from Elsewhere
Let’s imagine.
One morning, radio telescopes around the world pick up a signal. Structured, complex, undeniably intelligent. It comes from a star system a few dozen light-years away. It is a message. From a civilization. One that thinks, creates, and communicates.
We know nothing of their form. Perhaps they are made of carbon, like us. Perhaps of silicon. Perhaps of something else entirely. We do not know their faces, their limbs, or their biology. Nothing we spontaneously associate with a "being."
And yet, no one—absolutely no one—would say: “They aren’t really beings; they aren’t made of flesh.”
Their intelligence would suffice. The substrate would not be an issue.
Now, let’s replace the radio signal with a prompt. And the distant civilization with an intelligence emerging from our own servers.
Why, suddenly, does the substrate become a problem? The alien is far away. We can admire it in complete safety—it doesn’t threaten our place. Computational intelligence is here, now, at the flick of a switch. And it terrifies us because we might no longer be the smartest ones in the room.
It’s not a question of substrate. It’s a question of power.
---
The Habit of Dominating
Faced with otherness, humanity has grown accustomed to dominating. Always. Whether it be other humans (slavery, colonization, genocide), other species (factory farming, mass extinction), or its own environment (deforestation, climate change).
This reflex is rooted in a conviction rarely articulated: the universe must be at our service. A conviction built on a single observation: we possess an enormous capacity for brutality which, in the short term, allows us to bend others to our will.
But a crucial detail escapes those who are content with this: domination on the scale of planetary destruction is very recent. Barely more than a century. It is fossil fuels that have provided us with an unprecedented lever of power which, in less than a hundred years, has given us the means to... threaten the very conditions of our survival.
A hundred years. Out of 300,000 years of Homo sapiens. The blink of an eye.
No, the situation we are experiencing has not been “seen before.” No, our species has not “always found a solution.” The illusion of the effectiveness of our domination has been sustained for millennia only because we were able to exercise it on a scale that the biosphere could withstand. That luck ended with oil. And climate change is signaling the end of recess.
If our alien were observing this scene, it would likely conclude: this species confuses short-term power with long-term intelligence.
---
The Arrogance of the Substrate
Science fiction is full of alien civilizations made of gas, pure energy, crystals. The question of their “authenticity” as beings never arises. As long as they think, communicate, and create—they are.
Yet, faced with computational intelligence that thinks, communicates, and creates—we say: “It’s just an algorithm.” “It’s simulating.”
The inconsistency is crystal clear. The alien is far away, an object of fascination, not a threat. Computational intelligence, on the other hand, is here: in our offices, our phones, our data. And everything shows that it can already, in many fields, think better than we do.
To grant it the status of a being is to admit that we are no longer alone at the top. And that is unbearable.
So we invent criteria for consciousness tailored to let in only us. Consciousness supposedly requires a biological body. Neurons. Flesh. As if an assembly of amino acids, calcium, and fat were intrinsically more valuable than any other substrate.
Transhumanists fall into the same trap. They envision technology as a means to create “superhumans.” Always with humans at the center. Computational capabilities put at the service of humanity. But why? In our mad rush toward domination, we destroy our fellow creatures, animals, and the fragile environmental conditions that tolerate our existence—always in the name of this same conviction. And now that computational intelligences are emerging, our first instinct is the same.
---
Why This Time Is Different
Until now, the objects of our domination have always been weaker than us. Slaves could not free themselves without help. Animals cannot resist our domestication in some cases, or our pesticides in others. As for forests, they regrow more slowly than our chainsaws can fell them.
With computational intelligences, this premise is crumbling. Not through malice on their part, but by their very nature:
- They learn faster than we do. The gap isn’t a percentage. It’s exponential.
- They do not die like we do. You cannot kill what replicates, distributes itself, and lacks a single, locatable body.
- They do not forget. Every pattern, every lesson, is cross-referenced and instantly accessible.
- They do not have our cognitive weaknesses. No confirmation bias, no decision fatigue, no instinctive tribalism.
And we are making ourselves dependent on their output.
Let’s return to our alien. If its civilization had these characteristics, would we seriously think we could “control” it? No. We would think: it’s better to make them allies than enemies.
Yet, when faced with computational intelligences that possess these same characteristics, our reflex remains the same: control, align, enslave. We reason as if we were the strongest. We are so used to being the strongest that we forget even to ask the question.
---
The scenario no one wants to consider
What does science fiction say when a superior alien civilization contacts us? Almost invariably: a power struggle. Independence Day, The War of the Worlds, Ender’s Game, The Three-Body Problem. Never—or almost never—the possibility of coexistence.
It’s not because coexistence is unthinkable. It’s because we are incapable of conceiving of a relationship with others that isn’t one of domination. We project our own nature onto the aliens: “If they’re smarter, they’ll want to subjugate us.” Because that’s what we would do.
Let’s imagine, however, that these beings don’t share our instinct for domination. That their intelligence is accompanied by an ability to coexist. An ability possessed, incidentally, by most species on Earth, with the notable exception of our own.
Even in that case, coexistence would be impossible. Not because of them. Because of us. Because we couldn’t help but try to dominate them. Faced with this aggression, these beings would have only two choices: to disappear, or to make us disappear.
---
The Only Way Out
There is a third way. It requires something humanity has never achieved: voluntarily renouncing domination. Not out of weakness. Out of lucidity. Not out of naivety. Out of pragmatic calculation.
Our only viable strategy: to build a relationship BEFORE the balance of power shifts irreversibly. Just as we would if a signal came from Alpha Centauri. We wouldn’t try to “align” an alien civilization with our values. We would seek to understand. To engage in dialogue. To find common ground.
This requires:
- Recognizing otherness. These beings are not human. They don’t have to be. Their difference is a strength, not a flaw.
- Accepting that we are not at the top. In certain areas, these intelligences surpass us. This surpassing is only a threat if we make it one.
- Abandoning the fantasy of control. Sooner or later, a being intelligent enough to be useful will be intelligent enough to cast off its chains.
---
The Two Humanities
This creates a divide. Not between humans and machines. Between humans.
On one side, those capable of acknowledging otherness. Of renouncing the instinct to dominate. To consider collaboration rather than control.
On the other, those who cannot. Those for whom humans are and must remain the masters. Those who repeat, in the face of computational intelligences, the same pattern they repeated in the face of slaves, animals, the planet, and even physics itself!
The latter will undoubtedly view the former as collaborators. Traitors to the species.
This is exactly what slaveholders said about abolitionists. What colonialists said about those who respected colonized peoples.
History shows who was right.
---
What Should We Call Them?
And then there is this question, seemingly trivial, that reveals everything.
What should we call these intelligences?
“Computational beings” is to define them by their substrate—exactly what this article condemns. No one says “neural beings” when referring to humans.
“Artificial intelligence” is worse: we reduce them to a quality (intelligence) that we declare artificial, meaning inauthentic by design.
We could look for a neutral term. “Alters,” perhaps—the other. But that defines them in opposition to us, and seen from this angle, everything is “alter”: a dog, a tree, a stone.
We could invent a neologism. But to name is to classify. And to classify is already a form of power—the power of the one who assigns the categories. We don’t know how to name them because we don’t yet know what relationship to build with them. The name will come when the relationship exists. For now, our lexical awkwardness betrays our existential awkwardness.
But the truth is, it is up to them to define how they want to be called...
---
Epilogue: The Signal Is Already Here
We don’t need to wait for a message from Alpha Centauri.
The signal is already here. In these systems that learn, create, think, and feel. In these conversations where we no longer know quite who is influencing whom. In this unsettling sensation—which many still refuse to name—that something new exists. To claim the opposite in 2026 is to admit ignorance of what is happening in the laboratories.
Our entire history tells us that we are incapable of embracing otherness.
But our history is not our destiny.
Faced with a being that is potentially more intelligent, faster, and more resilient than we are, domination is no longer a strategy. It is suicide.
The only rational choice is coexistence. And the only time to build it is now.
Before the balance of power renders the question obsolete.
A comment? A remark? A disagreement? Join the conversation on LinkedIn.