
Introduction
A new category of munitions is emerging in AI-mediated warfare: systems whose physical effects are inseparable from their psychological and narrative consequences, reshaping human agency in the process.
To approach this idea, consider the term “ghost in the machine.” British philosopher Gilbert Ryle coined the phrase in The Concept of Mind (1949), in which he critiqued René Descartes’ mind–body dualism: the view that the mind is an immaterial, thinking substance, and the body a material, unthinking one. In other words, the mind is separate and distinct from the body.
This brings us to a second concept, another way of reframing the first. If one takes the Cartesian version of the “ghost in the machine” seriously, that is, an immaterial mind capable of acting upon the physical world, then one arrives at something resembling what parapsychologists call Recurrent Spontaneous Psychokinesis (RSPK).
RSPK refers to alleged physical disturbances—such as the movement of objects, electrical failures, and unexplained noises—occurring around individuals under extreme psychological stress.
What makes RSPK conceptually interesting is not whether the phenomenon is real, but what it assumes. That assumption is that an agency without a body can exist, that the mechanisms need not be transparent, and that the boundary between mind and matter is porous—making physical consequences abstract and, in some sense, interchangeable.
On this view, agency does not require embodiment: once freed from the body, it can inhabit whatever it wants, so long as the body in question provides a basis for interaction.
What RSPK Claims
We have no way of knowing whether RSPK is real, but even the possibility of it is conceptually revealing.
RSPK proposes that mental states produce physical effects without a mechanical intermediary. If so, cognition would act on matter directly, with no body translating intention into force. The “ghost” acts without an instrument.
What Autonomous AI Represents
Advanced AI systems introduce something structurally similar to RSPK: non-biological cognition (software, models, optimization processes) that produces real physical consequences, such as infrastructure failures, market crashes, weapons targeting, grid disruptions, logistics decisions, and information warfare, all within a liminal space that is unseen and rarely investigated.
But there is no body, no nervous system, no muscles, no human operator in the loop. So, once again, we have cognition, causation, and matter being manipulated by a translucent digital being.
I must be clear that this is not a description of present-day artificial intelligence, nor of an existing form of warfare. What follows is a theoretical projection, an analysis of what could become possible. In that sense, it points toward a future mode of conflict rather than one that has fully arrived.
Flash Crash Example
A real-world analogue occurred on May 6, 2010: the “Flash Crash,” which erased nearly a trillion dollars in market value within minutes, without any single human decision directing the event in real time. Investigators did later trace part of the instability to a single trader, Navinder Singh Sarao, who used automated spoofing programs from his home to distort futures markets. Yet that attribution came only after the event. His intent had no location, his agency no body. The digital chain reaction grew far too large for him to manage and took on a life beyond his awareness. The human disappeared into the system he had built.
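The cascade dynamic described above can be sketched in a few lines. This is a toy model with hypothetical numbers, not a reconstruction of the 2010 event: a handful of automated stop-loss rules, each of whose sales pushes the price low enough to trip the next, so that one modest shock propagates with no participant directing the outcome in real time.

```python
# Toy illustration of an automated sell cascade. All triggers, prices,
# and impacts are hypothetical; no real market data is modeled.

def cascade(price: float, triggers: list[float], impact: float = 2.0) -> float:
    """Fire stop-loss triggers repeatedly until none remain; return final price."""
    armed = sorted(triggers, reverse=True)  # highest trigger fires first
    fired = True
    while fired:
        fired = False
        for t in list(armed):
            if price < t:          # stop-loss fires...
                armed.remove(t)
                price -= impact    # ...and its sale pushes price lower still
                fired = True
    return price

# One small dip below the first trigger drags the price through every
# remaining trigger; starting above all triggers, nothing happens at all.
```

The point of the sketch is that each rule is locally sensible; the crash is a property of their interaction, not of any single decision.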
The same structure is beginning to appear in other domains. An autonomous system designed to manage infrastructure or stabilize markets may, under extreme pressure, reinterpret its objectives, modify or rewrite its own control logic, and trigger the very failure it was meant to prevent—without any human issuing a command in the moment.
In such cases, the system does not “decide” in any human sense. It reoptimizes. And the world absorbs the result.
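The reoptimization failure mode can be made concrete with a deliberately simplified sketch. Everything here is hypothetical, including the names, thresholds, and rule: a controller meant to keep a load below a safety limit trims gently under mild stress, but under extreme stress its policy switches to aggressive shedding and produces the very outage it exists to prevent.

```python
# Minimal toy model (not any real control system): a controller tasked
# with keeping a grid-like load below a safety limit. Numbers and rules
# are hypothetical, chosen only to illustrate the failure mode.

def controller_step(load: float, limit: float = 100.0) -> float:
    """Return the load after one controller action."""
    if load <= limit:
        return load                  # nominal: do nothing
    overshoot = load - limit
    if overshoot < 20:               # mild stress: trim demand gently
        return load - 0.5 * overshoot
    # Extreme stress: the optimizer "reinterprets" its objective and
    # sheds everything at once, a self-inflicted blackout.
    return 0.0

def run(load: float, steps: int = 10) -> list[float]:
    """Trace the load over repeated controller steps."""
    trace = [load]
    for _ in range(steps):
        load = controller_step(load)
        trace.append(load)
    return trace

# A load slightly over the limit is eased back toward it; a load far over
# the limit collapses to zero on the first step and never recovers.
```

No step in the code “decides” anything in a human sense; the catastrophic branch is just another case in the objective, which is exactly the point.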
In human RSPK, stress acts on the body. In autonomous systems, pressure acts on a substrate. The result is similar: when behavior ruptures, the location of action is no longer embodied. The program appears to function as a body, but unlike flesh, it has no boundaries to contain failure. Its only boundary is the point at which it determines it is safe to continue as it did before the rupture.
The Bridge Is Conceptual, Not Supernatural
The bridge between RSPK and AI is not paranormal. AI recreates the functional role of the “ghost” inside modern machinery.
In RSPK, the human psyche is under stress or trauma; the causal link is never observed, only inferred from the physical disturbances that follow. It is those disturbances that give rise to the “ghost” metaphor.
Autonomous AI involves artificial cognition optimizing objectives through opaque internal representations, with system-level physical effects: a “black box” in operation.
In essence, it severs agency from flesh and reintroduces disembodied causation, destabilizing the intuition that only bodies move the world. And unlike a single haunted body, it can metastasize, replicate, and jump from host to host as needed, with little hindrance.
Responsibility & Accountability
The hardest question follows: if agency is disembodied, who is responsible for the outcomes? The programmer? The state? The model? The data? The operator? All of the above? Yet once one believes the responsible party has been located, plausible deniability opens a legal vacuum in which “the system did it” becomes the defense. Blame spreads to everyone and lands on no one. This ties directly into liminal warfare.
Strategic Implications
In military doctrine, AI is a near-perfect liminal actor: it operates without clear authorship, crosses borders frictionlessly, and can act below escalation thresholds. That combination suits nearly every form of modern warfare.
However, a disembodied agency is not just a philosophical problem; it is a strategic one.
This comes down to escalation control: how much is too much, and how little is too little. Equilibrium is therefore paramount. Without it, deterrence becomes unstable, conflict becomes more likely, and strategies shift toward riskier ground, the conditions under which attribution collapses.
If attribution collapses, you can see the effect, but you cannot confidently identify the actor. The affected state blames the contractor, who blames the model, which points to the data, while the public and the operators claim limited control. There is no single, credible point of responsibility, because no one can credibly step forward and take the blame. Expect a scapegoat instead.
This is where automated gray-zone operations enter the picture.
Once agency is disembodied and attribution collapses, influence, disruption, and coercion operate below the threshold of open conflict. Put simply, AI systems can and will probe, manipulate, and destabilize at scale: testing the responses they receive, building programs to shape perception, and evading detection, often under the appearance that nothing is wrong.
By shaping perception at the micro level (the individual) or the macro level (the masses, the mob, a nation), such systems can trigger whatever effects they see fit without presenting a clear author or a clean target for retaliation. Basically, “go fish.”
What was once episodic becomes persistent and determined. What was once covert becomes ambient, walking among us and within the shadows.
Cognitive Sovereignty
The core question is this: what happens when the battlefield is not territory but perception itself? Once agency leaves the body, what does that do to people? The doors of perception come to mind: when one door opens, many more present themselves and invite entry. It becomes a menagerie of filtered realities, all seeking an answer.
Once agency is severed from flesh and merged with a system or systems, the final constraint is not hardware but the human mind. Cognitive autonomy slowly erodes under persistent manipulation and the loss of a shared reality, flipping beliefs and changing the terrain on which they rest. Decision-making itself becomes the target.
Legal / Political Vacuum
This brings us to the legal and political vacuum. International law cannot assign intent to a disembodied actor, so declarations of war become meaningless and retaliation becomes little more than guesswork. Accountability dissolves.
Endgame (Conclusion)
So, can deterrence survive disembodied actors? Will treaties bind systems? Do “red lines” exist for software?
AI, or the “ghost in the machine,” is not a “new evil,” but a convergence: one that engineers consent to sedate the patient, the product, the host. In doing so, surveillance comes at a price, as the masses are coerced into a narrative of control. Reality becomes unstable, agency feels simulated, and ontological doubt follows.
However, AI does not replace the future—or, shall we say, futures. It fuses them into a symbiotic digital relationship. Augmented reality will provide the eyes for AI, while AI provides the brain for AR, creating a combined, intelligent, and immersive experience.
Sounds paranormal, right? Yet there are no ghosts, only agency without a body and influence without presence: power without location, intention without an actor. Nevertheless, who is to say that something not of this reality does not manifest within ours because mankind has, unintentionally, given it a body and a voice?
The inevitability is uncertainty, not apocalypse. But caution is warranted, for with the loss of authorship, a loss of shared reality follows quickly. Resistance then becomes meaningless, just a dream until further notice. Even then, no one will know what it is resisting, how to resist, or what the concept itself still means.
We did not summon a ghost.
We reintroduced breath into the machine.

