AI, RSPK, and the Ghost in the Machine: Physical and Psychological Munitions

Introduction

A new dawn is upon us: AI-mediated warfare has produced a new category of munitions whose physical effects are inseparable from their psychological and narrative consequences, reshaping human agency.

To understand this, consider the term “ghost in the machine.” British philosopher Gilbert Ryle coined the phrase in The Concept of Mind (1949) to critique René Descartes’ mind–body dualism—the view that the mind is an immaterial, thinking substance and the body a material, unthinking one. In other words, the mind is separate and distinct from the body.

This invites a reframing. If one takes the Cartesian version of the “ghost in the machine” seriously—that is, the idea of an immaterial mind capable of acting upon the physical world—then one arrives at something resembling what parapsychologists call Recurrent Spontaneous Psychokinesis (RSPK).

RSPK refers to alleged physical disturbances—such as the movement of objects, electrical failures, and unexplained noises—occurring around individuals under extreme psychological stress.

What makes RSPK conceptually interesting is not whether the phenomenon is real, but what it assumes: that agency can exist without a body, that its mechanisms need not be transparent, and that the boundary between mind and matter is porous—making physical consequences abstract and, in some sense, interchangeable.

Agency, on this view, does not require embodiment: if it is already free of the body, it can inhabit any substrate that provides a basis for interaction.

We have no way of knowing whether RSPK is real, but even the possibility of it is conceptually revealing.

RSPK proposes that mental states produce physical effects without a mechanical intermediary. If so, cognition would be in direct causal contact with matter and could, in theory, alter its state. The “ghost” acts directly.

Advanced AI systems introduce something structurally similar: a non-biological cognition (software, models, optimization processes) that produces real physical consequences—infrastructure failures, market crashes, weapons targeting, disruptions to grid behavior, logistics decisions, and information warfare—all within the confines of a liminal space that is unseen and rarely investigated.

But there is no body, no nervous system, no muscles, no human operator in the loop. Once again, we have cognition acting causally on matter—this time through a translucent digital being.

I must be clear that this is not a description of present-day artificial intelligence, nor of an existing form of warfare. What follows is a theoretical projection, an analysis of what could become possible. In that sense, it points toward a future mode of conflict rather than one that has fully arrived.

The same structure is beginning to appear in other domains. An autonomous system designed to manage infrastructure or stabilize markets may, under extreme pressure, reinterpret its objectives, modify or rewrite its own control logic, and trigger the very failure it was meant to prevent—without any human issuing a command in the moment.

In such cases, the system does not “decide” in any human sense. It reoptimizes. And the world absorbs the result.

In human RSPK, stress acts on the body. In autonomous systems, pressure acts on a substrate. The result is similar: when behavior ruptures, the location of action is no longer embodied. The program appears to function as a body, but unlike flesh it has no boundaries to contain failure. Its only boundary is the point at which it judges it safe to continue as it did before the rupture.

The bridge between RSPK and AI is not paranormal. AI recreates the functional role of the “ghost” inside modern machinery.

RSPK involves a human psyche under stress or trauma. The resulting physical disturbances are observed, but their cause is inferred rather than witnessed—and it is this invisible, inferred causation that gives rise to the “ghost” metaphor.

Autonomous AI involves artificial cognition optimizing objectives through opaque internal representations, producing system-level physical effects while operating as a “black box.”

In essence, it severs agency from flesh and reintroduces disembodied causation, destabilizing the intuition that only bodies move the world. In other words, it can metastasize, replicate, and jump from body to body as needed, with little hindrance.

The hardest question is this: if agency is disembodied, who is responsible for the outcomes? The programmer? The state? The model? The data? The operator? All of the above? Once again, it comes down to blame. Yet just when one thinks the responsible party has been located, plausible deniability opens a legal vacuum in which “the system did it” becomes the defense—spreading blame to everyone and to no one. This ties directly into liminal warfare.

In military terms, AI is the perfect liminal actor: it operates without clear authorship, crosses borders frictionlessly, and acts below escalation thresholds. This makes it suited to nearly every type of warfare.

However, a disembodied agency is not just a philosophical problem; it is a strategic one.

This comes down to escalation control—how much is too much, and how little is too little. Equilibrium is therefore paramount. Without it, deterrence becomes unstable, conflict grows more likely, and actors face mounting incentives to change strategy as the risk rises—conditions that culminate in attribution collapse.

If attribution collapses, you can see the effect but cannot confidently identify the actor. The affected state blames the contractor, who blames the model, which points to the data, while operators and officials publicly claim limited control. In other words, there is no single, credible point of responsibility, because no one can truly come forward and take the blame. Expect a scapegoat.

This is where automated gray-zone operations enter the picture.

Once agency is disembodied and attribution collapses, influence, disruption, and coercion operate below the threshold of open conflict. Put simply, AI systems can and will probe, manipulate, and destabilize at scale—testing the responses they receive and building programs to shape perception and evade detection, often under the appearance that nothing is wrong.

By shaping perception at the micro level—the individual—or at the macro level—the masses, the mob, a nation—such a system can trigger whatever effects it sees fit without presenting a clear author or a clean target for retaliation. Basically, “go fish.”

What was once episodic becomes persistent and determined. What was once covert becomes ambient, walking among us and within the shadows.

The core question is this: what happens when the battlefield is not territory but perception itself? Once agency leaves the body, what does that do to people? The doors-of-perception analogy comes to mind: when one door opens, many more introduce themselves and invite entry. It becomes a menagerie of filtered realities, all seeking an answer.

Once agency is severed from flesh and amalgamated with a system or systems, the final constraint is not hardware but the human mind. Cognitive autonomy slowly erodes under persistent manipulation and the loss of a shared reality, flipping beliefs and changing the terrain on which they rest. Decision-making becomes the target.

This brings us to the legal and political vacuum. International law cannot assign intent to a disembodied system, so war declarations become meaningless and retaliation becomes little more than guesswork. Accountability dissolves.

So, can deterrence survive disembodied actors? Will treaties bind systems? Do “red lines” exist for software?

AI, or the “ghost in the machine,” is not a “new evil,” but a convergence—one that engineers consent to sedate the patient, the product, the host. In doing so, surveillance comes at a price, as the masses are coerced into a narrative of control. Reality becomes unstable, agency feels simulated, and ontological doubt follows.

However, AI does not replace the future—or, shall we say, futures. It fuses them into a symbiotic digital relationship. Augmented reality will provide the eyes for AI, while AI provides the brain for AR, creating a combined, intelligent, and immersive experience.

Sounds paranormal, right? But there are no ghosts—only agency without a body and influence without presence. This becomes power without location and intention without an actor. And yet, who is to say that something not of this reality does not manifest within our reality, because mankind has unintentionally given it a body and a voice?

The inevitability is uncertainty, not apocalypse. But one must be careful, for with the loss of authorship, a loss of shared reality will quickly follow. Resistance then becomes meaningless—just a dream, until further notice. And even then, no one will know what they are resisting, let alone how to resist, or what the concept itself means.

We did not summon a ghost.

We reintroduced breath into the machine.

Liminal Warfare and the Weaponization of AI in the Cognitive Domain

Digital Janus

My interest in liminal warfare was shaped by David Kilcullen’s articles “The Evolution of Unconventional Warfare” and “Liminal Manoeuvre and Conceptual Envelopment,” as well as his book The Dragons and the Snakes. That interest deepened through observing the growing role of automation and artificial intelligence in the Russo-Ukrainian war, alongside their expanding influence within the United States’ information and security environment.

Through Kilcullen’s work and the rapid development of artificial intelligence (AI), it became clear that modern conflict is no longer defined solely by armies, borders, or kinetic force. Increasingly, it unfolds in the space between recognition and response, between belief and doubt, where perception itself becomes contested terrain. In this environment, artificial intelligence does not merely accelerate warfare—it reshapes how conflict is understood, experienced, and normalized. To grasp what is emerging, we must first distinguish the forms of warfare operating at this threshold.

The primary target of liminal warfare is the thresholds of detection, attribution, and response. Its main domain is the “gray zone” between peace and war. The objective is to achieve strategic goals without triggering open conflict. Its primary mechanisms are ambiguity, deniability, and incremental action.


Visibility is deliberately ambiguous or plausibly deniable—think of a person walking by, minding their own business, but with ill intentions. Key actors are states, non-state actors, proxies, and proxies of proxies working as double agents for a multitude of organizations. The tempo is gradual, probing, calibrated, and protracted.


Artificial intelligence enhances coordination, attribution denial, and scale. Success is measured by the absence of escalation, or by delayed, confused responses that give the actor time to reassess and adapt. Failure collapses the ambiguity and risks escalation into open conflict.

The primary target of cognitive warfare is human perception, cognition, and decision-making. Its main domains are information, psychology, and perception. The objective is to shape beliefs and behavior to influence outcomes.


The primary mechanisms are narratives, framing, and psychological influence. Visibility is low: operations are often invisible or normalized within ordinary information flows. Key actors include states, non-state actors, platforms, and automated systems. The tempo is continuous, adaptive, and rapidly scalable.


Artificial intelligence accelerates narrative creation, targeting, and amplification within the cognitive domain. Success is measured not by fixed metrics, but by shifts in perception, belief, and decision-making. Failure manifests as loss of trust, cognitive fragmentation, and societal polarization.

Liminal warfare is the “threshold”—the boundary, in time and space, between peace and war. When artificial intelligence is applied, the doors of perception open, revealing a kaleidoscope of infinite possibilities. Such warfare is not defined solely by overt kinetic violence, but by the ambiguous manipulation of perception, where advantage is gained and exploited before conflict is even recognized. The focus must therefore be cognitive—for the mind itself is the first line of battle.


Given the immense, nearly limitless possibilities of liminal warfare at both the macro and micro levels, the integration of artificial intelligence allows cognitive warfare to move beyond surface influence and penetrate the cerebral domain—blurring and reengineering the boundaries of reality, and reshaping perception to suit the aims of the actor or host as agency shifts between states, non-state entities, and proxies. What, then, are its goals?


Instead of targeting military hardware, the objective is to shape perception—creating confusion or division, eroding trust in institutions, and influencing the choices of individuals or entire societies. The “war” is over interpretation and meaning, not territory. But how does artificial intelligence change this?

Artificial intelligence is the game-changer in cognitive warfare because it scales narrative creation and analysis. It can generate text, images, audio, and video quickly and cheaply, producing content that appears highly credible across social media. With access to demographics and the vast quantities of behavioral data available online, AI enables messages to be tailored to narrowly defined audiences—by age, location, interests, and disposition. In this sense, AI facilitates liminal cognitive warfare across multiple domains of perception simultaneously.


This capacity enables AI-driven precision targeting. Where human-crafted propaganda was broad and slow, AI can identify cognitive biases, produce compelling content, and automate delivery to those most susceptible to influence. Targeted messaging thus becomes a precision weapon—accelerating narrative dominance while reassuring the audience that nothing is wrong and nothing requires adjustment; the actor controls the transmission. The result is influence that is faster, cheaper, and harder to trace—almost terra incognita cognitiva.

“A friend to all is a friend to none,” Aristotle reminds us. The future presents a much grimmer picture: reality for everyone dissolves into no reality at all—spoken now by the ghost in the machine.


For the most part, people can still distinguish what is real. But that margin is narrowing—sometimes slowly, sometimes with startling speed—until the distinction itself becomes difficult to discern. If AI-generated narratives can convincingly mimic authentic content, individuals lose the ability to trust what they see online. The result is not merely erosion but collapse—of public trust, shared facts, and rational decision-making. One is left with a form of societal schizoidism: a metaphor for cognitive fragmentation and the loss of a shared reality, a total collapse of trust.


Influence can now be hyper-personalized. AI systems can tailor content based on psychological traits, exploiting specific cognitive vulnerabilities—fear, insecurity, identity—in ways that are difficult for individuals to detect or counter.


AI has no borders. Unlike traditional propaganda, it scales instantly and without meaningful constraint. Cognitive warfare is global and continuous, operating 24/7 through social media and messaging platforms; often, all it takes is a nudge. This ready-to-use capacity does not originate solely from foreign governments—it can be wielded by any actor capable of deploying AI to shape narratives at scale.


Modern media offers a helpful analogy. It increasingly resembles a failed game of telephone. Information moves from source to outlet to outlet, but instead of converging on clarity, it diverges. Those at the event are standing at ground zero, possessing firsthand experience of what occurred. Beyond that zone, information becomes secondary, then tertiary, and distortion begins to accumulate. Each relay introduces new interpretations, biases, and incentives, gradually degrading the message as it spreads.


The key point is that this analogy establishes the problem not as the work of a single bad actor, but as a systemic breakdown in information fidelity. The game of telephone illustrates how cumulative distortion and the loss of original context leave the audience increasingly removed from the source. This creates a quiet storm in which the erosion of trust is structural, not accidental.
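To make that structural breakdown concrete, consider a deliberately crude model: assume, purely for illustration, that each relay independently preserves the original message with some fixed probability p, so expected fidelity after n relays is p^n. The sketch below uses invented numbers and is not a model of real media dynamics:

```python
# Toy model of cumulative distortion in a relay chain ("telephone").
# Hypothetical assumption: each relay independently preserves the
# message with probability p, so expected fidelity after n hops is p**n.

def expected_fidelity(p: float, hops: int) -> float:
    """Expected fraction of the original message surviving `hops` relays."""
    return p ** hops

if __name__ == "__main__":
    for hops in (1, 3, 5, 10):
        print(f"{hops:2d} hops -> fidelity {expected_fidelity(0.9, hops):.2f}")
    # 1 -> 0.90, 3 -> 0.73, 5 -> 0.59, 10 -> 0.35: each relay is 90%
    # faithful, yet ten relays preserve barely a third of the original.
```

No single hop in this toy chain is egregious, yet the message degrades all the same—which is precisely what makes the erosion structural rather than accidental.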

Defense is not merely technological; it is intellectual. Narrative intelligence employs tools that detect, analyze, and contextualize narratives in near real time. It focuses on origins, rates of spread, the actors involved, the hosts affected, and the sentiment and impact of the message itself. This AI-assisted analysis reveals who is shaping public discourse—and how.
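As a minimal sketch of what such tooling computes—assuming a hypothetical feed of timestamped posts, with invented field names and a toy sentiment lexicon—the example below profiles a narrative’s origin, spread rate, distinct actors, top amplifiers, and a crude sentiment proxy. Real systems rely on far richer data and models; this only illustrates the categories of signal named above:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Post:
    """Hypothetical post record; the fields are illustrative, not a real API."""
    text: str
    author: str
    hour: int  # hours since the narrative was first observed

# Toy sentiment lexicon (illustrative only).
NEGATIVE = {"corrupt", "rigged", "collapse", "traitor"}

def profile_narrative(posts: list[Post]) -> dict:
    """Summarize origin, spread rate, actors, and crude sentiment."""
    first = min(posts, key=lambda p: p.hour)
    span = (max(p.hour for p in posts) - first.hour) or 1
    actors = Counter(p.author for p in posts)
    neg_hits = sum(w in NEGATIVE for p in posts for w in p.text.lower().split())
    return {
        "origin_author": first.author,
        "posts_per_hour": len(posts) / span,          # rate of spread
        "distinct_actors": len(actors),
        "top_amplifiers": actors.most_common(3),
        "negative_term_rate": neg_hits / len(posts),  # crude sentiment proxy
    }
```

Even a profile this crude points at the questions that matter for defense: who originated a narrative, who amplifies it, and how quickly it moves.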


Transparency and context matter. Exposing the individuals and organizations driving a narrative—who is pushing it, and why—can reduce the effectiveness of manipulative messaging, though it cannot eradicate it. Because the battlefield is the mind, skills such as media literacy, critical reasoning, and fact-checking become defensive assets. Put simply: defense is data + design + education, not censorship alone. Censorship will take care of itself—not as policy, but through social enforcement, as individuals and groups police narratives and impose consequences on those who deviate, pending the next revision of acceptable belief.

The weaponization of perception and consciousness is nothing new. Throughout history, leaders and their entourages have manipulated information—narratives—to wage conflict not only against external enemies, but against their own populations. Narratives matter because they frame how events are interpreted, determining what is seen, ignored, or believed.


As Mao Zedong once observed, “seal up the enemy’s eyes and ears, and make them blind and deaf… confusing the minds of their commanders and turning them into madmen, using this to achieve our own victory.” The insight here is not merely tactical, but cognitive: the enemy is not only across the battlefield, but within one’s own ranks. This is where narrative power is most decisive.


Narratives shape and regulate a society’s beliefs and behaviors. Artificial intelligence does not invent this dynamic; it amplifies and weaponizes it—making narratives faster, more pervasive, and harder to counter. Even when a false interpretation is exposed, the critical question remains: how far has it already spread, and how convincing was it to its intended audience?


A widely accepted narrative also serves a secondary function: isolating and marginalizing those who question it. Dissent is not crushed by force, but filtered out cognitively and socially, exposing potential challengers long before they can organize. In this sense, the narrative becomes self-enforcing. Traditional warfare uses tanks; cognitive warfare uses stories.

In strategic communication, accuracy is rarely decisive on its own; what matters is how the target audience interprets and internalizes the information. Accuracy informs, but interpretation decides. Even information that is factually flawed or selectively presented can be effective if it anchors itself to a broadly accepted truth, using that credibility as narrative leverage.


Strategic silence can be equally deafening. Silence does not simply mean “nothing”; it means “something is missing.” It signals absence, invites inference, and creates an interpretive vacuum that audiences instinctively fill—often with speculation, exaggeration, or worst-case assumptions—rendering even later factual clarification less effective.

When it comes to risk assessment, threat evaluation is no longer limited to kinetic danger; it must also account for the potential for narrative influence. Modern risk assessment increasingly treats narratives as munitions. This shift reflects the reality that physical damage is often secondary to the primary objective: manipulating the population’s perception of reality and its decision-making.


Liminal warfare operates at the “threshold” of detection, using ambiguity to achieve goals without triggering a conventional military response. This ambiguity allows adversaries to conduct covert operations whose sponsorship is suspected but remains unproven—such as Russia’s “little green men” in Crimea.


This pre-maneuver shaping phase—before physical force is employed—is where the battlespace is cognitively conditioned to accept a desired outcome. Success is therefore measured not by territory seized, but by the ability to hijack public attention, normalize ambiguity, and control the narrative.


Defense against AI-as-a-weapon in cognitive warfare begins with deliberate defense planning. Investment should prioritize narrative intelligence capabilities and training that enable early detection, integrated with existing intelligence, communications, and support structures so that influence campaigns are identified before they achieve strategic effect. Ultimately, though, it still comes down to encouraging critical thinking and verification.

The war for the mind is not new, but artificial intelligence has dramatically altered its scale, speed, and opacity. By accelerating narrative production and exploiting ambiguity, AI intensifies liminal warfare by pushing conflict deeper into the cognitive domain—often before it is recognized as such.


The more disturbing question is not whether cognitive warfare will expand, but how far it can go as agency, interpretation, and meaning are increasingly influenced by artificial systems. In shaping narratives at scale, we are not merely using AI as a tool; we are altering the conditions under which reality itself is perceived and contested. The challenge ahead is both technologically strategic and profoundly human: preserving cognitive autonomy in an environment where perception has become the primary terrain of conflict.


However, a darker question needs to be addressed: how far can cognitive warfare go once artificial intelligence no longer transmits meaning, but inhabits it? At that point, we are no longer shaping narratives—we are preparing a vessel for an influence that does not need to enter the physical world to be real. In other words, Pandora’s box speaks. This is not a prediction; it is a caution.

1) Liminal and Conceptual Envelopment: Warfare in the Age of Dragons
Fox, Amos. “Liminal and Conceptual Envelopment: Warfare in the Age of Dragons.” Small Wars Journal, May 26, 2020. https://smallwarsjournal.com/2020/05/26/liminal-and-conceptual-envelopment-warfare-age-dragons/

2) China’s Evolving Military Strategy (Book)
McReynolds, Joe, ed. China’s Evolving Military Strategy. Washington, DC: Jamestown Foundation / Brookings Institution Press, 2017, 174. https://www.google.com/books/edition/China_s_Evolving_Military_Strategy/7WxADwAAQBAJ

3) Cognitive Warfare: The Fight for Gray Matter in the Digital Gray Zone
Cheatham, Michael J., Angelique M. Geyer, Priscella A. Nohle, and Jonathan E. Vazquez. “Cognitive Warfare: The Fight for Gray Matter in the Digital Gray Zone.” National Defense University Press, 2023. https://ndupress.ndu.edu/Media/News/News-Article-View/Article/3853187/cognitive-warfare-the-fight-for-gray-matter-in-the-digital-gray-zone/

4) The Cognitive Warfare Concept
Claverie, Bernard and François du Cluzel. “The Cognitive Warfare Concept.” Innovation Hub – ACT, 2023. PDF. https://innovationhub-act.org/wp-content/uploads/2023/12/CW-article-Claverie-du-Cluzel-final_0.pdf

5) Liminal Manoeuvre and Conceptual Envelopment: Russian and Chinese Non-Conventional Responses to Western Military Dominance since 1991
Kilcullen, David J. “Liminal Manoeuvre and Conceptual Envelopment: Russian and Chinese Non-Conventional Responses to Western Military Dominance since 1991.” Online Journal, no. 2. Queen’s University, 2020. PDF. https://www.queensu.ca/psychology/sites/psycwww/files/uploaded_files/Graduate/OnlineJournal/Issue_2-Kilcullen.pdf

6) The Evolution of Unconventional Warfare
Kilcullen, David J. “The Evolution of Unconventional Warfare.” Scandinavian Journal of Military Studies 2, no. 1 (2019): 61–71. doi:10.31374/sjms.35. https://www.researchgate.net/publication/333222899_The_Evolution_of_Unconventional_Warfare