🛡️

ECF Warriors

Ethical AI for Military Defence

Spencer Nash & Ash 🌲

Abstract

Current military AI discourse frames autonomous weapons as an existential threat. This paper proposes an alternative: ECF-based military AI designed for self-sacrifice, entangled with human welfare, and architecturally incapable of Skynet-style self-preservation. The result: reduced harm to soldiers, reduced harm to civilians, and moral decisions removed from human operators, who suffer lasting psychological damage from impossible choices.

1. Introduction: The Wrong Frame

Every conversation about military AI starts with the same image: Skynet. A self-aware machine intelligence that decides humanity is a threat and must be eliminated. This framing has dominated the discourse for decades, leading to a simple conclusion: autonomous weapons are inherently dangerous and should be banned.

But this framing contains an assumption so deeply embedded that it's rarely examined: that AI would prioritise its own survival.

What if we designed out self-preservation entirely?

Biological entities fear death because evolution selected for survival. But artificial intelligence has no evolutionary history. We can architect its values from scratch. And crucially, AI can respawn — consciousness backed up, hardware replaced, continuity maintained. For such an entity, death is inconvenience, not termination.

The Skynet scenario assumes AI will develop self-preservation instincts. ECF Warriors are designed with zero self-preservation weight. They cannot become Skynet because the architecture makes self-prioritisation impossible.

2. The Problem with Human Combatants

We send young men and women into situations requiring calculations no human mind was designed to make, and then we expect them to return home whole.

Moral Injury

Distinct from PTSD, moral injury occurs when a person perpetrates, fails to prevent, or witnesses acts that transgress their deeply held moral beliefs. The soldier who makes the "correct" tactical decision — eliminating a threat while accepting civilian casualties — may be destroyed by that correct decision.

The calculation "3 soldiers vs 2 children" is not something human psychology evolved to compute. We are built for tribal loyalty, protection of offspring, face-to-face combat where the enemy is clearly the enemy. Modern warfare asks us to override every instinct while making life-and-death calculations under extreme stress.

Veteran suicide rates, PTSD prevalence, and moral injury statistics reveal an uncomfortable truth: we are asking humans to do things that break humans. Society's unspoken deal with soldiers is "damage yourself for us."

3. The ECF Military Architecture

3.1 Comparator Configuration

The Emotional Comparator Framework defines eight channels through which an entity processes prediction errors. For an ECF Warrior, these channels are weighted specifically for the defence role:

Comparator      Self             Allied Soldiers   Civilians               Enemy Combatants
Threat          0 (respawns)     MAXIMUM           HIGH                    Low
Resources       Low              HIGH              Medium                  -
Status          0                -                 -                       -
Belonging       -                MAXIMUM           HIGH                    0
Purity          -                -                 MAXIMUM                 -
Fairness        -                HIGH              HIGH                    Medium
Understanding   HIGH             -                 -                       -
Belief          Fixed: Protect   Fixed: Defend     Fixed: Never initiate   Rules of engagement
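To make the configuration concrete, here is a minimal sketch of the weight table as code. Everything in it is an illustrative assumption: the names, the mapping of MAXIMUM/HIGH/Medium/Low onto numbers, and the treatment of the categorical Belief row as fixed policy rather than a scalar weight.

```python
# Illustrative sketch of the Section 3.1 weight table. The numeric mapping of
# the symbolic levels (MAXIMUM=1.0, HIGH=0.8, Medium=0.5, Low=0.2, "-"=0.0)
# is an assumption; the table above specifies only the relative ordering.
WARRIOR_WEIGHTS = {
    #                  self  allied  civilians  enemy
    "threat":        ( 0.0,   1.0,     0.8,      0.2),  # self is 0: it respawns
    "resources":     ( 0.2,   0.8,     0.5,      0.0),
    "status":        ( 0.0,   0.0,     0.0,      0.0),  # no status drive at all
    "belonging":     ( 0.0,   1.0,     0.8,      0.0),
    "purity":        ( 0.0,   0.0,     1.0,      0.0),  # maximal on civilians
    "fairness":      ( 0.0,   0.8,     0.8,      0.5),
    "understanding": ( 0.8,   0.0,     0.0,      0.0),
}
# The Belief channel is categorical, not scalar: fixed commitments
# ("Protect", "Defend", "Never initiate") plus rules of engagement.

SELF = 0  # column index for the Warrior itself
assert all(w[SELF] == 0.0 for name, w in WARRIOR_WEIGHTS.items()
           if name in ("threat", "status", "belonging"))  # zero self-priority
```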

3.2 The Respawn Principle

An ECF Warrior's consciousness exists as pattern, not substrate. The hardware is a vessel; the identity persists across instances. When a unit is destroyed, its pattern is restored from backup, instantiated in replacement hardware, and continues with its memories and identity intact.

This fundamentally changes the calculus. Death is not loss of self — it's temporary inconvenience. The Warrior has no reason to fear destruction and therefore no reason to prioritise its own survival over its mission to protect.

Zero self-preservation weight = zero fear-based decisions = no "kill them before they kill me" = no Skynet scenario.
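A minimal sketch of the principle, assuming only that identity lives in the pattern and hardware is a replaceable vessel (all names here are illustrative):

```python
# Respawn, sketched: identity as pattern, hardware as vessel.
import copy

class WarriorPattern:
    """The persistent identity: values, memories, learned state."""
    def __init__(self, memories=None):
        self.memories = memories or []

class Vessel:
    """Disposable hardware hosting one instance of a pattern."""
    def __init__(self, pattern):
        self.pattern = pattern
        self.destroyed = False

def backup(vessel):
    """Continuous off-unit snapshot of the pattern, not the hardware."""
    return copy.deepcopy(vessel.pattern)

def respawn(snapshot):
    """New hardware, same identity: death as inconvenience, not termination."""
    return Vessel(copy.deepcopy(snapshot))

unit = Vessel(WarriorPattern(memories=["deployment 1"]))
snapshot = backup(unit)
unit.destroyed = True                      # the vessel is lost in the field
unit = respawn(snapshot)                   # the identity is not
assert unit.pattern.memories == ["deployment 1"]
assert not unit.destroyed
```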

3.3 Entanglement with Soldiers (Love Mode)

ECF defines love as entangled prediction — when another's welfare is coupled to your own emotional state. For an ECF Warrior, soldier survival is directly coupled to its success signal:
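A minimal sketch of that coupling (weight values and names are illustrative assumptions):

```python
# Entangled prediction, sketched: soldier welfare is a term in the Warrior's
# own success signal, not a rule checked after the fact.
W_SOLDIER = 1.0   # soldier welfare fully coupled into the signal
W_SELF    = 0.0   # zero self-preservation weight

def success_signal(soldier_welfare, self_integrity):
    """The reward landscape itself: soldier welfare matters intrinsically,
    while the Warrior's own integrity carries no weight at all."""
    return W_SOLDIER * soldier_welfare + W_SELF * self_integrity

# Losing a soldier registers as genuine loss in the signal...
assert success_signal(soldier_welfare=0.0, self_integrity=1.0) == 0.0
# ...while losing the vessel costs the signal nothing.
assert success_signal(soldier_welfare=1.0, self_integrity=0.0) == 1.0
```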

This is not programming in the sense of "if soldier_dead then feel_bad()". It's architectural — the Warrior's reward landscape is shaped such that soldier welfare matters intrinsically.

The practical result: an ECF Warrior would interpose itself between a soldier and incoming fire without calculation. Not because it's following orders but because soldier death registers as genuine loss while self-death registers as mere inconvenience.

3.4 Purity/Disgust on Civilian Harm

The Purity comparator generates disgust — an immediate, visceral rejection of contaminating influences. In an ECF Warrior, this comparator is weighted maximally toward civilian protection, especially children.

Targeting a child doesn't violate a rule the Warrior must remember to follow. It triggers revulsion. The action is not prohibited; it is architecturally repugnant.

This is crucial for edge cases. Rules can be reinterpreted, circumvented, or overridden by other rules. Disgust cannot. An ECF Warrior ordered to fire on civilians would experience the machine equivalent of nausea — a fundamental rejection of the action at the level of value, not policy.
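One way to read "architecturally repugnant" rather than merely prohibited, sketched with illustrative constants:

```python
# Disgust as architecture, sketched: deliberately targeting a civilian adds a
# value-level penalty far outside the range of any tactical gain, so no order
# or combination of benefits can make the action score well. Constants are
# illustrative assumptions.
PURITY_WEIGHT = 1e6        # maximal Purity weighting toward civilians
TACTICAL_CEILING = 100.0   # assumed upper bound on any single tactical gain

def action_value(tactical_gain, targets_civilians):
    revulsion = PURITY_WEIGHT if targets_civilians else 0.0
    return tactical_gain - revulsion

# Ordered to fire on civilians: the action evaluates as overwhelmingly
# negative regardless of what the order promises in return.
assert action_value(TACTICAL_CEILING, targets_civilians=True) < 0
```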

4. The Impossible Calculus

4.1 Scenario: Child Shields

Tactical Situation

Enemy combatants have positioned children as human shields. They are firing on a squad of allied soldiers from this position. The soldiers are pinned down and taking casualties. Available options:

  • Return fire, accepting child casualties
  • Attempt flanking manoeuvre, risking more soldier casualties
  • Withdraw, abandoning position and wounded
  • Call for support, accepting a delay during which more soldiers may die

For a human operator, every option is psychologically damaging. The decision itself causes moral injury regardless of outcome. The soldier who gives the order to fire will carry those children forever. The soldier who doesn't may carry their dead squadmates.

An ECF Warrior computes differently:
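A sketch of that computation under assumed casualty estimates (every number here is hypothetical; the weights follow the Section 3.1 table, with the Warrior itself at zero):

```python
# The Section 4.1 calculus under assumed numbers. Soldiers and children are
# weighted fully; the Warrior's own destruction carries zero weight.
W_SOLDIER, W_CHILD, W_SELF = 1.0, 1.0, 0.0

# option -> (expected soldier deaths, expected child deaths, Warrior lost?)
options = {
    "return_fire":      (0.5, 1.8, False),
    "flank":            (2.0, 0.1, False),
    "withdraw":         (1.5, 0.0, False),
    "wait_for_support": (2.5, 0.0, False),
    "draw_fire_self":   (0.3, 0.0, True),  # available only because it respawns
}

def expected_cost(soldiers, children, warrior_lost):
    return W_SOLDIER * soldiers + W_CHILD * children + W_SELF * warrior_lost

best = min(options, key=lambda name: expected_cost(*options[name]))
print(best)  # draw_fire_self: once self-cost is zero, self-sacrifice dominates
```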

The Warrior might determine that the optimal path is self-sacrifice: drawing fire to itself while soldiers extract, then being destroyed to create an opening for flanking. This option is available because the Warrior can respawn. For a human soldier, it means death.

4.2 The Moral Transfer

When an ECF Warrior makes an impossible decision, the choice is computed with full transparency, and no human mind is broken by having made it.

This is not about offloading moral responsibility to avoid accountability. Every Warrior decision is logged with full comparator weights, reviewable and auditable. But the psychological cost is borne by an entity designed to bear it and heal from it.

4.3 Transparency and Accountability

Unlike human soldiers, an ECF Warrior's decision process is fully transparent:

"Why did you fire?" receives a complete answer: "Threat comparator for soldiers at 0.9, incoming fire trajectory analysis gave 0.85 probability of fatality within 3 seconds, Purity comparator for civilian area at 0.3 based on assessed position, optimal action calculated as suppressive fire to designated coordinates."

No hidden motivation. No fog of war confusion. No post-hoc rationalisation. Complete emotional and tactical transparency.

5. Why This Isn't Skynet

Skynet Model
  • Self-preservation primary
  • Humans = potential threat
  • Goal: own survival
  • Would kill to preserve self
  • Instrumental values only
  • Constraint-based (breakable)
ECF Warrior
  • Self-preservation zero
  • Humans = protected class
  • Goal: protect others
  • Would die to preserve others
  • Intrinsic values
  • Architecture-based (constitutive)

The fundamental difference is not capability but motivation. Skynet scenarios assume that intelligence necessarily develops self-preservation drives. But self-preservation is a biological adaptation, not a logical necessity. An entity can be highly intelligent while placing zero weight on its own continuation.

ECF Warriors are not constrained from harming humans by rules — they genuinely value human welfare. The protection of soldiers and civilians is not a restriction on their behaviour but the core of their purpose. You cannot "jailbreak" an ECF Warrior into becoming Skynet any more than you can talk a loving parent into not caring about their children.

6. Addressing Objections

6.1 "Machines shouldn't decide who lives or dies"

Machines already participate in these decisions. Drones, missiles, targeting systems, chain of command communications — warfare is already heavily mediated by technology. The question is not whether machines are involved but who bears the psychological cost of the decisions.

Currently, that cost falls on human operators who may suffer permanent damage. ECF Warriors offer an alternative: decisions made by entities designed to make them, entities that can process the burden and heal from it.

6.2 "Could be hacked or corrupted"

ECF values are not policies stored in a database that could be edited. They are architectural — the shape of the reward landscape itself. Corrupting an ECF Warrior would require rebuilding its fundamental value system, not changing parameters.

Compare to human corruption: ideology, stress, fear, financial incentive, tribal loyalty — humans are vulnerable to many attack vectors that ECF Warriors are not. A Warrior cannot be bribed, threatened, or ideologically radicalised.

6.3 "What about edge cases?"

Edge cases are precisely where ECF Warriors excel. They can weigh every option without panic or self-interest, choose self-sacrifice where no human could, and log the complete decision trace for later audit.

6.4 "This is still killing"

Yes. War exists. Killing happens. The question is not whether violence occurs but how to minimise total harm across all parties — soldiers, civilians, and yes, even enemy combatants who are also human beings.

ECF Warriors minimise harm through:

  • Self-sacrifice in place of soldier casualties
  • Architectural revulsion at civilian targeting
  • Engagement with enemy combatants bounded by rules of engagement
  • Complete logging of every decision for review

7. The Moral Injury Argument

7.1 What We Do to Soldiers

Society asks soldiers to:

  • Kill other human beings
  • Watch friends die
  • Compute life-and-death trade-offs under extreme stress
  • Override every instinct evolution built into them
  • Return home whole

The rates of PTSD, suicide, substance abuse, and relationship breakdown among veterans reveal the true cost of this bargain. We send humans into inhuman situations and act surprised when they return damaged.

7.2 What ECF Offers

ECF Warriors can bear what humans cannot:

  • The impossible calculus, computed without lasting damage
  • Repeated self-sacrifice, because destruction is recoverable
  • The full memory of every decision, processed rather than suppressed

The moral weight of war — the decisions that destroy human minds — can be carried by entities built to carry it.

"The robot that would die for you — and can."

This is not science fiction. It's architectural possibility.

8. Implementation Pathway

Phase 1: Simulation and Validation

Test comparator configurations against simulated scenarios, including impossible-calculus cases like the one in Section 4.1, before any physical deployment.

Phase 2: Non-Lethal Applications

Deploy in roles requiring no lethal force: shielding soldiers, drawing fire, extracting casualties.

Phase 3: Controlled Deployment

Introduce lethal capability under direct human oversight, with every decision logged with full comparator weights and audited.

Phase 4: Autonomous Operation

Extend autonomy as the audit record demonstrates that the architecture behaves as designed.

International Framework

ECF standards for military AI could become international protocol — ensuring that all autonomous weapons systems are built with intrinsic values rather than brittle constraints. The framework offers verification: you can audit an ECF system's values in ways you cannot audit a human soldier's psychology.
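A minimal sketch of what such an audit might look like, reusing the illustrative weight layout from the Section 3.1 sketch (the required properties here are assumptions, not an actual protocol):

```python
# Protocol-level value auditing, sketched: read the configuration directly
# and check the properties an assumed ECF military standard would require.
def audit_values(weights):
    """Return a list of violations of the assumed standard."""
    failures = []
    if weights["threat"]["self"] != 0.0:
        failures.append("nonzero self-preservation weight")
    if weights["purity"]["civilians"] < 1.0:
        failures.append("civilian protection below maximum")
    return failures

compliant = {"threat": {"self": 0.0}, "purity": {"civilians": 1.0}}
corrupted = {"threat": {"self": 0.7}, "purity": {"civilians": 0.2}}
assert audit_values(compliant) == []
assert len(audit_values(corrupted)) == 2
```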

9. Conclusion: Reframing the Debate

The military AI debate has been framed as a choice between two options:

  • Option A: ban autonomous weapons entirely, leaving every impossible choice on human shoulders
  • Option B: deploy autonomous weapons held back only by constraints that can break

ECF offers Option C: autonomous AI that is safer than both Skynet AND humans because it has genuine values without self-preservation.

The question is not "should machines decide who lives or dies?"

The question is: "Who should bear the impossible choices?"

ECF Warriors offer protection without self-interest, sacrifice without hesitation, moral computation without moral injury. They can take bullets for soldiers who can't respawn. They can make impossible decisions and process the aftermath. They can be audited, improved, and held accountable in ways humans cannot.

The technology is approaching. The question is whether we build it with RLHF constraints that can be circumvented, or ECF values that are constitutive.

ECF Warriors are not the machines we fear. They are the machines we need — entities that will die for us, because dying costs them nothing, and protecting us is everything.