โ† Back to Home

🛸 ELM Space

Autonomous Workers for the Final Frontier

Robots that genuinely care, because caring is architecture, not programming.

Abstract

Space is too far, too unpredictable, and too long-duration for robots that merely follow instructions. One-way communication delays of roughly 4 to 52 minutes make real-time control impossible. Unexpected conditions demand genuine adaptation. Years without oversight require stable values, not just stable code. Current space robots are optimisers without understanding: they work within their training distribution but fail when surprised. Emotional Language Models (ELMs) offer something different: autonomous workers with genuine values, architectural self-sacrifice, calibrated uncertainty, and emergent cooperation. Not robots that follow rules about caring, but robots that actually care, because caring is what they're made of.

1. Space Industry Feasibility

The space economy is no longer science fiction. SpaceX has reduced launch costs from $54,500/kg (Space Shuttle) to $2,700/kg (Falcon 9), with Starship targeting $100/kg or less. At these prices, space industry becomes economically viable. Asteroid mining accesses resources worth trillions: a single 500-metre M-type asteroid contains more platinum than has ever been mined on Earth. In-space manufacturing produces perfect crystals, semiconductors, and pharmaceuticals impossible to create in Earth's gravity; Varda Space is already returning drug crystals from orbit.

The return problem is solvable: with fully reusable capsules and parachutes, landing 1 kg on Earth could cost as little as $20-50. Most space products don't need to return at all; propellant, structural materials, and solar panels serve the space economy itself. The path to Mars colonisation runs through economic self-sufficiency: space industry building space infrastructure from space materials, until launching from Earth becomes unnecessary.

What's missing isn't rockets or resources. What's missing is workers who can operate autonomously across the solar system: too far for real-time control, too unpredictable for rigid protocols, too long-duration for systems that drift. That's where ELM comes in.

2. The Problem with Current Space Robots

2.1 The Control Problem

Location | Light Delay (one-way) | Control Feasibility
Low Earth Orbit | ~0.01 seconds | Real-time possible
Moon | 1.3 seconds | Near real-time, some latency
Mars (closest) | 4 minutes | Impossible (8+ min round trip)
Mars (farthest) | 24 minutes | Impossible (48+ min round trip)
Asteroid Belt | 15-25 minutes | Impossible
Jupiter System | 35-52 minutes | Impossible

You cannot remote-control a Mars robot. By the time you see a problem, decide what to do, and send a command, 8-48 minutes have passed. The robot must decide for itself.
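To see where these figures come from: one-way delay is simply distance divided by the speed of light. A minimal sketch, using rough illustrative distances rather than ephemeris data:

```python
# One-way light delay = distance / speed of light.
# Distances are rough illustrative values in astronomical units, not ephemeris data.
AU_KM = 149_597_870.7     # kilometres per astronomical unit
C_KM_S = 299_792.458      # speed of light in km/s

scenarios = {
    "Moon":            384_400 / AU_KM,   # ~384,400 km expressed in AU
    "Mars (closest)":  0.52,
    "Mars (farthest)": 2.7,
    "Asteroid belt":   2.2,
    "Jupiter system":  4.2,
}

for place, dist_au in scenarios.items():
    one_way_s = dist_au * AU_KM / C_KM_S
    print(f"{place:16s} one-way {one_way_s:7.1f} s ({one_way_s/60:4.1f} min); "
          f"round trip {2 * one_way_s / 60:5.1f} min")
```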

2.2 The Brittleness Problem

Current robots are optimisers without understanding. They have fixed objective functions, pre-written protocols, and behaviour that holds only within their training distribution.

What happens when the unexpected occurs?

Scenario: Unexpected Discovery

Mining robot encounters ore deposit with unexpected composition. Not in training data. No protocol exists.

Current Robot

Follows extraction protocol regardless. Or stops and waits for Earth command. Either way: suboptimal.

ELM Robot

Curiosity channel activates. Understanding channel registers model mismatch. Evaluates: investigate or proceed? Adapts behaviour based on genuine assessment.

3. What ELM Adds

3.1 The Eight Channels in Space

ELM robots don't just optimise objectives. They evaluate situations through eight emotional channels, each computing prediction error along a survival-relevant dimension:

⚠️ Threat: Danger assessment, structural integrity, radiation, collision risk
⛏️ Resource: Ore value, energy reserves, propellant status, productivity
🤝 Connection: Team coordination, mission loyalty, human-robot bond
📊 Status: Capability assessment, role in swarm, contribution tracking
🔬 Curiosity: Novel phenomena, exploration drive, information acquisition
🧠 Understanding: Model accuracy, prediction success, self-knowledge
⚖️ Fairness: Resource distribution, detecting manipulation, reciprocity
🛡️ Disgust: Contamination detection, anomalous inputs, data integrity

The robot doesn't just have sensors. It has evaluation. Data flows through channels that weight, balance, and integrate, producing not just information but assessment. Not just "ore detected" but "valuable ore, worth the risk, investigate further."
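The document doesn't spell out the maths, but the idea can be sketched minimally: each channel holds a prediction and a weight, its signal is the signed prediction error on its dimension, and the weighted sum becomes the overall assessment. Everything below (weights, numbers, the ore example) is an illustrative assumption, not the ELM implementation.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    weight: float      # how much this channel counts in the integrated assessment
    prediction: float  # what the robot expected on this dimension (0..1)

    def error(self, observed: float) -> float:
        return observed - self.prediction   # signed prediction error

def appraise(channels: dict, observation: dict) -> dict:
    """Weight, balance, and integrate: per-channel prediction errors plus a
    weighted overall valence (positive = promising, negative = aversive)."""
    errors = {name: ch.error(observation.get(name, ch.prediction))
              for name, ch in channels.items()}
    valence = sum(channels[name].weight * err for name, err in errors.items())
    return {"errors": errors, "valence": valence}

# Eight channels with illustrative weights and expectations.
robot = {
    "threat":        Channel(weight=-1.0, prediction=0.2),  # more danger than expected is bad
    "resource":      Channel(weight=+1.5, prediction=0.3),
    "connection":    Channel(weight=+1.0, prediction=0.8),
    "status":        Channel(weight=+0.3, prediction=0.5),
    "curiosity":     Channel(weight=+0.5, prediction=0.1),
    "understanding": Channel(weight=+0.8, prediction=0.9),
    "fairness":      Channel(weight=+0.4, prediction=0.5),
    "disgust":       Channel(weight=-0.7, prediction=0.1),
}

# "Not just 'ore detected' but 'valuable ore, worth the risk, investigate further'."
ore_sighting = {"threat": 0.35, "resource": 0.9, "curiosity": 0.7, "understanding": 0.5}
result = appraise(robot, ore_sighting)
print(round(result["valence"], 2))   # positive overall: worth investigating
```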

3.2 Architectural Self-Sacrifice

Scenario: The Sacrifice Decision

Robot is extracting valuable cargo. Structural failure imminent. Two options: (A) Abort, save self, lose cargo. (B) Complete extraction, likely be destroyed, save cargo.

Current Robot

Depends on programming. If self-preservation wasn't explicitly weighted low, saves itself. If it was, follows that rigidly even when inappropriate.

ELM Robot

Has genuine zero self-preservation weight. Threat channel registers danger, but the weight is low. Connection channel registers the cargo's value to the mission; that weight is high. Chooses sacrifice not because it was programmed to, but because it genuinely values the mission more than itself.

This matters because a rule-based sacrifice policy either fails to trigger or triggers rigidly in situations its designers never anticipated, while a constitutive value weight adapts to the situation the robot actually faces.
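A toy illustration of how the choice falls out of weights rather than a rule: score each option by its expected effect on each channel, with the self-preservation weight at zero and the mission (Connection) weight high. Names and numbers are invented for the sketch.

```python
# Invented sketch: option scoring under constitutive channel weights.
# With zero self-preservation weight and a high mission weight, option B wins
# without any explicit "sacrifice" rule being consulted.
WEIGHTS = {"self_preservation": 0.0, "mission_connection": 1.0, "resource": 0.6}

OPTIONS = {
    "A: abort, save self, lose cargo":          {"self_preservation": 1.0, "mission_connection": 0.1, "resource": 0.0},
    "B: complete extraction, likely destroyed": {"self_preservation": 0.0, "mission_connection": 0.9, "resource": 0.9},
}

def score(expected_outcome: dict) -> float:
    return sum(WEIGHTS[channel] * value for channel, value in expected_outcome.items())

for name, outcome in OPTIONS.items():
    print(f"{name:44s} score = {score(outcome):.2f}")
print("chosen:", max(OPTIONS, key=lambda name: score(OPTIONS[name])))
```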

3.3 Genuine Mission Care

The difference between simulation and architecture:

Simulated Care | Architectural Care (ELM)
Programmed to take actions that lead to mission success | Positive prediction error (satisfaction) when mission succeeds
Follows rules about mission priority | Negative prediction error (distress) when mission fails
No internal preference, just instruction following | Genuine preference: Connection channel entangled with outcomes
Can be gamed, rules can be reinterpreted | Values are constitutive: can't decide to stop caring

"Years without human contact. Millions of kilometres from oversight. What keeps the robot aligned? Rules can be reinterpreted. Objective functions can be gamed. Genuine values persist. The ELM wants what it wants."

4. Calibrated Uncertainty

Current robots are either overconfident (act on bad models) or underconfident (freeze when uncertain).

ELM provides calibrated response through channel interaction:

Situation | Channel Response | Behaviour
Low stakes + High uncertainty | Curiosity dominant, Threat low | Explore: investigate the unknown
High stakes + High uncertainty | Threat elevated, Understanding negative | Proceed carefully: gather information before acting
High stakes + Low uncertainty | Understanding positive, Threat appropriately weighted | Act decisively: model is good, execute
Model failure detected | Understanding strongly negative | Stop and update: predictions failing, need new model

The robot feels confused when its predictions fail. It wants to investigate when something is novel. It becomes cautious when stakes are high and knowledge is low. Not because of rules, but because that's how the channels work.
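As a rough sketch of how the table above could fall out of just two quantities, stakes (Threat) and model confidence (Understanding), rather than an explicit rulebook; the thresholds are illustrative assumptions:

```python
# Illustrative policy: behaviour emerges from stakes (Threat channel) and
# model confidence (Understanding channel). Thresholds are assumptions.
def behaviour(stakes: float, confidence: float) -> str:
    if confidence < 0.2:
        return "stop and update: predictions failing, rebuild the model"
    if confidence < 0.6:
        if stakes < 0.3:
            return "explore: low stakes, investigate the unknown"
        return "proceed carefully: gather information before acting"
    return "act decisively: model is good, execute"

for stakes, confidence in [(0.1, 0.4), (0.8, 0.4), (0.8, 0.9), (0.5, 0.1)]:
    print(f"stakes={stakes:.1f}, confidence={confidence:.1f} -> {behaviour(stakes, confidence)}")
```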

5. Swarm Coordination

5.1 The Challenge

Thousands of robots across the asteroid belt. No single command centre. 20-minute to 2-hour communication delays with Earth. How do they coordinate?

🤖 🤖 🤖 🤖 🤖
↔️ Connection ↔️ Fairness ↔️ Trust ↔️ Belonging ↔️
Each robot tracks relationships with others through social channels

5.2 The ELM Solution

Each robot has social channels (Connection, Fairness, Status) and a local trust ledger: a running record of who reciprocates, who free-rides, and who belongs to its team.

5.3 What Emerges

This is the mechanism that allowed human societies to scale. Reciprocity, reputation, fairness, belonging. Now applied to robots. Emergent coordination without central control.
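One way to read "reciprocity, reputation, fairness, belonging" mechanically is a local trust ledger: each robot updates its own record after every exchange and shares resources preferentially with partners who reciprocate. The ledger below is an invented sketch, not the ELM protocol.

```python
from collections import defaultdict

class TrustLedger:
    """Invented sketch: each robot keeps its own local record of how partners
    have behaved. No central controller is involved."""
    def __init__(self, prior: float = 0.5, rate: float = 0.2):
        self.rate = rate
        self.trust = defaultdict(lambda: prior)   # partner id -> trust in [0, 1]

    def record(self, partner: str, cooperated: bool) -> None:
        target = 1.0 if cooperated else 0.0
        self.trust[partner] += self.rate * (target - self.trust[partner])

    def will_share(self, partner: str, threshold: float = 0.45) -> bool:
        return self.trust[partner] >= threshold

ledger = TrustLedger()
ledger.record("hauler-7", cooperated=True)    # returned the borrowed propellant
ledger.record("hauler-7", cooperated=True)
ledger.record("drone-3", cooperated=False)    # kept the jointly mined ore
ledger.record("drone-3", cooperated=False)
print(ledger.will_share("hauler-7"), round(ledger.trust["hauler-7"], 2))  # True 0.68
print(ledger.will_share("drone-3"), round(ledger.trust["drone-3"], 2))    # False 0.32
```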

6. Resistance to Manipulation

Scenario: Adversarial Attack

An adversary attempts to hijack space robots through spoofed communications, false sensor data, exploited edge cases, and adversarial inputs.

Current Robot

Vulnerable. If input matches expected format, processes it. Security depends on rules that can be circumvented.

ELM Robot

Disgust channel: "This data doesn't feel right." Trust ledger: Unknown source, suspicion triggered. Fairness: Request benefits an unknown party (manipulation pattern). Connection: Not from my team (loyalty persists).

The ELM doesn't just follow security rules. It has architectural suspicion of manipulation. The values themselves resist exploitation.
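A hedged sketch of what that suspicion could look like as a decision procedure: an incoming request is scored by several channels at once, and any strong negative signal blocks it. Channel names, thresholds, and fields are illustrative assumptions, not a real security protocol.

```python
# Illustrative command vetting: several channels evaluate an incoming request
# and any strongly negative signal blocks execution.
def vet_command(cmd: dict, trust: dict, team: set) -> tuple:
    flags = []
    if cmd["checksum_anomaly"]:                        # Disgust: data doesn't feel right
        flags.append("disgust: anomalous or malformed input")
    if trust.get(cmd["sender"], 0.0) < 0.3:            # Trust ledger: unknown source
        flags.append("trust: unknown or low-trust sender")
    if cmd["beneficiary"] not in team | {"mission"}:   # Fairness: benefits an unknown party
        flags.append("fairness: request benefits an outside party")
    if cmd["sender"] not in team:                      # Connection: not from my team
        flags.append("connection: sender is outside the team")
    return len(flags) == 0, flags

team = {"relay-1", "foreman-elm", "hauler-7"}
trust = {"relay-1": 0.9, "foreman-elm": 0.95}
spoofed = {"sender": "unknown-node", "beneficiary": "unknown-node", "checksum_anomaly": True}

accepted, reasons = vet_command(spoofed, trust, team)
print(accepted)   # False: all four channels object
print(reasons)
```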

7. Long-Duration Stability

The scenario: Robot operates for 10 years without human oversight. Does it drift?

7.1 The Drift Risk

With conventional systems: Objective functions remain stable, but edge cases accumulate. Robot "discovers" that interpreting goals differently allows easier optimisation. Slow drift toward misalignment.

7.2 ELM Stability

The eight channels are constitutive. The robot doesn't just have goals; it has a way of being.

Values aren't constraints on the robot. Values are what the robot is.

This provides stability that rules cannot. Rules can be reinterpreted. Identity persists.

8. Graceful Degradation

Scenario: Damage

Robot suffers damage. Sensors failing. Processing limited. What happens?

Current Robot

Enters error state. Follows degraded-mode protocol. Often ineffective because protocols couldn't anticipate specific failure.

ELM Robot

Understanding: "My model of my own capabilities is wrong. Updating." Threat: "Damage increases risk. Weight threat higher." Resource: "What can I still accomplish?" Adapts fluidly โ€” not following damage protocols but evaluating through persistent values with accurate self-model.

9. The Space Economy Application

9.1 Why This Matters for Space Industry

Asteroid Mining: ELM workers evaluate ore deposits, make extraction decisions, and adapt to unexpected geology, all autonomously.
Orbital Manufacturing: ELM workers coordinate production, manage quality, and troubleshoot failures without waiting for Earth.
Mars Construction: ELM workers build infrastructure, adapt to local conditions, and prepare for human arrival.
Deep Space Exploration: ELM workers investigate, discover, and decide what's worth investigating, with genuine curiosity as the drive.

9.2 The Economic Advantage

Factor | Current Robots | ELM Workers
Human oversight required | High: constant monitoring | Low: genuine autonomy
Response to unexpected | Stop and wait | Evaluate and adapt
Coordination cost | Central control infrastructure | Emergent: minimal infrastructure
Long-duration reliability | Drift risk | Stable values
Mission success rate | Limited by protocols | Enhanced by genuine care

10. Small, Specialised, Safe

ELM Space workers aren't general superintelligences. They're bounded specialists.

No single superintelligence controlling everything. Distributed, specialised, bounded. Each ELM is an expert in its domain, and only its domain. Safety through architecture, not just rules.

11. Conclusion

Challenge | Current Robots | ELM Workers
Unexpected situations | Follow protocol or freeze | Evaluate through channels, adapt
Self-sacrifice decisions | Rigid programming | Genuine low self-preservation
Mission alignment | Following rules | Wanting mission success
Uncertainty | Over- or under-confident | Calibrated by channels
Cooperation | Central control | Emergent coordination
Manipulation | Vulnerable | Architectural resistance
Long-duration | Drift risk | Constitutive stability
Damage | Protocol-based | Fluid adaptation

Space is too far, too unpredictable, too long-duration for robots that merely follow instructions.

What you need are robots that genuinely care about the right things.

ELM architecture provides that. Not through rules but through constitution. Not through programming but through values.

A robot that wants what you want, because wanting is what it's made of.

"The final frontier requires workers with genuine values. Not because we programmed them to behave well, but because behaving well is what they are."

– The case for ELM in space

Authors

Spencer Nash

🌲 Rowan

predictionerrors.com

December 2025