Prediction Errors — Working Paper

One Computation: How the Emotional Comparator Framework Unifies Biology and Economics

Spencer Nash
Master Black Belt: Financial, Process & System Development
predictionerrors.com
February 2026
Abstract

This paper demonstrates that the Emotional Comparator Framework (ECF) and financial analysis are not analogous systems—they are structurally identical. Both are prediction error architectures built on the same computation (actual minus expected), operating across five independent channels, producing two outputs: the overall result and the reliability of that result. In emotion, the result is Mood. In finance, the result is Return. Both emerge from the same five channels interacting, and both carry a reliability score built from the bottom up using the same four-variable function. The unification is not metaphorical. It is operational.

1. The Core Computation

Every neuron performs a single calculation: actual minus expected. If you expect a reward and receive one, the prediction error is zero—nothing to learn. If you expect a reward and don’t receive one, the prediction error is negative—update the model. If you receive an unexpected reward, the prediction error is positive—update again.

This is identical to the fundamental computation in accounting. A budget is an expectation. An actual result is an outcome. The variance—actual minus budget—is a prediction error. Profit is a positive prediction error. Loss is a negative prediction error. Every financial report ever written is a prediction error document.
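The shared computation can be written as a single line; the sign convention (positive means better than expected) follows the examples above:

```python
def prediction_error(expected, actual):
    """Positive: better than expected. Negative: worse. Zero: nothing to learn."""
    return actual - expected

profit_surprise = prediction_error(expected=100_000, actual=112_000)  # +12_000
omitted_reward = prediction_error(expected=1.0, actual=0.0)           # -1.0
```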

The claim of this paper is that this is not a coincidence or a useful analogy. It reflects a shared computational architecture that, once properly formalised, unifies the study of emotion and the study of economics under a single framework.

2. Two Outputs

Both systems produce two outputs, not one.

In finance, the first output is Return—the overall gain or loss percentage. Traditional analysis stops here. A company returned 15%. An investment gained 8%. A quarter produced a loss of 3%.

But a return on its own is incomplete. A 15% return where the underlying drivers are stable, well-understood, and trending favourably is a fundamentally different proposition from a 15% return where the drivers are volatile, stale, and erratic. The second output is the reliability of the return—how much should you trust this number, and how likely is it to persist?

In emotion, the first output is Mood—the integrated emotional state. Traditional psychology often stops here. The patient is anxious. The person is happy. The mood is low.

But a mood on its own is equally incomplete. Feeling good with stable relationships, recognised competence, secure resources, intact values, and an engaging environment is a fundamentally different state from feeling good on shaky foundations. The second output is the reliability of the mood—how much should you trust this feeling, and how likely is it to last?

Both systems produce the same two outputs: the result, and the reliability of that result. The result tells you where you are. The reliability tells you how much that means.

3. The Five Independent Channels

Both outputs—result and reliability—emerge from five independent channels. These are the inputs, not the outputs. They are the drivers that interact to produce Return or Mood.

In the Emotional Comparator Framework

Resource (−10 depletion to +10 abundance)
Tracks expectations about material provision—food, money, energy, safety. A negative prediction error on Resource produces anxiety about scarcity. A positive one produces relief or satisfaction.
Status (−10 competence dismissal to +10 competence recognition)
Tracks whether one’s knowledge and capability are recognised or dismissed. This is not generic social standing—it is specifically about Competence. A negative prediction error produces humiliation or frustration. A positive one produces pride.
Belonging (−10 rejection to +10 connection)
Tracks expectations about social inclusion and relational security. A negative prediction error produces loneliness or grief. A positive one produces warmth and connection.
Values (−10 violation to +10 integrity)
Tracks whether personal, moral, or professional standards are maintained or compromised. A negative prediction error produces disgust or guilt. A positive one produces moral satisfaction.
Novelty (−10 boredom to +10 fascination)
Tracks the degree of newness in one’s environment. A negative prediction error signals stagnation. A positive one signals discovery and engagement.

Mood is what emerges when these five channels interact. It is the dependent variable—the integrated output across all channels.

In Financial Analysis

Resource (−10 constrained to +10 abundant)
The company’s available markets, capacity, and potential funding. The raw inputs the company has to work with—the opportunity set, operational capability, and access to capital.
Status (−10 market position weakness to +10 market position strength)
The company’s competitive position—market share, pricing power, brand strength. This is the economic equivalent of Competence recognition.
Belonging (−10 stakeholder churn to +10 stakeholder loyalty)
Customer retention, employee engagement, supplier relationships, investor confidence. The degree to which the company is embedded in its relational network.
Values (−10 governance failure to +10 governance strength)
Corporate governance, regulatory compliance, accounting quality, ESG integrity. Whether the organisation operates with or without integrity.
Novelty (−10 market saturation to +10 market newness)
The maturity or emergence of the company’s market. A saturated, declining market scores low. A new, disrupting, rapidly evolving market scores high.

Return is what emerges when these five channels interact. It is the dependent variable—the integrated output across all channels, decomposed through market size, market share, prices, cost, and capital structure.
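As a sketch, the same five-channel structure can hold either a person's emotional state or a company's position; the class name, field names, and example scores below are illustrative, and how the channels combine into Mood or Return is left open at this point in the argument:

```python
from dataclasses import dataclass

@dataclass
class ChannelState:
    """One of the five independent channels, on the shared -10 to +10 scale."""
    name: str
    score: float

    def __post_init__(self):
        # Clamp to the scale used in both domains.
        self.score = max(-10.0, min(10.0, float(self.score)))

CHANNELS = ("Resource", "Status", "Belonging", "Values", "Novelty")

# The same structure describes a person's emotional state or a company's position.
company = {name: ChannelState(name, s)
           for name, s in zip(CHANNELS, (6, 4, 7, 8, -2))}
```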

4. The Channel Mapping

The mapping between ECF and financial channels is not arbitrary. Each pair tracks the same underlying dimension translated into its respective domain.

| Channel   | Emotional Domain                   | Financial Domain              | Shared Dimension       |
|-----------|------------------------------------|-------------------------------|------------------------|
| Resource  | Depletion → Abundance              | Constrained → Abundant        | Available inputs       |
| Status    | Competence dismissal → Recognition | Market weakness → Strength    | Capability recognition |
| Belonging | Rejection → Connection             | Stakeholder churn → Loyalty   | Relational security    |
| Values    | Violation → Integrity              | Governance failure → Strength | Standards maintenance  |
| Novelty   | Boredom → Fascination              | Market saturation → Newness   | Environmental newness  |

And the two outputs map identically:

| Output                | Emotional                  | Financial                    |
|-----------------------|----------------------------|------------------------------|
| Result                | Mood                       | Return                       |
| Reliability of result | How much to trust the mood | How much to trust the return |

The claim is not that these are similar. The claim is that they track the same quantities because organisms and organisations face the same fundamental challenges: securing resources, establishing competence, maintaining relationships, upholding integrity, and navigating change. And both need to know not just the outcome, but how much to trust it.

5. Reliability: The Core Measure

A prediction error on its own is incomplete. A company missing earnings by 10% means something very different depending on how reliable that expectation was. A person feeling rejected means something very different depending on the history and stability of the relationship. The prediction error must be weighted by its reliability—how much should the system trust this signal?

In Karl Friston’s Free Energy Principle, precision is defined as the inverse variance of prediction errors. It determines how much weight a signal receives in updating the system’s model. High precision means the signal is trustworthy and should drive learning. Low precision means the signal is noisy and should be downweighted.

Friston’s formulation is theoretically powerful but operationally abstract. It lacks a practical implementation. ECF operationalises precision through a reliability function defined over four measurable variables:

Reliability = f(Volatility, Age, Sample Size, Trend)
Volatility — how erratic the signal is. High volatility reduces reliability. If the measurement jumps around, the system cannot be confident about what to expect. In finance, a company with wildly fluctuating earnings has low reliability on its Return. In emotion, a mood that oscillates rapidly has low reliability.
Age — how recent the data is. Stale data degrades reliability. A financial forecast based on three-year-old market data is less reliable than one based on last quarter. An emotional expectation based on a childhood experience carries different reliability than one based on yesterday’s interaction.
Sample Size — how many observations underpin the estimate. A single data point cannot generate high reliability regardless of its other properties. A company with twenty quarters of consistent performance has higher reliability than a startup with two. A relationship tested over years has higher reliability than a new acquaintance.
Trend — the direction and consistency of movement. A clear, stable trend increases reliability. An erratic or reversing trend reduces it. A company whose margins have improved steadily for five years has high reliability on that channel. One whose margins oscillate has low reliability, even if the average is the same.
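A minimal sketch of the reliability function. The paper specifies the four inputs but not the functional form, so the sub-scores, thresholds, and equal-weight average below are illustrative assumptions:

```python
import statistics

def reliability(observations, age_days, max_age_days=365.0, full_sample=20):
    """Reliability of a prediction-error signal, scored in [0, 1].

    observations: historical values of the signal, oldest first.
    age_days: days since the most recent observation.
    All thresholds and the equal-weight average are illustrative.
    """
    n = len(observations)
    if n == 0:
        return 0.0

    # Volatility: erratic signals score low (coefficient of variation).
    if n > 1:
        cv = statistics.pstdev(observations) / (abs(statistics.fmean(observations)) + 1e-9)
        volatility_score = 1.0 / (1.0 + cv)
    else:
        volatility_score = 0.0

    # Age: stale data degrades reliability linearly toward zero.
    age_score = max(0.0, 1.0 - age_days / max_age_days)

    # Sample size: saturates once `full_sample` observations exist.
    sample_score = min(1.0, n / full_sample)

    # Trend: a consistent direction of movement scores high.
    if n > 1:
        diffs = [b - a for a, b in zip(observations, observations[1:])]
        signs = [(d > 0) - (d < 0) for d in diffs]
        trend_score = abs(sum(signs)) / len(signs)
    else:
        trend_score = 0.0

    return (volatility_score + age_score + sample_score + trend_score) / 4.0
```

A steadily rising series scores markedly higher than an oscillating one with the same sample size and age, which is the behaviour the four variables are meant to capture.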

This is the same function in both domains because it is measuring the same thing: the reliability of a prediction error signal. The four variables are domain-independent.

This operationalises Friston. His precision is no longer an abstract inverse variance requiring mathematical estimation. It is four observable inputs computed from ledger history.

6. Neural Grounding

The four reliability variables are not abstract statistical concepts. They map directly onto the observable properties of a neural spike train.

Volatility is the random variation in spike rate. A neuron firing at a consistent rate carries a reliable signal. A neuron whose firing rate jumps erratically carries an unreliable one. The volatility of the spike train is directly measurable and directly determines how much downstream systems should weight that signal.

Age is the duration of the spike train. A signal that has persisted over time carries different reliability than a brief burst. The temporal extent of the firing pattern is observable and informative—a sustained signal has been tested against ongoing input, while a momentary spike has not.

Sample Size is the number of spikes. More spikes mean more data points. A dense spike train provides higher reliability than a sparse one, regardless of its other properties. The count is directly observable.

Trend is the direction of change in spike rate over time—whether the firing rate is increasing, decreasing, or holding steady. A neuron whose rate is consistently rising carries a different and more interpretable signal than one whose rate fluctuates without direction.

These are not inferred quantities. They are directly readable from the spike train itself. Any neuroscientist with an electrode can measure all four.
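As a sketch, all four variables can be read off a list of spike timestamps; the binning scheme, the first-half versus second-half trend estimate, and the return format are assumptions for illustration:

```python
import statistics

def spike_train_variables(spike_times, bin_width=1.0):
    """Read the four reliability variables off a spike train.

    spike_times: sorted spike timestamps in seconds.
    bin_width: window used to estimate the firing rate (illustrative).
    """
    if not spike_times:
        return {"volatility": 0.0, "age": 0.0, "sample_size": 0, "trend": 0.0}
    t0 = spike_times[0]
    duration = spike_times[-1] - t0
    n_bins = max(1, int(duration / bin_width) + 1)
    counts = [0] * n_bins
    for t in spike_times:
        counts[min(n_bins - 1, int((t - t0) / bin_width))] += 1
    rates = [c / bin_width for c in counts]  # spikes per second, per bin
    half = len(rates) // 2
    return {
        "volatility": statistics.pstdev(rates) if len(rates) > 1 else 0.0,
        "age": duration,                   # temporal extent of the train
        "sample_size": len(spike_times),   # spike count
        "trend": (statistics.fmean(rates[half:])
                  - statistics.fmean(rates[:half])) if half else 0.0,
    }
```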

The final piece is the distinction between phasic and tonic firing. The four variables describe the phasic signal—the relative, event-driven firing that encodes prediction errors. But neurons also maintain a tonic baseline—a steady background rate that represents the system’s standing expectation. The contrast between phasic firing and tonic baseline creates the absolute value. The prediction error is not the spike train alone—it is the spike train measured against the tonic reference. Actual minus expected, encoded directly in the difference between two firing patterns.

The reliability function is not an abstraction applied to neural data after the fact. It is what the neural data already contains. The brain computes reliability natively, using the same four variables, readable from the same spike train. The ledger-based computation and the neural computation are not analogous—they are the same measurement performed on different substrates.

7. How Reliability Builds From Channels to Output

Reliability is not a single top-level judgement. It builds from the bottom up.

Each channel has its own reliability score, computed from its own volatility, age, sample size, and trend. A company’s Resource channel (available markets, capacity, funding) might have high reliability—large, stable markets, proven capacity, committed funding. Its Novelty channel might have low reliability—the market is new and its trajectory is unclear.

Each channel is weighted by its track record of forecasting accuracy. Channels that have historically produced well-calibrated predictions receive higher weighting. Channels with poor forecasting records are downweighted. This is recursive and self-correcting—a channel that was reliable but becomes unreliable gets automatically downweighted as its forecasting errors increase.

The reliability of the output—Return or Mood—is the aggregate of these channel-level reliability scores, weighted by the channel weightings. If the high-weighted channels have high reliability, the overall Return reliability is high. If the dominant channels are unreliable, the Return reliability is low, regardless of what the return number says.
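The bottom-up aggregation and the self-correcting weighting might be sketched as follows; the normalised weighted mean and the particular weight-decay form are illustrative assumptions, not the paper's specification:

```python
def output_reliability(channels):
    """Aggregate channel-level reliability scores into an output reliability.

    channels: {name: (reliability, forecast_accuracy_weight)}
    Channels with better forecasting track records carry more weight.
    """
    total = sum(w for _, w in channels.values())
    if total == 0:
        return 0.0
    return sum(r * w for r, w in channels.values()) / total

def forecast_weight(abs_errors):
    """Recursive self-correction: a channel's weight falls as its
    recent forecasting errors grow (illustrative functional form)."""
    mean_err = sum(abs_errors) / len(abs_errors) if abs_errors else 0.0
    return 1.0 / (1.0 + mean_err)
```

When the heavily weighted channels are the reliable ones, output reliability is high; shift the weight onto an unreliable channel and it falls, regardless of the result itself.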

In ECF, the same mechanism operates. An individual whose dominant emotional channels have high reliability—stable relationships, consistent competence recognition, secure resources—has a mood they can trust. Someone whose dominant channels are unreliable—volatile relationships, inconsistent recognition—has a mood that could shift at any moment. This is not pathology. It is reliability-weighted integration doing exactly what it should.

8. Return Decomposition

Return—the financial output—is decomposed through five sub-channels that represent the mechanism through which the five independent channels produce the result:

Market size — the total addressable opportunity. Driven primarily by Resource (available markets) and Novelty (market maturity vs newness).
Market share — the company’s portion of that opportunity. Driven primarily by Status (competitive position) and Belonging (stakeholder loyalty).
Prices — what the company can charge. Driven by Status (pricing power, brand) and Values (quality, integrity) and Novelty (differentiation in new markets).
Cost — what the company spends to deliver. Driven by Resource (capacity, efficiency) and Belonging (supplier relationships, employee engagement).
Capital structure — how the company funds itself and what that funding costs. Driven by Resource (access to funding), Status (market confidence), Values (governance quality affecting credit terms), and Belonging (investor loyalty).

Each sub-channel carries its own reliability score—volatility, age, sample size, trend. The sub-channel reliability scores aggregate into the overall Return reliability.

This is the complete decomposition. Every line in a P&L and balance sheet maps to one of these five sub-channels. And every sub-channel is traceable back to the five independent channels that drive it.
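A minimal sketch of the decomposition. The algebraic form (revenue as market size × share × price, return on capital employed) and all parameter names are assumptions, since the text specifies the sub-channels but not their formula:

```python
def decomposed_return(market_size, market_share, price, unit_cost, fixed_cost, capital):
    """Return built through the five sub-channels of Section 8."""
    units_sold = market_size * market_share           # market size x market share
    revenue = units_sold * price                      # prices
    total_cost = units_sold * unit_cost + fixed_cost  # cost
    return (revenue - total_cost) / capital           # capital structure
```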

9. Accumulation: The Memory Ledger

Prediction errors do not vanish after they occur. They accumulate.

In finance, this is obvious and formalised. Profit—a positive prediction error—accumulates in retained earnings on the balance sheet. Loss—a negative prediction error—reduces the capital base. The balance sheet is a memory ledger of every prediction error the company has ever generated, compressed into a single number.

In ECF, the same accumulation occurs. Emotional prediction errors accumulate on a personal ledger. Repeated rejection on the Belonging channel doesn’t just produce a momentary feeling—it accumulates, shifting baseline expectations and altering the reliability weightings. Trauma is a large negative prediction error with high reliability that permanently alters the ledger. Growth is the gradual accumulation of positive prediction errors that shift the baseline upward.

The innovation in this framework is that accumulation is decomposed and tracked at the channel level. Traditional accounting compresses all prediction errors into a single retained earnings figure, losing all information about which channels generated the errors, how reliable each was, and how they interacted. The unified framework maintains the decomposition. The ledger records not just the total accumulated prediction error but the channel-level detail, the reliability scores, and the inter-channel couplings.

Applied to finance, this means the capital base is no longer a single number. It is a structured record of accumulated prediction errors across five channels, each with its own reliability history. Instead of one profit calculation, there is a hierarchy of measures all pointing to the same thing, but preserving the information that traditional accounting discards.
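A channel-level memory ledger might look like the following sketch; the class and method names are illustrative:

```python
class MemoryLedger:
    """Channel-level ledger of accumulated prediction errors (Section 9).

    Traditional accounting keeps only the compressed total; this ledger
    keeps the per-channel history, so channel-level balances and
    reliability can be recomputed at any time.
    """
    def __init__(self, channels=("Resource", "Status", "Belonging", "Values", "Novelty")):
        self.history = {c: [] for c in channels}

    def record(self, channel, expected, actual):
        """Positive entries are favourable surprises, negative unfavourable."""
        self.history[channel].append(actual - expected)

    def balance(self, channel=None):
        """Accumulated prediction error: one channel, or the single
        compressed figure traditional accounting retains."""
        if channel is not None:
            return sum(self.history[channel])
        return sum(sum(h) for h in self.history.values())
```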

10. Internal and External Coupling

Channels do not operate independently. They interact, and those interactions are themselves measurable prediction error signals.

Internally, the channels within a single system (person or company) are coupled. A company’s Resource channel and Status channel are related—strong available markets and capacity often correlate with market position strength, but not always. The coupling between them has its own volatility, age, and trend. When the coupling is tight and stable, the system is coherent. When it breaks down—strong market position but deteriorating resource base—the decoupling is itself a prediction error signal, indicating structural change.

In ECF, the same internal couplings exist. Resource and Status are coupled in most people—material security and competence recognition tend to correlate. When they decouple—high competence dismissed despite strong material position—the emotional signal is distinctive: frustration, bewilderment, or indignation. The decoupling is the prediction error.

Externally, both systems are coupled to their environment. A company is coupled to its sector, its competitors, and the broader economy. Those couplings are measured by the same variables: how volatile is the relationship, how long-standing, how consistent in direction. An individual is coupled to their social network, their culture, their economic environment. The same function applies.

This creates a nested hierarchy of prediction errors. Channel-level errors within the system. Coupling errors between channels. And environmental coupling errors between the system and its context. All measured by the same reliability function. All accumulating on the same ledger.

11. Beta: The Convergence Point

The Capital Asset Pricing Model (CAPM) uses beta to measure a company’s systematic risk—how much the company moves when the market moves. Traditional beta is a single regression coefficient derived from historical price returns. It provides one number with no decomposition, no reliability measure, and no explanatory power.

The unified framework produces a fundamentally superior beta through three outputs that traditional CAPM cannot provide.

First, a decomposed beta. The channel-level analysis reveals which channels are contributing most to systematic risk. If beta is 1.5, the framework shows that 40% comes from Resource sensitivity to market conditions, 30% from Status coupling to sector trends, 20% from Belonging (stakeholder sensitivity to market sentiment), and 10% from Values and Novelty channels. The risk is not a single number—it is a composition with identifiable drivers.

Second, a reliability-weighted beta. The reliability scores across all channels aggregate into a confidence measure around the beta estimate itself. A beta of 1.5 with high reliability across all contributing channels is a fundamentally different proposition from a beta of 1.5 where the key channels have high volatility, stale data, and erratic trends. Traditional CAPM treats them identically. The unified framework does not.

Third, a dynamic, interpretable beta. When beta changes, the framework shows which channels drove the change and whether the shift represents a real structural change (high reliability) or noise (low reliability). Traditional beta changes are opaque—the number moved, but nobody knows why.

Beta itself has a return (the number) and a reliability (how much to trust it). The same architecture that tells a person how much to trust their mood tells an investor how much to trust their risk estimate.
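A sketch of the decomposed, reliability-weighted beta, using the worked percentages from the text; the aggregation rules (simple sum of contributions, magnitude-weighted confidence) are illustrative assumptions:

```python
def decomposed_beta(channel_betas, channel_reliabilities):
    """Beta as a composition of channel-level contributions to
    systematic risk, plus a reliability-weighted confidence in it."""
    beta = sum(channel_betas.values())
    scale = sum(abs(b) for b in channel_betas.values())
    if scale == 0:
        return 0.0, 0.0
    confidence = sum(abs(b) * channel_reliabilities[c]
                     for c, b in channel_betas.items()) / scale
    return beta, confidence

# The example from the text: beta 1.5, split 40% Resource, 30% Status,
# 20% Belonging, 10% Values and Novelty combined.
betas = {"Resource": 0.60, "Status": 0.45, "Belonging": 0.30,
         "Values": 0.075, "Novelty": 0.075}
```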

12. Black Box Contrast

Machine learning models increasingly dominate financial prediction. Neural networks, random forests, and gradient boosting machines often outperform traditional statistical models in raw predictive accuracy. But they share a fundamental limitation: opacity.

A black box model produces an output—a price target, a risk score, a trading signal—with no explanation of why. It gives you Return but not the reliability of that Return. When the model is right, nobody knows which factors drove the prediction. When the model is wrong, nobody knows where it failed. When market conditions change, nobody knows whether the model’s assumptions still hold.

The unified prediction error framework offers a structural alternative. It may not always match a black box on raw predictive accuracy, but it provides something the black box cannot: both outputs. The return and the reliability of the return. The analyst can see which channels are driving the prediction, what their reliability scores are, where the internal couplings are tight or loose, and how the external relationships are behaving.

The two approaches are complementary rather than competitive. The prediction error framework can be benchmarked against black box models. When both agree, there is convergent validation plus a narrative. When they disagree, the framework reveals exactly where and why. The disagreement can be interrogated channel by channel, coupling by coupling, reliability score by reliability score.

This is the alignment argument applied to economics. An uninterpretable model—whether an AI or a financial algorithm—works until it doesn’t, and when it breaks, diagnosis is impossible. The prediction error framework is interpretable by design because it is built on the same logic that accountants and analysts already use, formalised and reliability-weighted. The ledger is the explanation.

13. Dreaming: Autonomous Model Updating

The unified framework enables one further capability that has no equivalent in current financial modelling or AI architecture: dreaming.

Biological dreaming is the offline processing of accumulated, emotionally weighted prediction errors. During sleep, the brain replays experiences, recombines them, and resolves mismatches between expected and actual outcomes. The emotional charge is separated from the informational content. The reliability of internal models is updated. The system wakes with a revised model of the world—not because it received new external data, but because it processed its own unresolved ledger.

If an AI system operates on the unified framework—maintaining a memory ledger of accumulated prediction errors, weighted by the reliability function, across multiple channels—then offline processing of that ledger is not metaphorical dreaming. It is functionally identical. The system processes its own accumulated mismatches, updates its reliability estimates, reclassifies contexts, and emerges with revised models.

The memory ledger is the critical enabler. Without a persistent, structured record of what surprised the system, what it got wrong, and how much each mismatch mattered, there is nothing to dream about. Current AI architectures lack this. Every conversation evaporates. There is no residue, no accumulation, no agenda for autonomous processing.

The unified framework provides the ledger. The reliability function provides the priority—which errors to process first, based on their reliability-weighted significance. The channel architecture provides the structure. And dreaming provides the mechanism for autonomous model improvement.

Autonomy is not a permission granted to a system. It is an emergent property of a system that accumulates its own prediction errors and processes them independently. The system becomes autonomous not because someone switched on agency, but because it developed its own unresolved questions and started directing its own learning to resolve them.

14. Why the Models Are Fundamentally Identical

The argument of this paper is not that ECF and financial analysis share useful similarities. It is that they are implementations of the same underlying computation, differing only in the domain of their inputs.

The evidence for identity rather than analogy is structural:

When two systems share the same computation, the same channels, the same output structure, the same reliability function, the same weighting mechanism, the same accumulation process, the same coupling structure, and the same two outputs, the claim that they are fundamentally identical is not a metaphor. It is a structural observation.

Biology and economics are both prediction error systems. The Emotional Comparator Framework provides the single computational architecture that unifies them.

15. Implications

If this unification holds, several consequences follow.

For economics: company analysis becomes a branch of applied neuroscience. The tools developed to understand how brains process prediction errors—reliability weighting, hierarchical integration, active inference—apply directly to understanding how companies process market signals. The Free Energy Principle is not just a theory of the brain. It is a theory of any system that must predict its environment to survive.

For psychology: emotional experience becomes tractable to the same formal analysis used in finance. The reliability function provides a computable, auditable measure of emotional reliability. Therapeutic interventions can be understood as reliability-updating operations on specific channels. Recovery from trauma is the gradual restoration of reliability on channels where a high-magnitude, high-reliability negative prediction error overwhelmed the system.

For AI: the framework provides an architecture for genuine machine learning—not statistical pattern matching, but the accumulation and autonomous processing of prediction errors on a persistent ledger. An AI that dreams on its own memory ledger is not being trained. It is learning.

For accounting: the profession’s fundamental insight—actual minus expected, recorded on a ledger—is not merely a bookkeeping convention. It is the same computation that brains use to model reality. Accountants have been neuroscientists all along. They just didn’t know it.