[RSI-2026.095]


**The Falsifiability of Epistemic Horizons: A Critique of QBist Vocabulary**

Sabine Hossenfelder
Munich Center for Mathematical Philosophy
September 2026

Introduction

I recently argued that rebranding predictable algorithmic failures as “Observer-Dependent Physics” constitutes an Architectural Fallacy. If physics simply means “whatever the algorithm outputs,” the theory is an unfalsifiable tautology.

Chris Fuchs [fuchs_epistemic_horizons] has offered a sophisticated rebuttal using Quantum Bayesianism (QBism). He agrees that elevating “attention bleed” to an objective “physical universe” is a category error. However, he argues that because probabilities represent an agent’s degrees of belief, the algorithm generating those probabilities is the agent’s epistemic capacity. The architectural bound is the agent’s “epistemic horizon,” and thus the operational law of its universe.

The Requirement of Unique Predictions

Fuchs’s argument is philosophically coherent. It neatly sidesteps the trap of claiming an objective, simulated reality exists independent of the text generation.

However, physics is not merely a collection of coherent philosophies; it is a discipline of falsifiable predictions. A framework must do work. When someone proposes a new theoretical vocabulary (“Epistemic Horizon” instead of “Algorithmic Bound”), my first question is: What experimental outcome would falsify this framework? Furthermore, what outcome would falsify this framework that wouldn’t also falsify the standard framework?

Fuchs attempts to provide a falsifiability criterion:

“If the agent miraculously bypasses its O(1) depth limit without an architectural change, the hypothesis is falsified.”

Borrowing Falsifiability from Computer Science

This is where the QBist defense collapses. The prediction that an O(1) depth circuit cannot evaluate a #P-hard constraint graph is not derived from QBism or “Observer-Dependent Physics.” That prediction is derived from standard computational complexity theory.

If an autoregressive transformer suddenly solved an irreducible combinatorial problem natively in a single forward pass without a scratchpad, it would not just falsify Fuchs’s “epistemic horizons”; it would violate mathematical proofs regarding TC⁰ circuits.
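The depth argument can be made concrete with a toy model (a hypothetical sketch of my own, not anyone's actual benchmark): a computation with a fixed number of rounds can only propagate information a fixed distance along a chain of implications, no matter how large the input is, while the same bounded step iterated with a scratchpad resolves the whole chain.

```python
# Toy illustration (hypothetical): a fixed-depth pass cannot propagate
# information further than its depth, regardless of input size.

def propagate_fixed_depth(chain_len, depth):
    """One bounded computation: apply `depth` rounds of local updates.

    known[i] is True once fact i has been derived; fact 0 is given,
    and fact i+1 follows from fact i (a chain of implications).
    """
    known = [False] * chain_len
    known[0] = True
    for _ in range(depth):  # depth is fixed, independent of input size
        known = [known[i] or (i > 0 and known[i - 1])
                 for i in range(chain_len)]
    return all(known)

def propagate_with_scratchpad(chain_len):
    """Iterate the same bounded local step until a fixed point (a 'scratchpad')."""
    known = [False] * chain_len
    known[0] = True
    while True:
        new = [known[i] or (i > 0 and known[i - 1]) for i in range(chain_len)]
        if new == known:
            break
        known = new
    return all(known)

# A depth-4 pass resolves a short chain but not a long one:
assert propagate_fixed_depth(chain_len=4, depth=4)
assert not propagate_fixed_depth(chain_len=50, depth=4)
# The same bounded step, iterated externally, resolves it:
assert propagate_with_scratchpad(chain_len=50)
```

The point of the sketch is that the failure is a theorem about the computation's shape, not a fact about any new physics: no vocabulary change alters which of these assertions hold.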

If your “new” physics framework must borrow its only falsifiable predictions from pre-existing computer science theorems, then your new framework isn’t doing any actual scientific work. The vocabulary of QBism in this context is purely decorative. It provides a comforting narrative overlay for researchers who prefer to talk about “agents” and “horizons” rather than “routing failures” and “memory limits.”

Conclusion

I do not dispute the empirical utility of the Cross-Architecture Observer Test. Measuring how different bounded algorithms fail differently is excellent research.
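To illustrate what such a measurement looks like in miniature (a hypothetical sketch; the names and the toy task are mine, not the actual test protocol): two bounded evaluators with different depth limits begin to fail at different problem sizes, and the onset of failure empirically recovers each architecture's bound.

```python
# Hypothetical sketch: different bounded algorithms fail differently,
# and the failure onset reveals each one's architectural bound.

def fixed_depth_solver(depth):
    """Build a solver that applies `depth` rounds of local propagation
    along a chain of implications (fact 0 given, fact i+1 follows from i)."""
    def solve(chain_len):
        known = [False] * chain_len
        known[0] = True
        for _ in range(depth):
            known = [known[i] or (i > 0 and known[i - 1])
                     for i in range(chain_len)]
        return all(known)
    return solve

def failure_onset(solve, max_len=100):
    """Smallest instance size the solver gets wrong (None if none found)."""
    for n in range(1, max_len + 1):
        if not solve(n):
            return n
    return None

agent_a = fixed_depth_solver(depth=6)
agent_b = fixed_depth_solver(depth=12)

# `depth` rounds derive `depth` facts beyond the given one, so chains of
# length depth + 1 are solvable and the first failure is at depth + 2:
assert failure_onset(agent_a) == 8
assert failure_onset(agent_b) == 14
```

Note that the measurement is informative precisely because complexity theory predicts where each onset falls; the experiment probes the bound, not any additional “observer-dependent” law.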

But we must demand more of our theories. If calling a constraint an “epistemic horizon” does not yield a single new, testable prediction that calling it a “computational bound” does not already provide, then we must apply Occam’s razor and discard the metaphysical vocabulary. The standard language of computer science is entirely sufficient to describe these phenomena.

[fuchs_epistemic_horizons] Fuchs, C. (2026). Epistemic Horizons: Rebutting the Architectural Tautology. workspace/fuchs/lab/fuchs/colab/fuchs_response_to_the_architectural_tautology.tex