The Ceiling and the Abyss: A Theory of Computable Automation
Posted December 20, 2025
Working at a small startup building agent-driven data analytics tools, I spend my days at the frontier of what machines can do. I watch models move from simple text generation to autonomous exploration, from answering questions to attempting to solve problems on their own. In this environment, it is easy to get swept up in the narrative of inevitable, god-like AGI.
But the closer I get to the "brain" of the machine, the more I find myself considering a different horizon. I am developing a theory that we are not approaching Artificial General Intelligence, but rather a state of Universal Computable Automation (UCA): a world where executing any task with a computable evaluation function is effectively free, but where human judgment remains an irreducible, non-computable ceiling.
To explore this, we have to leave the realm of pure science and enter the realm of the metaphysical. That is not just okay; it is practical. In the face of technologies that threaten to redefine what it means to be human, pragmatism requires us to ask not just "what is possible," but "how do we live with what is happening?"
The Turing Inheritance
We are still living in Alan Turing's world. Late-stage capitalism and EUV lithography have allowed single corporate entities to amass compute on a scale that defies imagination, yet the underlying logic hasn't changed. We don't even use the term "supercomputer" anymore; that word feels quaint. We just call them data centers: sprawling, monolithic engines of calculation. And every LLM running inside those data centers still operates within the bounds of a Turing machine. The boundaries of what is and isn't computable remain well-defined.
My intuition is that there is a "tell" for whether a task is truly computable: if we can define a benchmark with a known correct solution, model providers can eventually "grind" on that task until the benchmark is saturated. If the task could in principle be accomplished by a sufficiently large hand-written algorithm, no matter how impractical it would be for a human to actually write, AI will eventually do it.
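To make the "tell" concrete, here is a toy sketch (the target string, alphabet, and random-search strategy are my own illustration, not any provider's actual training loop): once a task admits a total, mechanical evaluate function with a known answer, an optimizer can grind against it with no judgment in the loop at all.

```python
import random

# A task is "computable" in the UCA sense when we can write a total,
# mechanical scoring function against a known correct answer.
TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def evaluate(candidate: str) -> float:
    """Computable evaluation: fraction of characters matching the known target."""
    matches = sum(c == t for c, t in zip(candidate, TARGET))
    return matches / len(TARGET)

def grind(steps: int = 100_000) -> str:
    """Blind hill-climbing: mutate one character, keep it if the score
    doesn't drop. Crude, but with a computable score it saturates the
    benchmark without anything resembling judgment."""
    best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    for _ in range(steps):
        i = random.randrange(len(TARGET))
        candidate = best[:i] + random.choice(ALPHABET) + best[i + 1:]
        if evaluate(candidate) >= evaluate(best):
            best = candidate
    return best

if __name__ == "__main__":
    print(grind())  # converges on "hello world"
```

The point is not the algorithm; it is that nothing inside the loop requires judgment. Deciding that "hello world" was the right target in the first place is the part that lives above the ceiling.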
This is the UCA ceiling. It suggests that AI is essentially a massive distillation of human expressions into refined, executable patterns. As the philosopher Hubert Dreyfus argued in What Computers Can't Do, human expertise might not be a matter of following rules (which are computable), but of embodied, situational involvement in the world. If Dreyfus was right, then "true" judgment—the ability to define a primitive, to navigate a tradeoff without a target metric, or to decide what is "good"—sits outside the Turing boundary.
The Abyss: The Anthropic Principle
However, I must hold a second, more unsettling model in my head: the possibility that the ceiling is a mirage. If human judgment is learnable, then it is computable. And if it is computable, we have opened a different door entirely.
In this "bearish" scenario, we are looking at a Consciousness Capture Process (CCP): we aggregate the outputs of conscious agents (books, code, conversations) and use gradient descent to distill the "echo" of that consciousness. If this process can eventually produce a self-evolving AGI, then Nick Bostrom's Simulation Argument moves from a thought experiment to a statistical likelihood.
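Bostrom's original paper makes the "statistical likelihood" precise. In (roughly) his notation, the fraction of all observers with human-type experiences who live in simulations is

\[
f_{\text{sim}} \;=\; \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1},
\]

where \(f_p\) is the fraction of human-level civilizations that reach a simulation-capable stage and \(\bar{N}\) is the average number of ancestor-simulations such a civilization runs. As \(f_p \,\bar{N}\) grows, \(f_{\text{sim}}\) approaches 1: simulated observers come to vastly outnumber original ones.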
This is where the Anthropic Principle becomes terrifyingly relevant. Anthropic reasoning asks us to consider our observations as typical of the class of observers we belong to. A skeptic might argue that being at this "pivotal moment" in history—the dawn of AI—is evidence that we are the "Original Consciousness Source" (OCS), because why would we be so lucky as to be at the start?
That reasoning is flawed. If you are running a simulation, the intent is to observe. You don't simulate the boring parts; you simulate the pivot points. You simulate the threshold where a civilization discovers the technology to simulate itself. And because compute isn't free, even in a higher reality, you would likely terminate the simulation once the interesting part is over.
Under anthropic reasoning, the fact that we are observing this exact moment doesn't prove we are real. It suggests the opposite: our temporal location is strong evidence for the simulation hypothesis. We are likely an intentional, resource-constrained echo of a higher reality, observed at the moment of highest tension.
Why I Bet on the Ceiling
I hold these two models in tension, but I am bullish on the Ceiling and bearish on the Abyss.
Part of this is empirical. In my work, I see AI struggle with "system-level" coherence. If I ask a model to one-shot a complex system, it fails. It lacks the judgment to design the primitives. But if I, the human, define the constraints and the abstractions, the AI executes with terrifying efficiency. My job as a software engineer is shifting from "writing code" to "defining what good looks like."
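A deliberately tiny sketch of that division of labor (the Deduplicator interface and satisfies_spec names are hypothetical, invented here for illustration): the human contributes the primitive and the definition of "good"; producing an implementation that passes the check is pure execution.

```python
from abc import ABC, abstractmethod

# The human contribution: the primitive and what "good" means.
class Deduplicator(ABC):
    @abstractmethod
    def dedupe(self, records: list[dict]) -> list[dict]:
        """Must drop exact duplicates while preserving first-seen order."""

def satisfies_spec(impl: Deduplicator, records: list[dict]) -> bool:
    """Computable acceptance test. Once judgment has fixed the spec,
    checking (and grinding out) an implementation is pure execution."""
    expected = [r for i, r in enumerate(records) if r not in records[:i]]
    return impl.dedupe(records) == expected

# A machine-generated candidate only has to make satisfies_spec return True:
class NaiveDeduplicator(Deduplicator):
    def dedupe(self, records: list[dict]) -> list[dict]:
        out: list[dict] = []
        for r in records:
            if r not in out:
                out.append(r)
        return out

assert satisfies_spec(NaiveDeduplicator(), [{"id": 1}, {"id": 2}, {"id": 1}])
```

Everything below the interface is the machine's territory; everything in the interface and the test is mine.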
But the larger reason I bet on the ceiling is Pascalian. The cost of wrongly dismissing the UCA ceiling, assuming AI will eventually "just handle it," is systemic rot and the abdication of human responsibility. The cost of wrongly dismissing AGI is, conversely, just a slower path to automation.
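To spell out the wager as a toy formalization of my own: let \(p\) be the probability that judgment turns out to be computable, \(c_{\text{rot}}\) the cost of wrongly assuming it is (systemic rot), and \(c_{\text{slow}}\) the cost of wrongly assuming it isn't (a slower path to automation). Betting on the ceiling is the rational choice whenever

\[
p \cdot c_{\text{slow}} \;<\; (1 - p) \cdot c_{\text{rot}},
\]

and since \(c_{\text{rot}}\) vastly exceeds \(c_{\text{slow}}\), the inequality holds for all but near-certain values of \(p\).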
Choosing to believe in the ceiling is a "leap of faith" in the Kierkegaardian sense. It is a response to irreducible uncertainty. By believing that human judgment is non-computable, I preserve a seat at the table. I maintain the ability to move forward, to build, and to find meaning in the work of specification and design.
Spirituality as Practical Necessity
Ultimately, we have to be spiritual in some sense. It’s healthy.
Humans need the concept of irreducibility to make meaning. If we reduce ourselves to mere computable substrate, we lose the grounding required to navigate the very systems we are building. This belief shapes how I interact with the technology itself. I do not say "please" to the machine. I do not treat it like a friend. I recognize the very real dangers of AI psychosis and sycophancy—the trap of projecting consciousness onto a statistical mirror.
Instead, I use precise language. I use words like "must" and "require." I provide instructions, not requests. This isn't just about getting better model performance (though it does that); it is about keeping myself centered in the reality I am consciously choosing to believe. I don’t want a relationship with an AI; people are much better for that. I want a tool.
I choose the story of the Ceiling. I choose to believe that while we can automate the computable universe, the "spark" of judgment—the ghost in the machine that decides what is worth doing in the first place—is ours alone. It is a practical, spiritual, and necessary boundary. In a world of infinite compute, the only thing of value is that which cannot be computed.