The Predictive Brain
How your brain actually works — and why it matters at work
Most of what people believe about how the brain works is wrong. Not wrong in a minor, fixable way. Wrong at the foundation. The standard picture — brain receives information, processes it, and reacts — is a fiction. A convincing one, because it matches everyday intuition. But the neuroscience that has accumulated over the past three decades tells a very different story.
The brain doesn't wait for the world to deliver information and then figure out what to do with it. It runs the process in reverse: it generates continuous predictions about what is happening — in the body, in the environment, in the room — and then uses incoming signals primarily to correct those predictions when they turn out to be wrong.
This is the predictive processing framework. It has become the dominant model in contemporary cognitive neuroscience, appearing in the major journals and guiding the work of leading researchers including Karl Friston, Lisa Feldman Barrett, and Anil Seth. It is also almost entirely absent from how most people understand their own minds, from how organizations are managed, and from how most professional development is designed.
What follows is a working explanation of the framework — the core mechanism, its implications, and where Buddhist philosophy turns out to describe the same territory from a different direction.
The core mechanism: predictions, not reactions
Start with a fact that doesn't get enough attention: the sensory data arriving at your brain is impoverished, ambiguous, and delayed. The signal coming from your eyes is a compressed, low-resolution sketch. The brain must reconstruct from that sketch a full, three-dimensional, real-time world. No camera does this. Cameras record. Brains infer.
Here is how that inference works. The brain maintains what researchers call a generative model: a layered, hierarchical set of predictions about what is causing the sensory signals it receives. These predictions flow downward through the cortical architecture, from higher-level regions making broad contextual predictions down to lower-level regions making specific sensory ones. Sensory input flows upward. But what travels upward is not raw sensation — it is prediction error: the difference between what was predicted and what actually arrived.
When prediction error is small, the model was accurate and nothing much changes. When it is large, the brain has two options. It can update its model to better fit the incoming signal. Or it can act on the world to bring the external situation into line with the prediction. Karl Friston's free energy principle formalizes both as instances of the same underlying imperative: minimize the gap between the model and the signal, by whatever means available.
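The core loop can be caricatured in a few lines of code. This is a toy sketch, not a model of any real neural circuit: the brain's generative model is hierarchical and vastly richer, and the names here (`update_belief`, the learning rate) are illustrative inventions, not terms from the literature.

```python
def update_belief(belief, observation, learning_rate=0.3):
    """Shift the model's prediction toward the signal by a fraction of the error."""
    error = observation - belief           # prediction error: what arrived minus what was predicted
    return belief + learning_rate * error  # update the model to shrink the gap

belief = 20.0   # the model's prediction of some sensory quantity
signal = 25.0   # what the senses actually report
for _ in range(10):
    belief = update_belief(belief, signal)
# After repeated small corrections, the belief has converged near the signal.
```

Each pass transmits only the error, and the model absorbs a fraction of it. That is the first of the brain's two options: revise the model. The second option, acting on the world, is taken up below.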
Lisa Feldman Barrett adds a crucial dimension to this picture. From her perspective, the brain's fundamental job is managing the body's physiological resources in anticipation of demand — a process called allostasis. Perception itself is in service of regulation: the brain predicts what the body will need and what the environment will require so it can act before problems arise rather than responding after the fact. Prediction isn't an intellectual luxury. It is the brain's primary survival strategy.
From perception to action
The predictive brain is not passive. Friston's active inference framework extends this logic from perception into behavior. When you reach for something, the brain generates a prediction of the sensory consequences of that movement — what it should feel like as the arm extends, as the hand closes on an object. Movement is the process of making those predicted sensory consequences come true. Motor control, on this account, is not the execution of commands but the resolution of prediction error in the body's proprioceptive system.
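The symmetry between perceiving and acting can be sketched in the same toy style. Again, a caricature under stated assumptions: one scalar "prediction" and one scalar "world," with the function name and rate invented for illustration.

```python
def minimize_error(prediction, world, mode, rate=0.5):
    """Two routes to the same end: shrink the gap between model and world.

    'perceive' revises the model to fit the signal (perceptual inference);
    'act' changes the world to fit the prediction (active inference).
    """
    error = world - prediction
    if mode == "perceive":
        prediction += rate * error   # update the model
    else:
        world -= rate * error        # act on the world
    return prediction, world

# Reaching: the hand (world) is at 0; the predicted hand position is 10.
pred, hand = 10.0, 0.0
for _ in range(8):
    pred, hand = minimize_error(pred, hand, mode="act")
# The hand ends up where the prediction said it would be: movement as
# the resolution of prediction error, with the prediction left unchanged.
```

Switching `mode` to `"perceive"` would instead leave the world alone and drag the prediction toward it, which is exactly the dual described by the free energy principle.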
The same logic scales up to social behavior. Walking into a meeting, the brain has already generated predictions about the emotional register of the interaction, who is likely to say what, and what demands will be placed on the body's resources. Behavior is largely the enactment of those predictions: acting in ways that bring the social environment into alignment with the model. The cost of this efficiency is that the model shapes what you perceive, often before conscious evaluation is possible.
Three things this changes
1. Perception is not shared
If the brain generates predictions and updates them with sensory data, then two people in the same room are not having the same perceptual experience. They are running different models, shaped by different histories, different body states, and different prior experiences. What each person sees is partly a function of what their brain already expected to see. The experience of looking at the same situation and arriving at different conclusions is not a failure of reasoning or goodwill. It is predictive processing operating exactly as designed.
This is a more mechanistically grounded version of epistemic humility than the generic acknowledgment that "we all have biases." The framework doesn't just say that bias exists; it explains why perception cannot work any other way.
2. Uncertainty is genuinely aversive — for a reason
Prediction error is not just cognitively inconvenient. It is aversive because it signals that the generative model is wrong, which threatens the system's ability to regulate effectively. When the model fails consistently, the body experiences something closer to threat than mild discomfort. This reframes a lot of behavior that tends to get labeled as irrationality or resistance: people don't resist organizational change because they are being unreasonable. They resist it because their predictive systems are working exactly as intended, and the cost of revising a stable model is genuinely high.
The same mechanism explains the discomfort of ambiguous social situations, the pull toward familiar patterns even when they aren't working, and the defensive response to direct challenges to strongly held beliefs. These are not character flaws. They are predictive systems under pressure.
3. Attention is not neutral
Attention, in predictive processing, is not a spotlight that illuminates whatever is there. It is a precision-weighting mechanism: the brain allocates attentional resources to signals expected to contain information useful for updating the model. Attention is theory-laden from the start, directed by predictions about where useful error signals are most likely to appear. What you notice is shaped by what your brain is already predicting — which means that changing what people pay attention to requires changing their predictions, not simply improving the quality of information available to them.
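Precision-weighting, too, has a one-line caricature: scale each error by how reliable or informative its channel is expected to be. The function name and the specific numbers are invented for illustration.

```python
def precision_weighted_update(belief, signals, precisions, lr=1.0):
    """Attention as precision-weighting: errors from channels expected to be
    reliable (high precision) move the model more than errors from noisy ones."""
    for signal, precision in zip(signals, precisions):
        belief += lr * precision * (signal - belief)
    return belief

# Two conflicting signals; the model assigns high precision to the first.
belief = precision_weighted_update(0.0, signals=[10.0, -10.0],
                                   precisions=[0.9, 0.1])
# The final belief sits near the high-precision signal: the low-precision
# channel was "heard" but barely moved the model.
```

The practical point survives the simplification: if the precision assignments are part of the model, then what someone attends to cannot be changed just by supplying better information on a channel their model has already down-weighted.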
Where Buddhist philosophy maps onto this
The Buddhist concept of pratītyasamutpāda — dependent origination — holds that all phenomena arise in dependence on conditions, not from any independent, self-sufficient cause. Applied to experience, this means that what the mind perceives is not a readout of an independently existing external reality but something that arises through the interaction of mind, sense organ, and object. Experience is co-constituted; it doesn't exist fully formed on either side and then get transmitted across.
The functional parallel with predictive processing is real and defensible. Both frameworks describe perception as an active, constructive process rather than a passive reception of ready-made facts. Both hold that the experience of directly encountering an independently existing world is, on closer examination, a produced effect. A serious neuroscientist and a serious Buddhist scholar could look at both accounts and recognize meaningful overlap.
The parallel has limits, and those limits matter. Predictive processing is a computational and neurobiological account of how the brain generates models of the world. It says nothing about the ultimate nature of the phenomena being modeled or about whether consciousness has any fundamental status. Buddhist philosophy — particularly in its Madhyamaka and Yogācāra developments — makes broader metaphysical claims that the neuroscience neither supports nor refutes. Collapsing that difference produces claims that are neither good neuroscience nor good Buddhist philosophy.
The most defensible version of the parallel is this: predictive processing provides a mechanistic account of something contemplative traditions have described phenomenologically for centuries — that experience is a construction, that the sense of perceiving a fixed and independent reality is a produced effect rather than a transparent window, and that this construction can be examined through careful attention. Buddhist practice developed systematic methods for exactly that examination. The neuroscience explains, at least partially, why those methods work.
Further reading
Barrett, Lisa Feldman. How Emotions Are Made: The Secret Life of the Brain. Houghton Mifflin Harcourt, 2017.
Clark, Andy. Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press, 2016.
Friston, Karl. "The free-energy principle: a unified brain theory?" Nature Reviews Neuroscience 11, no. 2 (2010): 127–138.
Seth, Anil K. Being You: A New Science of Consciousness. Dutton, 2021.
Related posts on Corporate Buddhist
The following blog posts develop the ideas on this page in more specific contexts:
The Story Arrives Before the Facts Do
Why Being Right Feels Like Survival
Conflict Isn't About What Happened — It's About Two Incompatible Models
Original Face: What Zen Knew Before Neuroscience Had Words for It