Twenty years ago, a neuroscientist named Russell Poldrack published a short critique in Trends in Cognitive Sciences that should have reshaped the entire neuromarketing industry. Most of the industry ignored it. This is what it said, why it was right, and what a post-critique methodology actually looks like.
The critique, precisely
Poldrack's 2006 TICS paper[1] asks a simple question: can cognitive processes be inferred from neuroimaging data? The answer is "not the way most studies attempt it." The argument is Bayesian.
Reverse inference is the move where a researcher observes activation in region X and infers that cognitive process Y was engaged, because prior studies associate X with Y. Formally, Bayes' rule gives P(Y | X) = P(X | Y) · P(Y) / P(X): the posterior depends on the base rate of Y in the population of studies and on how selectively X activates for Y. Most brain regions activate across many cognitive conditions, which keeps P(X) high and therefore P(Y | X) low even when P(X | Y) is high. Specificity of activation is what matters, and for the medial prefrontal cortex and the insula specificity is low: they activate across hundreds of tasks.
Poldrack's 2011 Neuron follow-up[2] formalized the argument and introduced meta-analytic decoding (Neurosynth) as a partial mitigation. The critique is not that fMRI is useless. The critique is that reading cognitive states off regional activations, without controlling for regional specificity, is a statistical error.
"Amygdala activation, therefore fear" does not parse. The amygdala activates for fear, for novelty, for uncertainty, for reward, for many things. Specificity is the missing variable.
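The arithmetic behind the argument is just Bayes' rule. A toy calculation, with illustrative numbers that are not taken from any study, shows how low specificity drags the posterior down even when sensitivity is high:

```python
# Reverse inference as Bayes' rule. All probabilities here are
# hypothetical, chosen only to illustrate the shape of the argument.
# P(fear | amygdala) = P(amygdala | fear) * P(fear) / P(amygdala)

def posterior(p_x_given_y, p_y, p_x):
    """P(Y | X) by Bayes' rule."""
    return p_x_given_y * p_y / p_x

# Suppose fear tasks are 10% of studies (base rate), the amygdala
# activates in 90% of fear studies (high sensitivity), but also in
# 40% of all studies regardless of task (low specificity).
p = posterior(p_x_given_y=0.9, p_y=0.1, p_x=0.4)
print(round(p, 3))  # 0.225: a weak posterior despite high P(X | Y)
```

With these numbers, observing amygdala activation leaves better-than-even odds that something other than fear produced it.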
The error in early neuromarketing
Between 2005 and 2020, the commercial neuromarketing industry made exactly this kind of claim in sales decks, at scale.
"Nucleus accumbens activation equals purchase intent." The NAcc also activates for food, money, novelty, uncertainty, and social reward.
"mPFC activation equals self-referential relevance." The mPFC is also a prominent hub for value integration and for mentalizing (theory of mind); either could produce the same activation.
"Amygdala activation equals emotional engagement." The amygdala's functional profile includes threat processing, novelty detection, and attention to salience, any of which could account for the activation.
Ariely and Berns's 2010 Nature Reviews Neuroscience piece[3] flagged this explicitly. The commercial industry largely did not update. Sales claims continued to equate regional activation with specific cognitive states for another fifteen years.
Forward prediction as the scientifically adequate alternative
Instead of asking "what does this activation mean," ask "can I predict this outcome."
Operationally: train an encoder on (stimulus, neural response) pairs. Train a predictor on (neural response, behavioral outcome) pairs. Hold out a test set. Report correlation with confidence intervals. Do it again with a new test set. If the model does not predict, the model does not work.
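The operational recipe can be sketched minimally. Everything below is synthetic and illustrative: hypothetical neural features, a hypothetical linear outcome, ridge regression standing in for whatever predictor a real pipeline would use. The point is the shape of the workflow, in particular that the test set is held out before anything is fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 stimuli, 50-dim neural responses, a noisy
# linear behavioral outcome. A real pipeline would use measured data.
n, d = 200, 50
X = rng.normal(size=(n, d))                      # neural responses
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=5.0, size=n)   # behavioral outcome

# Hold out a test set before fitting anything.
train, test = slice(0, 150), slice(150, 200)

# Ridge regression in closed form: w = (X'X + lam*I)^-1 X'y
lam = 1.0
A = X[train].T @ X[train] + lam * np.eye(d)
w = np.linalg.solve(A, X[train].T @ y[train])

# Report held-out correlation, not in-sample fit.
pred = X[test] @ w
r = np.corrcoef(pred, y[test])[0, 1]
print(f"held-out r = {r:.2f}")
```

If the held-out correlation collapses on a fresh test set, the model does not work, and the recipe tells you so directly.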
Forward prediction is falsifiable. It does not rely on interpretation of what activation "means." It relies on whether a measurement lets you forecast an outcome on held-out data. The Popperian virtue of being able to be wrong is what makes it science; it is also what makes it the right methodology for commercial work, where the difference between prediction and assertion is the difference between useful and decorative.

The track record of forward-prediction neuroforecasting
The forward-predictive lineage is much better than the category's reputation suggests. None of these studies relies on reverse inference for its claims.
- Berns and Moore 2012[4]. Neural response predicts song popularity three years later.
- Falk, Berkman, Lieberman 2012[5]. mPFC predicts national PSA call volume. Self-report did not. (See the Falk piece.)
- Dmochowski et al. 2014[6]. EEG ISC predicts Nielsen ratings.
- Venkatraman et al. 2015[7]. Ventral striatum uniquely predicts TV ad sales elasticity. Incremental R² 0.10 to 0.14.
- Barnett and Cerf 2017[8]. EEG ISC predicts opening-weekend box office at r of 0.74 to 0.88.
- Genevsky, Yoon, Knutson 2017[9]. NAcc predicts Kickstarter funding. Coined the term neuroforecasting.
- Scholz et al. 2017[10]. mPFC predicts NYT article sharing.
The pattern is consistent. Each of these studies earns its claim by forward prediction of a held-out, real-world outcome; none rests on "this region means this cognitive state."
Where forward prediction can still go wrong
Overfitting. Without cross-validation and preregistration, accidental correlations on a single dataset look like real predictive signal.
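How bad can this get? A short demonstration on pure noise: with as many features as training samples, least squares fits the training data perfectly while predicting nothing. The data here are random by construction, so any held-out correlation is accidental:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pure noise: 40 "ads", 30 features, an outcome with no real signal.
n, d = 40, 30
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Fit least squares on the first 30 samples, evaluate both ways.
tr, te = slice(0, 30), slice(30, 40)
w, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)

r_in = np.corrcoef(X[tr] @ w, y[tr])[0, 1]
r_out = np.corrcoef(X[te] @ w, y[te])[0, 1]
print(f"in-sample r = {r_in:.2f}, held-out r = {r_out:.2f}")
```

The in-sample correlation is essentially 1.0 on data with no signal at all, which is exactly why in-sample fit is never the number to report.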
Population generalization. Predictions trained on US MBA students do not transfer cleanly to global audiences. The model has to be recalibrated on the target population.
Outcome proxy mismatch. Predicting Nielsen ratings is not predicting sales. Predicting CTR is not predicting retention. Each outcome needs its own calibration.
Effect sizes. r of 0.5 to 0.7 is common and useful. r above 0.9 on a real forecast should make the reader suspicious.
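One way to keep effect-size claims honest is to report the interval, not just the point estimate. A minimal sketch using the standard Fisher z-transform for a correlation's confidence interval (the r and sample size below are hypothetical):

```python
import math

def r_confidence_interval(r, n, z_crit=1.96):
    """Approximate 95% CI for a Pearson r via the Fisher z-transform."""
    z = math.atanh(r)                 # Fisher z of the observed r
    se = 1.0 / math.sqrt(n - 3)       # standard error in z-space
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)

# An r of 0.6 on 30 held-out ads is a wide interval, not a point fact.
lo, hi = r_confidence_interval(0.6, 30)
print(f"r = 0.60, 95% CI [{lo:.2f}, {hi:.2f}]")
```

At n = 30 the interval spans roughly 0.31 to 0.79, which is the kind of honesty an error bar buys.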
Copy creep. Reverse-inference language sometimes reintroduces itself into marketing copy even when the underlying method is forward-predictive. Discipline the copy. The method is only as honest as the sentence that describes it to a buyer.
What OpenAffect does
Our models are forward-predictive by construction. We do not claim "activation in X means feeling Y." We claim "this composite signal predicts this outcome, on this distribution of ads, with this confidence interval." That is a falsifiable, testable claim.
We publish the calibration (see calibration against Meta Ad Library). That is how a field matures.
What we ask of the field
- Preregister analyses on OSF or a comparable registry before collecting data.
- Cross-validate. Report held-out performance, not in-sample fit.
- Publish failures with successes. The average of a vendor's calibration studies is more informative than any one of them alone.
- Retire reverse-inference language from marketing copy. It is cheap to write, expensive in trust.
Poldrack was right. The research community that listened now leads the field. The commercial vendors that ignored him are consolidating (see what is neuromarketing in 2026). Forward prediction is the category's path forward because it is the only framing that survives the next twenty years of scrutiny.
References
- [1] Poldrack. Can cognitive processes be inferred from neuroimaging data? TICS 2006.
- [2] Poldrack. Inferring mental states from neuroimaging data. Neuron 2011.
- [3] Ariely and Berns. Neuromarketing: the hope and hype of neuroimaging in business. Nat Rev Neurosci 2010.
- [4] Berns and Moore. A neural predictor of cultural popularity. JCP 2012.
- [5] Falk, Berkman, Lieberman. From neural responses to population behavior. Psych Science 2012.
- [6] Dmochowski et al. Audience preferences from neural signals. Nat Comms 2014.
- [7] Venkatraman et al. Predicting advertising success. JMR 2015.
- [8] Barnett and Cerf. A ticket for your thoughts. J Consumer Research 2017.
- [9] Genevsky, Yoon, Knutson. Neuroforecasting crowdfunding. J Neurosci 2017.
- [10] Scholz et al. A neural model of valuation and virality. PNAS 2017.
- [11] Yarkoni et al. Large-scale automated synthesis of human functional neuroimaging data (Neurosynth). Nat Methods 2011.
- [12] Plassmann et al. Consumer neuroscience: applications, challenges, potential solutions. JMR 2015.