ScienceInsight 09

What is neuroforecasting?

Neuroforecasting uses small-sample neural data to predict aggregate population behavior. The discipline was formalized in 2017, but the evidence spans fifteen years across crowdfunding, microlending, media, and public health.

OpenAffect Research // 7 min read

Definition in one sentence

Neuroforecasting is the use of small-sample neural data to predict aggregate or future behavior in populations that were never scanned.

It is distinct from individual-level prediction, which is harder and less reliable. It is also distinct from reverse inference (see reverse inference twenty years on), which is the older neuromarketing trap of reading cognitive states off regional activations. Neuroforecasting is forward-predictive and falsifiable by construction.

Small neural samples predict population averages even when individuals cannot be predicted. That is the single insight the field is built on.

Where the term comes from

The term was formalized by Genevsky, Yoon, and Knutson in their 2017 Journal of Neuroscience paper[1], titled "When brain beats behavior: neuroforecasting crowdfunding outcomes." The paper showed nucleus accumbens activity in a small scanned sample predicted Kickstarter funding outcomes above and beyond the stated choices those same subjects made. The specific claim: neural data carries forecast-relevant information that self-report does not.

The handful of papers that built the field

Knutson et al. 2007. The SHOP paradigm[2] showed nucleus accumbens and insula activity predicted individual purchase decisions. Foundational individual-level work.

Falk, Berkman, Lieberman 2012. Neural focus group[3]. Small-sample mPFC activity predicted population-level call volume to 1-800-QUIT-NOW after anti-smoking campaigns aired. Self-report did not predict. (See the Falk piece.)

Berns and Moore 2012. Nucleus accumbens response in teens predicted subsequent song popularity[4]. Effect was strongest three years out, beating self-report.

Dmochowski et al. 2014. EEG inter-subject correlation predicted Nielsen TV ratings and Twitter activity[5]. Small n (12 to 16) forecasting national audiences.

Genevsky and Knutson 2015. 28 subjects' NAcc predicted Kiva microlending aggregate funding success[6].

Venkatraman et al. 2015. fMRI ventral striatum activity uniquely predicted elasticity-adjusted TV ad sales[7]. Incremental R² of 0.10 to 0.14 over traditional measures.

Genevsky, Yoon, Knutson 2017. Coined the term. Kickstarter funding from NAcc.

Barnett and Cerf 2017. EEG inter-subject correlation on movie trailers predicted opening-weekend box office at r of 0.74 to 0.88[8].

Scholz et al. 2017. mPFC plus value-system activity in roughly 40 scanner subjects predicted which NYT health articles went viral among millions of readers[9].

Why it works

The core methodological argument is statistical. Aggregate behavior averages out individual-level noise. A small neural sample, correctly analyzed, can predict population means even when individual prediction is modest.
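The argument can be sketched in a few lines of NumPy. The simulation below is illustrative, not drawn from any cited study: each scanned subject's neural signal is a noisy read of a stimulus's true value, individual behavior is equally noisy, and the population outcome is the near-noise-free average over many unscanned people. Individual-level prediction stays modest while the sample-mean neural signal tracks the population mean closely.

```python
import numpy as np

rng = np.random.default_rng(0)

S, n = 40, 30          # stimuli, scanned subjects
sigma = 2.0            # individual-level noise, large relative to the signal

v = rng.normal(size=S)                                # true per-stimulus value
neural = v + rng.normal(scale=sigma, size=(n, S))     # each subject's noisy neural readout
behavior = v + rng.normal(scale=sigma, size=(n, S))   # each subject's noisy choice

# Individual-level prediction: one subject's neural signal vs. that subject's behavior
indiv_r = np.mean([np.corrcoef(neural[i], behavior[i])[0, 1] for i in range(n)])

# Aggregate prediction: the small sample's MEAN neural signal vs. the population
# outcome, which is nearly noise-free because individual noise averages out
pop_outcome = v + rng.normal(scale=sigma / np.sqrt(10_000), size=S)
agg_r = np.corrcoef(neural.mean(axis=0), pop_outcome)[0, 1]

print(f"individual r ~ {indiv_r:.2f}, aggregate r ~ {agg_r:.2f}")
```

The same averaging that makes the population outcome predictable also denoises the small neural sample, which is why modest individual correlations coexist with strong aggregate ones.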

The neural signal carries information that self-report does not, especially for value integration (mPFC, nucleus accumbens) and for attentional reliability (inter-subject correlation). Self-report is filtered through social desirability, post-hoc rationalization, and verbal-fluency asymmetries. Neural measurement bypasses all three.
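Inter-subject correlation, the attentional-reliability measure behind the Dmochowski and Barnett results above, is simple to compute: correlate every pair of subjects' response time courses and average. A minimal synthetic sketch (the data here are simulated, not real EEG):

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 12, 500                         # subjects (Dmochowski-scale n), timepoints
shared = rng.normal(size=T)            # stimulus-driven component common to everyone
eeg = 0.6 * shared + rng.normal(size=(n, T))   # each subject's noisy response

# ISC = mean pairwise correlation between subjects' time courses;
# high ISC means the stimulus reliably drives responses across viewers
R = np.corrcoef(eeg)
isc = R[np.triu_indices(n, k=1)].mean()
print(f"ISC ~ {isc:.2f}")
```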

Per-content neural profile radar

Figure 01. A neuroforecast is built on profiles like this: network-level activation across the seven Yeo systems for a single stimulus, aggregated across a scanned sample. The aggregate profile predicts what millions of unscanned viewers will do. Small n beats large n of self-report when the target is a population average.

What the effect sizes actually are

Typical reported effect sizes: r of 0.3 to 0.8 against population outcomes, depending on stimulus type, outcome measure, and sample. Confidence intervals are wide because stimulus counts are small. This is a moderate-effect method, not a magic wand.

Anyone reporting r above 0.9 on a forecasting task should be read skeptically. Correlations in that range usually involve leakage (outcomes included in training), proxy mismatch (predicting a proxy that is much easier than the real outcome), or cherry-picked stimuli. Honest calibration (see our calibration study) lands in the mid-range.
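The cherry-picking failure mode is easy to reproduce: with a small stimulus set, screening many candidate predictors will turn up a high in-sample r by chance alone. A toy demonstration with pure-noise "neural" features (all synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
S = 15                                   # small stimulus count, as in many studies
y = rng.normal(size=S)                   # outcomes
feats = rng.normal(size=(200, S))        # 200 candidate features with NO real signal

# Pick the feature with the best in-sample correlation
rs = np.abs([np.corrcoef(f, y)[0, 1] for f in feats])
best = int(np.argmax(rs))
print(f"best in-sample |r| ~ {rs[best]:.2f}")   # looks impressive, means nothing

# The same feature scored against fresh outcomes collapses toward zero
y_new = rng.normal(size=S)
oos = abs(np.corrcoef(feats[best], y_new)[0, 1])
print(f"out-of-sample |r| ~ {oos:.2f}")
```

This is why held-out stimuli, preregistered outcomes, and published calibration matter more than the headline correlation.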

How TRIBE v2 changes the field

Forward encoding models like TRIBE v2 (see TRIBE v2 explained) let you run the Falk paradigm in silico on stimuli that no human has ever seen in a scanner. The expensive substrate, scanning a new sample for every study, is removed.

The method does not change. Small-sample neural data, aggregated, still predicts population averages. What changes is the marginal cost. Running a neural focus group on an unreleased ad goes from months to minutes.

What neuroforecasting is not

  • Not mind reading. The method forecasts aggregate outcomes. It does not read thoughts.
  • Not individual-level prediction. Scholz 2017's "forty subjects predict millions of readers" works at the population level, not the person level.
  • Not reverse inference. The claims are predictive ("this ad produces this pattern of activity in a typical viewer"), not interpretive ("this activity means this person felt X").
  • Not a replacement for behavioral data. Neural signal complements self-report and historical benchmarks. It does not replace them (see the four signals framework).

The OpenAffect frame

Neuroforecasting is one signal of four. It is useful. It is not sufficient. Forecasting content performance in 2026 requires integrating neural with linguistic, cultural, and historical signals and publishing the calibration. That is the infrastructure thesis. Neuroforecasting is the neural layer that makes the fusion worth doing.
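The fusion itself can be as simple as a calibrated linear combination. A hypothetical sketch: fit weights for the four signals on past outcomes, then score a new piece of content (the data, weights, and column order are invented for illustration; this is not the OpenAffect pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

# 50 hypothetical past campaigns, four z-scored signals each
# (columns: neural, linguistic, cultural, historical -- illustrative order)
X = rng.normal(size=(50, 4))
true_w = np.array([0.5, 0.2, 0.1, 0.2])              # unknown in practice
y = X @ true_w + rng.normal(scale=0.3, size=50)      # observed outcomes

# Calibrate fusion weights by least squares on the historical record
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Forecast a new, unreleased piece of content from its four signals
new_x = np.array([0.8, 0.2, -0.1, 0.5])
forecast = float(new_x @ w)
print(f"fused forecast ~ {forecast:.2f}")
```

Fitting the weights on historical outcomes, rather than asserting them, is what "publishing the calibration" means in practice.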

References

  1. Genevsky, Yoon, Knutson. When brain beats behavior: neuroforecasting crowdfunding outcomes. J Neurosci 2017.
  2. Knutson et al. Neural predictors of purchases. Neuron 2007.
  3. Falk, Berkman, Lieberman. From neural responses to population behavior. Psych Science 2012.
  4. Berns and Moore. A neural predictor of cultural popularity. JCP 2012.
  5. Dmochowski et al. Audience preferences from neural signals. Nat Comms 2014.
  6. Genevsky and Knutson. Neural affective mechanisms predict market-level microlending. Psych Science 2015.
  7. Venkatraman et al. Predicting advertising success beyond traditional measures. JMR 2015.
  8. Barnett and Cerf. A ticket for your thoughts. J Consumer Research 2017.
  9. Scholz et al. A neural model of valuation and information virality. PNAS 2017.
  10. Knutson and Genevsky. Neuroforecasting aggregate choice. Current Directions in Psych Science 2018.
  11. Smidts et al. Advancing consumer neuroscience. Marketing Letters 2014.