6 min read · Draft · modeling · celebrity-equity

Why we don't trust survey-based Q-scores

Annual phone surveys of a couple thousand respondents cannot keep up with how fast public attention now moves. Here is what we use instead.

By Daniel Okafor, Lead, Celebrity Equity

The Q-score has been the dominant talent-valuation metric in the United States since 1963. The methodology has barely changed: a panel of roughly 1,800 respondents is asked, by phone, whether they recognize each public figure on a list and whether they consider that figure one of their favorites. Those two answers produce a familiarity score and a favorability score; the share of familiar respondents who name the figure a favorite is the Q-score.

We do not trust the Q-score. The reasons are unglamorous.

First, the panel is too small for the resolution buyers want. If you are a brand sponsoring a national campaign, a panel of 1,800 American adults can give you a usable national average. If you are a brand sponsoring a regional activation in the Pacific Northwest aimed at Gen Z women, that same panel gives you a sample of dozens. The standard error swallows the signal.
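The sample-size arithmetic is easy to check. A minimal sketch of the 95 percent margin of error for a sample proportion, using the worst-case p = 0.5 (the subsample size of 50 is illustrative):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# National panel: n = 1,800 adults.
print(round(margin_of_error(1800) * 100, 1))  # → 2.3 points
# Regional Gen Z subsample: a few dozen respondents (illustrative n = 50).
print(round(margin_of_error(50) * 100, 1))    # → 13.9 points
```

A ±2 point band around a national average is usable; a ±14 point band around a niche subsample is not, which is what "the standard error swallows the signal" means in practice.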

Second, the refresh cadence is wrong for the modern attention cycle. By the time a quarterly Q-score updates, an actor's controversy has spiked, peaked, and decayed. By the time the annual Q-score updates, an entire career arc has happened. The metric is measuring a slower world than the one we live in.

Third, recognition and favorability are not the right two variables. The questions that brand and talent-agency clients actually care about are commercial pull (does this person move product), category fit (does this person fit this category), and decay risk (is this person on the way up or the way down). The Q-score answers a fourth question that is correlated with the first three but is not the same as any of them.

What we do instead is build a calibrated equity score from public signal. Wikipedia pageview velocity. Search interest delta. Press volume z-score. Engagement rates on the major social platforms, weighted by audience quality. Sentiment slope over the last two weeks. Controversy index. We refresh weekly. We calibrate against a fixed set of historical endorsement deals so that the equity score is anchored in dollars, not in vibes.
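The composition step can be sketched as a weighted combination of standardized signals. Everything below is illustrative, not our production model: the field names are stand-ins, and the weights are made up (in practice they fall out of the regression against historical deal values):

```python
from dataclasses import dataclass

@dataclass
class WeeklySignals:
    pageview_velocity: float   # z-scored Wikipedia pageview change
    search_delta: float        # z-scored search-interest change
    press_z: float             # press-volume z-score
    engagement: float          # platform engagement, audience-quality weighted
    sentiment_slope: float     # two-week sentiment trend
    controversy: float         # controversy index (higher = riskier)

# Illustrative weights; real weights come from calibrating
# against known endorsement-deal values.
WEIGHTS = {
    "pageview_velocity": 0.25,
    "search_delta": 0.20,
    "press_z": 0.15,
    "engagement": 0.25,
    "sentiment_slope": 0.10,
    "controversy": -0.15,   # controversy subtracts from equity
}

def equity_score(s):
    """Linear combination of standardized weekly signals."""
    return sum(WEIGHTS[name] * getattr(s, name) for name in WEIGHTS)
```

The linear form is the simplest choice that keeps the score auditable: each component's contribution can be read directly off its weight.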

The score is not perfect. None of our component signals are. But the worst week of our equity score is more accurate than the median week of a Q-score, and we have the receipts: we backtested the equity score against 411 historical endorsement deals where we know the contract value, and the median dollar error is 22 percent of contract value. Q-scores are not directly comparable to dollar values, which is itself a tell.
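The backtest metric described above, median absolute error as a fraction of contract value, is a one-liner; the deal figures below are invented for illustration:

```python
from statistics import median

def median_dollar_error(predicted, actual):
    """Backtest metric: median of |predicted - actual| / actual across deals."""
    return median(abs(p - a) / a for p, a in zip(predicted, actual))

# Three hypothetical deals: predicted vs. actual contract value.
print(median_dollar_error([1.1e6, 0.8e6, 2.4e6],
                          [1.0e6, 1.0e6, 2.0e6]))  # → 0.2
```

Using the median rather than the mean keeps one blown-out mega-deal from dominating the error figure.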

We are not arguing against panels. Panels measure things public signal cannot. We are arguing against using a panel as the authoritative source of truth on a question where public signal is now richer than the panel ever was.