The key is what Storchmann calls noise – the publicity a good score gives a wine. “The noise raises the quality perception of the wine, and the noise is larger for bad wine,” he told me. In the study, cheap wine is defined as “bad” simply because it costs less than “good” wine – the assumption being that better quality wine should cost more.
What happens, says the study, is that a high score for one cheap wine influences the perception of the entire brand, as well as of other vintages – possibly raising the price of every wine in the brand. That means that if Winery X’s 2011 merlot gets a 92, that score gives consumers the idea that the rest of X’s wines, whether chardonnay or zinfandel, whether 2009 or 2010, are just as good. Which isn’t necessarily the case.
It would seem logical that this would carry over to wine, given how common scores and reviews are – and how important they’re supposed to be. Since we don't know what a wine is like before we taste it, we rely on other people's opinions to guide us, and ratings and scores come from the experts. So a 78, which is a bad score, should stop anyone from buying a wine. That's the theory that Storchmann, the managing editor of the Journal of Wine Economics, was working with.
But, as regular visitors here know, wine is far from logical, and just the opposite seems to happen, says the study: "[The] largest spillover is in the low-quality bracket, resulting in significant overpricing of mediocre wines."
Even more intriguing about the spillover effect: Assume a $6 wine gets a good score, which then allows the winery to raise the price by $2 for every wine in the brand – a 33 percent increase for what could be a dozen or more wines over several vintages. Compare that to a $100 wine getting a good score and raising the price $10 – just a 10 percent increase. Who will come out ahead?
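The arithmetic behind that comparison is easy to check. Here’s a quick sketch (the dollar figures are the post’s hypothetical, not data from the study):

```python
# Hypothetical brand-wide price bumps after a good score (figures from the post).
cheap_old, cheap_new = 6.00, 8.00          # $6 wine raised by $2
premium_old, premium_new = 100.00, 110.00  # $100 wine raised by $10

# Percentage increase for each brand.
cheap_pct = (cheap_new - cheap_old) / cheap_old * 100
premium_pct = (premium_new - premium_old) / premium_old * 100

print(f"Cheap brand: {cheap_pct:.0f}% increase")      # 33% increase
print(f"Premium brand: {premium_pct:.0f}% increase")  # 10% increase
```

On percentage terms, the cheap brand gains three times as much – and that bump applies across every bottle in the brand, which is the overpricing the study describes.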
This explains a lot about the wine business that I’ve never been able to understand, like why so many of the multinationals that make cheap wine care so much about scores. And market them so obsessively. Or why so many smaller wineries are equally obsessed with getting a good score. It’s that damned spillover effect.
It also reinforces, once again, why scores are useless, and may well do more harm than good.
Having said all that, there is an important caveat. The wines and scores used in the study, some 41,000 of them for U.S. wine, appeared in the Wine Spectator between 1984 and 2008, so quality in the study is measured by the Spectator ratings. You can draw your own conclusions from that.