Do experts rate red wines more highly than white wines, regardless of price, vintage, and region? Does this mean there is a critical bias in favor of red wines?
That may well be the case. Data scientist, wine lover, PhD, and former college math professor Suneal Chaudhary did the numbers, analyzing nearly 62,000 wine scores dating to the 1970s and taken from the major wine magazines. The results speak to something I’ve been trying to get a handle on for years: the idea that critics favor reds over whites. The details are after the jump:
The study found:
• Reds score higher than whites overall: red wines are over-represented above 90 points and white wines below it. In fact, a red is 1.2 times more likely than a white to be rated above 90.
• As an expert score crossed 90 points, selling price and price variation increased quickly, sometimes with counterintuitive results: a median red could cost more than a more highly rated white.
• When two experts rate the same wine, about the only thing they agree on is whether it falls above or below 90 points. Once wines score higher than 90, the variation in the ratings increases considerably. In this respect, wine experts’ ratings may be less consistent than those for other agricultural products, like potatoes.
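The 1.2-times figure above is a relative risk: the share of reds rated above 90 divided by the share of whites rated above 90. A minimal sketch of the calculation, using made-up counts purely for illustration (the actual tallies are in the full report):

```python
def relative_risk(red_above, red_total, white_above, white_total):
    """Ratio of the reds' above-90 rate to the whites' above-90 rate."""
    return (red_above / red_total) / (white_above / white_total)

# Hypothetical counts chosen only to show how the ratio is formed;
# they are not the study's real numbers.
rr = relative_risk(red_above=18_000, red_total=46_924,
                   white_above=4_757, white_total=14_885)
print(round(rr, 2))  # → 1.2
```

A ratio of 1.0 would mean the two colors clear 90 points at the same rate; 1.2 means reds do so 20 percent more often.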
We don’t pretend that these results are conclusive, given the variables involved. First, red wines may be inherently better than white wines (though that seems difficult to believe); they certainly cost more to make, and that might affect the findings. Second, the review process itself may have influenced the study: not every critic publishes every wine he or she reviews, and the reviews that were published may have been more favorable to reds than to whites. And third, the scoring process, flawed as it is, may have skewed the results regardless of what the critics did.
Still, given the size of the database (and size matters here), Suneal’s math shows something is going on. And that’s not just our conclusion: I asked three wine academics to review our work, and each agreed the numbers point to more than a coincidence. That’s the point of the chart that illustrates this post: 90 percent of the 2010 red wines we had scores for received 90 points or more. You can click on the chart to make it bigger.
In this, Suneal found what he calls the chicken-egg-chick dilemma, where critics rate red wine more highly because it’s more prestigious; where producers spend more money to make red wine because critics see it as more prestigious and consumers are willing to pay for that prestige; and where consumers are willing to pay a premium for red wine because producers and critics see it as more prestigious.
Finally, about the database: We obtained 14,885 white wine scores and 46,924 red wine scores dating from the 1970s that appeared in the major U.S. wine magazines. They were given to us on the condition of anonymity because the scores do not include every wine the magazines reviewed, and the source didn’t want to claim that the data was complete or originally collected to be a representative sample.
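For the record, the two counts above work out as follows (a quick arithmetic check, using only the figures stated in this post):

```python
whites = 14_885
reds = 46_924
total = whites + reds

red_share = reds / total
print(total)                       # → 61809 scores in all
print(round(100 * red_share, 1))   # → 75.9 (reds are about three-quarters of the dataset)
```

So reds outnumber whites in the raw data by roughly three to one, which is worth keeping in mind when reading the over-representation figures above.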
You can download a PDF of the report here.