Tag Archives: wine ratings

Has the wine establishment turned its back on wine scores?

The Wine Curmudgeon writes stuff like this all the time: "Why the 100-point system of rating wine is irrelevant." In fact, I write about the foolishness of wine scores so often that you're probably tired of reading about it. But what happens when a member of the wine establishment, someone who uses the word "somm" in everyday conversation, says that "the future of wine ratings and recommendations will rely largely on friend recommendations and approval"?

It means wine scores are one step closer to going where they deserve to go.

Jonathan Cristaldi, who wrote all of that, is about as wine establishment as you can get — an instructor at the Napa Valley Wine Academy and deputy editor for The SOMM Journal and The Tasting Panel Magazine. In other words, he does not espouse the wonders of $5 wine at Aldi or complain about the Winestream Media.

So when Cristaldi says the 100-point scale and wine scores are increasingly irrelevant, it means something. How many of the old white guys who keep defending points were once called a new "Wine Prophet" by Time Out New York magazine? Writes Cristaldi:

More and more people will learn of wine's complexities through social engagement. Friends and confidants (trade and non-trade) will replace the lone critic and his bully pulpit. Wine drinkers will realize the power and worth of a discerning palate because of the value their friends place on such expectations.

The key here is that Cristaldi isn't writing for consumers, the 95 percent of us who will never spend more than $20 for a bottle of wine and don't care one way or the other about scores when we buy Little Black Dress or Cupcake. He is writing for the elite: the five percent who buy high-end wine, and everyone who has helped make scores part of selling wine over the past four decades, turning them into the shell game they are today.

There won't be a need for wine scores as we know them, says Cristaldi, because of that social engagement. This is more than the social media that the old white guys like to make fun of because they just know that Facebook and Twitter are stupid; it's a fundamental change in the way the wine supply chain works. Today, when a retailer or restaurateur buys wine, the distributor's sell sheets — the handouts given to customers — include Parker and Wine Spectator ratings and other wine scores. Because, as one top Dallas chef-owner told me, if the wine gets 95 points in the Spectator, he has to have it, whether he wants it or not.

But in Cristaldi's future, retailers and restaurateurs will buy wine because someone they know and respect recommends it, and the score will be just one part of that. And, given social media, they can check those recommendations in seconds, whether with a text, a tweet, an Instagram picture, or in apps like Vivino, Delectable, or CellarTracker. He calls this new breed "social sommeliers," because they participate in "the social conversation about wine."

These people, who are younger and include women and people of color, aren’t waiting for the distributor’s sell sheets with wine scores; they’re already talking about the wine with their colleagues around the world long before the distributor arrives. This is something that has never happened before in the history of wine, and it’s something the old white guys can’t even begin to understand. They think sell sheets are still the cutting edge.

And, finally, if you still think this is all silliness, consider a conversation I had with a 20-something wine drinker during a cheap wine book appearance. Why should I buy your book? he asked me. Who needs it? I can do this — and he twiddled his phone with his thumb — to find a good wine to drink.

Image courtesy of Jacksonville Wine Guide, using a Creative Commons license

Can we use wine back labels to figure out wine quality?


Mark Thornton: “The words — and not what they mean — on wine back labels are a clue to wine quality.”

Perhaps, because someone has finally discovered a way to measure the relationship between what's written on wine back labels and the quality of the wine.

The breakthrough came from a Harvard Ph.D. student named Mark Thornton, who took data from 75,000 wines in the Wine.com inventory and compared what was written on their back labels — and not what the words meant — with ratings from the site's users and from wine critics.

The findings? That certain words appear on the back labels of lower-quality wines, while others appear on the back labels of wines with higher ratings. Thornton told me he knows this isn't perfect, given how scores and wine ratings work, and he wants to improve that part of the study. In addition, he wants to refine the way his software decides which words to analyze, perhaps eliminating regions and better understanding phrases, like "grilled meats" instead of "grilled" and "meats."
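To make the approach concrete, here is a minimal sketch of this kind of back-label analysis in Python. It is not Thornton's code: the file name, column names, and the 25-wine cutoff are my own stand-ins, and the bigram step is just one simple way to keep phrases like "grilled meats" intact.

from collections import defaultdict
import csv

def tokens_and_bigrams(text):
    # Lowercased words plus adjacent-word pairs, so a phrase like
    # "grilled meats" survives as one term instead of two.
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    words = [w for w in words if w]
    return words + [f"{a} {b}" for a, b in zip(words, words[1:])]

def mean_rating_by_term(rows, min_wines=25):
    # Average the ratings of every wine whose back label contains
    # a given term; drop terms that appear on too few labels.
    totals = defaultdict(float)
    counts = defaultdict(int)
    for label_text, rating in rows:
        for term in set(tokens_and_bigrams(label_text)):
            totals[term] += rating
            counts[term] += 1
    return {t: totals[t] / counts[t]
            for t in totals if counts[t] >= min_wines}

# Hypothetical export: one back-label text and average rating per wine.
with open("wines.csv", newline="") as f:
    rows = [(r["label_text"], float(r["avg_rating"]))
            for r in csv.DictReader(f)]

ranked = sorted(mean_rating_by_term(rows).items(), key=lambda kv: kv[1])
print("terms on the lowest-rated labels: ", ranked[:10])
print("terms on the highest-rated labels:", ranked[-10:])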

What does matter is that Thornton’s work is apparently the first time anyone has done this kind of research, making it as revolutionary as it is helpful in deciphering the grocery store wall of wine.

Thornton, whose parents teach at Cal State Fresno, says this study interested him because it's about wine, which he likes, and because it ties into his Ph.D. research, which deals with how we describe things. One of the concepts the study takes into account is called "naive realism," in which we assume that what we sense has to be true for everyone, when it obviously isn't. Which dovetails neatly with wine.

Thornton’s findings confirm many of my suspicions about wine back labels, as well as how critics use descriptors. The word clouds on his site summarize the results; I’ve set them up so you can see them more easily here for the consumer ratings and here for the critic ratings.
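If you want to build similar word clouds yourself, the open-source wordcloud Python package (pip install wordcloud matplotlib) will draw one from a dictionary of term weights. The terms and weights below are invented for illustration; they are not Thornton's numbers.

from wordcloud import WordCloud
import matplotlib.pyplot as plt

# term -> weight; in practice, something like how far a term's
# mean rating sits above the overall mean. These are made up.
weights = {"handcrafted": 9, "powerful": 8, "black": 7,
           "balanced": 5, "grapefruit": 3, "pasta": 2, "soft": 1}

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(weights)
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()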

These are among the highlights of the study:

• Restaurant food pairings or terms like "pasta" appear on the labels of the lowest-rated wines. Thornton says this may well be because the wine doesn't have any wine-like qualities to recommend it.

• Words used to describe sauvignon blanc — grapefruit, herb, clean — show up on the critics' lowest-rated white wines. This is not surprising, given that sauvignon blanc has always garnered less respect from the Winestream Media than chardonnay.

• A location on the back label seems to indicate lower quality white wine; "handcrafted" is in the higher quality word cloud. For reds, "value" and "soft" are poor-quality words, while "powerful" and "black," probably used to describe black fruit, imply higher quality. "Handcrafted" is especially interesting, since it doesn't mean anything in terms of wine production.

Finally, a word about prices, which are also part of the study. Thornton divided the consumer ratings into five price ranges, and there was little difference in perceived quality among the first three. In other words, you get more value buying the cheapest wine. Shocking news, yes?

The critic price-value rankings were even more bizarre. The worst value came from wines that got scores in the mid-90s, while wines in the high 90s (and even 100) were less expensive, and the best value wines were around 90 points. Thornton says he isn’t quite sure why this is true, though it may have something to do with critic bias. My explanation is simpler: Wine scores are inherently flawed.
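For the curious, here is a sketch of the kind of price-band comparison the study describes, with made-up (price, rating) pairs standing in for the real Wine.com data; the equal-count bucketing is my assumption about how the five ranges were drawn.

import statistics

# Hypothetical (price, consumer rating) pairs.
wines = [(8.99, 3.8), (11.50, 3.9), (13.99, 3.8), (18.00, 3.9),
         (22.00, 3.9), (28.00, 4.0), (45.00, 4.1), (90.00, 4.3)]

prices = sorted(p for p, _ in wines)
n = len(prices)
# Boundaries for five equal-count price ranges, as in the study.
edges = [prices[i * n // 5] for i in range(1, 5)]

def price_range(price):
    return sum(price >= e for e in edges)  # 0 = cheapest range

by_range = {}
for price, rating in wines:
    by_range.setdefault(price_range(price), []).append(rating)

# If the cheapest ranges print nearly the same mean rating as the
# middle ones, the cheapest wines are the best value per dollar.
for r in sorted(by_range):
    print(f"price range {r + 1}: mean rating "
          f"{statistics.mean(by_range[r]):.2f}")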

How to manipulate on-line reviews with a clear conscience — get a federal court ruling

Always wondered how legitimate the scores and reviews were on sites like Yelp, Angie's List, and the Wine Spectator? Now, thanks to a federal appeals court ruling, you don't have to wonder: Legitimacy may not matter. The sites may be able to manipulate the ratings, and they don't necessarily have to tell you what they've done.

Or, as Lou Bright, the blog's unofficial attorney, says: "This does have the ethical aroma of dead rat, doesn't it? Yet neither Yelp nor the Wine Spectator are legally bound to be morally upright. The First Amendment allows for an awful lot of disreputable speech."

The court decision, made earlier this month in San Francisco, didn't break new legal ground when it found that the possible "engineering" of review postings on Yelp, based on whether businesses bought an ad on the site, was legal. The ruling came after several businesses sued Yelp, claiming the site moved unfavorable reviews higher and favorable reviews lower (or removed favorable reviews altogether) if the businesses didn't buy ads.

Said the ruling: "It is not unlawful for Yelp to post and sequence the reviews. As Yelp has the right to charge for legitimate advertising services, the threat of economic harm that Yelp leveraged is, at most, hard bargaining."

A legal thing here, so I don’t get sued. Yelp’s senior director of litigation said the company didn’t make review decisions based on whether anyone bought ads, and there is a disclaimer on the Yelp site. And I’m not saying Yelp does that. Or that Angie’s List, the Spectator or anyone else does it. Or that it goes on at all anywhere.

Rather, as W. Blake Gray wrote when he broke the story last week, the ruling reaffirms that sites or magazines that do reviews can charge for upgraded placement, higher scores, or better reviews with a clear conscience. After all, it’s just hard bargaining.

I talked to three other attorneys for this post, and each said the same thing as Bright: It's not a consumer-friendly practice, and there may be risk in the long run, but it's not necessarily illegal. As long as the site or magazine doesn't commit libel (which is often difficult to prove, says Dallas attorney Trey Crawford), and doesn't run afoul of the Federal Trade Commission, it's on safe legal ground. Some court decisions have even gone as far as to equate engineering with "editorial discretion."

What can you do to make sure ratings and reviews aren't engineered? Look for a disclaimer on the site, like the one I use and will continue to use: no one pays me for favorable reviews or to review their product, and it will always be that way. Because, if there isn't a disclaimer, anything is possible.

Wine business slow? Then boost the scores

There is a reason the Wine Curmudgeon is so cynical about the wine business. It’s news like this:

“The numbers are in, and they’re historically impressive. In Wine Spectator’s report on California Pinot Noir, a whopping 55% of the 350-plus wines from the 2009 vintage had scores of 90 points or higher, including 15 wines that scored a classic 95 or better. It’s the category’s best performance ever.”

More than half are 90-point wines? A record-setting four percent are “classic”? Why? What made the 2009 vintage so special? Robert Parker’s vintage rating called it barely “outstanding,” and one Sonoma winemaker didn’t even go that far; he called the 2009 crop good to very good.

Full disclosure, first, of course. Regular visitors here know that I have no use for scores, and so I view any report heralding scores with a sneer and a quizzical look. Also, I have not tasted all 350 wines in the Spectator report, and it’s always chancy to criticize something when you don’t have all the information.

Having said that, though, there are a couple of things to note about all of those classic wines. First, style matters. The 2009 pinots I have tasted were well made, but in that very ripe and busty style that the Winestream Media enjoys and that makes me reach for something else. Which is, of course, the biggest problem with scores. Second, many of the high-scoring wines cost more than $30 a bottle. If a wine that costs more than $30 a bottle doesn't score 90 or better, there isn't any reason for the winery to be in business. Which is, of course, another problem with scores.

Finally, what would happen if the Spectator did a pinot noir issue that said that the vintage was ordinary and that the wines were ordinary? And what would happen if the magazine did that during a three-year sales slump, like we’re going through now?

Exactly. So don’t worry if you miss this classic vintage; I’m willing to bet there will be another one in 12 months.

Consumer Reports’ top wines

One would think that it would be incredibly difficult to rate wine as if it were a refrigerator. There are objective measurements for refrigerators — how well does it maintain temperature? — and hardly any for wine.

Nevertheless, Consumer Reports, which has been rating products for some 80 years, does wine. I don't know that I agree with all of the choices in the December issue (a famous critter wine made it), but I can't argue with their methodology. This is about as objective as wine tasting gets.

"We're very specific about what we're looking for," says Maxine Siegel (no relation), who oversees the wine project for the magazine. "There are acceptable standards that we're looking for. And it does have to be a tasty wine."


The Freakonomics view of the wine business

Stephen Dubner is the co-author of the popular Freakonomics books and blog, which look at economic theory from a less than traditional perspective. In this radio interview transcript, Dubner talks about wine prices and the wine business, and whether price reflects quality and whether the experts are really experts.

A couple of the Wine Curmudgeon's pals show up, including Robin Goldstein of The Wine Trials, and it's a decent discussion of the objective vs. subjective nature of wine quality and wine prices. Which is nothing new to regular visitors here.

But anyone who appreciates what we do on the blog will love this. Dubner quotes his Freakonomics co-author, Steven Levitt: "My approach to buying wine for gifts is simple: I go in the store, and I look for the label that looks the most expensive of anything in the store. And I make sure it costs less than $15, and if it does, then I buy it."

Maybe I should send each of the Steves a tin of Wine Curmudgeon M&Ms.