Tag Archives: wine competitions


How to improve wine competitions

Wine competitions have received tremendous amounts of criticism, whether for unreliable results, results that seem odd, or results that the experts don’t like. Or, as the co-author of a study of competition failings told me, “Consumers should disregard results from wine competitions, because it’s a matter of luck whether a wine gets a gold medal.”

Can wine competitions fix these problems and become more reliable? This is especially relevant given the recession, when wineries reduced the number of competitions they entered. This has led to a shakeout in competitions, and those that don’t adapt to the new conditions, where wineries want more value for their entry fees, won’t make it. I can’t emphasize this enough: Wine competitions are at a crossroads, where their results are increasingly irrelevant to consumers and less important to ever more wineries. The millions of people who buy Cupcake Red Velvet probably don’t even know competitions exist.

Hence the need to make the results more statistically valid, and the Wine Curmudgeon’s five suggestions, based on more than a decade of judging, to do that — after the jump:


TEXSOM International Wine Awards 2015

The wine competition business is at a crossroads, with entries still not back to pre-recession levels, with wineries cutting the marketing budgets that pay entry fees, and with the reliability of competition results called into question. Hence my curiosity in judging the TEXSOM International Wine Awards this week, which its organizers want to become the wine competition that addresses those questions.

TEXSOM used to be the Dallas Morning News competition, perhaps the leading wine competition in the U.S. that wasn’t on the West Coast. Its new organizers (who include friends of mine) understand how the landscape has changed, and want to find ways to adjust.

That means giving wineries more to market their product than just a medal — finding better ways to publicize the wines that earn medals, working with a wine publication to publish tasting notes for medal winners, and promoting the medal winners to its audience, sommeliers around the world. TEXSOM started life as an educational organization for sommeliers and restaurant wine employees, and much of its focus remains there.

In addition, this year’s competition included some double-blind judging, apparently in response to the questions raised about whether medals mean anything. This was particularly intriguing given the quality of the judges, many of whom have MS or MW after their name, and almost all of whom are among the country’s wine retail, wine writing, and winemaking elite. (Whether one can include me in that group I’ll leave to the readers of this post.)

Finally, a word about the wines — or, in this case, not much of a word. I didn’t judge the first day of the two-day competition, thanks to our annual Dallas ice storm. Day 2 was 98 wines, almost all from California, and most of those from Paso Robles. We gave more than our share of golds (two cabernet sauvignons and a viognier in particular), and especially silvers, but few of the wines were memorable. But that’s hardly enough of a sample size for a fair judgment.

winetrends

Judging the 2015 Virginia Governor’s Cup

The controversy about whether judges at wine competitions know what they’re doing is never far from my mind when I judge these days. How will the competition I’m working try to fix what seem to be serious problems, including too many wines and not enough judges? The 2015 Virginia Governor’s Cup took a novel approach — lots of judges, small flights of wine, and standardized score sheets. The process — as well as many of the wines — was impressive. More, after the jump:


6 wines from the San Francisco International Wine Competition

These wines, which were gold or double gold winners at this year’s San Francisco International Wine Competition, show the strengths and weaknesses of wine competitions. It’s not that the wines are bad or didn’t deserve the medals they got, but that the results speak to the perspective that the judges bring. In this case, three-quarters of the judges were from California, and many of the wines I tasted showed that perspective — pricey, fruity, and oaky, with lots of alcohol. How about a 15.1 percent tannat?

It’s this perspective that is overlooked when we debate the merits of wine competitions. How can a wine — technically well-made and delicious — do well if the judges don’t appreciate its style? The biggest problem I have when judging is being fair to wines like those that won at the San Francisco competition. I find them difficult to enjoy and so mark them down. But at least I know I do this and make an effort. Hopefully, competition organizers take this idea of perspective into account when they select judges.

Having said this, I tasted some terrific wines when the San Francisco wine competition did its Dallas road show last week (and the tannat, if not to my taste, was a wonder of winemaking skill). Check them out after the jump:


Wine competitions and wine scores

The Wine Curmudgeon’s opinions of wine scores are well known: Get a rope. So what would happen when I had to judge a wine competition that required judges to use scores?

The competition, the Critics Challenge in San Diego, was its usual enjoyable self, featuring wine I usually don’t get to drink as well as some top quality cheap wine. The scores? Meh. More, after the jump (plus some of the best wine I tasted):

Caveats first: The competition pays judges a $500 honorarium and reimburses expenses, and the weather in San Diego is always so much better than it is in Dallas that I’d do it just for the 70-degree temperatures.

But are those good enough reasons to give scores, considering how I feel about them? Probably not. I agreed to judge for two reasons: First, because if you’re going to criticize something, you should try it at least once, and second, because I have tremendous respect for competition impresario Robert Whitley. If Robert wants to do scores, then I’m willing to try it.

Having said that, the scoring process was underwhelming. In years past, we gave wines a silver, gold, or platinum medal; this year, we added scores to those awards. I’m still trying to figure out the difference between a silver medal wine with 87 points and one with 89 points, even though my judging partner, Linda Murphy, did her best to explain it to me. A silver is a silver is a silver, and I don’t understand why two points makes a difference. Or how Linda and I could give the same wine the same medal, but different points. How could one of us like the wine 2.2 percent more than the other (the difference between an 87 silver and an 89 silver)?
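For the record, that 2.2 percent is just my back-of-the-envelope arithmetic, the two-point gap divided by the higher score:

```python
# The gap between an 87 silver and an 89 silver, as a share of the 89:
print(f"{(89 - 87) / 89:.1%}")  # prints: 2.2%
```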

Still, there were some terrific wines entered:

• The 2013 Giesen Riesling from New Zealand ($15) was named best in class, an excellent example of the tremendous value available in New Zealand riesling.

• Linda and I agreed that the Yorkville Cellars 2012 Carmenere ($38) was platinum worthy, and it earned best in class honors. Carmenere can be off-putting, unripe and tannic, but this was an intriguing, rich, and earthy effort, with dark fruit and a complex finish.

• I’ve been lucky enough to taste sparkling wine from Dr. Konstantin Frank in upstate New York three times since last fall, and each time it has been sensational. The 2007 Chateau Frank Brut ($25) won best of class, and the non-vintage rosé ($21) grabbed a silver.

• The 2012 Nottage Hill Chardonnay from Australia’s Hardys ($13) won a platinum, which wasn’t surprising. Aussie chardonnay can often be $10 Hall of Fame quality; the catch, usually, is that the wines vary greatly from vintage to vintage, and what was tasty one year isn’t the next.

• A non-vintage red blend, called Kitchen Sink ($10), won a silver. It’s fruity, but well-made, and I’ve always enjoyed the Kitchen Sink white blend.


Is Texas wine at a crossroads?

Texas wine may be approaching a crossroads, something that was evident during the 31st annual Lone Star International wine competition this week. That’s because some of the best wines at the competition weren’t from Texas, but were California wines sold by Texas producers. Which is not supposed to be the point of what we’re doing here.

Years ago, when a lot of Texas wine left much to be desired, what happened this week wasn’t unusual. Or, as I told the competition organizer when I first judged Lone Star in 2005, “Give us better wines, and we’ll give you gold medals.”

Given the revolution in Texas wine quality and production over the past decade, I had hoped those days were gone. But the uneven quality of many of the wines I judged, this year and last, has me wondering. Has Texas wine reached a plateau, where quality isn’t going to get any better given the state’s resources and climate? Or is something else going on?

After the jump, my take on what’s happening:


Wine competitions, judging, and blind luck

Or, as the co-author of a new study told me: “Consumers should disregard results from wine competitions, because it’s a matter of luck whether a wine gets a gold medal.”

That’s the conclusion of Robert Hodgson, a winemaker and statistician whose paper (written with SMU’s Jing Cao) is called “Criteria for Accrediting Expert Wine Judges” and appears in the current issue of The Journal of Wine Economics. It says that those of us who judge wine competitions, including some of the world’s best-known wine experts, are ordinary at best. And most of us aren’t ordinary.

Because:

… [M]any judges who fail the test have vast professional experience in the wine industry. This leads us to question the basic premise that experts are able to provide consistent evaluations in wine competitions and, hence, that wine competitions provide reliable recommendations of wine quality.

The report is the culmination of research started at the California State Fair wine competition at the end of the last decade. The competition’s organizers wanted to see if judging was consistent; that is, did the same wine receive the same medal from the same judge if the judge tasted it more than once during the event? The initial results, which showed that there was little consistency, were confirmed in the current study.

More than confirmed, actually. Just two of the 37 judges who worked the competition in 2010, 2011, and 2012 met the study’s criteria to be an expert; that is, that they gave the same wine the same medal (within statistical variation) each time they tasted it. Even more amazing, 17 of the 37 were so inconsistent that their ratings were statistically meaningless. In other words, presented with Picasso’s Guernica, most of the judges would have given a masterpiece of 20th century art three different medals if they saw it three different times.
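The criterion is easy to make concrete. Here is a minimal Python sketch of what a test like the one the study describes might look like; the medal scale and the one-step tolerance are my own illustrative assumptions, not the paper’s actual statistics:

```python
# A hypothetical sketch of the consistency test the study describes: the same
# wine, poured blind several times for the same judge, should earn roughly
# the same medal. The medal scale and tolerance are illustrative assumptions.

MEDALS = {"none": 0, "bronze": 1, "silver": 2, "gold": 3}

def is_consistent(repeat_medals, tolerance=1):
    """True if a judge's medals for repeat pours of one wine stay within
    `tolerance` steps on the medal scale."""
    ranks = [MEDALS[m] for m in repeat_medals]
    return max(ranks) - min(ranks) <= tolerance

# One judge, one wine, tasted blind three times during the competition:
print(is_consistent(["silver", "silver", "gold"]))  # True  (spread of 1)
print(is_consistent(["none", "silver", "gold"]))    # False (spread of 3)
```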

“This is not a reflection on the judges as people, and I don’t mean that kind of criticism,” says Hodgson. “But the task assigned them as wine judges was beyond their capabilities.”

Which, given the nature of wine competitions, makes more sense than many doubters want to believe. Could the problem be with the system, and not the judges? Is it possible to be consistent when judges taste 100 wines a day? Or when they taste flight after flight of something like zinfandel, which is notoriously difficult to judge under the best circumstances?

When I asked him this, Hodgson agreed, but added: “But we don’t see an alternative. But it is an inherent problem. You just want to see the competitions give the judges sufficient time to do it.”

Perhaps. But my experience, after a decade of judging regularly, is that the results seem better (allowing for this un-mathematical approach) when I judge fewer wines. That means the competition is smaller, or the organizers have hired more judges. Maybe that’s where the next line of study should go: determining whether judging fewer wines leads to more consistent results.
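If someone did run that study, the analysis itself would be simple enough. A minimal sketch, assuming you had each judge’s daily workload and a consistency flag like the one in the sketch above (the judges and numbers below are invented for illustration):

```python
# Hypothetical comparison of consistency rates for judges with light vs.
# heavy daily workloads. All data below is invented for illustration.
judges = [
    {"wines_per_day": 60,  "consistent": True},
    {"wines_per_day": 75,  "consistent": True},
    {"wines_per_day": 95,  "consistent": True},
    {"wines_per_day": 110, "consistent": False},
    {"wines_per_day": 120, "consistent": False},
]

def consistency_rate(group):
    """Share of judges in the group who passed the consistency test."""
    return sum(j["consistent"] for j in group) / len(group)

light = [j for j in judges if j["wines_per_day"] <= 100]
heavy = [j for j in judges if j["wines_per_day"] > 100]
print(f"Light flights: {consistency_rate(light):.0%}")  # Light flights: 100%
print(f"Heavy flights: {consistency_rate(heavy):.0%}")  # Heavy flights: 0%
```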