Judging a wine competition is not an easy thing. This does not mean that the Wine Curmudgeon is complaining about doing it; as noted several times, what I do beats working for a living. Rather, I mention it in the context of a study in a learned journal that says wine competition judges are not always consistent.
This is particularly relevant because I judged a competition last weekend, the San Antonio Wine Competition, part of the city’s very well received wine festival. I’ll post a podcast later this week, featuring San Antonio wine critic John Griffin. My thanks, also, to John Costello, who put together a fine event, for asking me to judge.
The study, which looked at the California State Fair Commercial Wine Competition between 2005 and 2008, found that just 30 of the 65 judging panels produced consistent results. In other words, judges gave the same wines different scores, including one instance when a panel awarded a wine a double gold after rejecting it in two previous tastings.
Which makes perfect sense. Competitions, by their very nature, don't allow for leisurely discussions of a wine's merit. It's swirl, sniff, sip, and spit, and then on to the next wine. If there are inconsistencies, they aren't intentional. My panel judged about 120 wines in six hours on Saturday, including categories with 20 pinot grigios and 29 merlots.
Working through 29 merlots is a difficult task – they taste alike, they look alike, and you don't have a lot of time to poke around for differences. And that doesn't account for what's called tasting fatigue: the more wine you taste, the harder it becomes to tell what you're tasting.
In this respect, I think judging is more honest than wine scores, since judging is more collaborative and is done blind. All we know is that the wine in front of us is a merlot, and we go from there. But that doesn't mean we wouldn't like to improve the system.
The study's author, Robert Hodgson, says that's one of the reasons he did the study. The California competition's chief judge, G.M. Pucilowski, told Wines & Vines magazine that he hopes other competitions do similar studies, with the goal of establishing a baseline for judges that would reduce inconsistency and improve the quality of the results.
Everyone should be in favor of that.