Ever bought a bottle of wine just because it had won a gold medal? Dumb move. You might as well have based your choice on the label design or sound of the name, because wine judges are inept.
It's a scientific fact.
A study of the oldest wine competition in the United States found that judges rarely gave identical, or even close, scores to the same wine when they tasted it multiple times. A mere 10 per cent of judges consistently ranked a wine within the same medal category; the other nine out of 10 placed it in different categories on different tastings.
And 10 per cent were so bad they scored one wine "gold" and "no medal" when they tasted it on separate occasions.
In one case, a four-judge panel twice rejected a wine during the preliminary pass-fail screening round before eventually giving it a thumbs-up and then unanimously awarding it top honours - or double-gold - in the medal round.
"It was an amazing result," said Robert Hodgson, the author of the study, who tracked dozens of judges at the California State Fair between 2005 and 2008. Dr. Hodgson, a retired physical oceanographer with expertise in statistics, conducted the study with the state fair's co-operation, secretly slipping triplicate samples of a chosen wine into each of dozens of blind tastings.
Though particularly unflattering to organizers of the Sacramento-based state fair, whose judges typically include about five Canadians, the results serve as a stark caveat for consumers. Those shiny gold-medal stickers found on the sides of wine bottles? They're as much a sign of perseverance and deep pockets on the part of a winery as of quality in the bottle.
"I think if you enter a wine in enough competitions, unless it's a spoiled wine, there's a chance someone will like it," Dr. Hodgson told me over the phone from his home near Mendocino, Calif.
Dr. Hodgson, who owns a small Northern California winery called Fieldbrook, had the idea for the illuminating study after watching his own wines take gold at certain competitions while coming up empty at others. He later came across results compiled by California Grapevine, an independent online buying guide, which among other things tracks awards performance by numerous California wines.
"I think they tracked 4,000 wines, and in over 1,000 cases there were wines that received gold medals in one competition and nothing in another. That's 25 per cent. So, I said, maybe the problem is with the judges lacking consistency."
A key objective of Dr. Hodgson's study, published in the Journal of Wine Economics, was to determine whether he could identify "superjudges" who could serve as mentors to their less-consistent counterparts. No such luck. Perhaps the most sensational finding of his research is that judges who performed well one year turned out to be mediocre the next.
Poor performance transcended even the boundaries of specific disciplines. Professional wine writers were no better than winemakers, who were no better than consultants, who were just as bad as sommeliers.
But there was one bright spot of relative consistency: Judges tended to be more reliable when giving the thumbs-down to wines they thought didn't deserve a medal. "They tend to know what they don't like, both individually and as a group," Dr. Hodgson said.
Predictably, some of the judges - who were not identified - have emerged with excuses, most notably "palate fatigue." Judges at such competitions typically taste between 100 and 120 wines a day, with little food in between. All that acid and alcohol can numb the palate - and the mind.
But Dr. Hodgson, who served as a state fair judge years ago, designed the study to compensate for that problem. The triplicate samples were always inserted into the same "flight," or grouping, of about eight to 12 wines, so the judges were tasting the identical samples only minutes apart. Also, most of the study samples came during the second flight of the day, giving panel members enough time to warm up, but not enough to get tired. And the triplicate samples all came from the same bottle, so the samples were, in effect, identical.
"Our goal in doing this was to try to give the judges every benefit of the doubt," he said. "You can't talk to me about palate fatigue."
Another predictable objection came in the form of that old chestnut: taste is subjective. But subjective taste is a red herring in this case. "It's one thing for one person not to agree with another," Dr. Hodgson said. "I fully understand that and I'm not upset about it. What I'm upset with or interested in with wine judges is when they're sampling the same wine. If they're inconsistent, then the result is meaningless."
Imagine a film critic who praises a film one moment, then pans it the next.
Though the study focused on one competition, Dr. Hodgson said the results could easily apply to wine competitions in general. Others agree.
"It's not just the fairs. It's much, much larger," said Karl Storchmann, an economist in Walla Walla, Wash., and managing editor of the Journal of Wine Economics. "Can we really have a number for a thing like wine?" he asked rhetorically, alluding to the controversial 100-point scoring standard employed by influential international publications such as Wine Spectator and Robert Parker's Wine Advocate.
Medal-mania is growing around the world, too, as wineries seek third-party endorsements to promote products that in most cases can't be sampled before purchase and must be sold on reputation.
Competitions have become serious businesses, with each entry typically costing a winery $50 to $75 - not including the cost of up to six sample bottles. But "gold medals sell wine," Dr. Hodgson said.
He hopes the findings will encourage wine-competition organizers to cut down on the number of wines judged by each panel on a given day, enabling judges to devote more attention to each sample. He also hopes competitions will consider supplementing raw numerical scores (typically used in determining medals) with verbal descriptors - such as "average" or "exceptional" or "off." Words can add a dimension of accuracy.
Words to describe wine - how refreshing.