The lead paper in the new issue of the Journal of Wine Economics is a study by Jonathan Reuter arguing that Wine Spectator's wine ratings for advertisers ran about one point higher than its ratings for non-advertisers, after controlling for quality using ratings from Wine Advocate. This is in spite of the magazine's stated policy of tasting wines completely blind.
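To make the "adjust for quality" step concrete: the idea is that two publications rating the same wines lets you separate a wine's underlying quality from any advertiser effect. The sketch below uses entirely invented numbers (not data from the paper) to show how a raw comparison of averages can look innocent while a quality-adjusted comparison reveals a one-point gap.

```python
# Hypothetical illustration of the paper's quality-adjustment idea:
# compare Wine Spectator (WS) ratings for advertisers vs. non-advertisers,
# using Wine Advocate (WA) ratings as a proxy for underlying quality.
# All numbers below are invented for demonstration only.

wines = [
    # (ws_rating, wa_rating, is_advertiser)
    (90, 89, True),
    (91, 90, True),
    (89, 88, True),
    (90, 90, False),
    (92, 92, False),
    (88, 88, False),
]

def mean(xs):
    return sum(xs) / len(xs)

# Raw comparison: average WS rating by advertiser status.
raw_adv = mean([ws for ws, _, adv in wines if adv])
raw_non = mean([ws for ws, _, adv in wines if not adv])

# Quality-adjusted comparison: average (WS - WA) gap by advertiser status.
adj_adv = mean([ws - wa for ws, wa, adv in wines if adv])
adj_non = mean([ws - wa for ws, wa, adv in wines if not adv])

print(f"raw difference: {raw_adv - raw_non:+.2f}")               # +0.00
print(f"quality-adjusted difference: {adj_adv - adj_non:+.2f}")  # +1.00
```

In this toy data the average WS ratings for both groups are identical (90), matching the paper's observation that the raw averages are similar; only when each wine's WA rating is subtracted out does the advertiser premium appear. (Reuter's actual analysis is a regression, of which this difference-of-gaps is the simplest special case.)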
This from the abstract:
“In markets for experience goods, publications exist to help consumers decide which products to purchase. However, in most cases these publications accept advertising from the very firms whose products they review, raising the possibility that they bias product reviews to favor advertisers…Although the average Wine Spectator ratings earned by advertisers and non-advertisers are similar, I find that advertisers earn just less than one point higher Wine Spectator ratings than non-advertisers when I use Wine Advocate ratings to adjust for differences in quality.”
These are wine ratings, not the restaurant Awards of Excellence, which I’ve written about in the past; the applicants for those awards are advertisers by definition (having submitted a $250 fee to be considered).
Reuter later retreats to a statement that he “finds little consistent evidence of bias…at worst, the tests for biased ratings suggest that Wine Spectator rates wines from advertisers almost one point higher than wines from non-advertisers. However, selective retastings can explain at most half of this bias and then only within the set of U.S. wines rated by both Wine Spectator and Wine Advocate. Given Wine Spectator’s claim that it rates wines blind, the remaining difference in ratings may simply reflect consistent differences in how the two publications rate quality, which leads to predictable differences in advertising. This interpretation is consistent with the fact that tests for biased awards provide no additional evidence of bias. Therefore, despite the fact that Wine Spectator is dependent on advertising revenue, the long-run value of producing credible reviews appears to minimize bias.”
I think this conclusion is softer than it needs to be. Even if selective retastings explain only half of the one-point bias, that's still pretty damning; it means that if you advertise in Wine Spectator, you might well get the benefit of a selective retasting that gets you, on average, an additional half-point. Translation: advertising influences ratings.
With respect to the other half-point, if there are indeed “consistent differences in how the two publications rate quality, which leads to predictable differences in advertising,” then you should try leafing through a copy of Wine Spectator and seeing if you’d trust critics who favor the types of wines that tend to advertise in the magazine. I think the roster of advertisers speaks for itself.
The more important issue, perhaps—especially if you’re a small wine producer—is how difficult it is to get magazines like Wine Spectator to even review your wines at all. And this is where, anecdotally, bias might play an even larger role. “Unsolicited samples,” states the Wine Spectator website, “may not be tasted.” Advertise in the magazine, and that problem seems to go away.
And then there’s the matter of the selection of a wine (Columbia Crest Cabernet Sauvignon Reserve) from a Wine Spectator advertiser (Chateau Ste. Michelle) as this year’s Wine Spectator wine of the year.
Although proving bias in any individual case is a complicated, difficult task, the obvious conclusion of all such research is the simplest:
We should be skeptical of criticism whose publication is financially supported by the producers of the products being criticized.
Wine critics should not accept advertisements from wineries.