Says Who?

Reflections on wine rating systems
Some wines are better than others, but the methods used to measure those quality differences are a source of controversy.
Here’s the problem: There’s no such thing as an objective opinion, no matter how qualified and well-intentioned the taster may be. Wine tasting is inherently subjective, and people may be influenced by factors such as the temperature of the wines, the temperature of the room, the lighting, the comfort of the chairs, whether they had a fight with their spouse the night before, whether they were caught in particularly nasty traffic on the way to the tasting, whether they see a rival or heartthrob in the room, their previous experience with the estate, etc. On top of that, everyone has their own standards, preferences, and prejudices, their own inner yardstick by which to measure excellence or mediocrity. While some experts maintain they can be objective, in my experience this is never true.
Historically, the wine world was much smaller than it is today, so there was no need for a system to evaluate quality. Connoisseurs and collectors agreed that the best wines came from Bordeaux, Burgundy and Champagne, along with a handful of German Rieslings; New World wine was either unknown or ignored. The first attempt at a rating system was probably the famous Bordeaux Classification of 1855. Estates from the Médoc were ranked from First Growth down to Fifth, a total of 61 properties (later revised to 63); the pecking order was based on the market prices of the time. Another 27 crus from Sauternes and Barsac were evaluated according to a simpler scale of Superior First Growth (1), First Growth (11) and Second Growth (15).
Regions other than the Médoc were excluded from the 1855 rankings, except for Château Haut-Brion. In the years that followed, separate classifications were announced for Graves, St.-Émilion, Crus Bourgeois and Crus Artisans; the latter three categories are revised periodically, and the results are hotly debated. Pomerol has never been classified.

In the modern era, the first attempt at a ranking system was developed by researchers at the Department of Viticulture and Enology at UC Davis, headed by Professor Maynard Amerine. Wines were rated on a 20-point scale, with half-point gradations permitted. While the criteria of Amerine’s original version were detailed and complex, the scale was later modified and used by a few noted wine critics, including Jancis Robinson MW. She still employs it today, with the following explanations:
· 20: Truly exceptional
· 19: A humdinger
· 18: A cut above superior
· 17: Superior
· 16: Distinguished
· 15: Average, a perfectly nice drink with no faults but not much excitement
· 14: Deadly dull
· 13: Borderline faulty or unbalanced
· 12: Faulty or unbalanced
The most common system today is the 100-point scale, usually attributed to critic Robert Parker. This system resonated with many Americans, since it reminded them of the way they were graded in school. It’s really a 50-point scale, since no wines under 50 are included. In general, this is what the scores mean:
· 95-100 points: Exceptional, a classic
· 90-94: Outstanding, superior
· 85-89: Very good
· 80-84: Good, solidly made
· Most publications do not list any wine scoring below 80
In 1992, when I founded the Florida Wine Bulletin, I used a scale of A/B/C/D/F, which was again intended to remind readers of how they were graded in school. Pluses and minuses were awarded, and value for money was taken into account (i.e., a wine that deserved a B at $10 might have rated a B+ at $7, or a C at $15). Here’s how the ratings broke down:
· A: An outstanding wine of world class depth and character; worth a splurge, cellar material
· B: Good to very good, worth the money for current drinking or laying down
· C: Fair to average; priced right for current drinking
· D: Poor to below average; a weak example of its type; overpriced
When you examine all three scales, they have one thing in common: none of them provides an understandable definition of what the gaps in quality mean. What’s the real difference between outstanding and exceptional, between superior and distinguished? If there’s a half-point difference between wines on the 20-point scale, or a gap of two or three points on the 100-point scale, how exactly is that spread determined? Again, we have to go back to the individual preferences of the taster. Robert Parker, for example, was known to favor red wines that were extremely concentrated and ripe to the point of voluptuousness, and those fared much better on his palate than more delicate or subtle wines.
My system is obviously the muddiest of the three and contains more unexplained gaps. What makes a B+ better than a B? Your high school English teacher knew, or thought he/she knew, but it was still a subjective opinion.

When it comes to economics, a difference of one or two points on the 100-point scale can be huge. Consumers have been conditioned to view 90 points as the gateway to quality, whereas 88 or 89 tends to elicit a “meh” (unless, of course, the wine is cheap). In many wine shops, the necks of the bottles are festooned with colorful tags proclaiming their scores on the 100-point scale in publications such as the Spectator, Advocate, or Enthusiast, none of which has ever been particularly known for providing unbiased information about wine. Given the amounts of money involved, it’s hard to blame producers who make wine in a style designed to please a particular critic or publication.
Now that Parker has retired, things are more complicated. There are still individual, independent critics with outsized importance—Steve Tanzer, Antonio Galloni and Jeb Dunnuck come to mind, although there are differences among them in the way they perceive wine quality. Here on Substack we have Dave McIntyre, former wine critic for The Washington Post, who has forsaken Big Media to forge his own path.
Any attempt to define the precise difference between an 88-point wine and the cherished 90+ is doomed to failure. Remember, you’re either talking about the opinion of one person, or the consensus of a tasting panel (far less reliable, in my experience). Many consumers simply throw up their hands, find a critic or publication they trust, and follow the recommendations. Conversely, if there’s someone you don’t trust, you can always read him/her/them and do the opposite. It’s great if you have a reliable wine merchant, but those are becoming scarce in the era of chain retailers and beverage superstores. Do your research before you go to the wine shop, and try to avoid last-minute purchases (i.e., on your way home from work).
When I first started reviewing wine, I harbored the illusion that my work would have a ripple effect. Since I was publicly proclaiming that my opinion had value, I hoped I could inspire others to feel the same way. I eventually realized that goal was too optimistic, since many consumers want to be told what to think.
For those who don’t, here’s a solution I’ve been advocating for many years: start your own tasting group. Gather some like-minded friends, set a theme, and meet weekly or biweekly. Everyone brings a bottle, and the wines should be tasted blind. If you have a passion for a particular category (California Cabernet Sauvignon, for example), then in six months or so you’ll have the opportunity to sample most of the interesting releases on the market. You won’t need a critic’s opinion at that point, because you’ll have your own—to which you’re entitled.
