Expert Data Assessment of Wine Quality
Introduction
For this project, I explore the relationship between wine quality ratings, prices, and attributes. I use several sources of information, including data from two wine competitions as well as a large database of tasting notes. The first competition dataset comes from the California Commercial Wine Competition, in which over 2,000 wines are assessed by a panel of judges. Wines are reviewed within specific regions and wine style categories, and the tastings result in awards ranging from Bronze to Double Gold, with additional special awards granted to wines that are the best within specific classes.
Data
To gather this data, I used the Beautiful Soup package within Python. This yielded a dataset of approximately 6,000 wines across eleven regions and three years. With this dataset in hand, one can start to tackle a simple question: what do judges assess as quality in wine? Initially, I explore the correlation of judge scores with wine prices. If wine quality is more than merely an idiosyncratic force, one would expect judge ratings to be positively correlated with market prices. Interestingly, this is not the case here.
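Before turning to the correlations, here is a minimal sketch of the scraping step. The URL and the HTML layout (a simple results table) are hypothetical placeholders, not the competition site's actual markup:

```python
import requests
import pandas as pd
from bs4 import BeautifulSoup

URL = "https://example.com/competition/results"  # hypothetical placeholder URL

response = requests.get(URL)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")

# Assume each result is a <tr> with wine name, category, and award cells.
rows = []
for tr in soup.select("table.results tr")[1:]:  # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if len(cells) >= 3:
        rows.append({"wine": cells[0], "category": cells[1], "award": cells[2]})

wines = pd.DataFrame(rows)
print(wines.head())
```

In practice, the pages for each region and year would be looped over and concatenated into a single DataFrame.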
Correlations
The scatter plot below illustrates this point. It plots the judge score on the y-axis and market price on the x-axis. There is no obvious visual relationship between the two, with a correlation between these two variables of approximately 0.04.
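A few lines of pandas and matplotlib reproduce the idea; the column names and the tiny stand-in data below are assumptions for illustration, not the scraped dataset itself:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Tiny synthetic stand-in for the scraped data; real column names are assumptions.
wines = pd.DataFrame({
    "score": [88, 91, 85, 94, 90],
    "price": [15.0, 40.0, 22.0, 18.0, 55.0],
})

corr = wines["score"].corr(wines["price"])  # Pearson correlation

plt.scatter(wines["price"], wines["score"], alpha=0.5)
plt.xlabel("Market price ($)")
plt.ylabel("Judge score")
plt.title(f"Judge score vs. price (r = {corr:.2f})")
plt.show()
```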
The lack of correlation between rating and price is interesting and could arise for a number of reasons. First, the judges may assess dimensions of quality that the market does not price. At some level, this would be the ideal scenario, since the purpose of wine competitions is (presumably) to highlight something about certain wines that individuals are unaware of. Alternatively, what is assessed may be merely idiosyncratic: the judges simply prefer certain wines more than the market generally does.
An interesting question is whether ratings correlate with prices in other competitions and reviews. To explore this possibility, I also examine data on wine reviews from the San Francisco Chronicle’s Wine Competition. This competition generally reviews over 5,000 wines per year from different states, with wines coming predominantly from California.
Ratings vs Price
Using data from 2014 to 2018, I also find a lack of relationship between ratings and prices. The figure below illustrates this finding. It presents box plots showing the distribution of wine prices (in natural log) for different award categories ranging from Bronze to Double Gold. Each panel captures this relationship for a given year.
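Faceted box plots of this kind are straightforward to sketch with seaborn; the column names and the small stand-in data below are assumptions:

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Tiny synthetic stand-in; `price`, `award`, and `year` columns are assumptions.
sf = pd.DataFrame({
    "price": [12.0, 30.0, 45.0, 80.0, 14.0, 35.0, 50.0, 90.0],
    "award": ["Bronze", "Silver", "Gold", "Double Gold"] * 2,
    "year":  [2014] * 4 + [2015] * 4,
})
sf["log_price"] = np.log(sf["price"])

# One panel per year, awards ordered from Bronze to Double Gold.
award_order = ["Bronze", "Silver", "Gold", "Double Gold"]
g = sns.catplot(data=sf, x="award", y="log_price", col="year",
                kind="box", order=award_order)
g.set_axis_labels("Award", "Price (natural log)")
plt.show()
```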
Interestingly, while the lack of correlation between rating and price is common to these two data sources, it contrasts with the Wine Enthusiast data, in which text descriptions and quality ratings are available for over 100,000 wines. In that data, there is a pronounced positive relationship between rating and price. The figure below illustrates this relationship.
While some wine experts clearly prefer wines that the market also prefers, what drives the ratings of the judges from the California Wine Competition? In other words, if judge ratings are uncorrelated with price, what do they assess in a wine?
One possibility is that the tastes of the California Wine Competition judges are simply idiosyncratic: they like certain wines that the market prefers relatively less. I explore this question by merging the California Wine Competition data with the wine descriptions from Wine Enthusiast. From this merged dataset, I use textual analysis to break the wine reviews down into individual word components and then further categorize the words used to describe different aspects of a wine’s flavor profile.
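The sketch below illustrates this merge-and-tokenize step. The join key and the small flavor lexicon are purely illustrative assumptions; matching real wine names across sources likely requires fuzzier logic:

```python
import re
import pandas as pd

# Synthetic stand-ins for the two datasets; the join key (`wine`) is an assumption.
ca_competition = pd.DataFrame({"wine": ["A", "B"], "award": ["Gold", "Bronze"]})
wine_enthusiast = pd.DataFrame({
    "wine": ["A", "B"],
    "description": ["ripe cherry and vanilla oak", "citrus with a mineral finish"],
})
merged = ca_competition.merge(wine_enthusiast, on="wine", how="inner")

# Hypothetical flavor lexicon grouping descriptor words into profile categories.
FLAVOR_LEXICON = {
    "fruit": {"cherry", "plum", "citrus", "apple", "berry"},
    "oak":   {"vanilla", "toast", "smoke", "cedar"},
    "earth": {"mineral", "leather", "mushroom", "tobacco"},
}

def flavor_counts(description: str) -> dict:
    """Count how many words in a review fall into each flavor category."""
    words = set(re.findall(r"[a-z]+", description.lower()))
    return {cat: len(words & vocab) for cat, vocab in FLAVOR_LEXICON.items()}

profiles = merged["description"].apply(flavor_counts).apply(pd.Series)
print(profiles)
```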
Future
My Shiny app, located here, also allows one to explore different wine flavor profiles, using word clouds to visualize the descriptions for specific wine-producing regions and flavor attributes. Specifically, one can compare the words used to describe high-quality Napa wines (e.g., Gold award winners) to relatively “lower-quality” Napa wines (e.g., Bronze award winners).
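The app itself is built in Shiny, but a rough Python equivalent of one view, using the wordcloud package and assumed column names, might look like this:

```python
import matplotlib.pyplot as plt
import pandas as pd
from wordcloud import WordCloud

# Synthetic stand-in; `region`, `award`, and `description` columns are assumptions.
merged = pd.DataFrame({
    "region": ["Napa"] * 4,
    "award": ["Gold", "Gold", "Bronze", "Bronze"],
    "description": ["ripe cherry and silky tannins", "dark plum with vanilla oak",
                    "simple citrus notes", "light berry with a short finish"],
})

def region_award_cloud(df: pd.DataFrame, region: str, award: str) -> WordCloud:
    """Build a word cloud from all descriptions for one region/award pair."""
    text = " ".join(df.loc[(df["region"] == region) & (df["award"] == award),
                           "description"])
    return WordCloud(width=600, height=400, background_color="white").generate(text)

# Side-by-side comparison of Gold vs. Bronze Napa descriptions.
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
for ax, award in zip(axes, ["Gold", "Bronze"]):
    ax.imshow(region_award_cloud(merged, "Napa", award))
    ax.set_title(f"Napa: {award} awards")
    ax.axis("off")
plt.show()
```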
Surprisingly, there are no large visible differences in descriptions across categories. Two hypotheses for this result come to mind. First, aggregating different wine styles may mask relevant heterogeneity that would otherwise be reflected across award categories. Second, two judges may simply prefer different wines because they taste the same wine very differently. If the latter is true, then the text descriptions provided by the Wine Enthusiast judges are not accurate representations of the unknown descriptions that would have been given by the California Competition judges.