Looking for a Good Wine?

Posted on Feb 3, 2018

The skills the author demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

The Wine App

Link:     https://adodd202.shinyapps.io/wine-reviews/

1.0   Concept

My girlfriend drinks wine. She enjoys it, and she understands the difference between a Cabernet Sauvignon and a Pinot Noir. She knows good wine from bad wine. I do not know these things.

But I can make an app that does. That was my goal for this project: to create an easy user interface for selecting wines based on how good they are and on their various traits. For example, do I want a Riesling that is under $50? I can find it. What if I don't know much about wine varieties, but I know I want a good one, it should be sweet, and I like raspberries? I can find that too.

2.0   Initial Solution

   2.1   Data

To create an app that would be useful for selecting wine, I first needed a dataset. I found one with approximately 120,000 wine titles on Kaggle. You can find it here:


The dataset had a number of useful properties for each wine title, such as description, variety, taster name, taster Twitter handle, country, province, region, price, and rating. This made it a great dataset due not only to its size but also to its variety of information, both numeric and text.

   2.2  Filtering and Graphing

Filtering and graphing the data was a fairly straightforward process, though dealing with R Shiny could sometimes be troublesome. My goal here was to make sure I had a solid understanding of Shiny fundamentals.

With the user interface filters implemented, a user could pick a variety, set a price cutoff, and see the best results in graph or list form. One interesting feature I fiddled with but never finished, due to time, was adding a hover output to the ggplot: whenever the cursor came within a distance threshold of a data point, the app would find the closest result on the graph, look it up in the dataset, and display it below the graph.

However, because the plot's scales changed with the data, the tool was often finicky. I solved this by scaling the distance threshold to the size of the window.
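The app itself is built in R Shiny, so the following is just a hypothetical Python sketch of the nearest-point-within-threshold idea described above (function and variable names are my own, not from the app):

```python
import math

def nearest_within(points, cursor, threshold):
    """Return the index of the closest point within `threshold` of the
    cursor position, or None if no point is close enough."""
    best_i, best_d = None, float("inf")
    for i, (x, y) in enumerate(points):
        d = math.hypot(x - cursor[0], y - cursor[1])
        if d < best_d:
            best_i, best_d = i, d
    return best_i if best_d <= threshold else None

points = [(1.0, 1.0), (5.0, 5.0)]
print(nearest_within(points, (1.2, 1.1), threshold=0.5))  # 0 (close hit)
print(nearest_within(points, (3.0, 3.0), threshold=0.5))  # None (too far)
```

Scaling `threshold` with the current window size keeps the hit radius feeling consistent as the plot resizes.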

   2.3   Shiny Globe and Geocoding

One of my side goals for this project was to create a nice map visualization of the wine data, so the user could get a feel for which wines come from where in the world. I could have used a more standard map package, but in my search for options I came across a package called Shiny Globe, which displays what is essentially a normalized bar graph on a 3D rendition of the Earth. I liked the visual impact of the colored bars coming out of different areas. So now, once the filtering was performed, the wine counts would be displayed on the globe.

However, before using the Shiny Globe package, I needed latitude and longitude for all of the wines. To address this, I turned to Google geocoding. But there were 120,000 wine observations, and Google's geocoding API only supports 2,500 queries per day. So I created another column in my dataset with "country", "province", and "region" concatenated, and took only the unique addresses. That left only about 1,900 rows, so I could run the whole set through a Google geocoding script.
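The deduplication step can be illustrated with a toy sketch. This is plain Python with made-up rows, not the R code from the app; the real work used the full Kaggle columns:

```python
# Toy stand-in for the country/province/region columns of the wine data.
rows = [
    ("US", "California", "Napa Valley"),
    ("US", "California", "Napa Valley"),
    ("France", "Bordeaux", "Margaux"),
    ("US", "California", "Sonoma"),
]

# Concatenate the three location columns into one address string,
# then keep only the unique addresses to stay under the API quota.
addresses = [", ".join(r) for r in rows]
unique_addresses = sorted(set(addresses))
print(len(unique_addresses))  # 3 unique addresses instead of 4 rows
```

On the real dataset, the same idea collapses 120,000 rows to roughly 1,900 unique addresses, well under the daily query limit.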

Now with latitudes, longitudes, and wine counts, I could simply scale the wine counts and graph them on the Globe (see below).

3.0   Search Function

   3.1   Goal

Now that a filtering approach had been developed, I wanted to create a more streamlined user interface that would perform what is essentially filtering across many options, including options that may only appear in the description. Sure, we can filter by country, taster, province, variety, and so on. But how would we find white wines? That is not a category name. How would we find sweet red wines, or bitter wines with chocolate flavors? The goal was to add the words found in the description to the search algorithm.

   3.2   Implementation

To build this search function, we first want to build a lexicon of the most commonly found words in the dataset, in this case a set of 1,000 words. First we create a new column with all the useful columns pasted together: description, variety, country, province, etc. Once we remove stopwords ("it", "I", ".", etc.) and words that give no insight (e.g. "wine"), and perform stemming (removing suffixes), we see words like "Napa", "Valley", "Sweet", "Cabernet", "Dessert", "Smooth", "Velvety", etc.

This was performed with the tm package in R, which provides text mining tools for NLP (see its document-term matrix function). Next we want to decide on a word vectorization approach for our 120,000-wine dataset. Do we want to count how many times each word occurs in a description, or just record whether it appears at all? Do we need to normalize these vectors?
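As a rough sketch of the lexicon-building step, here is some illustrative Python (not the tm code; the stopword list is tiny and the stemmer is a crude suffix-stripper standing in for a real one):

```python
import re
from collections import Counter

# Minimal stopword list for illustration; "wine" is dropped as uninformative.
STOPWORDS = {"it", "i", "a", "the", "and", "of", "with", "is", "this", "wine"}

def stem(word):
    # Crude suffix stripping as a stand-in for a real stemming algorithm.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def build_lexicon(descriptions, size=1000):
    """Count stemmed, non-stopword tokens and keep the `size` most common."""
    counts = Counter()
    for text in descriptions:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token not in STOPWORDS:
                counts[stem(token)] += 1
    return [word for word, _ in counts.most_common(size)]

docs = [
    "A sweet dessert wine with velvety raspberry notes.",
    "Smooth and sweet, with raspberry and chocolate flavors.",
]
print(build_lexicon(docs, size=5))
```

In the real dataset the same process, run over the pasted-together text column, surfaces the 1,000-word lexicon.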

I used a binary approach: each lexicon word either appeared in the description or did not, recorded as a one or a zero in the vector. Once filled, the vector was normalized so that longer descriptions would not be unfairly favored. These vectors were then stacked into a matrix, one row per wine.
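A minimal sketch of the binary vectorization with normalization, in illustrative Python (the lexicon and description here are made up):

```python
import math

def vectorize(text, lexicon):
    """Binary presence vector over the lexicon, L2-normalized."""
    tokens = set(text.lower().split())
    vec = [1.0 if word in tokens else 0.0 for word in lexicon]
    # Normalize so longer descriptions are not unfairly favored.
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else vec

lexicon = ["sweet", "raspberry", "chocolate", "dry"]
# Two matches, so each nonzero entry becomes 1/sqrt(2).
print(vectorize("a sweet raspberry dessert", lexicon))
```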

   3.3   Problems

Soon I had a massive matrix holding nearly a gigabyte of data! I had a feeling this would be difficult to work with, for Shiny server memory reasons, for matrix operations, and so on. Even on my own laptop, things were running slowly. With Python and a GPU, this computational issue might be easily forgotten. But in R, on a server without easy access to computationally efficient libraries, I decided to pare down the data.

My first change was to experiment with taking a small subset of wines, only 3,000 of the original 120,000. I thought a random subset of this size would represent a good variety of wines while decreasing memory requirements substantially. I also experimented with shrinking the lexicon to 300 words, but this led to dropping important, commonly used words. After looking into a variety of options, I opted for a sparse matrix representation using the Matrix package in R. This brought my initial ~0.6 GB of data down to ~1 MB, a volume that would be easy to work with, even on the Shiny server.
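The idea behind the sparse representation can be shown in a few lines of illustrative Python (the app uses R's Matrix package, not this code): store only the coordinates of the nonzero entries instead of every cell.

```python
def to_sparse(matrix):
    """Convert a dense 0/1 matrix (list of lists) to a list of
    (row, col) coordinates of its nonzero entries."""
    return [(i, j) for i, row in enumerate(matrix)
            for j, v in enumerate(row) if v != 0]

dense = [
    [0, 1, 0, 0],
    [0, 0, 0, 0],
    [1, 0, 0, 1],
]
sparse = to_sparse(dense)
print(sparse)                                 # [(0, 1), (2, 0), (2, 3)]
print(len(sparse), "stored values instead of", 3 * 4)
```

Since most lexicon words never appear in a given short description, the wine matrix is overwhelmingly zeros, which is why the savings are so large.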

The next step in the development of a custom search function was to convert the user input and compare it to the 3,000 wine vectors already stored in the matrix. Converting the user input to a word vector followed the same zeros-and-ones vectorization, comparing it to the known lexicon. To find the closest matches, I computed the cosine similarity between the user input and each of the 3,000 vectors and took the 100 highest-scoring results.
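The matching step can be sketched like this, again in illustrative Python with toy vectors (the real app scores the query against the 3,000 stored wine vectors and keeps the top 100):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def top_matches(query_vec, wine_vecs, k=100):
    """Indices of the k wines most similar to the query vector."""
    scores = [(cosine(query_vec, v), i) for i, v in enumerate(wine_vecs)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

# Toy lexicon-order vectors: [sweet, raspberry, chocolate, dry]
wine_matrix = [
    [1, 1, 0, 0],  # sweet raspberry
    [0, 0, 1, 1],  # dry chocolate
    [1, 0, 1, 0],  # sweet chocolate
]
print(top_matches([1, 1, 0, 0], wine_matrix, k=2))  # [0, 2]
```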

All that was left was some price filtering and ordering the dataframe from highest rating at the top to lowest at the bottom (see below).

4.0   Conclusion

Overall, the goal of this project was to present the user with a novel way to view and choose wines based on filters and keywords. This was achieved through a set of filtering options, ggplot and Shiny Globe for visualization, and a lexicon-based cosine-similarity search, all combined in an R Shiny project.

Thanks for reading!

About Author

Andrew Dodd

I am a data scientist at NYCDSA with a mechanical engineering background (BS, Masters). My masters focused on space and robotics applications; this is where I developed my interest in machine learning through courses that involved path planning...
