Improving Home Depot Search Relevance
Contributed by Amy (Yujing) Ma, Brett Amdur, and Christopher Redino. They are currently in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program taking place between January 11th and April 1st, 2016. This post is based on their machine learning project (due in the 8th week of the program).
Given only raw text as input, our goal was to predict the relevance of products to search queries on the Home Depot website. Our strategy differed somewhat from those of most other teams in this Kaggle competition: in an attempt to stand out among thousands of competitors, we built a workflow that starts with text cleaning, passes through feature engineering, and ends with model selection and parameter tuning.
Feature Engineering
One interesting aspect of this project was that "feature engineering" here was essentially equivalent to "feature creation." That's because the data set that Home Depot provided contained no actual features that we could use as inputs to a model. Instead, our task was to take the data provided (search queries and product titles/descriptions/attributes) and use that data to derive all the features to use as predictors.
From the very beginning of the feature engineering process, our primary challenge was relatively clear: fix the upper left problem.
Ultimately, the features we fed into our model fell into four categories, shown at left. "Direct Match" features, for example, measure how directly the words in a search query appear in the product title, description, and attributes.
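As a minimal sketch of what a "Direct Match" style feature might look like (function and variable names here are illustrative, not the authors' actual code), one such feature is simply the count of query words that appear in the product title:

```python
def direct_match_count(query, title):
    """Count how many distinct query words appear in the product title.
    A simple illustration of a "direct match" feature (hypothetical code)."""
    title_words = set(title.lower().split())
    return sum(1 for word in query.lower().split() if word in title_words)

# "angle" appears in the title, "bracket" does not
print(direct_match_count("angle bracket", "Simpson Strong-Tie 12-Gauge Angle"))  # 1
```

Analogous counts against the product description and attributes, plus normalized variants (e.g. the fraction of query words matched), round out this family of features.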
The last category of features is probably worth some explanation. Certain features we designed were related only to data in the training set, and were therefore "disconnected" from the test set. For example, we devised a methodology for assigning a "word power" score to words contained in search queries. Specifically, for every word in a training set search term (after the data cleansing performed in the first phase, of course), we looked at the average relevancy score for observations where it appeared. This allowed us to create a dictionary with search word - scores as the key-value pair. We then applied this dictionary to the test set. That is, we applied the word power score for each word in the training set search queries to each word in the test set search queries. We used the sum of these word scores to create a word power score for each search in the test set.
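The "word power" idea can be sketched in a few lines of Python (names and toy data here are illustrative, not the authors' actual code):

```python
from collections import defaultdict

def build_word_power(train_queries, relevance_scores):
    """Average relevance score of the observations each training search word
    appears in -- the word/score dictionary described above."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for query, score in zip(train_queries, relevance_scores):
        for word in set(query.lower().split()):
            totals[word] += score
            counts[word] += 1
    return {word: totals[word] / counts[word] for word in totals}

def word_power_score(query, word_power):
    """Sum of training-set word scores over a (test) query's words;
    words never seen in training contribute nothing."""
    return sum(word_power.get(word, 0.0) for word in query.lower().split())

# Toy training data (made up for illustration)
wp = build_word_power(["deck screws", "wood screws", "led bulb"],
                      [3.0, 2.0, 1.0])
print(word_power_score("deck screws", wp))  # 3.0 + (3.0 + 2.0)/2 = 5.5
```

Applying `word_power_score` to each test-set query then yields the single "word power" feature described above.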
One last point about our approach to feature engineering might be worth noting. We used R's tm package, but not for the tf-idf (term frequency - inverse document frequency) calculations for which it is often used. Instead, we found it to be an efficient tool for performing word lookups for word score calculations. Its document term matrix provided a convenient (and relatively fast) way to identify the words in the search term dictionary that also appeared in product titles. From there, it was a straightforward process to calculate the sum of word scores for each observation.
[slideshare id=59901142&doc=kagglepresentationv3-160322195719&w=650&h=350]
The Python 3 code for the best model is shown below: