Product Classifier Helping Gather Customer Information

Posted on Jan 15, 2019


The skills we demoed here can be learned through NYC Data Science Academy's Data Science with Machine Learning bootcamp.

Team: Rajesh Arasada, Kent Burgess, Nilesh Patel

Problem Formulation

Our capstone project aimed to help a fashion-industry startup improve its customer wardrobe inventory application. The app gathers information about customers' wardrobe inventories from their emails and catalogs the products into 12 different product categories.

Identifying women's products and correctly classifying them into the different categories is the central challenge. Through this project, we built machine learning models that let the company efficiently and accurately predict the product category.

Data Exploration

Given a new email receipt with information such as brand Id, retailer data, item name, retailer Id, and category Id, we want to assign each product on it to one of the categories below.

Output: Product Category

  • Tops - 110
  • Bottoms - 120
  • Jumpsuits - 130
  • Dresses - 140
  • Outerwear - 150
  • Activewear - 160
  • Beachwear - 170
  • Shoes - 200
  • Bags - 300
  • Accessories - 400
  • Beauty - 500
  • Miscellaneous - 600
  • Kids - 610
  • Mens - 620
  • None - 0

The information we are interested in, such as 'item name', 'brand name', and 'category Id', is stored as free-text strings, which makes this a supervised text classification problem. Predicting the right category from the provided string will help the company best serve its clients.

To tackle this problem, we investigated which supervised methods are best suited to handle text data, multi-class classification, and imbalanced classes. After cleaning the data, engineering features, and balancing the classes, we implemented Naïve Bayes, Multinomial Logistic Regression, Support Vector Machine, and tree-based models.

The following sections walk through our process to optimize our predictions.

Imbalanced Classes

We see that the number of products per category is imbalanced (Figure 1). Kids, men’s, miscellaneous, beauty and non-wardrobe items were least represented. Conventional algorithms are often biased towards the majority class, not taking the data distribution into consideration. In the worst case, minority classes are treated as outliers and ignored.

Figure 1: Distribution of the products in 12 different product categories

To overcome this problem, we undersampled the majority classes, configured our models to account for the imbalance during training, and merged in additional Men's, Kids, and beauty products scraped from Flipkart and Sephora.
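As a sketch of the undersampling step (the function and variable names here are ours, for illustration; the project's actual code is in the linked repo), each over-represented category can be randomly capped at a fixed number of examples:

```python
import random
from collections import defaultdict

def undersample(texts, labels, cap, seed=0):
    """Randomly keep at most `cap` examples per category."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for text, label in zip(texts, labels):
        by_class[label].append(text)
    kept_texts, kept_labels = [], []
    for label, items in by_class.items():
        # Classes at or under the cap are kept whole; larger ones are sampled.
        keep = items if len(items) <= cap else rng.sample(items, cap)
        kept_texts.extend(keep)
        kept_labels.extend([label] * len(keep))
    return kept_texts, kept_labels
```

Many scikit-learn estimators also expose a `class_weight` option, which offers a complementary way to counter imbalance during training rather than before it.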

Feature Extraction

We combined the brand name into the product description, because the brand appeared to be very important for determining the product class: products with similar item names but from different brands can belong to different categories. For example, an item described as a 'legging' belongs to class 120 (bottoms) if purchased from Victoria's Secret, but to class 160 (activewear) if purchased from the sporting goods brand Adidas.
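A minimal sketch of this step (the function name is ours, not the project's): concatenate the brand and item name into a single text field before vectorizing, so that brand/item combinations survive into the features.

```python
def combine_text(brand_name, item_name):
    # Prepending the brand lets brand/item n-grams such as "adidas legging"
    # appear in the bag-of-words representation built downstream.
    return f"{brand_name} {item_name}".strip().lower()
```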

Machine learning algorithms cannot process text directly, so the text must be converted into numeric feature vectors. We represented each product as a bag of words and, using scikit-learn's text preprocessing tools, computed TF-IDF vectors for each product. We tuned:

  • min_df: the minimum number of documents a word must appear in to be kept.
  • ngram_range: we trained our models on unigrams, bigrams, trigrams, four-grams, and five-grams. Bigrams and trigrams performed better than unigrams alone.
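The vectorization step can be sketched as follows; the toy corpus and the particular `min_df`/`ngram_range` values are illustrative, not the final tuned ones:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus standing in for the combined brand + item-name strings.
corpus = [
    "adidas legging tights",
    "victoria secret legging",
    "adidas running shoes",
]

# min_df and ngram_range are the two knobs we tuned.
vectorizer = TfidfVectorizer(min_df=1, ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(corpus)
print(tfidf.shape)  # (number of products, vocabulary size)
```

With `ngram_range=(1, 2)`, the vocabulary includes bigrams such as "adidas legging", which is what lets brand/item combinations drive the classification.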

Machine Learning and Model Evaluation

With the transformed features and their labels, we experimented with four different machine learning models and evaluated their accuracy:

  • Logistic Regression
  • Multinomial Naïve Bayes
  • Support Vector Machine (linear and radial kernels)
  • XGBoost

Logistic Regression and LinearSVC performed best, with accuracies of 79.44% and 77.74% respectively, compared to XGBoost (71.89%) and Naïve Bayes (70.43%).
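As an illustrative sketch (toy data standing in for the company's receipts; hyperparameters untuned), the TF-IDF and classifier stages can be wired together as a scikit-learn pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples; the labels are the category Ids from the list above.
texts = [
    "adidas legging tights", "nike running tights",
    "victoria secret legging", "levis jeans pants",
    "sephora lipstick", "sephora mascara",
]
labels = [160, 160, 120, 120, 500, 500]

# One object that vectorizes and classifies in a single fit/predict call.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
```

LinearSVC drops into the same pipeline in place of LogisticRegression, which is what makes comparing the four models straightforward.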

Below are the confusion matrices from three of our models, showing the discrepancies between predicted and actual labels. The vast majority of predictions fall on the diagonal (predicted label = actual label), where we want them to be. However, there are a number of misclassifications, and it is interesting that these misclassified products belong to the under-represented classes: beauty products (class 500) and None (class 0).
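The matrices themselves come from scikit-learn's `confusion_matrix`; a minimal sketch with made-up actual and predicted category Ids:

```python
from sklearn.metrics import confusion_matrix

# Made-up actual vs. predicted category Ids, for illustration only.
y_true = [120, 160, 500, 0, 120, 500]
y_pred = [120, 160, 0, 0, 120, 500]

# Rows are actual labels, columns are predicted labels (in the order given
# by `labels`); the diagonal holds the correct predictions.
cm = confusion_matrix(y_true, y_pred, labels=[0, 120, 160, 500])
print(cm)
```

Off-diagonal cells pinpoint which classes get confused, e.g. a beauty item (500) predicted as None (0) in this toy example.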

Figure 2: Confusion matrix of Logistic Regression model

Figure 3: Confusion matrix of SVM linear model

Figure 4: Confusion matrix of XGBoost model

Conclusions

We achieved close to 80% accuracy in predicting the product class from the text data. These models can be further improved by refining our text preprocessing, gathering more data for the under-represented classes, and building an industry-specific English stop-word list.

Code for this project can be found here.

Please contact us if you have any suggestions or questions. Thank you.


About Author

Rajesh Arasada

Data scientist and cell biologist with >10 years of bio-medical research experience. Implemented Machine learning (ML) algorithms in R and Python to solve real-world problems.

