SkinSmart: A Recommendation System for Skincare Products

Posted on Mar 8, 2017

1. Motivation

As a skincare products enthusiast, I often found myself spending quite some time sifting through reviews to find the ideal skincare product for my needs. While most skincare websites prominently display average ratings and reviews for products, the overall set of criteria for searching for skincare products is not very robust. For instance, a search for an "orange-scented serum that is both cheap and moisturizing" can easily mean hours of tedious reading through non-informative reviews before a match is found.

To solve this issue, I decided to take advantage of the vast amount of text-based review data and NLP (Natural Language Processing) techniques to build a basic recommendation system for skincare products. My recommendation system uses TF-IDF to process the reviews' text data and recommends the products with the top 5 highest cosine similarity scores. The website of choice for obtaining the dataset was totalbeauty.com, more specifically the section containing reviews for Face Products.

The dataset for this project was collected using Scrapy in Python. Data processing and the Shiny app were written in R. You can visit my application here. All code is available on GitHub.

2. Data Collection

The hierarchy of totalbeauty.com is summarized in the image below. To build a recommendation system based on review contents from different products, I extracted product and review information by crawling two levels down.

[Figure: hierarchy of pages on totalbeauty.com crawled by the spider]

About 50,000 rows of data from 6,000 skincare products were scraped from totalbeauty.com. The graph below highlights the information I collected for each product:

[Figure: information collected for each product and its reviews]

The scraping "spider" used to collect the information described above was written in Python with Scrapy; the full spider code is available on GitHub.

3. NLP + Recommendation System

[Figure: overview of the recommendation system workflow]

My recommendation system's algorithm works as follows:

  1. Read the user's input (product category and tags of interest).
  2. Compute and sort the cosine similarity between each product and the user's tags of interest.
  3. Return the top 5 skincare products with the highest cosine similarity scores.

A. TF-IDF

To create the tags of interest (the "query"), text-based reviews were parsed in R by computing a TF-IDF measure for each word within a review. TF-IDF is an NLP technique that stands for "Term Frequency - Inverse Document Frequency". In the context of this recommendation system, it essentially measures how important a word is to a given skincare product compared to the full collection of skincare products. Below is a more detailed explanation of each component, along with illustrative R code.

$$\mathrm{TF}(t, d) = \frac{f_{t,d}}{\sum_{t'} f_{t',d}} \qquad \mathrm{IDF}(t) = \log\frac{N}{\mathrm{df}(t)} \qquad \mathrm{TFIDF}(t, d) = \mathrm{TF}(t, d) \times \mathrm{IDF}(t)$$

where $f_{t,d}$ is the number of times term $t$ occurs in product $d$'s review corpus, $N$ is the total number of products, and $\mathrm{df}(t)$ is the number of products whose reviews contain $t$.

A.1 TF (Term Frequency)

Term Frequency (TF) measures the number of times a word occurs within the collection of reviews for a particular product. For instance, if a review says "Good serum", then the term frequency for "good" is 1, and the term frequency for "serum" is 1. For my recommendation system, I used the normalized version of TF, where each word's term frequency is divided by the total number of words within a single review corpus.
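As a toy illustration (a minimal R sketch, not the app's actual code), the normalized term frequency of a single short review can be computed directly from its word counts:

```r
# Toy example: normalized TF = word count / total number of words in the review
review <- "good serum very good for dry skin"
words  <- strsplit(tolower(review), "\\s+")[[1]]

tf <- table(words) / length(words)
tf
# "good" occurs 2 of 7 times (TF ~ 0.29); every other word occurs once (TF ~ 0.14)
```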

A.2 IDF (Inverse Document Frequency)

The main purpose of searching for skincare products with tags of interest is to find relevant products matching the query. In the previous step (TF), all terms are considered equally important. This approach leads to a fundamental problem: certain terms that occur too frequently (e.g., "love", "skin") have little sway in determining the relevance of a product, but are given too much importance under TF. We need a way to weigh down the effects of terms that occur too frequently across different products, and weigh up the effects of less frequently occurring terms. That is where IDF comes into play.

The Inverse Document Frequency (IDF) is a logarithmic measure of how much information a word provides, that is, whether the term is common or rare across different skincare products' reviews. Note that as df approaches N (i.e., a word is mentioned by more and more products), the argument of the logarithm approaches 1, and the overall IDF gets closer to zero.
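The sketch below (illustrative only, using made-up mini-reviews) shows this effect: a word like "skin" that appears in every product's reviews gets an IDF of zero, while a rarer word like "orange" gets a higher weight.

```r
# One combined review document per product (toy data)
docs <- c("good serum for dry skin",
          "orange scent gentle on sensitive skin",
          "skin feels clean after this cleanser")

tokens <- strsplit(tolower(docs), "\\s+")
N      <- length(docs)

# Document frequency: in how many products' reviews does each term appear?
terms <- unique(unlist(tokens))
df    <- sapply(terms, function(term) sum(sapply(tokens, function(d) term %in% d)))

# IDF = log(N / df): 0 for ubiquitous words, larger for rare ones
idf <- log(N / df)
idf["skin"]    # log(3/3) = 0
idf["orange"]  # log(3/1) ~ 1.1
```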

A.3 TF-IDF

When TF and IDF are multiplied, we obtain TF-IDF, a composite weight given to each word in each product's corpus of reviews. The full R code used to compute TF-IDF can be found here.
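Since the full implementation is not reproduced in this post, here is a simplified base-R sketch of the same idea on a toy corpus (one concatenated review document per product); the actual app builds this matrix from the scraped reviews:

```r
# Toy corpus: all reviews for a product concatenated into one document
reviews <- c(serum_a    = "good serum very moisturizing love this serum",
             cream_b    = "orange scent gentle on sensitive skin",
             cleanser_c = "good cleanser good value leaves skin clean")

tokens <- strsplit(tolower(reviews), "\\s+")
vocab  <- sort(unique(unlist(tokens)))

# Normalized term frequency: word counts divided by document length
tf <- t(sapply(tokens, function(words)
  as.numeric(table(factor(words, levels = vocab))) / length(words)))
colnames(tf) <- vocab

# Inverse document frequency over the N products
idf <- log(length(reviews) / colSums(tf > 0))

# TF-IDF: one row per product, one column per term
tfidf <- sweep(tf, 2, idf, `*`)
round(tfidf, 3)
```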

B. Cosine Similarity

With the TF-IDF measurements in place, products are recommended according to their cosine similarity score with the query. Each product and the query of tags are viewed as vectors in a vector space with one axis per term. Using the formula below, the recommendation system computes the cosine of the angle formed between a product vector and the query vector. The closer the score is to 1, the more similar the two vectors (and thus the product and the query) are.

$$\mathrm{similarity}(\mathbf{q}, \mathbf{p}) = \cos\theta = \frac{\mathbf{q} \cdot \mathbf{p}}{\lVert \mathbf{q} \rVert \, \lVert \mathbf{p} \rVert} = \frac{\sum_{i=1}^{n} q_i p_i}{\sqrt{\sum_{i=1}^{n} q_i^2} \, \sqrt{\sum_{i=1}^{n} p_i^2}}$$

where $\mathbf{q}$ is the TF-IDF vector of the query tags and $\mathbf{p}$ is the TF-IDF vector of a product.

The cosine similarity computation and the recommendation step are implemented in R; the full code is available on GitHub.
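Since the app's own code is not embedded here, the following sketch (continuing from the toy TF-IDF matrix above; names are illustrative, not the app's actual functions) shows how the query of tags can be turned into a vector and scored against every product:

```r
# Cosine similarity between a query vector and each row of a TF-IDF matrix
cosine_sim <- function(query, mat) {
  as.numeric(mat %*% query) / (sqrt(rowSums(mat^2)) * sqrt(sum(query^2)))
}

# Represent the user's tags as a 0/1 vector over the same vocabulary
tags  <- c("orange", "sensitive", "skin")
query <- as.numeric(vocab %in% tags)

# Score every product and keep the five most similar ones
scores <- cosine_sim(query, tfidf)
top5   <- head(order(scores, decreasing = TRUE), 5)
data.frame(product = rownames(tfidf)[top5], score = round(scores[top5], 3))
```

In the real app, the query is scored against every product in the chosen category and the top five rows are displayed in the Shiny interface.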

Below is a sample output for a query in my Shiny app that searches for an "Anti-Aging Product" with the tags "orange", "sensitive", and "skin". The top 5 products with the highest cosine similarity are returned.

[Figure: sample Shiny app output showing the top 5 recommended products for this query]

4. Further Improvements

There are several possible improvements that would make this recommendation system more sophisticated, such as:

  • Processing misspellings so that words like "clean" and "cleaan" are counted as the same word
  • Performing sentiment analysis to distinguish positive from negative reviews
  • Expanding the available dataset by scraping other skincare review websites
  • Allowing users to assign weights to tags

About Author

Yvonne Lau

Yvonne Lau is a recent Yale University graduate with a B.A. degree in Economics and Mathematics. Hailing from Rio de Janeiro, Brazil, she became interested in data science after serving as a Data Analyst for a nonprofit organization,...
