SkinSmart: A Recommendation System for Skincare Products

Yvonne Lau
Posted on Mar 8, 2017

1. Motivation

As a skincare products enthusiast, I often found myself spending quite some time sifting through reviews to find the ideal product for my needs. While most skincare websites prominently display average ratings and reviews for products, the criteria available for searching are not very robust. For instance, a search for an "orange-scented serum that is both cheap and moisturizing" can mean hours of tedious reading through uninformative reviews before a match is found.

To solve this issue, I decided to take advantage of the vast amount of text-based review data and NLP (Natural Language Processing) techniques to build a basic recommendation system for skincare products. My recommendation system uses TF-IDF to process the reviews' text and recommends the five products with the highest cosine similarity scores. The dataset for this project was obtained from a skincare review website, specifically the section pertaining to reviews for face products.

The dataset for this project was collected using Scrapy in Python. Data processing and the Shiny app were written in R. You can visit my application here; all code is available on GitHub.

2. Data Collection

The hierarchy of the review website is summarized in the image below. To build a recommendation system based on review content from different products, I extracted product and review information by crawling two levels down.


About 50,000 rows of data covering 6,000 skincare products were scraped. The graph below highlights the information I collected for each product:


Below is a snapshot of the scraping "spider" Python code used to collect the information described above.

3. NLP + Recommendation System


My recommendation system's algorithm works as follows:

  1. Read the user's input (product category and tags of interest).
  2. Compute and sort the cosine similarity between each product and the user's tags of interest.
  3. Return the top 5 skincare products with the highest cosine similarity scores.


To create the tags of interest (the "query"), text-based reviews were parsed in R by computing a TF-IDF measure for each word within a review. TF-IDF is an NLP technique whose name stands for "Term Frequency - Inverse Document Frequency". In the context of this recommendation system, it measures how important a word is to a given skincare product relative to the whole collection of skincare products. Below, you will find a more detailed explanation along with the code implementation in R.


A.1 TF (Term Frequency)

Term Frequency (TF) measures the number of times a word occurs within the collection of reviews for a particular product. For instance, if a review says "Good serum", then the term frequency of "good" is 1 and the term frequency of "serum" is 1. For my recommendation system, I used the normalized version of TF, where each word's raw count is divided by the total number of words within the product's review corpus.
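The original computation was done in R; as an illustration, the normalized TF for one product's review corpus can be sketched in Python (the function and variable names are my own):

```python
from collections import Counter


def normalized_tf(review_words):
    """Normalized term frequency for one product's review corpus:
    each word's count divided by the total number of words."""
    counts = Counter(review_words)
    total = len(review_words)
    return {word: count / total for word, count in counts.items()}


# "good" occurs in 2 of 3 words, "serum" in 1 of 3
tf = normalized_tf(["good", "serum", "good"])
```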

A.2 IDF (Inverse Document Frequency)

The main purpose of searching for skincare products with tags of interest is to find relevant products matching the query. In the previous step (TF), all terms are considered equally important. This leads to a fundamental problem: terms that occur very frequently (e.g., "love", "skin") have little power to discriminate between products, yet are given too much importance under TF alone. We need a way to weigh down the effect of terms that occur frequently across different products, and weigh up the effect of rarer terms. That is where IDF comes into play.

The Inverse Document Frequency (IDF) is a logarithmic measure of how much information a word provides, that is, whether the term is common or rare across different skincare products' reviews. Note that as df approaches N (i.e., as a word is mentioned across more and more products), the argument of the logarithm approaches 1, and the overall IDF approaches zero.
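The formula referenced above is the standard IDF definition, where N is the total number of products and df is the number of products whose reviews mention the term t:

```latex
\mathrm{idf}(t) = \ln\left(\frac{N}{\mathrm{df}_t}\right)
```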


When TF and IDF are multiplied, we obtain TF-IDF, a composite weight given to each word in each product's corpus of reviews. The full R code used to compute TF-IDF can be found here.
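The linked R code is not reproduced here; a minimal sketch of the same TF-IDF computation in Python (my own function and variable names) might look like:

```python
import math
from collections import Counter


def tf_idf(product_reviews):
    """product_reviews: dict mapping product name -> list of words from
    all of that product's reviews. Returns product -> {word: tf-idf}."""
    n_products = len(product_reviews)

    # Document frequency: in how many products' reviews each word appears
    df = Counter()
    for words in product_reviews.values():
        df.update(set(words))

    weights = {}
    for product, words in product_reviews.items():
        counts = Counter(words)
        total = len(words)
        weights[product] = {
            word: (count / total) * math.log(n_products / df[word])
            for word, count in counts.items()
        }
    return weights
```

A word like "skin" that appears in every product's reviews gets an IDF of log(1) = 0, so its TF-IDF weight vanishes, while a rarer word like "orange" keeps a positive weight.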

B. Cosine Similarity

With the TF-IDF measurements in place, products are recommended according to their cosine similarity score with the query. Each product and the query of tags is viewed as a vector in an N-dimensional vector space, where each term is its own axis. Using the formula below, the recommendation system computes the cosine of the angle between a product vector and the query vector. The closer the score is to 1, the more similar the two vectors (products) are.
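The formula in question is the standard cosine similarity between a product vector p and a query vector q:

```latex
\cos(\theta) = \frac{\mathbf{p} \cdot \mathbf{q}}{\lVert \mathbf{p} \rVert \, \lVert \mathbf{q} \rVert}
             = \frac{\sum_i p_i q_i}{\sqrt{\sum_i p_i^2}\,\sqrt{\sum_i q_i^2}}
```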


The code below shows the implementation of the cosine similarity computation and recommendation, written in R.
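The R implementation is not reproduced here; assuming TF-IDF vectors stored as sparse word-to-weight dictionaries, an equivalent sketch in Python (my own function names) is:

```python
import math


def cosine_similarity(vec_a, vec_b):
    """Cosine similarity between two sparse vectors (dicts word -> weight)."""
    dot = sum(w * vec_b.get(term, 0.0) for term, w in vec_a.items())
    norm_a = math.sqrt(sum(w * w for w in vec_a.values()))
    norm_b = math.sqrt(sum(w * w for w in vec_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)


def recommend(products, query, top_n=5):
    """products: dict product -> tf-idf vector; query: tf-idf vector built
    from the user's tags. Returns the top_n (product, score) pairs."""
    scored = ((name, cosine_similarity(vec, query)) for name, vec in products.items())
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]
```

A product whose reviews share no terms with the query scores 0, while a product whose vector points in the same direction as the query scores 1 regardless of magnitude.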

Below is a sample output for a query in my Shiny App which searches for an "Anti-Aging Product" with the tags "orange", "sensitive", "skin". The top 5 products with the highest cosine similarity scores are returned.


4. Further Improvements

There are several improvements that would make this recommendation system more sophisticated, such as:

  • Processing misspellings so that words like "clean" and "cleaan" can be counted as the same word
  • Performing sentiment analysis to distinguish positive from negative reviews
  • Expanding the available dataset by scraping other skincare review websites
  • Allowing users to assign weights to tags

About Author

Yvonne Lau

Yvonne Lau is a recent Yale University graduate with a B.A. degree in Economics and Mathematics. Hailing from Rio de Janeiro, Brazil, she became interested in data science after serving as a Data Analyst for a nonprofit organization,...
