Web Scraping Influenster: Find a Popular Hair Care Product for You

Yu-Han Chen
Posted on Aug 30, 2017

Are you someone who likes to try new products? Are you curious about which hair products are popular and trendy? If you're eager to get your hair glossy and to find a suitable shampoo, conditioner, or hair oil, my ‘Shiny (Hair) App’ can help you find what you seek in less time. My code is available on GitHub.


Research Questions

What are popular hair care brands?

What is the user behavior on Influenster.com?

What factors have a critical influence on customer satisfaction?

Is it possible to create a search engine that takes a phrase and returns related products?


Data Collection

To obtain the most up-to-date hair care information, I decided to web scrape Influenster, a product discovery and review platform. It has over 14 million reviews and over 2 million products for users to choose from.

In order to narrow my research scope, I focused on three categories: shampoo, hair conditioner, and hair oil, and collected the top 54 products in each. For the product dataset, I scraped brand name, product name, overall product rating, rank, and reviews. The review dataset includes author name, author location, review content, rating score, and hair profile.



Top Brands Graph

In the pie chart, the “other” category represents brands that have only one or two popular products. Judging from the chart, most of the popular products belong to major brands.


Rating Map

To examine user behavior on Influenster across the United States, I made two maps to see whether there are any interesting patterns linked to location. Since I scraped the top 54 products in each category, the overall rating score is high across the country, so regional differences are hard to see.


Reviews Map

However, if we look at the number of hair care product reviews on Influenster.com across the nation, California, Florida, Texas, and New York lead with 4,740, 3,898, 3,787, and 2,818 reviews respectively.


Analysis of Rating and Number of Reviews

There is a negative relationship between rating and number of reviews. For example, Pureology receives the highest score, 4.77 out of 5, but has only 514 reviews. On the other hand, OGX scores 4.4 out of 5, yet it has over 5,167 reviews.


Wordcloud & Comparison Cloud

Since we may be interested in which factors customers care about most and what contributes to their satisfaction with a product, I decided to inspect the most frequently mentioned words in those 77 thousand reviews. As a first try, I created word clouds for each category and for the overall reviews. However, there was no significant difference among the four graphs. Therefore, I created a comparison cloud to collate the most common words popping up in reviews.

From the comparison cloud, we can infer that customers regard the functionality and fragrance of products as most important. In addition, “recommend” shows up as a commonly used word in the reviews dataset. Consequently, in my view, word of mouth is a great marketing strategy for brands to focus on.


Search Engine built in my Shiny App (NLP: TF-IDF, cosine similarity)


TF-IDF is an NLP technique that stands for “Term Frequency–Inverse Document Frequency,” a numerical statistic intended to reflect how important a word is to a document in a corpus.

For my search engine, I use the “tm” package and its weightSMART “nnn” weighting scheme for term frequency. The “nnn” scheme is a natural (raw-count) weighting: it simply counts how many times each word occurs in each document in the dataset. If you would like more detail on this and the other weighting schemes, please take a look at the R documentation.

Cosine Similarity

With TF-IDF measurements in place, products are recommended according to their cosine similarity score with the query. Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space: the cosine of the angle between them,

cos(θ) = (A · B) / (‖A‖ ‖B‖).

In an information-retrieval setting like a search engine, the cosine similarity of two documents ranges from 0 to 1, because term frequencies (TF-IDF weights) cannot be negative. In other words, the angle between two term-frequency vectors cannot be greater than 90 degrees. The closer the cosine value is to 1, the more similar the two vectors (products) are.




Conclusions

Most of the products belong to household brands.

The most active users of the site are from California, Florida, Texas, and New York.

There is a negative relationship between the number of reviews and rating score.

The functionality and scent of hair care products are of great importance.

Even though “recommend” is a commonly used word, in this project it is difficult to tell whether the feedback is positive or negative. Thus, I plan to conduct sentiment analysis in the future.

The self-developed search engine, built on TF-IDF and cosine similarity, would work even better if I included product descriptions. With descriptions added, users' queries could match not only product names but also product descriptions, so they could retrieve more related merchandise and discover new product features.

About Author

Yu-Han Chen


Yu-Han is currently pursuing a Master’s degree in Management and Systems at New York University and is a part-time data scientist and teaching assistant at NYC Data Science Academy. In her prior role as a market research consultant,...
