Web Scraping Influenster: Find a Popular Hair Care Product for You
Are you a person who likes to try new products? Are you curious about which hair products are popular and trendy? If you're excited about getting your hair glossy and eager to find a suitable shampoo, conditioner, or hair oil, my ‘Shiny (Hair) App’, built on the data collected here, can help you find what you're looking for in less time. My code is available on GitHub.
Research Questions
What are popular hair care brands?
What does user behavior look like on Influenster.com?
What factors may have a critical influence on customer satisfaction?
Is it possible to create a search engine that takes a search phrase and returns related products?
Data Collection
To obtain the most up-to-date hair care information, I decided to scrape Influenster, a product discovery and review platform with over 14 million reviews and over 2 million products for users to choose from.
To narrow down my research scope, I focused on three categories: shampoo, hair conditioner, and hair oil, and collected the top 54 products in each. The product dataset includes brand name, product name, overall rating, rank, and reviews; the review dataset includes author name, author location, review content, rating score, and hair profile.
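As a rough illustration of the scraping step, here is a minimal sketch in R using the rvest package. The URL and CSS selectors are hypothetical placeholders (Influenster's markup changes over time), so treat this as a template rather than the exact scraper I ran.

```r
library(rvest)
library(dplyr)

# Hypothetical category page and CSS selectors -- adjust to the live site
url  <- "https://www.influenster.com/reviews/shampoo"
page <- read_html(url)

products <- tibble(
  brand   = page %>% html_elements(".product-brand")  %>% html_text2(),
  product = page %>% html_elements(".product-name")   %>% html_text2(),
  rating  = page %>% html_elements(".product-rating") %>% html_text2() %>% as.numeric()
)

head(products)
```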
Results
Top Brands Graph
The “Other” slice of the pie chart represents brands with only one or two popular products. Judging from the chart, most of the popular products belong to large, well-known brands.
Rating Map
To examine users’ behavior on Influenster across the United States, I made two maps to see whether any interesting patterns are linked to location. Since I scraped the top 54 products in each category, the overall rating score is high across the country, which makes regional differences hard to spot.
Reviews Map
However, if we look at the number of hair care product reviews on Influenster.com across the nation, California, Florida, Texas, and New York lead with 4,740, 3,898, 3,787, and 2,818 reviews respectively.
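For readers who want to reproduce the reviews map, a minimal sketch with ggplot2 and the maps package follows; it assumes a data frame named reviews with lower-case state names in an author_state column (both names are illustrative).

```r
library(dplyr)
library(ggplot2)
library(maps)

# Count reviews per state (author_state is an assumed column of lower-case state names)
state_counts <- reviews %>%
  count(region = author_state, name = "n_reviews")

# Join the counts onto state polygons and draw a choropleth
us_states <- map_data("state")
ggplot(left_join(us_states, state_counts, by = "region"),
       aes(long, lat, group = group, fill = n_reviews)) +
  geom_polygon(color = "white", linewidth = 0.2) +
  coord_quickmap() +
  labs(title = "Hair care product reviews by state", fill = "Reviews")
```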
Analysis of Rating and Number of Reviews
There is a negative relationship between rating and number of reviews. As you can see, Pureology receives the highest score, 4.77 out of 5, but it has only 514 reviews. OGX, on the other hand, is rated 4.4 out of 5 yet has over 5,167 reviews.
Wordcloud & Comparison Cloud
Since we may be interested in which factors customers care about most and what contributes to their satisfaction with a product, I inspected the most frequently mentioned words in those 77 thousand reviews. As a first attempt, I created word clouds for each category and for the overall reviews, but there was no significant difference among the four graphs. Therefore, I created a comparison cloud to contrast the most common words popping up in reviews of each category.
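A minimal sketch of how such a comparison cloud can be built with the tm and wordcloud packages is shown below; shampoo_reviews, conditioner_reviews, and hair_oil_reviews are hypothetical data frames, each with a content column of review text.

```r
library(tm)
library(wordcloud)

# Hypothetical input: one long string of review text per category
reviews_by_category <- c(
  shampoo     = paste(shampoo_reviews$content,     collapse = " "),
  conditioner = paste(conditioner_reviews$content, collapse = " "),
  hair_oil    = paste(hair_oil_reviews$content,    collapse = " ")
)

# Standard text cleaning before counting terms
corpus <- VCorpus(VectorSource(reviews_by_category))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))

# comparison.cloud() highlights words over-represented in each category
tdm <- as.matrix(TermDocumentMatrix(corpus))
colnames(tdm) <- names(reviews_by_category)
comparison.cloud(tdm, max.words = 100, title.size = 1)
```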
From the comparison cloud, we can infer that customers regard product functionality and fragrance as the most important factors. In addition, “recommend” shows up as a commonly used word in the reviews. Consequently, from my perspective, word of mouth is a great marketing strategy for brands to focus on.
Search Engine built in my Shiny App (NLP: TF-IDF, cosine similarity)
TF-IDF
TF-IDF stands for “Term Frequency–Inverse Document Frequency,” an NLP technique that produces a numerical statistic intended to reflect how important a word is to a document in a corpus.
For my search engine, I utilize the “tm” package and employ the weightSMART “nnn” weighting schema for term frequency. Essentially, weightSMART “nnn” is a natural weighting: it simply counts how many times each individual word appears in each document in the dataset. If you would like more details or other weighting schemas, please take a look at the R documentation.
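As a sketch, building the term matrix with the “nnn” schema might look like the following; review_text is a hypothetical character vector with one document per product.

```r
library(tm)

# Hypothetical documents: one entry per product (e.g., name plus concatenated reviews)
review_text <- c(
  "moisturizing shampoo for dry damaged hair",
  "lightweight argan hair oil for shine",
  "color safe conditioner for smooth soft hair"
)

# Basic cleaning, then a document-term matrix with natural ("nnn") term counts
corpus <- VCorpus(VectorSource(review_text))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))

dtm <- DocumentTermMatrix(
  corpus,
  control = list(weighting = function(x) weightSMART(x, spec = "nnn"))
)

inspect(dtm)
```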
Cosine Similarity
With TF-IDF measurements in place, products are recommended according to their cosine similarity score with the query. Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space: the cosine of the angle between them.
In the case of information retrieval, such as a search engine, the cosine similarity of two documents ranges from 0 to 1, because the term frequencies (TF-IDF weights) cannot be negative; in other words, the angle between two term-frequency vectors cannot be greater than 90 degrees. The closer the cosine value is to 1, the higher the similarity between the two vectors (products). The cosine similarity formula is shown below.
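For two term-frequency vectors A and B (a query and a product document), the formula is:

$$\text{similarity} = \cos(\theta) = \frac{A \cdot B}{\|A\|\,\|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\;\sqrt{\sum_{i=1}^{n} B_i^2}}$$

Here is a hedged sketch of how the ranking step could be wired up in R, reusing the document-term matrix dtm from the TF-IDF sketch above; the function name and the simple 0/1 query encoding are illustrative, not my exact implementation.

```r
# Score every product document against a free-text query by cosine similarity
score_products <- function(query, dtm) {
  terms <- colnames(dtm)
  query_tokens <- unlist(strsplit(tolower(query), "\\s+"))
  q <- as.numeric(terms %in% query_tokens)   # 0/1 query vector over the vocabulary

  m <- as.matrix(dtm)
  # cos(theta) = (m . q) / (||m|| * ||q||), computed row-wise for each product
  sims <- drop(m %*% q) / (sqrt(rowSums(m^2)) * sqrt(sum(q^2)))
  sort(sims, decreasing = TRUE)              # NaN if the query shares no terms with the vocabulary
}

score_products("dry hair shampoo", dtm)
```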
Insights
Most of the products belong to household brands.
The most active users of the site are from California, Florida, Texas, and New York.
There is a negative relationship between the number of reviews and rating score.
The functionality and scent of hair care products are of great importance to customers.
Even though “recommend” is a commonly used word, in this project it is difficult to tell whether the feedback is positive or negative. Thus, I could conduct sentiment analysis in the future.
The self-developed search engine, which applies TF-IDF and cosine similarity, would work even better if I included product descriptions. With descriptions added, users would have a higher probability of matching their queries not only to product names but also to product descriptions, so they could retrieve more related products and explore new product features.
The skills I demoed here can be learned through taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.