Consumer Insights From Amazon Product Results
I used Selenium to scrape the product results of an Amazon search query to gain customer insights. The query was run on all of the English-language Amazon sites, and the data analysis was done in Jupyter notebooks.
This project was a proof of concept, showing that it is possible to gain consumer insights from Amazon product results that could then be used to create and deliver a customized product.
Selenium was used to scrape the results from the query “improve my life” on the seven English Amazon sites.
United Arab Emirates: https://www.amazon.ae/s?k=improve+my+life
United Kingdom: https://www.amazon.co.uk/s?k=improve+my+life
United States: https://www.amazon.com/s?k=improve+my+life
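The scraping step can be sketched as below. The domain list, CSS selector, and helper names are assumptions for illustration, not the project's actual code, and only the three domains listed above are included here.

```python
from urllib.parse import quote_plus

# Assumed subset of the English-language Amazon domains used in the project.
DOMAINS = ["amazon.ae", "amazon.co.uk", "amazon.com"]

def build_query_url(domain: str, query: str) -> str:
    """Build an Amazon search-results URL for the given domain and query."""
    return f"https://www.{domain}/s?k={quote_plus(query)}"

def fetch_result_titles(url: str) -> list:
    """Load a results page with Selenium and return the visible product titles.

    Selenium is imported lazily so the URL helper above stays usable without
    a browser installed. The CSS selector is an assumption and may need
    updating as Amazon changes its markup.
    """
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get(url)
        cards = driver.find_elements(
            By.CSS_SELECTOR, "div[data-component-type='s-search-result'] h2"
        )
        return [card.text for card in cards]
    finally:
        driver.quit()

print(build_query_url("amazon.com", "improve my life"))
# https://www.amazon.com/s?k=improve+my+life
```

Running `fetch_result_titles` for each domain in `DOMAINS` would produce one title list per country.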
There were three datasets in total. Two contained the first-page search results from each country, one of which was sorted by “Avg. Customer Review.” The third dataset included result pages beyond the first.
Two statistics indicate why the first two datasets were important despite their small size (https://www.webfx.com/blog/internet/amazon-search-result-infographic/):
- 54% of product searches start on Amazon rather than on traditional search engines like Google and Bing.
- 70% of Amazon customers never click past the first page of search results.
It was important to collect unsorted data: with no customer history to tailor results to, the rankings would largely reflect Amazon’s “A9” ranking algorithm and its advertising strategies. The products displayed would be a combination of what Amazon thinks I want to buy and what Amazon thinks I should buy.
The data columns (variables) were as follows:
- Country: Website Country
- Title: Product Title
- Rating: Customer Rating
- Price: Product Price
- Sponsored: Sponsored Items (Boolean)
- Form: Type of Product (e.g., Book)
The number of total search results and the number of items displayed on each page for each query were also stored.
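The per-item columns and per-page metadata described above can be sketched as simple record types. The field and class names below are assumptions for illustration; the original notebooks may have stored the data differently (e.g., as a pandas DataFrame).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResultItem:
    """One scraped search result (hypothetical schema matching the columns above)."""
    country: str              # website country
    title: str                # product title
    rating: Optional[float]   # customer rating (may be absent)
    price: Optional[float]    # product price in local currency
    sponsored: bool           # whether the listing is sponsored
    form: str                 # type of product, e.g. "Book"

@dataclass
class PageStats:
    """Per-page metadata: total results reported and items displayed."""
    country: str
    total_results: int
    items_on_page: int

# Hypothetical example record, not real scraped data.
item = ResultItem("United States", "Example Book Title", 4.5, 9.99, False, "Book")
```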
I created word clouds from the text in the product titles to visualize word frequency per country. These word clouds represent only the multi-page dataset; creating them for the first-page results as well might yield additional insight. If I were selling a book in this category, I would want its title to include some of the largest words in my country of interest.
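A word cloud is driven by word frequencies, which can be computed from the titles as in the sketch below. The stopword list and example titles are placeholders, not the project's actual inputs.

```python
import re
from collections import Counter

# Small stopword set for illustration; the real analysis likely used a
# fuller list (e.g., the defaults shipped with a word-cloud library).
STOPWORDS = {"the", "a", "an", "to", "of", "and", "for", "your", "my", "how"}

def title_word_frequencies(titles):
    """Count word occurrences across product titles, ignoring stopwords."""
    counts = Counter()
    for title in titles:
        for word in re.findall(r"[a-z']+", title.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

# Hypothetical titles; the real input was the scraped Title column per country.
titles = ["Improve Your Life: A Guide", "The Life Planner", "Guide to a Better Life"]
freqs = title_word_frequencies(titles)
# "life" appears in all three example titles, so it would render largest.
```

The resulting counter can be passed to the `wordcloud` package's `WordCloud().generate_from_frequencies(...)` to render the image.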
Besides the obvious cultural differences that likely contributed to different word distributions, it would be interesting to see if one could determine any other factors at play. The following are some words that stood out to me:
- Canada: Dragon (referring to child)
- United Arab Emirates: Run
- United Kingdom: Planner
- United States: Guide
I made n-grams from the titles as well. An n-gram is a set of co-occurring words, ranked by the number of times it appears in a given text. The images below would be more insightful if they were country-specific rather than aggregated across all countries; producing per-country versions is an avenue I would take in the future to aid data visualization.
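N-gram counting can be sketched in a few lines. The function name and example titles below are illustrative assumptions, not the project's code.

```python
from collections import Counter

def ngram_counts(titles, n=2):
    """Count n-grams (runs of n co-occurring words) across all titles."""
    counts = Counter()
    for title in titles:
        words = title.lower().split()
        for i in range(len(words) - n + 1):
            counts[tuple(words[i:i + n])] += 1
    return counts

# Hypothetical titles; the real input was the scraped Title column.
bigrams = ngram_counts(["Improve My Life", "Improve My Habits"])
# The bigram ("improve", "my") occurs in both example titles.
```

Calling this per country (rather than on the pooled titles) would give the country-specific view mentioned above.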
I was interested in what I could garner from the pricing data to find an appropriate price for my own product, so I converted all prices to US dollars. From the graphs below, it would make sense to price the product higher in the United Kingdom than for similar offerings in India.
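The conversion step amounts to multiplying each local price by an exchange rate. The rates below are placeholders, not real data; the original analysis would have used rates current at scrape time.

```python
# Hypothetical exchange rates to USD -- illustrative placeholders only.
USD_PER_UNIT = {"AED": 0.27, "GBP": 1.25, "INR": 0.012, "USD": 1.0}

def to_usd(price, currency):
    """Convert a local-currency price to US dollars, rounded to cents."""
    return round(price * USD_PER_UNIT[currency], 2)

# e.g., a 10.00 GBP book at the placeholder rate converts to 12.50 USD.
```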
The next step would be to continue exploratory data analysis on product reviews for books, as books were by far the most common product form in the search results. Here’s what I’d do:
- Scrape review and product data related to my query.
- Focus on books with titles I’m interested in.
- Extract customer sentiment from the review texts and determine what readers appreciated most about the books.
- Determine what had the largest effect on product popularity and find patterns among the top and bottom products. Pricing, rating, responses to questions, number of reviews, pictures, product description, sponsorship, and word choice are a few things to consider.
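The sentiment-extraction step above could start with something as simple as a lexicon score; the word lists below are toy placeholders standing in for a real sentiment model (e.g., a pretrained classifier), purely to illustrate the idea.

```python
# Toy sentiment lexicons -- illustrative assumptions, not a real model.
POSITIVE = {"great", "helpful", "love", "practical", "clear"}
NEGATIVE = {"boring", "repetitive", "waste", "confusing"}

def review_sentiment(review):
    """Toy lexicon score: count of positive word hits minus negative hits."""
    words = set(review.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

# Positive scores suggest aspects customers appreciated; aggregating the
# matched positive words across reviews would surface *what* they liked.
```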