Web Scraping TV Show Sites: Analysis and Exploration
Television shows are an important part of everyday life. They can be portals to a parallel universe or windows to our wildest dreams. These shows give rise to many different review websites, each with its own scoring system and critic reviews. Out of curiosity, let's compare these review websites and see how much their scores differ from one another. For this project, three popular TV show websites were chosen for analysis: TVDb, Rotten Tomatoes, and IMDb.
The main tool used in this project is Scrapy, a free web crawling framework. The central component of Scrapy is the "spider", a web crawler with a series of customizable commands for scraping information from a website. Here is the scraping workflow for this project.
From the TVDb main page, I obtained a list of TV shows, then used it as a lookup list for scraping show information. The information scraped for each TV show includes show name, genre, network, show status, and the show's rating on each of the three websites. A total of four spiders were implemented in this project: one for TVDb, one for Rotten Tomatoes, and two for IMDb. IMDb was more difficult to scrape because of its complex URL addresses (each show is encoded as a number rather than its actual name), so one spider was used to scrape the URL of each show, and through that URL a second spider scraped the show's attributes. All the spider code can be found here.
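The two-stage IMDb approach can be sketched without Scrapy at all: stage one resolves a show name to its numeric-ID URL, stage two pulls attributes from that page. The toy HTML snippets, the `tt0903747` ID, and the `rating` class are assumptions for illustration only.

```python
import re

# Toy stand-ins for the two pages the IMDb spiders would fetch;
# the real markup differs, these exist only to show the two stages.
SEARCH_PAGE = '<a href="/title/tt0903747/">Breaking Bad</a>'
SHOW_PAGE = '<span class="rating">9.5</span>'

def find_show_url(html, show_name):
    """Stage 1: recover the numeric-ID URL for a show name."""
    match = re.search(r'<a href="(/title/tt\d+/)">' + re.escape(show_name), html)
    return match.group(1) if match else None

def scrape_rating(html):
    """Stage 2: pull the rating out of the show's own page."""
    match = re.search(r'<span class="rating">([\d.]+)</span>', html)
    return float(match.group(1)) if match else None

url = find_show_url(SEARCH_PAGE, "Breaking Bad")  # "/title/tt0903747/"
rating = scrape_rating(SHOW_PAGE)                 # 9.5
```

In the project itself, each stage was a full Scrapy spider rather than a regex helper; the split shown here is only the shape of the pipeline.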
The top plot is the overall density distribution of scores from all three websites. The first important observation is that the overall trends are very similar: all three distributions are skewed to the left, meaning few shows have very low scores. The bottom plot is a violin plot of the three score distributions. Rotten Tomatoes has the highest distribution with a median of 8.4, IMDb is second with a median of 7.5, followed by TVDb with a median of 6.3. Another interesting observation is that Rotten Tomatoes and TVDb have zero values for TV scores while IMDb does not; this might be due to the imputation method each website uses for null values.
To explore the dataset more deeply, I analyzed the distributions of three attributes with potential influence: Status, Network, and Genre.
There are three levels in the status attribute: Cancelled, Ended, and Returning Series. From the bar plot, returning series have the highest scores compared to cancelled and ended shows. Intuitively, cancelled and ended shows tend to have lower scores, which may ultimately have led to their termination.
The top ten networks were selected out of 107 different networks. HBO has the highest average rating on all three websites, followed by BBC.
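Ranking networks like this is a group-by-and-average over the scraped rows. The (show, network, score) tuples below are hypothetical stand-ins for the real dataset; only the aggregation pattern is the point.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows standing in for the scraped data.
rows = [
    ("Show A", "HBO", 9.1),
    ("Show B", "HBO", 8.7),
    ("Show C", "BBC", 8.5),
    ("Show D", "BBC", 8.1),
    ("Show E", "CW", 6.9),
]

# Group scores by network, then average each group.
by_network = defaultdict(list)
for _, network, score in rows:
    by_network[network].append(score)

averages = {net: mean(scores) for net, scores in by_network.items()}
top_network = max(averages, key=averages.get)  # "HBO" on this toy data
```

The same pattern, applied per website, produces the three bar plots compared above.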
Although all score distributions exhibit similar behavior, the top genres differ across the three websites. Rotten Tomatoes ranked Talk, News, and Musical as the top three genres by score. IMDb ranked Adventure, Horror, and Crime as its top three. TVDb ranked News, Family, and Adventure as its top three. Also note that TVDb has no information on the Talk genre.
From all the exploratory analysis, we can see there is definitely an underlying difference between these three websites' scoring systems. How do we validate and quantify this difference? Let's perform some statistical analysis to find out!
From the scatter plot matrix, we can see a very weak linear relationship, or collinearity, between the IMDb and Rotten Tomatoes scores; no other definite linear relationship is visible in the scatter plots.
Another approach to measuring collinearity is the biplot obtained from PCA. The biplot is composed of the top two principal components, which are orthogonal vectors formed from linear combinations of the three score vectors. In the biplot, the IMDb and Rotten Tomatoes vectors point in a similar direction, which again shows that they exhibit some collinearity with each other.
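The biplot construction can be sketched with plain numpy: centre the score matrix, take the top two right-singular vectors as loadings, and check whether two sites' loading vectors point the same way. The simulated scores below (IMDb and Rotten Tomatoes correlated, TVDb independent) are an assumption that mirrors the biplot's pattern, not the real data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated scores: RT tracks IMDb with noise, TVDb is independent.
imdb = rng.normal(7.5, 1.0, 200)
rt = 0.8 * imdb + rng.normal(0.0, 0.5, 200)
tvdb = rng.normal(6.3, 1.0, 200)
X = np.column_stack([imdb, rt, tvdb])

# PCA via SVD on the centred matrix; the top two right-singular
# vectors give each site's loadings on PC1 and PC2 (the biplot arrows).
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
loadings = vt[:2].T  # shape (3 sites, 2 components)

# Cosine similarity between the IMDb and RT arrows: near 1 means
# the two arrows point in a similar direction, i.e. collinearity.
cosine = loadings[0] @ loadings[1] / (
    np.linalg.norm(loadings[0]) * np.linalg.norm(loadings[1]))
```

On correlated columns the two arrows align (cosine near 1), which is exactly the visual cue read off the biplot in the text.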
Since both the scatter plot matrix and the biplot show a linear relationship, albeit a weak one, are these two score distributions statistically similar, or significantly different? A two-sample, unequal-variance, two-tailed t-test at the 95% confidence level can answer this question.
Finally, a one-way ANOVA was performed to test whether all three scores differ.
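Both tests are one-liners in `scipy.stats`. The samples below are simulated with medians echoing the text (8.4, 7.5, 6.3) purely to show the calls; they are not the project's scraped scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated score samples; means chosen to echo the observed medians.
rt = rng.normal(8.4, 0.8, 300)
imdb = rng.normal(7.5, 0.8, 300)
tvdb = rng.normal(6.3, 0.8, 300)

# Welch's two-sample, two-tailed t-test (unequal variances).
t_stat, t_p = stats.ttest_ind(imdb, rt, equal_var=False)

# One-way ANOVA across all three sites' scores.
f_stat, f_p = stats.f_oneway(rt, imdb, tvdb)
```

With a 95% confidence level, a p-value below 0.05 in either test rejects the null hypothesis that the compared distributions share the same mean.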
An extremely small p-value indicates that we reject the null hypothesis; therefore, the three distributions are in fact significantly different from each other.
As observed from both the exploratory and statistical analyses, we can definitely state there is a significant difference between the score distributions of these three websites. From the visualizations, we can conclude that Rotten Tomatoes tends to give higher scores than the rest. We also saw a weak degree of collinearity between the IMDb and Rotten Tomatoes distributions; however, the t-test concluded that these two score distributions are still significantly different from each other, and the one-way ANOVA showed that all three distributions are indeed significantly different. When we compare ratings from different websites, it is pivotal that we put TV shows on the same scale.