Data Scraping Instagram for Likes
Introduction:
In this exponentially growing digital age, the internet is flooded with data. With the advent of smartphones equipped with high-definition cameras, personal pictures have become an integral part of social media. In particular, some social media mobile applications choose photo-sharing as their main platform. One of the most popular photo-sharing applications is Instagram, where millions of users share curated photos in hopes of becoming "insta-famous" with a large number of followers, likes, and comments.
"A picture is worth a thousand words"
The phrase above is commonly used to imply that pictures often contain a surplus of information. Until now, this quote was mainly geared toward humans interpreting information from pictures individually. Although in principle one can draw strong conclusions from the plethora of data in images, it is neither efficient nor desirable to go through each and every image by hand. Fortunately, with the growth of computer vision technology and powerful computing resources, we are inching closer to automating low-error extraction of information from images. This information can potentially give us a whole new dimension of data to use.
In this project, the ultimate goal was to use image classification technology to process Instagram photos and gain insights about the @instagram account. In this blog, I will detail how to scrape Instagram photos without using the API explicitly, instead using Python's Scrapy package. Next, I will describe two image classification technologies, Google's Inception and Clarifai, with examples of good and bad classifications. Finally, some exploratory analysis was done to find the top-liked and top-commented classifications. The blog will end with some next steps to improve upon this workflow and lessons learned throughout this project.
Data:
The data for this project was scraped from the @instagram account. At the time it was scraped, 3,346 images were downloaded. This account was used as the first prototype because it has a heavy following (over 199 million followers) and a diverse set of photos pulled from the Instagram community, so it was assumed to be a close representation of the diverse population of Instagram users.
Data Web Scraping:
Scraping Instagram proved to be difficult because the developers made their data not easily accessible. Although Instagram has an API, it is geared toward businesses looking to advertise their brands. Furthermore, the website is dynamically loaded with AJAX calls, making it a prime candidate for a package like Selenium, which automates a browser to record information. Luckily, a proxy site called imgrum.net has done half of the heavy lifting by building a simplified web display of Instagram photos pulled from the API. With that repository in place, Scrapy was used as the scraping package for this project instead of Selenium.
The Imgrum website lays out the images, numbers of likes, and numbers of comments in plain sight. A quick inspection of the HTML source code shows that each image and its associated likes and comments are stored in separate classes. Going one level deeper, we can extract the image URL, number of likes, and number of comments.
Now that a pattern in the source code was discerned, a Scrapy spider was deployed to crawl through the website, recording all of this information into a .csv file. Lastly, the images are spread across multiple pages of the website, so after the spider extracted all of the image information on a page, the "Next" page link was followed to load the next one. The scraping concludes when there are no more pages to crawl. A minimal sketch of such a spider is shown below.
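Here is a minimal sketch of what that spider might look like. The start URL path and the CSS class names (`photo-item`, `likes`, `next`, etc.) are illustrative assumptions; the actual Imgrum markup would need to be inspected to fill in the real selectors.

```python
import scrapy


class ImgrumSpider(scrapy.Spider):
    """Sketch of a spider crawling an Imgrum-style photo gallery."""
    name = "imgrum"
    # Hypothetical entry page for the @instagram account on the proxy site.
    start_urls = ["http://www.imgrum.net/user/instagram"]

    def parse(self, response):
        # Each post block holds the image URL plus its like/comment counts.
        # The selectors below are placeholders, not the site's real classes.
        for post in response.css("div.photo-item"):
            yield {
                "image_url": post.css("img::attr(src)").get(),
                "likes": post.css("span.likes::text").get(),
                "comments": post.css("span.comments::text").get(),
            }

        # Follow the "Next" link until pagination runs out, at which point
        # the selector returns None and the crawl stops on its own.
        next_page = response.css("a.next::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```

Running `scrapy crawl imgrum -o images.csv` would then produce the .csv file described above.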
Image Classifiers:
Convolutional Neural Networks
As I mentioned earlier, two different classifiers were used on the images: the first being Google's Inception model, and the second being Clarifai's computer vision API. The underlying algorithm that both models rely on is the convolutional neural network (CNN). A CNN is a deep learning algorithm that became popular in the computer vision community back in 2012, when it was used to win that year's ImageNet competition.
Before then, computer vision algorithms used a whole collection of feature detection algorithms to first extract features from the image before putting them through a standard classifier like logistic regression or support vector machines. Despite the hard work of extracting features in a preprocessing step, the accuracy of these algorithms was still subpar; the lowest error rate was about 23% in 2011's ImageNet competition. When CNNs were first used in 2012 by AlexNet, that model won the ImageNet competition with about a 16% error rate, 7% better than the previous year. Currently, the best image classification models even surpass average human performance.
CNNs, on the other hand, do the feature extraction for you automatically. This is done by adding convolutional layers as the first couple of hidden layers in a neural network. The convolutional layers use a sliding-window operation that runs across the image pixels and performs a filtering operation to create a set of new abstract features. Each successive convolutional layer creates even more abstract features for the neural network to use.
For example, in facial recognition, the first convolutional layer will transform the data from a set of pixel values to a set of detected edges. The second layer will transform the set of edges into a set of simple shapes. The third layer may recognize facial features such as a nose or mouth. The process continues until you have a set of faces that can be used as a basis to recognize a given face. A sketch of a typical CNN architecture, expressed in code, follows.
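As a concrete illustration, here is a toy CNN written with TensorFlow's Keras API: stacked convolution/pooling blocks that build increasingly abstract features, followed by dense layers that do the actual classification. This is my own minimal sketch, not the architecture of either classifier discussed below.

```python
from tensorflow.keras import layers, models

# Each Conv2D/MaxPooling2D block extracts progressively more abstract
# features (edges -> shapes -> object parts); the dense layers at the
# end map those features to class probabilities.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1000, activation="softmax"),  # e.g., 1,000 ImageNet classes
])
model.summary()
```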
Google Inception Model
The first of the two classifiers used was Google's Inception image classifier. The model won the 2014 ImageNet competition with the lowest error rate of 6.7% when classifying images into 1,000 different classes. It was trained on an enormous data set of over one million images. Google released the model to the public in 2015, along with code using its deep learning framework, TensorFlow, to classify new images.
Here is an example of using this technology to classify a picture of a lion. The top five classes and their probabilities are given as output. What is not shown are the 995 other classes, which usually have very low probability scores. In this case, the classifier is over 90% sure that this is a photo of a lion. So far it is performing fairly well.
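The original workflow used the classification script Google released with the model; a roughly equivalent sketch using the Inception V3 weights bundled with Keras (my substitution for convenience, with `lion.jpg` as a placeholder path) looks like this:

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, decode_predictions, preprocess_input)
from tensorflow.keras.preprocessing import image

# Load Inception V3 with ImageNet-pretrained weights (1,000 classes).
model = InceptionV3(weights="imagenet")

# "lion.jpg" stands in for whichever scraped photo is being classified.
img = image.load_img("lion.jpg", target_size=(299, 299))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Report only the top five classes; the other 995 get tiny probabilities.
preds = model.predict(x)
for _, label, prob in decode_predictions(preds, top=5)[0]:
    print(f"{label}: {prob:.4f}")
```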
Here is an example of running this classification on a collage of images. The model now performs poorly for multiple reasons. First, the model wants to classify multiple things at once, but the probabilities must sum to 1, so it starts to split its confidence among multiple classes. Second, the image is not homogeneous, which confuses the classifier. Lastly, some of the things in the image may not belong to any of the 1,000 classes, so the classifier will try its best to guess the closest class it knows.
Clarifai's API
The second of the two classifiers is Clarifai's software. Clarifai is a computer vision AI startup that aims to bring complex and intensive computer vision services to businesses in the form of easy-to-use APIs. Clarifai is also built using convolutional neural networks, but it has a region selection algorithm as well, which allows multiple objects and classes to be extracted from an image. The specific details of Clarifai's algorithm are unknown since it is proprietary technology.
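For reference, here is a sketch of calling Clarifai's general model through the Python client as it existed around the time of this project (that REST client has since been superseded by a newer gRPC client, and the API key and image URL below are placeholders):

```python
from clarifai.rest import ClarifaiApp

# "YOUR_API_KEY" is a placeholder; real keys come from the Clarifai portal.
app = ClarifaiApp(api_key="YOUR_API_KEY")

# The general model returns a list of concepts with confidence scores,
# rather than a single softmax distribution over a fixed set of classes.
model = app.models.get("general-v1.3")
response = model.predict_by_url(url="https://example.com/photo.jpg")

for concept in response["outputs"][0]["data"]["concepts"]:
    print(concept["name"], round(concept["value"], 4))
```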
Here is an example of Clarifai classifying an abstract image of a man flying through the sky with balloons. Although this image is more artistic than realistic, Clarifai's regional detection algorithm was able to accurately predict what was in the image. Its classes also seemed to be more general than Google's.
Here is an example of Clarifai classifying a man holding an obsolete camera. It was able to correctly classify the man and his hat, but mistakenly classified the camera as a gun/weapon.
Exploratory Data Analysis:
After processing and classifying each image with these two models, some preliminary exploratory analysis was performed. A weighted average was used to penalize classes that had low probabilities. Below are the top 10 classes by both number of likes and number of comments using Google's Inception classifier. It turns out that the top 10 classifications are mostly driven by outlier pictures: looking deeper into the data set, surprisingly, most of these classes were reported with high probability on only one image.
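A minimal sketch of this weighted average with pandas, assuming the predictions were flattened into one row per (image, class) pair; the column names and sample values here are made up for illustration:

```python
import pandas as pd

# Assumed long format: one row per (image, predicted class) pair, with
# the class probability and the image's like count. Values are made up.
df = pd.DataFrame({
    "image_id": [1, 1, 2, 2],
    "label": ["lion", "dog", "beach", "lion"],
    "prob": [0.92, 0.03, 0.80, 0.10],
    "likes": [1200, 1200, 950, 950],
})

# Probability-weighted average of likes per class: low-confidence
# predictions contribute proportionally less to a class's score.
g = df.assign(weighted=df["prob"] * df["likes"]).groupby("label")
avg_likes = (g["weighted"].sum() / g["prob"].sum()).sort_values(ascending=False)
print(avg_likes.head(10))
```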
Likewise, although Clarifai's classes were more general, some of them were also reported with high probability on only one image.
Conclusion:
- Web scraping is messy and sometimes nearly out of your control (server problems).
- Computer vision is pretty awesome (but needs work).
- Not all data will be nicely distributed; outliers dominated the top-liked and top-commented photos.
- Clarifai's system seems to be better than the Google classifier, but it would cost money.
Next Steps:
- Scrape comments and use Natural Language Processing to gain more insight.
- Use other computer vision models, e.g., Google's Show and Tell or the Caffe Model Zoo.
- Train a new model for a specific category (e.g., food) and perform a similar analysis on food Instagram accounts.
- Fit a machine learning model to predict the number of likes/comments.
If you are interested in seeing the nitty-gritty details of the code, please visit my GitHub!