Building up a Simple Facial Feature Analysis Platform using Shiny and Microsoft Oxford API

Posted on Aug 11, 2016

In this project I plan to build an interactive facial feature analysis platform as an extension of the previous project. The main tools are the Shiny package in R for the UI design and the Microsoft Oxford API for connecting to Microsoft's facial detection and recognition system.

The basic idea is to first scrape image-list websites to download facial images of different genders, ages and races, send them to the Microsoft system for analysis, and collect all the generated features into a local facial feature database stored as several large RDS files. Then, once the database is initialized, a user exploring the Shiny app can upload any picture from his or her local system.

The app, in turn, sends the chosen picture to Microsoft Oxford for analysis. Finally, when the app gets the analytic data back, it automatically conducts a descriptive analysis comparing the uploaded face against the overall distribution of facial features.

1. Initiating the Local Database

Since we don't have an existing database with the right features recorded, we need to generate one first. To do that, we scrape websites that contain long, well-structured lists of images. I chose IMDB (http://www.imdb.com/) for reference: it hosts many lists of famous celebrities, which makes it convenient to scrape and download from. I used the list of 1000 celebrities (http://www.imdb.com/list/ls058011111/) as the data source.

 

[Screenshot: the IMDB list page]

 

Using the inspect function of Chrome or another browser, we can get a clear view of the HTML structure of the image list. The information we want is the exact URL of every image:

 

[Screenshot: the inner structure of the image elements]

 

Next, we can write code to collect all the links from the website. We need the "rvest" library for scraping, and since there are ten pages with 100 people per page, a small loop is needed to go over the pages one by one:

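A rough sketch of that loop is below. The CSS selector ".lister-item-image img" and the "?page=" pagination parameter are my guesses at the 2016 IMDB list markup and may need adjusting against the live page:

```r
library(rvest)

imglinks <- c()
for (page in 1:10) {
  url <- paste0("http://www.imdb.com/list/ls058011111/?page=", page)
  doc <- read_html(url)
  # grab the src attribute of every portrait image on this page
  links <- doc %>%
    html_nodes(".lister-item-image img") %>%
    html_attr("src")
  imglinks <- c(imglinks, links)
}
```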

 

After retrieving them, we can print the imglinks variable to inspect the list of links we just collected:

 


 

After we store all the URLs in one vector, we can send every element of the vector to the Microsoft Oxford API, iterating from beginning to end and passing each web link directly to the APIs.

The two important functions are getFaceResponseURL and getEmotionResponseURL, which send picture links to the corresponding interfaces and return the responses. The variables facekey and emotionkey are the access keys required to connect; both can be obtained by simply registering on the website. The two functions return the facial features and the facial emotion analysis results.

 

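A sketch of the two helpers and the collection loop, built with httr. The endpoint URLs and header follow the 2016 Project Oxford API as I recall it, and the attribute list here is only illustrative; check the official documentation before relying on them:

```r
library(httr)
library(jsonlite)

facekey    <- "YOUR_FACE_KEY"     # free keys from the Project Oxford site
emotionkey <- "YOUR_EMOTION_KEY"

getFaceResponseURL <- function(imgurl) {
  res <- POST(
    "https://api.projectoxford.ai/face/v1.0/detect",
    query = list(returnFaceId = "true",
                 returnFaceAttributes = "age,gender,smile,facialHair,glasses"),
    add_headers("Ocp-Apim-Subscription-Key" = facekey,
                "Content-Type" = "application/json"),
    body = toJSON(list(url = imgurl), auto_unbox = TRUE)
  )
  fromJSON(content(res, as = "text"))
}

getEmotionResponseURL <- function(imgurl) {
  res <- POST(
    "https://api.projectoxford.ai/emotion/v1.0/recognize",
    add_headers("Ocp-Apim-Subscription-Key" = emotionkey,
                "Content-Type" = "application/json"),
    body = toJSON(list(url = imgurl), auto_unbox = TRUE)
  )
  fromJSON(content(res, as = "text"))
}

# iterate over the scraped links, pausing to respect the rate limit
results <- lapply(imglinks, function(u) {
  Sys.sleep(3.1)  # free keys allow roughly 20 requests per minute
  list(face = getFaceResponseURL(u), emotion = getEmotionResponseURL(u))
})
```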

 

One thing worth mentioning: since the access key belongs to a free account, there is a limit on how many requests can be sent per minute. Microsoft currently caps it at 20, and any requests beyond that are simply ignored. So I ran this loop three times to collect as many responses as possible. The structure and content of the final result are as follows:

 

[Screenshot: the structure of the final result frame]

 

[Screenshot: the head of the final result frame]

 

Finally, I stored this large data frame in a local directory as an RDS file to guard against loss. However, this is not the end of the initialization: for the face matching function, more work needs to be done before we shift our focus to the Shiny implementation.

If we look back at the structure of the final frame, we find a feature called "faceId" in the first column. This is a unique ID generated automatically when we send a picture to the API, but it only lasts for 24 hours. To get a permanent ID for each facial picture, we need to build a face list in which a persisted ID is stored for every picture. To do this, we first send requests to create several new face lists:

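Creating a face list can be sketched like this. The endpoint follows the 2016 Face API, while the list ids and names used here are hypothetical:

```r
library(httr)

facekey <- "YOUR_FACE_KEY"

createFaceList <- function(listId, listName) {
  PUT(
    paste0("https://api.projectoxford.ai/face/v1.0/facelists/", listId),
    add_headers("Ocp-Apim-Subscription-Key" = facekey,
                "Content-Type" = "application/json"),
    body = sprintf('{"name": "%s"}', listName)
  )
}

# one list per group (hypothetical group names)
createFaceList("group_male", "Male faces")
createFaceList("group_female", "Female faces")
```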

 

And again we need to send all the pictures to this API to add them to the corresponding face lists according to the groups they belong to:

 

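Adding each picture to its face list can be sketched as below. The "persistedFaces" endpoint follows the 2016 Face API; the returned persistedFaceId is the permanent ID we keep, and the list id here is hypothetical:

```r
library(httr)
library(jsonlite)

facekey <- "YOUR_FACE_KEY"

addFaceToList <- function(listId, imgurl) {
  res <- POST(
    paste0("https://api.projectoxford.ai/face/v1.0/facelists/",
           listId, "/persistedFaces"),
    add_headers("Ocp-Apim-Subscription-Key" = facekey,
                "Content-Type" = "application/json"),
    body = toJSON(list(url = imgurl), auto_unbox = TRUE)
  )
  fromJSON(content(res, as = "text"))$persistedFaceId
}

persistedIds <- sapply(imglinks, function(u) {
  Sys.sleep(3.1)  # respect the free-tier rate limit
  addFaceToList("group_male", u)  # in practice, pick the list per group
})

# keep the permanent IDs alongside the source URLs
saveRDS(data.frame(url = imglinks, persistedFaceId = persistedIds),
        "dataframe_list.rds")
```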

 

This is an even longer procedure, taking nearly half an hour. To preserve all the results, another RDS file, "dataframe_list.rds", is saved to the local path.

 

2. Building the Shiny Interactive App

With all the preparation done, we can move on to building our interactive Shiny UI. Unlike the first section, here I will not go into the details of the whole Shiny framework, which spans hundreds of lines of code across multiple script files (ui.r, server.r and global.r). Instead, I will focus on the final visual result. All my source code can be found in my GitHub repository.

Before browsing the pages, a few things need to be made clear in advance due to some minor drawbacks of the interactive system.

The first page we see is the introduction page: a navigation bar sits at the top, while a brief introductory passage is located in the middle of the page:

 

[Screenshot: the introduction page]

 

On the second page, reached by clicking the "Getting Started" button, a sidebar with an upload function asks us to upload a local picture:

 

[Screenshot: the upload sidebar]

 

The default label language turns into Chinese in my browser, but it should be English in others. After successfully uploading a file, we immediately see the result, with a brief piece of information on the left and the original picture on the right:

 

[Screenshot: the upload result]

 

A few things need to be noted here.

First, I still don't fully understand how to get the real local path of an uploaded file, whether relative or absolute. On the left side there is a parameter called "datapath", but I haven't figured out how to translate it into a real path name. So far I have set the default path to the upload path, i.e. the Shiny directory containing the ui.r and server.r files; every file to be uploaded is first moved into that directory for further processing.
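For reference, Shiny's fileInput stores each upload in a temporary file, and datapath is that temporary file's location. One way to move it into the app directory, sketched with a hypothetical input id "file1":

```r
library(shiny)

server <- function(input, output, session) {
  observeEvent(input$file1, {
    # input$file1$datapath points at a temp file holding the upload;
    # copy it next to ui.r / server.r under its original name
    file.copy(input$file1$datapath,
              file.path(getwd(), input$file1$name),
              overwrite = TRUE)
  })
}
```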

Second, the other functions on the navigation bar after "Getting Started" can only be triggered once a file has been uploaded, so to try them out we must finish this step first.

 

After a successful upload, we can continue to the next function, face matching. This is an almost magical page where we get a detailed facial feature analysis as well as a list of the most closely matched faces in the image database:

 

[Screenshots: the face matching results]

 

When we click the "show detail" button, a floating window shows all the detailed information returned by the API:

 

[Screenshot: the detail window]

 

After the second main function, we can move on to the third, which shows the basic distribution of each facial feature along with the location of the user's own value within the overall distribution. Discrete values are shown as pie charts or bar plots, while continuous values are shown as histograms. For the emotion analysis we use density plots and offer the option to overlay one on another; we also mark the user's own location on each plot:

 

[Screenshots: the feature distribution plots]
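As an illustration of marking the user's own location, a histogram with a reference line can be sketched as follows; the faces data frame, its age column and user_age are hypothetical names standing in for the real feature database:

```r
library(ggplot2)

# hypothetical data standing in for the real feature database
faces    <- data.frame(age = rnorm(1000, mean = 35, sd = 10))
user_age <- 32  # the age returned for the uploaded picture

ggplot(faces, aes(x = age)) +
  geom_histogram(binwidth = 2, fill = "steelblue", colour = "white") +
  # dashed red line marking where the uploaded face falls
  geom_vline(xintercept = user_age, colour = "red", linetype = "dashed")
```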

 

The second-to-last part is the multidimensional analysis. It first offers a simple filter so users can restrict the data to a specific range; users can then freely set the x, y and z axes to conduct multidimensional analysis. Since each axis variable may be discrete or continuous, the plot type changes accordingly:

 

[Screenshots: the multidimensional analysis plots]
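The axis-dependent plot switch can be sketched like this; faces, xvar and yvar are hypothetical names for the filtered data and the user's axis choices:

```r
library(ggplot2)

plotXY <- function(faces, xvar, yvar) {
  p <- ggplot(faces, aes_string(x = xvar, y = yvar))
  if (is.numeric(faces[[xvar]]) && is.numeric(faces[[yvar]])) {
    p + geom_point()    # both axes continuous: scatter plot
  } else {
    p + geom_boxplot()  # a discrete axis: box plot per category
  }
}
```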

 

The last part is a simple search function over a large table. At the top of the sidebar we can select the columns we want to display, and then type a substring to search for a specific group of IDs by faceId. Although a datatable object has its own search box, it matches by "contains", while the sidebar search matches by "starts with":

 

[Screenshots: the search table]
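The difference between the two search modes can be sketched with a toy vector of faceIds:

```r
ids <- c("abc123", "abd456", "xabc99")

# sidebar search: keep only ids that start with the query
starts_with_abc <- ids[startsWith(ids, "abc")]        # "abc123"

# datatable's built-in search: keep ids containing the query anywhere
contains_abc <- ids[grepl("abc", ids, fixed = TRUE)]  # "abc123" "xabc99"
```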

 

3. Conclusion

So far I have implemented a good number of functions for the Shiny interactive app, covering many basic Shiny components and interaction logic. However, this is not the end: I will continue to enhance and modify it, adding new functions and adjusting old ones. In the near future, I hope to grow the current database dynamically so that it can update itself by periodically crawling other social platforms such as Facebook and Instagram.

 

Building up a simple facial feature analysis platform using Shiny and Microsoft Oxford API December 5, 2016
[…] Contributed by Shuye Han. He takes the NYC Data Science Academy 12 week full time Data Science Bootcamp program from July 5th to September 22nd, 2016. This post is based on their first class project – the Exploratory Data Analysis Visualization Project, due on the 2nd week of the program. You can find the original article here. […]
