
Building up a simple facial feature analysis platform using Shiny and Microsoft Oxford API


Contributed by Shuye Han. He took the NYC Data Science Academy 12-week full-time Data Science Bootcamp program from July 5th to September 22nd, 2016. This post is based on his first class project – the Exploratory Data Analysis and Visualization Project, due in the 2nd week of the program. You can find the original article here.

In this project I plan to build an interactive facial feature analysis platform as an extension of the previous project. The main tools are the Shiny package in R for UI design and the Microsoft Oxford API for connecting to Microsoft's facial detection and recognition system.

The basic idea is to first scrape image database websites to download existing facial images of different genders, ages and races, send them to the Microsoft system for analysis, and collect all the generated features into a local facial feature database stored as several large RDS files. Then, once the database is initialized, a user exploring the Shiny app can upload any picture from his or her local system. The app, in turn, sends the chosen picture to the Microsoft Oxford API for analysis. Finally, when the app gets the analytic data back, it automatically conducts a descriptive analysis comparing the uploaded face against the overall distribution of facial features.

1. Initiate the local database

Since we don't have an existing database with the right features recorded, we need to generate one first. To do that, we scrape websites that contain long, well-structured lists of images. I chose the IMDb site: http://www.imdb.com/. This website hosts many lists of famous celebrities, which makes it very convenient to scrape through and download images for further processing. I finally chose the list of 1,000 celebrities (http://www.imdb.com/list/ls058011111/) as the data source.

[Screenshot: the IMDb list of 1,000 celebrities]

Using the inspect function of Chrome or another browser, we can get a clear view of the HTML structure of the image list. The information we want is the exact URL of every image:

[Screenshot: inspecting the HTML structure of the image list in Chrome]

Next, we can write code to collect all the links from the website. We need the "rvest" library for scraping, and since there are 10 pages with 100 people per page, a small loop is needed to walk through the ten pages one by one:

[Screenshot: the scraping code]
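The original code appears only as a screenshot, so here is a rough sketch of such a loop. The page parameter and the CSS selector are assumptions; the real ones should be read off the inspector view above.

```r
library(rvest)

imglinks <- c()
for (p in 1:10) {
  # the "?page=" parameter and the "img" selector are assumptions;
  # take the actual ones from the HTML structure inspected above
  page  <- read_html(paste0("http://www.imdb.com/list/ls058011111/?page=", p))
  links <- page %>% html_nodes("img") %>% html_attr("src")
  imglinks <- c(imglinks, links)
}
```

After the loop finishes, imglinks holds the image URLs collected from all ten pages.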

After retrieving the pages, we can print the imglinks variable to see the list of links we just collected:

[Screenshot: the imglinks vector printed to the console]

After we store all the URLs in one vector, we can send every element of the vector to the Microsoft Oxford APIs, iterating over it from beginning to end. The two important functions here are getFaceResponseURL and getEmotionResponseURL, which send picture links to the corresponding interfaces and collect the responses. The variables facekey and emotionkey are the access keys needed for a successful connection; both can be obtained by simply registering on the website. The two functions return the facial feature and facial emotion analysis results.

[Screenshot: the loop sending the image URLs to the Face and Emotion APIs]
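A rough reconstruction of this loop is sketched below (not the author's exact code). It assumes getFaceResponseURL() and getEmotionResponseURL() take an image URL plus the access key and return a one-row data frame, as described above; the Sys.sleep() call is one way to cope with the rate limit described just below.

```r
face_results    <- vector("list", length(imglinks))
emotion_results <- vector("list", length(imglinks))

for (i in seq_along(imglinks)) {
  face_results[[i]]    <- getFaceResponseURL(imglinks[i], facekey)
  emotion_results[[i]] <- getEmotionResponseURL(imglinks[i], emotionkey)
  if (i %% 10 == 0) Sys.sleep(60)   # two calls per image, so pause every 10 images
}

# combine the per-image responses into one data frame
facedata <- cbind(do.call(rbind, face_results), do.call(rbind, emotion_results))
```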

One thing worth mentioning is that since the access key comes with a free account, there is a limit on how many requests can be sent per minute. Microsoft currently limits this to 20, and any requests beyond that are simply ignored. So I actually ran this loop three times to collect as many responses as possible. The structure and content of the final result are as follows:

[Screenshots: structure and content of the resulting data frame]

At last, I stored this large data frame in the local directory as an RDS file in case of loss. However, this is not the end of the initialization. For the face matching function, more work needs to be done before shifting our focus to the implementation of Shiny. If we go back and look at the structure of the final data frame, we find a feature called "faceId" in the first column. This is a unique ID generated automatically when we send the pictures to the APIs, but it only lasts for 24 hours. To get a permanent ID for each facial picture, we need to build a face list in which a permanent ID is stored for each picture. To do this, we first send requests to create several new face lists:

[Screenshot: code creating the face lists]
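A sketch of such a request with httr is below. The endpoint path follows the Project Oxford Face API v1.0 documentation of that time and should be treated as an assumption, as should the example list IDs.

```r
library(httr)

# create one face list per group (the list IDs here are placeholders)
create_facelist <- function(list_id, key) {
  PUT(paste0("https://api.projectoxford.ai/face/v1.0/facelists/", list_id),
      add_headers("Ocp-Apim-Subscription-Key" = key,
                  "Content-Type" = "application/json"),
      body = sprintf('{"name": "%s"}', list_id))
}

for (id in c("group_male", "group_female")) create_facelist(id, facekey)
```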

And again we need to send all the pictures to this API to add them to the corresponding face list according to the group they belong to:

[Screenshots: code adding faces to the face lists and the returned results]
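The per-picture request might look like the sketch below, again using the v1.0 endpoint path as an assumption. The persistedFaceId that comes back is the permanent ID that, unlike faceId, does not expire after 24 hours.

```r
# add one image URL to a face list and return its permanent ID
add_to_facelist <- function(img_url, list_id, key) {
  resp <- POST(paste0("https://api.projectoxford.ai/face/v1.0/facelists/",
                      list_id, "/persistedFaces"),
               add_headers("Ocp-Apim-Subscription-Key" = key,
                           "Content-Type" = "application/json"),
               body = sprintf('{"url": "%s"}', img_url))
  content(resp)$persistedFaceId
}

# in practice each picture goes to the face list of its own group
persisted_ids <- sapply(imglinks, add_to_facelist,
                        list_id = "group_male", key = facekey)
```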

This is an even longer procedure, taking nearly half an hour. To keep all the results, another RDS file, "dataframe_list.rds", is saved to the local path.

2. Building the Shiny Interactive App

With all the preparation work done, we can move on to building our interactive UI in Shiny. Unlike the first section, I will not go into the details of the whole Shiny framework here, which spans hundreds of lines of code across multiple script files (ui.r, server.r and global.r). Instead, I will focus on the final visual result. All my source code can be found in my GitHub directory:

Before browsing the web page, there are a few things that need to be made clear in advance due to some minor drawbacks of the interactive system.

The first page that comes into view is the introduction page. We can see a navigation bar at the top, while a brief introductory passage sits in the middle of the page:

[Screenshot: the introduction page]
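For orientation, here is a bare skeleton of how a navigation-bar layout like this is usually declared in ui.r. The tab and input names are placeholders based on the pages described in this post; the author's full code is in the GitHub repository.

```r
# ui.r (skeleton)
library(shiny)

shinyUI(navbarPage(
  "Facial Feature Analysis",
  tabPanel("Introduction", p("A brief introduction of the platform...")),
  tabPanel("Getting Started",
           sidebarLayout(
             sidebarPanel(fileInput("upload", "Upload a picture")),
             mainPanel(imageOutput("uploaded_pic"))
           )),
  tabPanel("Face Matching"),
  tabPanel("Feature Distributions"),
  tabPanel("Multidimensional Analysis"),
  tabPanel("Search")
))
```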

On the second page, reached by clicking the "Getting Started" button, we see a sidebar with an upload control asking us to upload a local picture:

[Screenshot: the upload sidebar]

The default label language shows up as Chinese in my browser, but it should be English in others. After successfully uploading the file, we immediately see the result, with a brief piece of information on the left and the original picture shown on the right:

[Screenshot: the uploaded picture and the returned summary]

There are a few things that need to be noted in advance. First, I still do not have a full understanding of how to get the real local path of the uploaded file, whether relative or absolute. We can see on the left side a parameter called "datapath"; however, I have not figured out how to translate it into the real path name. So far I have set the default path to the uploading path, which is the Shiny directory containing the ui.r and server.r files; every uploaded file is first moved into that directory for further processing. Secondly, the functions attached to the navigation bar after "Getting Started" can only be triggered after a file has been uploaded, so in order to experience the other functions we should finish this step first.
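For reference, a minimal sketch of how the upload is often handled in server.r: input$upload$datapath points to a temporary file that Shiny creates for the session, and file.copy() can move it into the app directory containing ui.r and server.r. The input ID "upload" is an assumption.

```r
# server.r (sketch): react to a new upload and copy it into the app directory
observeEvent(input$upload, {
  # input$upload is a one-row data frame with columns name, size, type, datapath
  dest <- file.path(getwd(), input$upload$name)
  file.copy(input$upload$datapath, dest, overwrite = TRUE)
})
```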

After a successful upload, we can continue to the next function, face matching. This is a really magical page where we get a detailed facial feature analysis result as well as a list of the most closely matched faces in the image database:

[Screenshots: the face matching page with the analysis result and the most similar faces]
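One plausible reconstruction of the matching step uses the Find Similar endpoint against the persisted face lists built in section 1 (endpoint path per the v1.0 documentation; uploaded_face_id and the list ID are placeholders):

```r
resp <- POST("https://api.projectoxford.ai/face/v1.0/findsimilars",
             add_headers("Ocp-Apim-Subscription-Key" = facekey,
                         "Content-Type" = "application/json"),
             body = sprintf(
               '{"faceId": "%s", "faceListId": "%s", "maxNumOfCandidatesReturned": 10}',
               uploaded_face_id, "group_male"))
matches <- content(resp)   # a list of persistedFaceId / confidence pairs
```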

When we click the "show detail" button, we get a floating window showing all the detailed information returned by the API:

[Screenshot: the detail pop-up window]

After viewing the second main function of the web page, we can move on to the third one, which shows the distributions of the basic facial features along with the location of the user's own values against the overall distribution. Discrete values are shown as pie charts or bar plots, while continuous values are shown as histograms. For the emotion analysis we use density plots and offer the choice to overlap one with another. We also mark the user's own location on each plot:

[Screenshots: feature distribution plots with the user's own values marked]
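As an illustration, a minimal ggplot2 sketch of one such plot: a histogram of a continuous feature with the uploaded face's value marked by a vertical line (facedata and user_age are placeholder names for the local database and the value returned for the upload).

```r
library(ggplot2)

ggplot(facedata, aes(x = age)) +
  geom_histogram(binwidth = 2, fill = "steelblue", colour = "white") +
  geom_vline(xintercept = user_age, colour = "red", linetype = "dashed") +
  labs(title = "Age distribution with the uploaded face marked",
       x = "Age", y = "Count")
```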

The second-to-last part is the multidimensional analysis. This part first offers a simple filter that lets users restrict the analytic data to a specific range. Then users can arbitrarily set the x, y and z axes to conduct a multidimensional analysis. As the type of each axis variable varies from discrete to continuous, the type of plot changes accordingly:

[Screenshots: multidimensional analysis plots for different combinations of axis variable types]
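The idea of switching plot types can be sketched as a small helper that picks a geom based on whether the selected axis variables are discrete or continuous (column names come in as strings from the sidebar inputs):

```r
library(ggplot2)

plot_xy <- function(df, xvar, yvar) {
  p <- ggplot(df, aes_string(x = xvar, y = yvar))
  if (is.numeric(df[[xvar]]) && is.numeric(df[[yvar]])) {
    p + geom_point()        # continuous vs continuous: scatter plot
  } else if (is.numeric(df[[yvar]])) {
    p + geom_boxplot()      # discrete x, continuous y: box plot
  } else {
    p + geom_count()        # discrete vs discrete: dot size shows counts
  }
}
```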

The last part is a simple search function over a large table. At the top of the sidebar, we can select the columns to show, and then type in a substring to search for a specific group of IDs by faceId. Although a datatable object has its own search box, that one searches by "contains", while the search function in the sidebar searches by "starts with":

[Screenshots: the searchable data table]
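A sketch of the "starts with" search combined with column selection (the input IDs id_prefix and columns, and the facedata table, are placeholders for whatever the actual sidebar widgets and data frame are called):

```r
# server.r (sketch): keep rows whose faceId starts with the typed prefix
# and show only the columns ticked in the sidebar
output$feature_table <- DT::renderDataTable({
  shown <- facedata[startsWith(facedata$faceId, input$id_prefix),
                    input$columns, drop = FALSE]
  DT::datatable(shown)
})
```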

3. Conclusion

So far I have implemented a number of functions for a Shiny interactive app, covering many basic Shiny components and interactive logic. However, this is not the end; I will continue to enhance and modify it, adding new functions and adjusting old ones. In the near future, I hope to grow the current database dynamically and make it able to update itself by periodically crawling other social networking platforms such as Facebook or Instagram.