Attempting to see what is not shown in PDF reports of USCIS

Posted on Feb 20, 2017

The two main goals of this visualization project were:

  1. To get an idea if I could make any connections between the number of applications the USCIS is working on and the average length of a single application’s processing time.
  2. To obtain a general idea of USCIS performance level over time.

The data at hand is the quarterly reports on the number of petitions and applications for lawful permanent resident (LPR) status based on a family relationship with an LPR or U.S. citizen.

As it turned out, there was no unified dataset containing all the quarterly reports. In addition, the CSV files provided by USCIS for each report were not organized in a data-visualization- or data-analysis-friendly manner. These CSV files simply mimicked their PDF versions.

This is the first page of the PDF of the 2016 3rd Quarter Report.

The following is what the CSV file provided by USCIS looks like:

raw data
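To make the problem concrete: a CSV that mimics its PDF layout carries title rows and comma-formatted numbers that pandas cannot use directly. Below is a minimal sketch with a made-up snippet in that style (the category names and counts are hypothetical, not the actual USCIS figures) and one way to read it cleanly.

```python
import io
import pandas as pd

# Hypothetical snippet mimicking the PDF-style layout of a USCIS quarterly
# report CSV: title rows on top, then a header, then comma-formatted numbers.
raw_csv = """U.S. Citizenship and Immigration Services,,,
Family-Sponsored Preferences,,,
Category,Received,Approved,Denied
Unmarried Sons/Daughters of U.S. Citizens,"12,345","10,111","1,234"
All Other Relatives,"23,456","20,222","2,345"
"""

# Skip the two title rows and let pandas strip the thousands separators,
# so the count columns come out as integers instead of strings.
df = pd.read_csv(io.StringIO(raw_csv), skiprows=2, thousands=",")
print(df.dtypes)
```

With `skiprows` and `thousands` the numeric columns are usable immediately; without them, every count would load as a string like `"12,345"`.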

Eventually, after cleaning, reorganizing, and reshaping the dataset, I brought it to a state where I could start my exploratory visualization. Below is a graph representing the number of approved applications for the category "All Other Relatives". This is actually the category that I, and my petition for my family, fall into. Looking at this graph, I sadly noticed a sudden drop in the number of applications approved after the 3rd quarter of 2015.


The numbers do not fluctuate as sharply in the graphs of the "Number of Applications Received", which could possibly be the cause.
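To sketch the contrast between the two graphs, here is a made-up example (these are not the actual USCIS figures): approvals fall sharply after 2015 Q3 while receipts stay roughly flat, which the quarter-over-quarter percentage change makes explicit.

```python
import pandas as pd

# Illustrative, invented quarterly counts -- not the real USCIS numbers.
quarters = ["2015 Q1", "2015 Q2", "2015 Q3", "2015 Q4", "2016 Q1"]
approved = pd.Series([21000, 22000, 23000, 15000, 14000], index=quarters)
received = pd.Series([22000, 22500, 23000, 22800, 22600], index=quarters)

# Quarter-over-quarter relative change: in this toy data, approvals drop
# about 35% after 2015 Q3 while receipts move by only a percent or so.
print(approved.pct_change().round(2))
print(received.pct_change().round(2))
```

If the real receipts series is similarly flat, the drop in approvals cannot be explained by a drop in incoming applications alone.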


Cleaning the Data

The 80/20 rule showed itself in my work: 80% of my time and effort was spent on cleaning the data and reconstructing it to fit my needs. Below are some screenshots that demonstrate the '80%' part.



Grouped columns

In total, to construct a dataset covering three years of information, I had to clean 12 such CSV files. In addition, USCIS does not seem to have a set standard for writing these CSV files, so every year the shape and the way the information was written into the CSV would change. For instance, the data from 2011 is written in a single CSV file with a horizontal shape, because there are columns of information for each quarter of the whole year. My goal was to create a dataset in which each column represents one type of variable (for example, a column with the "Number of Received Applications", or another column with the "Number of Approved Applications"), instead of representing the same variable in different 'flavors' (for example, "Number of Received Applications, 1st Quarter, 2014" next to "Number of Received Applications, 2nd Quarter, 2014"), so cleaning and reshaping the earlier-dated CSV files was unavoidable. That being said, I do intend to keep working on and expanding the dataset, so that with a longer time period I can do better predictive analysis.
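The wide-to-long reshape described above can be sketched with pandas `melt` and `pivot_table`. Everything here is hypothetical: the column names and counts are invented to mirror the 2011-style horizontal layout, not copied from the real files.

```python
import pandas as pd

# Hypothetical wide-format frame in the 2011 style: one column per
# measure-and-quarter combination.
wide = pd.DataFrame({
    "Category": ["All Other Relatives", "Spouses of LPRs"],
    "Received Q1 2014": [23456, 34567],
    "Received Q2 2014": [24000, 35000],
    "Approved Q1 2014": [20222, 30111],
    "Approved Q2 2014": [21000, 31000],
})

# Melt to long format, then split "Measure Quarter Year" into tidy columns.
long = wide.melt(id_vars="Category", var_name="measure_quarter",
                 value_name="count")
long[["Measure", "Quarter", "Year"]] = (
    long["measure_quarter"].str.split(" ", expand=True)
)
long = long.drop(columns="measure_quarter")

# Pivot so each measure ("Received", "Approved") becomes its own column:
# one row per category-year-quarter, one column per variable.
tidy = long.pivot_table(
    index=["Category", "Year", "Quarter"],
    columns="Measure",
    values="count",
).reset_index()
print(tidy)
```

The result has one column per variable and one row per category and quarter, which is exactly the shape that makes plotting a single measure over time straightforward.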

An example of the 2013 1st Quarter dataset CSV file is below.


Overall, the satisfaction of seeing a coherent graph that gives one a tangible perspective was worth every minute of my work. Seeing the result, even if it is a very simple histogram, does motivate me to continue developing my dataset, possibly finding other sources of data and answering questions such as: Is the system used by U.S. Citizenship and Immigration Services working productively, to its full potential? Can data analysis and modeling help improve it, and possibly show its inefficiencies in some areas? How can we shorten the wait time for all those families who wait to be united and are separated, while we are spending our tax money on a "Broken Machine"?

About Author

Vahe Voskerchyan

My main interest in Mathematics, in conjunction with my studies in Behavioral Economics and Philosophy, helped me to hone down to choosing Data science as a career. I see data science as an excellent ‘experimental lab’ where I...
