My Quest for Data Hygiene

Adam Cone
Posted on Apr 26, 2016

Contributed by Adam Cone. He is currently enrolled in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program running from January 11th to April 1st, 2016. This post is based on his first class project, R visualization (due in the 2nd week of the program).

Open Produce is a small grocery store on the south side of Chicago. The owners wrote their own Point-of-Sale (POS) system to collect data relating to deliveries, inventory, sales, and other aspects of the business.

I received sales data as a set of .tsv files, which I read into R as data frames. After installing the R packages dplyr and chron, I commenced data cleaning. My idea was to get the data into excellent hygiene and, in the process, come to understand the business. Afterwards, I would look at the data with no particular goal or preconceived notion and find an interesting aspect to analyze. Open-minded, right?
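Since the raw files were tab-separated text, the import itself is routine. Here is a minimal sketch, assuming hypothetical file names (the actual paths and file list differ):

library(dplyr)  # data-manipulation verbs
library(chron)  # date/time handling for the timestamp columns

# read.delim() parses tab-separated files into data frames
sales_df     <- read.delim("sales.tsv", stringsAsFactors = FALSE)
raw_items_df <- read.delim("items.tsv", stringsAsFactors = FALSE)

# wrap as a dplyr tbl for nicer printing and piping
sales_tbl <- tbl_df(sales_df)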

The following code is what I used to consolidate the data I received into two data frames:

https://gist.github.com/adamcone/04eb2973fffcee36d59611e672efac3e

Besides some basic formatting and joining, little data hygiene is accomplished in the above code, which produced two data frames. Before cleaning, sales_tbl has 619,506 rows, each describing an independent sale:

https://gist.github.com/adamcone/61323fa7a87875f1baee8215593701c9

[code output 1: sales_tbl]

Before cleaning, items_tbl has 1,973,166 rows, one for each item sold. items_tbl also contains all the data of sales_tbl (suppressed in the following code echo):

https://gist.github.com/adamcone/e61f3c589ac8d734eec56aa556437902

[code output 2: items_tbl]
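As a rough illustration of that relationship, the consolidation could take the shape below. This is a sketch, not the gist's actual code: raw_items_df is an assumed name, and the join key is a guess (Sale_ID does appear later in the post):

library(dplyr)

# hypothetical sketch: attach every sale's columns to its line items
# by joining on the shared sale identifier
items_tbl <- raw_items_df %>%
  left_join(sales_tbl, by = "Sale_ID")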

Now it was time to understand the data and get it clean in preparation for analysis. As I went through the data, I saw some peculiar things. For example, some of the sales were “bulk orders”: Sale_ID = 16,070 was for one item that cost $352.80 (see the video at the bottom of this blog for more details). This seemed unspecific and expensive for this kind of store, and there were over 1,000 such transactions. In order to get clean data, I asked Open Produce: what is a bulk order? I learned that Open Produce offers nearby cooperative housing operations (there are several in the University of Chicago catchment area) special prices on bulk sales. These items are priced differently, categorized differently for tax purposes, and essentially constitute a separate business from the standard retail business. I was advised to drop all such rows from my analysis, which I did.
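The fix itself is a one-line filter. A minimal sketch, assuming a hypothetical logical column is_bulk_order (the real flag in the POS schema may be a category code):

library(dplyr)

# keep only non-bulk rows; items_tbl carries the sales columns too,
# so the same condition applies to both tables
sales_tbl <- filter(sales_tbl, !is_bulk_order)
items_tbl <- filter(items_tbl, !is_bulk_order)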

However, I noticed further confusing aspects of the data (illustrated in the filter sketch after this list):

  • bulk orders
  • negative sales
  • payment types
  • voided transactions
  • sales over $2,000
  • $0.00 sales
  • $0.01 sales
  • tab payments
  • partial item categorization
  • InstaCart
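To make the list concrete, here is a hedged sketch of what a combined filter over several of these issues could look like in dplyr. Every column name below (is_bulk_order, voided, total, payment_type) is an assumption about the schema; the actual cleaning logic is in the gist that follows:

library(dplyr)

clean_sales_tbl <- sales_tbl %>%
  filter(!is_bulk_order,          # bulk orders: the separate co-op business
         !voided,                 # voided transactions never happened
         total > 0.01,            # drop $0.00 and $0.01 placeholder sales
         total <= 2000,           # drop implausibly large retail sales
         payment_type != "tab")   # tab payments settle later, outside the POS flow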

After about seven days of email and phone correspondence that expanded my understanding of the business (tab transactions, InstaCart, voided transactions, bulk orders, the details of running credit cards, etc.), my new code to generate “clean” data was:

https://gist.github.com/adamcone/47b1c9bac969e1621e3173057c46a3c1

[code output 3: summary statistics, before vs. after cleaning]

From this table, it seems that my data-cleaning efforts had questionably useful results from a gross, single-statistic point of view. While the benefit seems low for the first four quantities, the last one, sales range, indicates that my data cleaning had at least one significant effect.

What if I checked a time-dependent quantity graphically? Here’s a graph of revenue, over the entire time span of my data, for the unclean data:

https://gist.github.com/adamcone/d16d68723bee568336dcc4da4aa2ccbb

[code output 4: revenue over time, unclean data]
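Both this graph and the next come from the same plotting code (the gist above). As a rough sketch of the approach, assuming sales_tbl has a Date column date and a numeric total:

library(dplyr)
library(ggplot2)

sales_tbl %>%
  group_by(date) %>%                    # one row per day
  summarise(revenue = sum(total)) %>%   # daily revenue
  ggplot(aes(x = date, y = revenue)) +
  geom_line() +
  labs(x = "date", y = "daily revenue ($)")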

Now, the same quantity on the same axes with data that I spent days cleaning up:

https://gist.github.com/adamcone/d16d68723bee568336dcc4da4aa2ccbb

[code output 5: revenue over time, cleaned data]

These graphs look qualitatively identical to me: it appears that Open Produce’s revenue has gone up consistently since they opened. Between the above table and these two graphs, I conclude that my data cleaning efforts did not necessarily yield analytical boons of commensurate value.

Although I didn’t generate the kind of analysis I originally intended, I found this effort educational. Most importantly: next time I will set an analytical goal before I begin data cleaning. The standard of “clean” depends entirely on what I will do with the data. For instance, if I just want to know whether I have a data set or not, I don’t have to clean anything. If I care about total revenue, I don’t need to bother removing sales for $0.00, regardless of their idiosyncratic history. If I only care about how many credit cards were used, I don’t need to worry about what any sale actually consisted of.
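As a toy check of the $0.00 point (the total column is an assumption): zero-valued rows contribute nothing to a sum, so a revenue total is identical with or without them:

# total revenue with and without the $0.00 rows -- same number
sum(sales_tbl$total)
sum(sales_tbl$total[sales_tbl$total != 0])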

I conclude that while being open-minded about what stories are interesting in the data is a good thing, it’s critical to at least go into the data cleaning process with some kind of intention in mind. Otherwise, every possible missing or suspect value is a potentially catastrophic GIGO error that must be 100% addressed before any analysis can happen.

About Author

Adam Cone

Adam Cone received his BA with honors in Math from NYU, where he conducted original research in computational neural science and experimental fluid mechanics. He received his MA in Applied Math from UCLA, concentrating in differential equations and...
