My Quest for Data Hygiene

Posted on Apr 26, 2016

Contributed by Adam Cone. He is currently in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program taking place between January 11th and April 1st, 2016. This post is based on his first class project, R visualization (due in the 2nd week of the program).

Open Produce is a small grocery store on the south side of Chicago. The owners wrote their own Point-of-Sale (POS) system to collect data on deliveries, inventory, sales, and other aspects of the business.

I received sales data as a set of .tsv files, which I read into R as data frames. After installing the R packages dplyr and chron, I commenced data cleaning. My idea was to get the data into a state of excellent hygiene and, in the process, come to understand the business. Afterwards, I would look at the data with no particular goal or preconceived notion and find an interesting aspect to analyze. Open-minded, right?
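The import code itself is not reproduced in this post; a minimal sketch of that step, with placeholder file names (sales.tsv, items.tsv), might look like this:

    # Placeholder file names; the actual POS export files are not named in the post.
    library(dplyr)

    sales_raw <- read.delim("sales.tsv", stringsAsFactors = FALSE)
    items_raw <- read.delim("items.tsv", stringsAsFactors = FALSE)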

I then wrote a short block of code to consolidate the data I received into two data frames.
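That code is not shown here, but a rough sketch of the kind of formatting and joining involved might look like the following; the column names sale_id, sale_time, total, and payment_type are hypothetical, since the real POS schema is not shown in the post:

    # Hypothetical column names; the real POS schema is not shown in the post.
    # chron could handle the timestamps, but base POSIXct keeps the sketch simple.
    sales_tbl <- sales_raw %>%
      mutate(sale_time = as.POSIXct(sale_time)) %>%
      select(sale_id, sale_time, total, payment_type)

    # One row per item sold, carrying its parent sale's information as well.
    items_tbl <- items_raw %>%
      left_join(sales_tbl, by = "sale_id")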

Besides some basic formatting and joining, little data hygiene was accomplished in that consolidation step. Two data frames resulted. Before cleaning, sales_tbl has 619,506 rows, each describing an independent sale:

[Code output: preview of sales_tbl]

Before cleaning, items_tbl has 1,973,166 rows, one for each item sold. items_tbl also carries all the data in sales_tbl (suppressed in the following code echo):

[Code output: preview of items_tbl]

Now it was time to understand the data and get it clean in preparation for analysis. As I went through the data, I saw some peculiar things. For example, some of the sales were "bulk orders": Sale_ID 16,070 was for a single item that cost $352.80 (see the video at the bottom of this post for more detail). That seemed unspecific and expensive for this kind of store, and there were over 1,000 such transactions. In order to get clean data, I asked Open Produce what a bulk order was. I learned that Open Produce offers nearby cooperative housing operations (there are several in the University of Chicago catchment area) special prices on bulk sales. These items are priced differently, are categorized differently for tax purposes, and essentially constitute a separate business from the standard retail business. I was advised to drop all such rows from my analysis, which I did.
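The post's actual filter is not shown; assuming Open Produce supplied a vector of bulk-order Sale_IDs (bulk_sale_ids is a hypothetical name), dropping those rows with dplyr while keeping the originals intact might look like this:

    # Hypothetical: bulk_sale_ids is a vector of Sale_IDs flagged as bulk orders.
    clean_sales_tbl <- sales_tbl %>% filter(!(sale_id %in% bulk_sale_ids))
    clean_items_tbl <- items_tbl %>% filter(!(sale_id %in% bulk_sale_ids))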

However, I noticed further confusing aspects of the data:

  • bulk orders
  • negative sales
  • payment types
  • voided transactions
  • sales over $2,000
  • $0.00 sales
  • $0.01 sales
  • tab payments
  • partial item categorization
  • InstaCart

After about seven days of email and phone correspondence that expanded my understanding of the business (tab transactions, Instacart, voided transactions, bulk orders, the details of running credit cards, etc.), my new code to generate "clean" data was:

[Code output: data-cleaning code and a summary table comparing the unclean and cleaned data]

Judging from this table, my data-cleaning efforts had questionably useful results from a gross, single-statistic point of view. The benefit seems low for the first four quantities; only the last one, the sales range, indicates that the cleaning had at least one significant effect.
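The exact quantities in that table are not reproduced here, but a gross, single-statistic comparison of the two data sets could be computed along these lines (clean_sales_tbl is the hypothetical cleaned counterpart of sales_tbl from the sketch above; the summarized columns are assumptions):

    # Single-statistic summaries for one version of the sales data.
    summarize_sales <- function(df) {
      df %>% summarise(
        n_sales       = n(),
        total_revenue = sum(total, na.rm = TRUE),
        mean_sale     = mean(total, na.rm = TRUE),
        sales_range   = max(total, na.rm = TRUE) - min(total, na.rm = TRUE)
      )
    }

    # Side-by-side comparison of the unclean and cleaned data frames.
    bind_rows(unclean = summarize_sales(sales_tbl),
              clean   = summarize_sales(clean_sales_tbl),
              .id     = "version")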

What if I check a time-dependent quantity graphically? Here is a graph of revenue over the entire period my data spans, computed from the unclean data:

[Plot: revenue over time, unclean data]

Now, the same quantity on the same axes with data that I spent days cleaning up:

[Plot: revenue over time, cleaned data]
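A plot like the two above could be produced with ggplot2 along roughly these lines; the daily aggregation and the column names are assumptions rather than the post's actual code:

    library(ggplot2)

    # Aggregate revenue by calendar day, then plot it across the full date range.
    # Swap in clean_sales_tbl to produce the cleaned-data version of the plot.
    daily_revenue <- sales_tbl %>%
      mutate(sale_date = as.Date(sale_time)) %>%
      group_by(sale_date) %>%
      summarise(revenue = sum(total, na.rm = TRUE))

    ggplot(daily_revenue, aes(x = sale_date, y = revenue)) +
      geom_line() +
      labs(x = "Date", y = "Daily revenue ($)")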

These graphs look qualitatively identical to me: it appears that Open Produce's revenue has gone up consistently since the store opened. Between the table above and these two graphs, I conclude that my data-cleaning efforts did not necessarily yield analytical benefits commensurate with the time they took.

Although I didn’t generate the kind of analysis I originally intended, I found this effort educational. Most importantly: next time I will generate an analytical goal before I begin my data cleaning. The standard of “clean” depends entirely on what I will do with the data. For instance, if I just want to know weather I have a data set or not, I don’t have to clean anything. If I care about total revenue, I don’t need to bother getting rid of sales for $0.00, regardless of their idiosyncratic history. If I only care about how many credit cards were used, I don’t need to worry about what any sale actually consisted of.

I conclude that while being open-minded about what stories are interesting in the data is a good thing, it’s critical to at least go into the data cleaning process with some kind of intention in mind. Otherwise, every possible missing or suspect value is a potentially catastrophic GIGO error that must be 100% addressed before any analysis can happen.

About Author

Adam Cone

Adam Cone received his BA with honors in Math from NYU, where he conducted original research in computational neural science and experimental fluid mechanics. He received his MA in Applied Math from UCLA, concentrating in differential equations and...