My Quest for Data Hygiene
Contributed by Adam Cone. He is currently enrolled in the NYC Data Science Academy 12-week full-time Data Science Bootcamp taking place from January 11th to April 1st, 2016. This post is based on his first class project, R visualization, due in the second week of the program.
Adam Cone
April 25, 2016
Open Produce is a small grocery store on the south side of Chicago. The owners wrote their own Point-of-Sale (POS) system to collect data relating to deliveries, inventory, sales, and other aspects of the business.
I received sales data as a set of .tsv files, which I read into R as data frames. After installing the R packages dplyr and chron, I commenced data cleaning. My idea was to get data with excellent hygiene and, in the process, come to understand the business. Afterwards, I would look at the data with no particular goal or preconceived notion and find an interesting aspect to analyze. Open-minded, right?
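The import step might look like the following sketch, assuming the files are tab-delimited and that each sale's date and time arrive as separate text columns (all file and column names here are hypothetical):

```r
library(dplyr)
library(chron)

# Hypothetical file names standing in for the actual POS exports.
sales_raw <- read.delim("sales.tsv", stringsAsFactors = FALSE)
items_raw <- read.delim("sale_items.tsv", stringsAsFactors = FALSE)

# Combine the separate date and time strings into one chron date-time.
# sale_date and sale_time are assumed column names.
sales_raw <- sales_raw %>%
  mutate(sale_datetime = chron(dates. = sale_date, times. = sale_time,
                               format = c(dates = "y-m-d", times = "h:m:s")))
```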
The following is the code I used to consolidate the data I received into two data frames:
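(Shown here in sketch form: the column names are assumptions rather than the actual POS field names.)

```r
# One row per sale: keep the sale-level fields and order chronologically.
sales_tbl <- sales_raw %>%
  select(sale_id, sale_datetime, total, payment_type, is_bulk, is_voided) %>%
  arrange(sale_datetime)

# One row per item sold, joined to its parent sale so that each item row
# also carries all of the sale-level data.
items_tbl <- items_raw %>%
  left_join(sales_tbl, by = "sale_id")
```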
Besides some basic formatting and joining, little data hygiene was accomplished in the above code. It produced two data frames. Before cleaning, sales_tbl has 619,506 rows, one for each independent sale.
Before cleaning, items_tbl has 1,973,166 rows, one for each item sold; items_tbl also contains all the data of sales_tbl.
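Those row counts are easy to confirm directly:

```r
nrow(sales_tbl)    # 619,506 sales before cleaning
nrow(items_tbl)    # 1,973,166 items before cleaning
glimpse(items_tbl) # column-by-column preview of the combined data
```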
Now it was time to understand the data and get it clean in preparation for analysis. As I went through the data, I saw some peculiar things. For example, some of the sales were "bulk orders": Sale_ID = 16,070 was for one item that cost $352.80 (see the video at the bottom of this blog for more details). This seemed unspecific and expensive for this kind of store, and there were over 1,000 such transactions. In order to get clean data, I asked Open Produce: what is a bulk order? I learned that Open Produce offers nearby cooperative housing operations (there are several in the University of Chicago catchment area) special prices on bulk sales. These items are priced differently, are categorized differently for tax purposes, and essentially constitute a business separate from the standard retail business. I was advised to drop all such rows from my analysis, which I did.
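As a sketch, assuming a logical column that flags bulk orders (is_bulk is a hypothetical name), the drop is one filter per table:

```r
# Drop bulk co-op orders: effectively a separate business from retail.
sales_tbl <- sales_tbl %>% filter(!is_bulk)
items_tbl <- items_tbl %>% filter(!is_bulk)
```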
However, I noticed further confusing aspects of the data:
- bulk orders
- negative sales
- payment types
- voided transactions
- sales over $2,000
- $0.00 sales
- $0.01 sales
- tab payments
- partial item categorization
- Instacart
After about seven days of email and phone correspondence, expanding my understanding of the business (tab transactions, Instacart, voided transactions, bulk orders, details of running credit cards, etc.), my new code to generate "clean" data was:
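A sketch of such a cleaning pipeline, with every column name an assumption standing in for whatever the POS schema actually uses:

```r
# Remove each suspect category identified above.
clean_sales_tbl <- sales_tbl %>%
  filter(!is_bulk,               # bulk co-op orders: a separate business
         !is_voided,             # voided transactions
         payment_type != "tab",  # tab payments, settled at a later date
         total > 0,              # negative and $0.00 sales
         total != 0.01,          # one-cent placeholder sales
         total <= 2000)          # implausibly large sales

# Keep only the items whose parent sale survived the filters.
clean_items_tbl <- items_tbl %>%
  semi_join(clean_sales_tbl, by = "sale_id")
```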
From the resulting summary table, it seems that my data-cleaning efforts had questionably useful results from a gross, single-statistic point of view. While the benefit seems low for the first four quantities, the last quantity, sales range, indicates that my data cleaning had at least one significant effect.
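A before-and-after comparison along these lines can be computed as follows; the particular statistics shown are illustrative choices:

```r
# Single-number summaries for a sales table.
summarise_sales <- function(df) {
  df %>% summarise(n_sales     = n(),
                   total_rev   = sum(total),
                   mean_sale   = mean(total),
                   median_sale = median(total),
                   sales_range = max(total) - min(total))
}

# One row of summaries per data version, labeled by an id column.
bind_rows(raw     = summarise_sales(sales_tbl),
          cleaned = summarise_sales(clean_sales_tbl),
          .id = "version")
```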
What if I check a time-dependent quantity graphically? Here's a graph of revenue, over the entire time span of my data, for the unclean data:
Now, the same quantity on the same axes with data that I spent days cleaning up:
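Both plots can be drawn on the same axes along the following lines, using ggplot2 (my package choice here; sale_datetime and total are assumed column names) and aggregating revenue by month:

```r
library(ggplot2)

# Monthly revenue for one version of the data, labeled for comparison.
revenue_by_month <- function(df, label) {
  df %>%
    mutate(month = format(as.Date(sale_datetime), "%Y-%m")) %>%
    group_by(month) %>%
    summarise(revenue = sum(total)) %>%
    mutate(version = label)
}

plot_df <- bind_rows(revenue_by_month(sales_tbl, "raw"),
                     revenue_by_month(clean_sales_tbl, "cleaned"))

# Shared axes across facets make the qualitative comparison easy.
ggplot(plot_df, aes(x = month, y = revenue, group = version)) +
  geom_line() +
  facet_wrap(~ version, ncol = 1) +
  labs(x = "month", y = "revenue ($)") +
  theme(axis.text.x = element_text(angle = 90, vjust = 0.5))
```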
These graphs look qualitatively identical to me: it appears that Open Produce's revenue has gone up consistently since they opened. Between the above table and these two graphs, I conclude that my data cleaning efforts did not necessarily yield analytical boons of commensurate value.
Although I didn't generate the kind of analysis I originally intended, I found this effort educational. Most importantly: next time I will set an analytical goal before I begin my data cleaning. The standard of "clean" depends entirely on what I will do with the data. For instance, if I just want to know whether I have a data set or not, I don't have to clean anything. If I care about total revenue, I don't need to bother getting rid of sales for $0.00, regardless of their idiosyncratic history. If I only care about how many credit cards were used, I don't need to worry about what any sale actually consisted of.
I conclude that while being open-minded about which stories in the data are interesting is a good thing, it's critical to go into the data-cleaning process with at least some kind of intention in mind. Otherwise, every possible missing or suspect value is a potentially catastrophic GIGO error that must be 100% addressed before any analysis can happen.