R Shiny Project: A/B Test Dashboard
The purpose of this project is to build an intuitive data dashboard that allows non-technical peers and leaders to understand the results of an analysis.
Taken directly from the Kaggle page:
“A company recently introduced a new bidding type, “average bidding”, as an alternative to its existing bidding type, called “maximum bidding”. One of our clients, ….com, has decided to test this new feature and wants to conduct an A/B test to understand if average bidding brings more conversions than maximum bidding.
The A/B test has run for 1 month and ….com now expects you to analyze and present the results of this A/B test.”
While Kaggle didn’t specify the difference between these bidding types, or the platform they run on, research showed they are Google Ads bidding strategies. The key difference: Maximum Bidding works best when you are advertising a specific product, say a particular watch from your watch store, while Average Bidding is better suited to advertising your store or a category of products, say watches.
Maximum Bidding was missing one day, which amounted to one row of data. Of the several options, I mainly considered two: (a) impute values for the missing day based on the averages of both groups, or (b) delete the matching day from Average Bidding as well. With only 30 rows of data, option (b) seemed the more balanced choice, so I dropped that day from both groups.
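Option (b) can be sketched in R as below. The column name `date` and the toy data frames are hypothetical stand-ins for the real dataset; here Maximum Bidding is missing day 3.

```r
# Toy frames stand in for the real data; `date` is an assumed column name.
avg_bid <- data.frame(date = 1:4, impressions = c(100, 120, 90, 110))
max_bid <- data.frame(date = c(1, 2, 4), impressions = c(95, 105, 100))

# Keep only days present in BOTH groups so the comparison stays balanced.
common_days <- intersect(avg_bid$date, max_bid$date)
avg_bid <- avg_bid[avg_bid$date %in% common_days, ]
max_bid <- max_bid[max_bid$date %in% common_days, ]

nrow(avg_bid)  # both groups now have 3 matched days
```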
I utilized a power test to see if twenty-nine days provided enough data to statistically answer whether one variation was better than the other. The power test indicated that 29 days was sufficient.
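A power check of this kind can be sketched with base R's `power.prop.test`. The rates below are illustrative placeholders, not the project's actual numbers; solving for `n` at an assumed baseline rate and lift shows how many observations per group a given power level requires.

```r
# Two-proportion power calculation (base R, stats package).
# p1/p2 are assumed illustrative conversion rates, not the project's data.
pt <- power.prop.test(p1 = 0.05, p2 = 0.055,
                      sig.level = 0.05, power = 0.80)
ceiling(pt$n)  # observations needed per group at these assumed rates
```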
I engineered Clickthrough Rate, Purchase Rate, and Cart Completion Rate as the Conversion Metrics to answer the client's question: does Average Bidding have a higher Conversion Rate?
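A minimal sketch of engineering these metrics, assuming column names (`impressions`, `clicks`, `carts`, `purchases`) and metric definitions that may differ from the actual dataset:

```r
# Toy one-row frame; column names are assumptions, not the real schema.
metrics <- data.frame(impressions = 1000, clicks = 50,
                      carts = 20, purchases = 10)

metrics <- transform(metrics,
  clickthrough_rate    = clicks / impressions,   # ad views -> clicks
  purchase_rate        = purchases / clicks,     # clicks -> purchases
  cart_completion_rate = purchases / carts       # carts -> purchases
)
metrics$clickthrough_rate  # 0.05
```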
We see Maximum Bidding has a higher Conversion Rate for each metric.
But does that mean Average Bidding results in fewer conversions (i.e. purchases)? This is where Key Metrics come in.
Key Metrics are top-level, decision-driving metrics. Relating to the client's question, we determined the Key Metrics to be Impressions (number of ad views), Website Clicks (number of ad clicks), and Purchases (number of purchases).
From this standpoint, Average Bidding performed better.
Average Bidding generated about a million more Impressions, meaning its ads were viewed roughly one million more times, a valuable consideration. And Average Bidding edged out more Purchases from fewer clicks.
But how about the costs?
I engineered Cost Metrics to answer these questions: Cost per Impression, Cost per Click, and Cost per Purchase. Each is calculated by dividing Spend by the metric's count (e.g. Spend / Impressions = Cost per Impression).
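As a sketch with illustrative numbers (not the project's data), each Cost Metric is Spend divided by the corresponding count:

```r
# Toy one-row frame; all values are illustrative placeholders.
costs <- data.frame(spend = 2000, impressions = 100000,
                    clicks = 5000, purchases = 400)

costs <- transform(costs,
  cost_per_impression = spend / impressions,  # 0.02
  cost_per_click      = spend / clicks,       # 0.40
  cost_per_purchase   = spend / purchases     # 5.00
)
```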
Average Bidding costs less at each step of the funnel. Notably, Cost per Impression was 50% lower and Cost per Purchase 14% lower, as shown by the Lift values (the percentage by which Average Bidding was above or below Maximum Bidding).
Average Bidding's Spend was 10.43% lower, yet it achieved roughly 1,000,000 more Impressions and a few hundred more Purchases.
While Maximum Bidding had a better Conversion Rate, it had a significantly lower number of Impressions and slightly lower number of Purchases.
I anticipate Maximum Bidding users may be more loyal and spend more over time. Unfortunately, this can’t be confirmed with the available data. Had this been a real-world example, I would continue to collect data on each group's customers.
I recommend Average Bidding. Considering the limited data, Average Bidding appears to be the more effective strategy overall because it generates significantly more awareness (Impressions) and slightly more Purchases at a lower cost.
Future improvements:
- Simplify App Presentation
- Easy import from data collection platforms
- Prompt Queries: LLM integration to engage in conversational Q and A
- Would love to have data on future purchases of each group