Understanding The Drivers Of CTR In Mobile Display Advertising
Introduction:
According to the Wireless Association's website (www.ctia.org), there were approximately 255.4 million American wireless subscribers at the end of 2007, a penetration rate of 84%. That number has only grown since, and mobile advertising has become one of the current frontiers in digital advertising. As greater amounts of data are gleaned at an ever more personal and constantly accessible level, advertisers are seeking ways to devise highly targeted campaigns that maximize brand engagement from every segment of their consumer base.
With that in mind, our group set out to understand what kinds of insights can be drawn in the mobile display advertising industry by exploring a massive dataset of Real-Time Bidding (RTB) data generously provided to us by a partner mobile advertising firm.
RTB Overview:
The diagram below presents a high-level overview of the steps within the programmatic advertising lifecycle. Our data deals specifically with mobile advertising, so both in-app and mobile web ad inventory are considered. The dataset provided to us was broken into three groups: "Bid Requests", "Impressions", and "Clicks". The mobile advertising firm is integrated with various ad exchanges and bids on ads that meet the criteria of its clients. A subset of those bid requests are "won", resulting in the impressions dataset. Lastly, if a user viewing an impression actually clicks on the ad, that even smaller subset is recorded in the clicks dataset.
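To make the funnel concrete, here is a minimal sketch of how the three tables relate. The S3 paths and the shared "auction_id" join key are hypothetical stand-ins, not the firm's actual schema:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rtb-funnel").getOrCreate()

# Hypothetical paths; the real data lived in a series of S3 buckets.
bids = spark.read.parquet("s3://bucket/bid_requests/")
impressions = spark.read.parquet("s3://bucket/impressions/")
clicks = spark.read.parquet("s3://bucket/clicks/")

# Each stage is a shrinking subset of the previous one: won bids become
# impressions, and clicked impressions become clicks.
won = bids.join(impressions, on="auction_id", how="inner")
clicked = won.join(clicks, on="auction_id", how="inner")

print(bids.count(), impressions.count(), clicks.count())
```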
Data Overview:
The data provisioned to us represented only a single month's worth of activity, yet was easily over 30TB in size when we accessed the files via a series of Amazon S3 buckets. As the figure below shows, the ratios of bids to impressions to clicks are on the scale of tens of thousands to one. Given that we were not privy to the firm's bidding algorithms that drove impressions, and given the sheer size of the dataset, we narrowed the scope of our investigation to focus solely on the factors driving click-through rate (CTR).
Therefore, we restricted our analysis to impressions located within the contiguous United States, each labeled by whether it received a click, resulting in a final data frame of over 30 million rows.
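A hedged sketch of that step under the same hypothetical schema: a "clicked" label derived from the clicks table, plus assumed "latitude"/"longitude" columns filtered against a rough bounding box for the contiguous US:

```python
from pyspark.sql import functions as F

# Label each impression with whether it was ever clicked (0/1).
labeled = (
    impressions
    .join(clicks.select("auction_id").distinct().withColumn("clicked", F.lit(1)),
          on="auction_id", how="left")
    .fillna({"clicked": 0})
)

# Rough bounding box for the contiguous United States (illustrative only).
contiguous = labeled.filter(
    F.col("latitude").between(24.5, 49.5) &
    F.col("longitude").between(-125.0, -66.9)
)
```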
For a view of the features we included in our analysis, and of how we dealt with vast amounts of missing information, we visualized the columns below. As discussed in many of our previous analyses, missingness can become valuable in itself: it can, for instance, identify a particular segment of devices or indicate a faulty link in a data pipeline.
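For reference, the per-column missing fractions behind a visualization like this can be computed in a single pass; a minimal sketch:

```python
from pyspark.sql import functions as F

# Fraction of null values per column, computed in one aggregation.
total = contiguous.count()
missing = contiguous.select([
    (F.sum(F.col(c).isNull().cast("int")) / total).alias(c)
    for c in contiguous.columns
])
missing.show(vertical=True)
```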
Exploratory Data Analysis:
Next, we computed a high-level overview of CTR across the features we selected, to give the firm the overarching trends in the data and possible avenues for further investigation.
Day of the week:
We observed that the greatest impression volume occurred on Thursdays, with the next highest volumes occurring over the weekends. The greatest CTR, however, was observed on Sundays.
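The day-of-week aggregation itself is a straightforward group-by; a sketch, assuming a timestamp column named "event_time" (a hypothetical name):

```python
from pyspark.sql import functions as F

# Impression volume and CTR by day of week. Since "clicked" is 0/1,
# its average is exactly the click-through rate.
ctr_by_dow = (
    contiguous
    .withColumn("dow", F.date_format("event_time", "EEEE"))
    .groupBy("dow")
    .agg(F.count("*").alias("impressions"),
         F.avg("clicked").alias("ctr"))
    .orderBy(F.desc("impressions"))
)
ctr_by_dow.show()
```

The same pattern, grouped on a different column, produced the ad size, device brand, carrier, and landing page summaries that follow.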
Day of the Month:
Next, we viewed how CTR shifted across the days of the month. In line with our observation of greater click rates over the weekends, we see somewhat cyclical behavior, with a dip in activity on Fridays leading into greater engagement on the weekends (highlighted in blue).
Ad Size:
Ad size refers to the height and width (in pixels) of the displayed advertisement. The below image shows how some of the various sizes would be displayed on both tablets and phones:
We observed that the vast majority of ads were of the smallest sizes (320x50 or 300x250), but that these sizes also recorded the lowest CTRs. The best-performing size was 480x320, an ad that covers the entire screen of a phone. This suggests that advertisers looking to optimize campaign performance could prioritize this size.
Device Brand and OS:
We compared the CTR of the various device brands as well as their total impression counts. It is interesting to note that Apple has the lowest CTR of all brands yet almost twice as many impressions as the next leading brand (Samsung), which contradicts the idea of spending more to get more. The next three brands have comparable click rates even though the number of impressions for each gets sequentially smaller.
In line with our insights on device brand, we see that Android (non-Apple) devices display a greater click-through rate.
Carrier Type:
When dividing the dataset by mobile carrier, we noticed that Boost Mobile had a higher CTR than most, followed by T-Mobile and AT&T. In terms of total impression volume, however, Boost had the third-lowest count, which suggests it could be one of the more efficient carriers to advertise with.
Landing Page:
Our dataset also contained a column indicating which page a user would be redirected to upon clicking the displayed advertisement. We saw that certain firms outperformed others, but that the firms varied widely in volume (likely based on the budget each had placed with the mobile advertising firm).
After performing this EDA, we ran some rudimentary models to understand which features could be important in determining click-through rate. We found that the "landing page" feature was consistently the most influential factor, supporting our intuition that particular clients could have greater budgets, stronger marketing strategies, or better initial brand recognition, all of which would naturally lead to an above-average CTR.
Therefore, we opted to dive deeper into a few of the firm's largest clients by volume and to evaluate several predictive models for each. Our end goal was to establish a framework the firm could use to help an individual client improve CTR for their campaigns.
Unbalanced Data:
Any predictive model run on this dataset suffered greatly from its highly imbalanced nature. The ratio of clicked to unclicked impressions was roughly 0.005, so a model that blindly predicted every impression as unclicked would perform quite well by most metrics. To remedy this imbalance, we downsampled the majority class in our training set until the classes were roughly balanced (50/50), and when tuning we used area under the precision-recall curve (AuPR) rather than the traditional AuROC as our evaluation metric: we wanted to optimize for precision rather than accuracy, and precision-recall curves are more informative than ROC curves under heavy class imbalance.
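A sketch of that rebalancing step: we split first so the held-out test set keeps the true class ratio, and only the training set is downsampled. Variable names continue from the sketches above:

```python
# Hold out a test set that preserves the true ~0.005 class ratio.
train_df, test_df = contiguous.randomSplit([0.8, 0.2], seed=42)

clicked_df = train_df.filter("clicked = 1")
unclicked_df = train_df.filter("clicked = 0")

# Sample the unclicked majority class down to roughly the size of the
# clicked minority class, yielding an approximately 50/50 training set.
fraction = clicked_df.count() / unclicked_df.count()
balanced_train = clicked_df.unionByName(
    unclicked_df.sample(withReplacement=False, fraction=fraction, seed=42)
)
```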
Next, we attempted to use all of the available binary classification models within PySpark's MLlib:
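Below is a condensed, hedged sketch of that model sweep rather than our exact training code. It assumes train_ready and test_ready already carry a single assembled "features" vector column alongside the "clicked" label; the encoding step that builds that column is sketched in the next section:

```python
from pyspark.ml.classification import (
    LogisticRegression, DecisionTreeClassifier, RandomForestClassifier,
    GBTClassifier, LinearSVC, NaiveBayes)
from pyspark.ml.evaluation import BinaryClassificationEvaluator

# AuPR on the untouched (still imbalanced) test set, per the section above.
evaluator = BinaryClassificationEvaluator(labelCol="clicked",
                                          metricName="areaUnderPR")

models = {
    "logistic": LogisticRegression(labelCol="clicked"),
    "decision_tree": DecisionTreeClassifier(labelCol="clicked"),
    "random_forest": RandomForestClassifier(labelCol="clicked"),
    "gbt": GBTClassifier(labelCol="clicked"),
    "linear_svc": LinearSVC(labelCol="clicked"),
    "naive_bayes": NaiveBayes(labelCol="clicked"),
}

for name, estimator in models.items():
    fitted = estimator.fit(train_ready)
    aupr = evaluator.evaluate(fitted.transform(test_ready))
    print(f"{name}: areaUnderPR = {aupr:.4f}")
```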
Client Analysis:
After performing extensive encoding, feature engineering, and data cleaning to shape the data into MLlib's required format for model ingestion, we were ready to further analyze the firm's key clients.
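A minimal sketch of that encoding step, using the Spark 3.x Pipeline API; the column list is illustrative, not our full feature set:

```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler

# Hypothetical categorical columns standing in for the real schema.
categorical = ["exchange", "carrier", "device_brand", "region"]

# Index each categorical column, one-hot encode it, then assemble all
# encoded vectors into the single "features" column MLlib expects.
indexers = [StringIndexer(inputCol=c, outputCol=c + "_idx",
                          handleInvalid="keep") for c in categorical]
encoder = OneHotEncoder(inputCols=[c + "_idx" for c in categorical],
                        outputCols=[c + "_vec" for c in categorical])
assembler = VectorAssembler(inputCols=[c + "_vec" for c in categorical],
                            outputCol="features")

prep = Pipeline(stages=indexers + [encoder, assembler]).fit(balanced_train)
train_ready = prep.transform(balanced_train)
test_ready = prep.transform(test_df)
```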
Viewing the landing pages by volume, we took the top four companies, Ford, Kirklands, Sephora, and StraighTalk, since we believed they would be the firm's most valuable contracts to investigate further.
Tree-Based Model Feature Importance:
In the charts below, we display, for each of the four clients, the most important features the models used to determine results. Columns whose titles are underlined in green represent IAB Categories, an agreed-upon standard for classifying the nature of a particular piece of advertising inventory; in essence, an IAB Category tells us what kind of app or website the ad was displayed on. The full list of IAB Categories and a deeper discussion can be found here:
From the charts above, we see that different features bear the most weight in determining CTR for each client. However, some columns, such as Exchange, Carrier, or Device Name, are common across most. These findings could highlight avenues for further investigation. For instance, the chart below displays a per-client breakout of which exchange (the firm used four major exchanges) their ads are typically purchased through.
From this chart we can readily observe that the Nexage exchange consistently produces a higher CTR. Taking Ford as an example, Nexage ads give them the highest CTR, yet Nexage is their least-used exchange of the four. Equipped with this information, a client service team member assigned to the Ford account could investigate the Nexage-purchased ads further and recommend that the client increase its spend with that exchange if the pattern holds!
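For reference, a sketch of how importance rankings like these can be recovered with readable names, continuing from the sketches above and assuming the ML-attribute metadata that VectorAssembler attaches to the "features" column is present:

```python
# Map vector positions back to column names via the assembler's metadata.
attrs = train_ready.schema["features"].metadata["ml_attr"]["attrs"]
pairs = [(a["idx"], a["name"]) for group in attrs.values() for a in group]
names = [name for _, name in sorted(pairs)]

# Fit one client's tree-based model and rank its feature importances.
rf_model = models["random_forest"].fit(train_ready)
top10 = sorted(zip(names, rf_model.featureImportances.toArray()),
               key=lambda kv: -kv[1])[:10]
for name, score in top10:
    print(f"{name}: {score:.4f}")
```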
Logistic Model Coefficients:
Another method of identifying drivers of CTR is to analyze the coefficients of a linear (logistic) regression. Here, we have taken the top five positive and bottom five negative coefficients for each client to surface any major drivers of performance:
Across the clients, we see that Region (US state) is often the most important contributor to campaign success. Clients looking to further target and segment their customer base could use these insights as a starting point for knowing which states hold their most engaged consumers.
On the other side of the coin, however, are each client's negative coefficients, which represent features that actually detract from a campaign's CTR. From this graph we can see that Region is still quite important, but also that IAB Category has a large impact. Clients can now identify which apps they are purchasing ads through that fall into these categories and decrease their spend on them accordingly.
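A sketch of how those extremes can be pulled from a fitted model, reusing the feature names recovered in the previous section:

```python
# Pair each logistic regression coefficient with its feature name and
# take the five most negative (detractors) and five most positive (drivers).
lr_model = models["logistic"].fit(train_ready)
coef_pairs = sorted(zip(names, lr_model.coefficients.toArray()),
                    key=lambda kv: kv[1])
print("Bottom 5 (detractors):", coef_pairs[:5])
print("Top 5 (drivers):", coef_pairs[-5:])
```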
Model Performance:
Lastly, we wanted to investigate whether different models were particularly well suited to certain industries or to RTB data as a whole. The table below displays both evaluation metrics, where AreaUnderPR emphasizes precision and AreaUnderROC overall ranking quality. We did not see any significant trends or differences in the results, but we believe that producing more advanced Random Forest or Neural Network models could be promising. However, another major lesson learned in dealing with such vast amounts of data is to take into consideration the computational time required to tune and evaluate more complex algorithms. In the end, additional client input would be necessary to determine the required quality, speed, and interpretability of the optimal model.
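For completeness, a sketch of how both table metrics can be produced from one set of predictions, reusing a fitted model from the sketches above:

```python
from pyspark.ml.evaluation import BinaryClassificationEvaluator

# Score once, then evaluate under both metrics by switching metricName.
predictions = lr_model.transform(test_ready)
for metric in ("areaUnderPR", "areaUnderROC"):
    score = BinaryClassificationEvaluator(
        labelCol="clicked", metricName=metric).evaluate(predictions)
    print(f"{metric}: {score:.4f}")
```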
Closing Thoughts:
We were very excited to tackle this massive dataset and to familiarize ourselves with implementing big-data-scale machine learning algorithms in PySpark and MLlib. While there are always many possible directions for further investigation, we found that the most important driver of CTR is, first and foremost, the client being represented. Using this finding, we delivered to the firm a reusable framework to predict CTR and to identify the features that drive, and detract from, a given client's current campaigns. We want to thank the partner firm for providing this great learning opportunity in the form of a real-world dataset, and we hope they find value in our analyses.
Thanks for browsing our work and don't hesitate to reach out if you have any additional questions or feedback on the approach and techniques used within this project.