Acquisition Due Diligence Automation for Smaller Firms
Introduction
Background
From 2020 to 2023, approximately 80,000 mergers and acquisitions, worth a combined $8.8 trillion, were closed. However, only 10-50% of those deals reached their expected value (i.e., were successful).
This problem stems from the resources acquisition firms must allocate when conducting due diligence on a target firm. Regardless of the target's size, the overhead cost of the analysis remains roughly constant, which is why acquisition firms focus primarily on larger targets.
During the due diligence process, a private equity firm hires an accounting firm to analyze the general ledger data for the purpose of judging the target company’s financial health. The main decision factors for the acquiring firm include the volatility, consistency, and growth of the target firm’s revenue stream.
Purpose
The goal of this project is to automate the due diligence process for acquiring small or midsize target firms. This allows for more informed decision-making on these firms, similar to what would be performed for larger ones, but with far less overhead.
Data
Transformations
The data for this project comes from a midsize manufacturing company that uses traditional accounting software. During transformation, non-sales revenue accounts were filtered out, and labels and transaction amounts were anonymized to protect privacy. The data includes three transaction descriptors: Description (transaction ID), Organization, and Account.
The Description field was split into two engineered fields: Desc1 (general ID) and Desc2 (specific ID). Transactions were aggregated by weekly or monthly accounting period, and these intervals were used to detect the rate and seasonality of revenue changes over a four-year span. Table 1 displays a sample of the resulting data.
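To make the transformation concrete, the pandas sketch below shows one way to perform the descriptor split and monthly aggregation. The file and column names (general_ledger.csv, Description, Amount, Date) and the hyphen split rule are assumptions for illustration, not the project's exact pipeline.

```python
import pandas as pd

# Load the anonymized general ledger export (file and column names are assumptions).
ledger = pd.read_csv("general_ledger.csv", parse_dates=["Date"])

# Split the Description field into a general ID (Desc1) and a specific ID (Desc2),
# assuming a delimiter such as "CB-1042" -> ("CB", "1042").
ledger[["Desc1", "Desc2"]] = ledger["Description"].str.split("-", n=1, expand=True)

# Aggregate transaction amounts and counts by monthly accounting period and Desc1.
monthly = (
    ledger.groupby([ledger["Date"].dt.to_period("M"), "Desc1"])["Amount"]
          .agg(total="sum", count="size")
          .reset_index()
)
print(monthly.head())
```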
Transactions
Figure 1 shows the sparsity of the data when grouped by Desc1, which suggests that the data must be clustered into higher-level groupings. A few Desc1s (e.g., CB) display both high transaction counts and large magnitudes over the 47-period range.
Pandemic Effects
Since the pandemic started in the first fiscal year (FY2020), it is important to explore the trends in transaction frequency and totals within the first two years. Figure 2 shows that both the maximum and minimum transaction counts occurred in FY2020, at the very beginning of the Covid-19 pandemic. This is further supported by the transaction sum, whose minimum occurred in FY2020 and whose maximum occurred during the recovery in FY2022. Based on these findings, it may be necessary to exclude FY2020 when detecting seasonality.
Methods
Clustering
To address the acquisition firm's decision factors, the data must be analyzed for variance. The sparseness of the records made it necessary to identify groups of transactions with similar variance, which led to the use of PCA for feature reduction to identify the Desc1s that vary the most from all the others. In Figure 3, Desc1 CB shows the greatest variation, approximately 6 times the average; EM and BK also separate clearly from the remaining Desc1s.
KMeans provided further verification of the clusters and helped determine the relationship between Desc1 and Account. Although each Desc1 is associated with a distinct set of Accounts, the relationship is not reciprocal: because Accounts occur across multiple Desc1s, it is better to group the data by Desc1 than by Account. In Figure 4, the Top 3 Desc1s make up 68% of total transactions, and CB alone contributes 46%.
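A sketch of the PCA step is shown below, assuming the aggregated monthly frame from the earlier sketch. Pivoting to one row per Desc1 (columns = periods) and standardizing each period before projecting are illustrative choices, not necessarily the project's exact setup.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# One row per Desc1, one column per monthly period, filled with transaction totals.
wide = monthly.pivot_table(index="Desc1", columns="Date", values="total", fill_value=0)

# Standardize each period, then project Desc1s onto two principal components.
scaled = StandardScaler().fit_transform(wide)
coords = PCA(n_components=2).fit_transform(scaled)

pca_view = pd.DataFrame(coords, index=wide.index, columns=["PC1", "PC2"])
# Desc1s far from the origin (e.g., CB, EM, BK in Figure 3) vary the most from the rest.
print(pca_view.sort_values("PC1", ascending=False).head())
```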
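The KMeans verification step could be sketched as below, clustering Desc1s on the principal-component coordinates from the previous sketch. The choice of k=3 and the use of PCA coordinates as features are assumptions for illustration and may differ from the project's exact configuration.

```python
from sklearn.cluster import KMeans

# Cluster Desc1s in PCA space to verify the visual groupings (e.g., Top 3 vs. Other).
km = KMeans(n_clusters=3, n_init=10, random_state=42)
pca_view["cluster"] = km.fit_predict(pca_view[["PC1", "PC2"]])
print(pca_view.sort_values("cluster"))
```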
Seasonal Decomposition
After splitting the data into Top 3 Desc1 and Other transactions, the next step is to analyze yearly and monthly consistency. This can be done with seasonal decomposition, along with additional transformations that account for Covid effects and relative (percentage) values.
In Figure 5, the trend for both transaction total and percentage is increasing. The upward trend is consistent with non-stationarity (Augmented Dickey-Fuller test p-value > 0.05, so the null hypothesis of a unit root cannot be rejected). However, the seasonality appears inconclusive for both.
In Figure 6, the percentage series shows much stronger seasonality than the transaction-total series, with a seasonal range approximately 4x the residual range. This suggests that Covid affected the seasonality of the transaction percentage.
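The decomposition and stationarity check could be sketched with statsmodels as follows, building on the monthly frame above. The Top 3 codes (CB, EM, BK) come from the clustering step, but the FY2020 cutoff date and the share-of-revenue construction used here are assumptions.

```python
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.stattools import adfuller

# Top 3 Desc1 share of total revenue per monthly period, with FY2020 dropped
# to limit Covid distortion (the cutoff date is an assumption).
totals = monthly.groupby("Date")["total"].sum()
top3 = monthly[monthly["Desc1"].isin(["CB", "EM", "BK"])].groupby("Date")["total"].sum()
top3_pct = (top3 / totals).loc["2020-10":]

# Additive decomposition into trend, seasonal, and residual components.
decomp = seasonal_decompose(top3_pct.to_timestamp(), model="additive", period=12)
decomp.plot()
plt.show()

# Augmented Dickey-Fuller test: p-value > 0.05 means a unit root cannot be rejected,
# i.e. the series is non-stationary, consistent with an upward trend.
adf_stat, p_value, *_ = adfuller(top3_pct)
print(f"ADF statistic={adf_stat:.3f}, p-value={p_value:.3f}")
```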
Models
Next, models are trained and tested on the Top 3 Desc1 percentage series, excluding FY2020 due to Covid. In this section, ARIMA modeling is compared against ensemble modeling (RandomForest and GradientBoosting). Seasonal decomposition made it evident that the data has a strong seasonal component once FY2020 is excluded, so ARIMA was chosen to capture that seasonality. Ensemble modeling, in turn, helped verify results on the abbreviated data and improve predictive performance.
ARIMA
Figure 7 (excluding the first year due to Covid effects) shows a purely seasonal model, meaning that the prior period's transaction percentage does not affect the current period's prediction. The pure seasonality is also evident in FY2021 appearing flat, because it serves as the base for predictions on FY2022 and FY2023. Lastly, modeling the transaction percentage improves the fit because it standardizes the scale and dampens extreme volatility. This ARIMA model is the best descriptive model, with an in-sample MAPE of 17.8%, and clearly represents the dataset's seasonality.
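A sketch of a purely seasonal fit with statsmodels SARIMAX is shown below, using the Top 3 percentage series with FY2020 already excluded. The (0,0,0)x(1,1,0,12) order is an illustrative assumption for a model driven only by its seasonal component, not the exact order selected in the project.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Purely seasonal specification: no non-seasonal AR/MA terms, seasonal AR(1) with
# seasonal differencing at a 12-month period.
y = top3_pct.to_timestamp()
fit = SARIMAX(y, order=(0, 0, 0), seasonal_order=(1, 1, 0, 12)).fit(disp=False)

# In-sample MAPE, skipping the first seasonal cycle lost to seasonal differencing.
fitted = fit.fittedvalues.iloc[12:]
actual = y.iloc[12:]
mape = np.mean(np.abs((actual - fitted) / actual)) * 100
print(f"In-sample MAPE: {mape:.1f}%")
```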
Ensemble
Ensemble modeling included RandomForest and GradientBoosting. As shown in Figure 8, the best model for predicting the Top 3 Desc1 percentage was GradientBoosting, with a cross-validation MAPE of 19.1% and an in-sample MAPE of 8.2%. The model was trained on a rolling 12-month window, with the 12th month serving as the target. GradientBoosting is the best choice for predictive modeling, even though forecasting beyond the sample was not attempted due to the limited dataset size.
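The lag-window setup for the GradientBoosting model could be sketched as follows: eleven prior months as features, the 12th month as the target. The exact lag construction and cross-validation scheme are assumptions based on the description above; top3_pct is the percentage series built earlier.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

# Build a supervised frame: lag_1 .. lag_11 predict the current month's percentage.
series = top3_pct.reset_index(drop=True)
frame = pd.DataFrame({f"lag_{i}": series.shift(i) for i in range(1, 12)})
frame["target"] = series
frame = frame.dropna()

X, y = frame.drop(columns="target"), frame["target"]
gbr = GradientBoostingRegressor(random_state=42)

# Time-ordered cross-validation so later months are never used to predict earlier ones.
cv = cross_val_score(gbr, X, y, cv=TimeSeriesSplit(n_splits=3),
                     scoring="neg_mean_absolute_percentage_error")
print(f"Cross-validation MAPE: {-cv.mean() * 100:.1f}%")

gbr.fit(X, y)
in_sample = np.mean(np.abs((y - gbr.predict(X)) / y)) * 100
print(f"In-sample MAPE: {in_sample:.1f}%")
```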
Conclusion
The main question for this dataset was whether the frequency and intensity of transaction peaks and troughs could be explained by seasonality or by randomness. From the best ARIMA model, we found that seasonality was the dominant component of the prediction, under the assumptions that the revenue stream was significantly affected by the pandemic and was driven by three transaction categories (Desc1s CB, EM, and BK).
In addressing the private equity firm's concerns, the target company's revenue stream was found to be quite volatile, with the Top 3 Desc1 transactions exhibiting a standard deviation 41 times the average. Because the variation is seasonal, however, no component of the revenue stream was inconsistent or sporadic. Lastly, the target company was experiencing reliable growth, as indicated by the significant non-stationarity and upward trend.
This project explored methods of automating acquisition due diligence: categorizing the data by transaction ID, clustering with PCA and KMeans to group the data, decomposing to validate trend and seasonality, and modeling with ARIMA and ensembles for yearly and monthly predictions, respectively. With these methods, private equity firms can conduct nearly as robust a due diligence analysis for small and midsize targets as they would for larger firms, but with less overhead.
Appendix
Another aspect of automating the due diligence process is converting manually generated Excel charts into automatically generated Python figures. Figure 9 shows the original Excel chart for monthly sales, produced by the accounting firm. Figure 10 shows the automated Python version created by this project, reproducing the same dual-axis format, bar charts, colored line plots, legend, labels, and annotations.
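A minimal matplotlib sketch of the dual-axis layout is shown below. The placeholder series and styling choices are illustrative only and stand in for the project's actual monthly sales figure.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder data standing in for the real monthly sales and running-total series.
months = pd.period_range("2022-01", periods=12, freq="M").astype(str)
monthly_sales = pd.Series(range(100, 220, 10), index=months)
running_total = monthly_sales.cumsum()

fig, ax_bars = plt.subplots(figsize=(10, 5))
ax_line = ax_bars.twinx()  # second y-axis sharing the same x-axis

ax_bars.bar(months, monthly_sales.values, color="steelblue", label="Monthly sales")
ax_line.plot(months, running_total.values, color="darkorange", marker="o",
             label="Running total")

ax_bars.set_ylabel("Monthly sales")
ax_line.set_ylabel("Running total")
ax_bars.set_title("Monthly Sales")

# Merge legend entries from both axes into a single legend box, mirroring the Excel layout.
handles = ax_bars.get_legend_handles_labels()[0] + ax_line.get_legend_handles_labels()[0]
labels = ax_bars.get_legend_handles_labels()[1] + ax_line.get_legend_handles_labels()[1]
ax_bars.legend(handles, labels, loc="upper left")
plt.tight_layout()
plt.show()
```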
Links & References
- M&A Article: Yahoo Finance
- Institute for Mergers, Acquisitions, and Alliances: USA M&A Statistics
- Github & Final Presentation: Acquisition Due Diligence Capstone Project