Machine Learning Applications in a Hedge Fund
Introduction
Most stock market data is not publicly available, even though individuals can access more market data through services such as Yahoo Finance than ever before. Numerai positions itself as the first interface between machine learning intelligence and global capital: it manages an institutional-grade long/short global equity strategy for the investors in its hedge fund, and it transforms and regularizes financial data into machine learning problems for a global network of data scientists. No financial domain knowledge is needed to develop a model. Numerai maintains a regularly updated open data source that provides high-quality, encrypted stock market data for building machine learning models.
Data Features
The data is clean and tidy, so you can apply whatever methods you like.
First, let's take a look at the data used for the competition round of Dec 14 to Dec 21, 2016: 21 features and 1 prediction target, with 136,573 observations for training and 13,518 observations for testing. The data has already been scaled to between 0 and 1.
The performance metric is logloss. Logloss is suitable for measuring the probability of a binary outcome: it takes the confidence of a prediction into account when penalizing incorrect classifications. For example, in a binary classification problem, a predicted probability of 0.99 expresses more confidence than one of 0.59 and is scored accordingly by logloss, whereas with a 0.5 threshold both would simply be classified as the same outcome.
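As a quick illustration (a minimal sketch using scikit-learn's `log_loss`, not part of the original workflow), a confident wrong prediction is penalized far more heavily than a hesitant one:

```python
from sklearn.metrics import log_loss

# log loss for one sample: -(y*log(p) + (1-y)*log(1-p))
# Confident (0.99) vs. hesitant (0.59) predictions for a true label of 1:
print(log_loss([1], [0.99], labels=[0, 1]))  # ~0.01
print(log_loss([1], [0.59], labels=[0, 1]))  # ~0.53

# If the true label is 0, the confident prediction is punished much harder:
print(log_loss([0], [0.99], labels=[0, 1]))  # ~4.61
print(log_loss([0], [0.59], labels=[0, 1]))  # ~0.89
```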
Exploratory Data Visualization
First, we checked the distribution of the training dataset using bar plots, box plots, and violin plots, as shown in the figures. From these plots we can see that the data is evenly distributed, with no significant differences among features, so we could not extract much information from them.
Second, we checked the correlations among all the features and found that more than half of them are highly correlated, so some feature importance analysis could help decide whether to reduce or expand the dimensionality.
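A minimal sketch of how these checks could be reproduced with pandas and seaborn; the file name and the `feature`/`target` column names are assumptions, not necessarily the exact names in the Numerai download:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

train = pd.read_csv("numerai_training_data.csv")            # assumed file name
features = [c for c in train.columns if c.startswith("feature")]

# Distribution checks: box plot of all features, violin plot of one feature by class
train[features].boxplot(rot=90)
plt.show()
sns.violinplot(x="target", y=features[0], data=train)
plt.show()

# Correlation check: heatmap of pairwise Pearson correlations
corr = train[features].corr()
sns.heatmap(corr, cmap="coolwarm", center=0)
plt.show()

# Count strongly correlated feature pairs (0.8 is an arbitrary threshold)
pairs = ((corr.abs() > 0.8).sum().sum() - len(features)) // 2
print("highly correlated feature pairs:", pairs)
```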
So for this project we have two plans for developing machine learning models on the Numerai data: a "Less" approach and a "More" approach.
Less Approach
In the "Less" approach, lasso regression and random forest were used for feature exploration, while logistic regression, random forest, and XGBoost were used for model training and development.
In the lasso model for feature selection, lambda was set to 1e-3; the result shown in the figure is that features 4, 6, 10, 13, 18, 19, 20, and 21 are significant and should be kept as important features. In the random forest results, features 6, 20, 13, 21, 10, 2, 7, 9, 5, 12, 14, 8, 11, 16, 15, 17, 1, 9, 4, 18, 3 are significant, which differs slightly from the lasso result. In any case, features 4, 6, 10, 13, 18, and 21 are identified as important by both models. Because the two results differ and it is difficult to decide whether only some features should be kept, all features were retained for the initial model development.
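A sketch of the two feature-importance checks (the lasso alpha mirrors the lambda of 1e-3 above; the forest size, file name, and column names are assumptions):

```python
import pandas as pd
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestClassifier

train = pd.read_csv("numerai_training_data.csv")            # assumed file name
features = [c for c in train.columns if c.startswith("feature")]
X, y = train[features].values, train["target"].values

# Lasso: coefficients shrunk exactly to zero are treated as unimportant
lasso = Lasso(alpha=1e-3).fit(X, y)
kept = [f for f, c in zip(features, lasso.coef_) if abs(c) > 0]
print("features kept by lasso:", kept)

# Random forest: rank features by impurity-based importance
rf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0).fit(X, y)
ranking = sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1])
print("random forest ranking:", [f for f, _ in ranking])
```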
Parameter | Values searched | Grid search result
Number of trees | range(50, 300, 50) | 50
Colsample by tree | [0.1, 0.2, 0.4, 0.6, 0.7, 0.8] | 0.6
Max depth | [2, 4, 6, 8] | 2
Subsample | [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8] | 0.8
Learning rate | [0.001, 0.1, 0.2, 0.3] | 0.1
Logistic regression was tried first, since it is easy to implement and efficient to run, to get a quick check on prediction performance. With cross-validation on the training dataset, we obtained a logloss of 0.68910 on the leaderboard. Then we tried a random forest with 300, 500, and 800 trees and cross-validation; the result was 0.69501. Finally, we tried XGBoost, which is well known from machine learning competitions, and ran a grid search for parameter optimization over the values shown in the table (the process is shown in the figure). The best combination of parameters is 0.6 for colsample by tree, 0.8 for subsample, 0.1 for learning rate, 50 for the number of estimators, and 2 for max depth. Feeding the grid search results into an XGBoost model gave 0.69028 on the leaderboard. Based on these results, logistic regression fits the data best, and the prediction could be improved further with more feature engineering work.
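A sketch of the XGBoost grid search over the parameter values in the table, using the standard scikit-learn `GridSearchCV` with the xgboost sklearn wrapper; the file name, column names, and fold count are assumptions:

```python
import pandas as pd
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

train = pd.read_csv("numerai_training_data.csv")            # assumed file name
features = [c for c in train.columns if c.startswith("feature")]
X, y = train[features], train["target"]

# Parameter grid taken from the table above
param_grid = {
    "n_estimators":     list(range(50, 300, 50)),
    "colsample_bytree": [0.1, 0.2, 0.4, 0.6, 0.7, 0.8],
    "max_depth":        [2, 4, 6, 8],
    "subsample":        [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],
    "learning_rate":    [0.001, 0.1, 0.2, 0.3],
}

# Optimize log loss with cross-validation
search = GridSearchCV(
    XGBClassifier(objective="binary:logistic"),
    param_grid,
    scoring="neg_log_loss",
    cv=5,
    n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_)    # e.g. colsample_bytree=0.6, subsample=0.8, ...
print(-search.best_score_)    # cross-validated log loss of the best model
```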
For this part, Python's scikit-learn was used for model development, since it provides efficient implementations of supervised and unsupervised machine learning algorithms.
Ridge Regression
We then tried ridge regression, where lambda is normally determined by the deviance.
After cross-validation, the lambda was found to be close to 0, which suggests there is no need for a penalty term. What happens if we change the cost function?
Two more cost functions were used to tune lambda: the left graph shows tuning with the logloss function, and the right graph uses classification accuracy. In both graphs the lambda is again close to 0, which is an unusual result for a regularized logistic regression model.
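The same experiment can be sketched in scikit-learn with `LogisticRegressionCV`, which parameterizes the penalty as C = 1/lambda, so a lambda near 0 shows up as a very large selected C; the candidate grid and fold count below are assumptions:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV

train = pd.read_csv("numerai_training_data.csv")            # assumed file name
features = [c for c in train.columns if c.startswith("feature")]
X, y = train[features], train["target"]

# Candidate penalties on a log scale; scikit-learn uses C = 1/lambda
Cs = np.logspace(-3, 4, 30)

# Tune with two different cost functions, as in the text
for scoring in ["neg_log_loss", "accuracy"]:
    model = LogisticRegressionCV(Cs=Cs, cv=5, penalty="l2",
                                 scoring=scoring, max_iter=1000)
    model.fit(X, y)
    print(scoring, "-> selected lambda:", 1.0 / model.C_[0])
```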
More Approach
Why is the penalty term (lambda) unnecessary?
Hypothesis: the features are not informative enough and the model risks high bias, so try feature expansion.
Feature engineering
- A neural network "expands features" automatically
- Expand the features from 21 to 42 by adding exp(-feature)
- Expand the features from 21 to 126, keeping the response within (0, 1), by the transformation below
- Expand the features from 126 to 864 by multiplying every two features to create cross terms
Taylor expansion:
Above are the density plots of each transformation. The first plot shows the density of the original feature; we can see a high density for class 1 where feature 21 is smaller than 0.5. Each transformation shows a similar distribution, and in the last plot the differing distributions of the transformations could improve accuracy, much like increasing resolution.
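A sketch of the expansion steps. The exp(-x) step and the cross terms follow the list directly; the 21-to-126 step is only approximated here with the first few Taylor-series terms of exp(-x), each of which stays inside (0, 1) for x in (0, 1), since the exact transformation formula appears only in the figure, and the subset of feature pairs that brings the count to 864 is not specified:

```python
import math
import numpy as np
import pandas as pd

def expand_features(X: pd.DataFrame) -> pd.DataFrame:
    """Expand the original feature matrix following the steps above (sketch)."""
    out = X.copy()

    # 21 -> 42: add exp(-x) for every feature (stays in (0, 1) for x in (0, 1))
    for c in X.columns:
        out[f"{c}_exp"] = np.exp(-X[c])

    # 21 -> 126 (assumed form): add Taylor-series terms x^k / k! of exp(-x)
    for c in X.columns:
        for k in (2, 3, 4, 5):
            out[f"{c}_t{k}"] = (X[c] ** k) / math.factorial(k)

    # Cross terms: multiply pairs of original features
    # (the exact subset of pairs used to reach 864 features is not specified)
    cols = list(X.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            out[f"{a}_x_{b}"] = X[a] * X[b]
    return out
```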
How does the feature expansion perform?
Neural Network
We trained the original data with a neural network, using several grid searches.
The best result uses the "RectifierWithDropout" activation, with a logloss of 0.692 on the training set, but the error on the test set is consistently large. Most of the predicted responses fall between 0.45 and 0.55, which makes the model too sensitive and unstable.
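The "RectifierWithDropout" activation name matches H2O's deep learning estimator, so assuming H2O was the tool used, one grid-search candidate might look like the sketch below (layer sizes, dropout ratios, epochs, and file name are placeholders):

```python
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

h2o.init()
train = h2o.import_file("numerai_training_data.csv")   # assumed file name
features = [c for c in train.columns if c.startswith("feature")]
train["target"] = train["target"].asfactor()           # treat as classification

model = H2ODeepLearningEstimator(
    activation="RectifierWithDropout",   # the best activation found above
    hidden=[64, 64],                     # placeholder layer sizes
    hidden_dropout_ratios=[0.5, 0.5],
    epochs=20,
    stopping_metric="logloss",
    nfolds=5,
)
model.train(x=features, y="target", training_frame=train)
print(model.logloss(xval=True))          # cross-validated log loss
```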
Ridge Regression
Ridge regression was used to train models on the original and the expanded feature sets.
 | Original features | 42 features | 126 features | 864 features
Lambda | 0.008277857 | 0.01942 | 0.09637935 | 0.226128
Accuracy on test set | 0.5206971 | 0.521893 | 0.5221127 | 0.5237723
From these results, as the number of features increases, the regularization term gets larger and the accuracy improves as well. The logloss improves by about 0.001; compared with the difference between the top and bottom scores on the leaderboard, this roughly 5% improvement is significant.
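A sketch of this comparison: fit an L2-penalized logistic regression to each feature set and score accuracy on a held-out split. The feature sets and the 80/20 split are illustrative; `expand_features`, `X`, and `y` refer to the earlier sketches:

```python
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# X, y: original feature matrix and binary target, loaded as in the earlier sketches
feature_sets = {
    "original": X,
    "expanded": expand_features(X),   # from the feature-expansion sketch above
}

for name, X_set in feature_sets.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X_set, y, test_size=0.2, random_state=0)
    model = LogisticRegressionCV(Cs=20, cv=5, penalty="l2", max_iter=1000)
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(name, "lambda:", 1.0 / model.C_[0], "accuracy:", round(acc, 4))
```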