Machine Learning Application in a Hedge Fund

Posted on Dec 21, 2016

Introduction

Most stock market data is not publicly available, even though individuals now have access to more market data through services like Yahoo Finance than ever before. Numerai is the first interface between machine learning intelligence and global capital: it manages an institutional-grade long/short global equity strategy for its hedge fund investors, and it transforms and regularizes financial data into machine learning problems for a global network of data scientists. No financial domain knowledge is needed to develop models. Numerai maintains a regularly updated open data source of high-quality, encrypted stock market data for building machine learning models.

Data Features

The data is clean and tidy, and you can apply whatever methods you like.

First, let's take a look at the data, which was used for the competition running Dec 14 to Dec 21, 2016: 21 features and 1 prediction target, with 136,573 observations for training and 13,518 observations (with the same 21 features) for testing. The data has already been scaled between 0 and 1.


Figure 1: Data Summary
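
As a quick check, the training and test sets can be loaded and summarized with pandas. A minimal sketch; the file names below are the usual Numerai download names and may differ:

import pandas as pd

# File names are assumptions based on the usual Numerai download bundle.
train = pd.read_csv("numerai_training_data.csv")
test = pd.read_csv("numerai_tournament_data.csv")

print(train.shape)       # expect (136573, 22): 21 features plus the target
print(test.shape)        # expect 13518 rows with the 21 features
print(train.describe())  # every feature is already scaled to [0, 1]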

The model performance metric is logloss, which is suitable for measuring the probability of a binary outcome: it takes the confidence of a prediction into account when penalizing incorrect classifications. For example, in a binary classification problem, a predicted probability of 0.99 carries more confidence than one of 0.59 under the logloss measure, yet with a 0.5 threshold both would be classified as the same outcome.
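
For N observations with true labels y_i in {0, 1} and predicted probabilities p_i, the logloss is defined as:

\text{logloss} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \right]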


Exploratory Data Visualization

First, we checked the distribution of the training dataset using bar plots, box plots, and violin plots, as shown in the figures below. The data is evenly distributed, with no significant difference among the features, so we could not extract much information from these plots.

[Figures: feature balance verification, feature value distributions, and violin plots]
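
The violin plot, for instance, can be reproduced along these lines (a sketch assuming the standard Numerai column names feature1..feature21 and target):

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

train = pd.read_csv("numerai_training_data.csv")  # assumed file name
features = [c for c in train.columns if c.startswith("feature")]

# Reshape to long format so all 21 features share one axis.
long_df = train.melt(id_vars="target", value_vars=features,
                     var_name="feature", value_name="value")

plt.figure(figsize=(14, 6))
sns.violinplot(data=long_df, x="feature", y="value",
               hue="target", split=True)  # compare the two classes per feature
plt.xticks(rotation=90)
plt.tight_layout()
plt.show()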

Second, we checked the correlations among all the features and found that more than half of them are highly correlated. This suggests doing some feature importance analysis to decide whether dimension reduction or expansion is warranted.

[Figure: correlation matrix of the 21 features]
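
The correlation check itself is short in pandas and seaborn (same assumed column names as above):

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

train = pd.read_csv("numerai_training_data.csv")  # assumed file name
features = [c for c in train.columns if c.startswith("feature")]

corr = train[features].corr()  # pairwise Pearson correlations
sns.heatmap(corr, cmap="RdBu_r", center=0, square=True)
plt.title("Feature correlation matrix")
plt.show()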

So for this project, we developed two plans for machine learning models on the Numerai problem: a "Less" approach and a "More" approach.

[Figure: overview of the "Less" and "More" approaches]

Less Approach

In the "Less" approach, lasso regression and random forest were used for feature exploration, while logistic regression, random forest, and XGBoost were used for model training and development.

In the lasso model for feature reduction, lambda was set to 1e-3; the results shown in the figure below indicate that features 4, 6, 10, 13, 18, 19, 20, and 21 are significant and should be kept as important features. In the random forest model, the features ranked by importance are 6, 20, 13, 21, 10, 2, 7, 9, 5, 12, 14, 8, 11, 16, 15, 17, 1, 19, 4, 18, and 3, which differs slightly from the lasso result. Still, features 4, 6, 10, 13, 18, and 21 were identified as important by both models. Given these differing results, it is difficult to decide whether to keep only a subset of features for modeling, so all features were kept for the initial model development.

[Figure: lasso coefficients and random forest feature importance]
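
A minimal scikit-learn sketch of this screening step (the alpha below matches the lambda = 1e-3 mentioned above; the tree count and column names are assumptions):

import pandas as pd
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestClassifier

train = pd.read_csv("numerai_training_data.csv")  # assumed file name
features = [c for c in train.columns if c.startswith("feature")]
X, y = train[features], train["target"]

# Lasso with lambda = 1e-3: features with non-zero coefficients survive.
lasso = Lasso(alpha=1e-3).fit(X, y)
kept = [f for f, c in zip(features, lasso.coef_) if c != 0]
print("lasso keeps:", kept)

# Random forest: rank features by impurity-based importance.
rf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
rf.fit(X, y)
ranked = sorted(zip(features, rf.feature_importances_),
                key=lambda t: t[1], reverse=True)
print("random forest ranking:", [f for f, _ in ranked])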


Parameter           Grid search values
Number of trees     range(50, 300, 50)
Colsample by tree   [0.1, 0.2, 0.4, 0.6, 0.7, 0.8]
Max depth           [2, 4, 6, 8]
Subsample           [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
Learning rate       [0.001, 0.1, 0.2, 0.3]


Logistic regression was selected first for model training, since it is easy to implement and efficient to run, giving us a quick check on prediction performance. With cross-validation on the training dataset, it achieved a logloss of 0.68910 on the leaderboard. Then we tried the random forest algorithm with 300, 500, and 800 trees and cross-validation; the result was 0.69501. Finally, we tried XGBoost, which is famous in machine learning competitions. We implemented a grid search for parameter optimization over the values shown in the table; the process is shown in the figures below. The best combination of parameters was 0.6 for colsample by tree, 0.8 for subsample, 0.1 for learning rate, 50 for the number of estimators, and 2 for max depth. Feeding the grid search results into an XGBoost model gave 0.69028 on the leaderboard. Based on the results of these models, logistic regression fit the data best, and the prediction could be improved further with more feature engineering work.
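
The grid search can be sketched with GridSearchCV and xgboost's scikit-learn wrapper. In practice, tuning one parameter at a time, as the curves below suggest, is far cheaper than running the full grid:

import pandas as pd
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

train = pd.read_csv("numerai_training_data.csv")  # assumed file name
features = [c for c in train.columns if c.startswith("feature")]
X, y = train[features], train["target"]

# The grid from the table above.
param_grid = {
    "n_estimators":     list(range(50, 300, 50)),
    "colsample_bytree": [0.1, 0.2, 0.4, 0.6, 0.7, 0.8],
    "max_depth":        [2, 4, 6, 8],
    "subsample":        [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],
    "learning_rate":    [0.001, 0.1, 0.2, 0.3],
}

search = GridSearchCV(XGBClassifier(objective="binary:logistic"),
                      param_grid, scoring="neg_log_loss", cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
# The post reports the best combination as colsample_bytree=0.6, subsample=0.8,
# learning_rate=0.1, n_estimators=50, max_depth=2.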

[Figures: grid search curves for colsample_bytree, subsample, learning_rate, and max_depth]

In this part, Python's scikit-learn was used for model development, since it provides efficient implementations of supervised and unsupervised machine learning algorithms.

Ridge Regression

We tried ridge regression, in which lambda is typically chosen by the cross-validated deviance.

 

[Figure: cross-validated deviance as a function of lambda]

 

After cross-validation, the lambda was found to be close to 0, which suggests there is no need for a penalty term. What happens if we change the cost function?

[Figures: lambda tuned by logloss (left) and by classification accuracy (right)]

Two more kinds of cost function were used to tune lambda: the left graph shows the tuning with the logloss function, and the right graph uses classification accuracy. From both graphs we again found lambda close to 0, which is very unusual for a logistic regression model.
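
The tuning above appears to use glmnet-style cross-validation in R; an equivalent sketch in scikit-learn, where the regularization strength is expressed as C = 1/lambda, would be:

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV

train = pd.read_csv("numerai_training_data.csv")  # assumed file name
features = [c for c in train.columns if c.startswith("feature")]
X, y = train[features], train["target"]

# Search a wide lambda range; scoring by log loss mirrors the left-hand plot.
Cs = np.logspace(-4, 4, 30)
clf = LogisticRegressionCV(Cs=Cs, penalty="l2", scoring="neg_log_loss",
                           cv=5, max_iter=1000).fit(X, y)
print("lambda selected by CV:", 1.0 / clf.C_[0])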

More Approach

What causes lambda to be unnecessary?

Hypothesis: the features are not sufficient, so there is a risk of high bias; try feature expansion.

Feature Engineering

  • A neural network "expands" features automatically
  • Expand the features from 21 to 42 with exp(-feature)
  • Expand the features from 21 to 126, keeping the response within (0, 1), via the transformations below
  • Expand the features from 126 to 864 by multiplying pairs of features to create cross terms

Taylor expansion: 

[Formula images: Taylor expansions of exp/log and sin/cos/tanh]
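
A sketch of the 21-to-126 expansion under these assumptions (the exact transformation set is inferred from the formulas above: exp(-x), log(1+x), sin, cos, and tanh, all of which stay well behaved on [0, 1]); the cross-term step multiplies pairs of columns in the same spirit:

import numpy as np
import pandas as pd

def expand_features(X: pd.DataFrame) -> pd.DataFrame:
    # Each original column yields five bounded transforms plus itself,
    # turning 21 features into 21 * 6 = 126.
    out = {}
    for col in X.columns:
        x = X[col]
        out[col] = x
        out[col + "_exp"] = np.exp(-x)
        out[col + "_log"] = np.log1p(x)
        out[col + "_sin"] = np.sin(x)
        out[col + "_cos"] = np.cos(x)
        out[col + "_tanh"] = np.tanh(x)
    return pd.DataFrame(out)

train = pd.read_csv("numerai_training_data.csv")  # assumed file name
features = [c for c in train.columns if c.startswith("feature")]
expanded = expand_features(train[features])
print(expanded.shape)  # (n_rows, 126)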

[Figure: density plots of the original feature and each transformation]

Above are the density plots of each transformation. The first graph is the density plot of the original feature; for feature 21 it shows high density for class 1 at values smaller than 0.5. Each transformation shows a similar distribution, and, as the last graph suggests, the slightly different distributions of the transformations can improve accuracy, much like increasing resolution.

How does the feature expansion perform?

Neural Network

We trained the original data with a neural network, running several grid searches.

The best result used the "RectifierWithDropout" activation, with a logloss of 0.692 on the training set, but the error on the test set was always large. Most of the predicted responses fall between 0.45 and 0.55, which makes the model too sensitive and unstable.
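
The activation name suggests the model was built with H2O; a minimal sketch of such a run (layer sizes and epochs are assumptions):

import h2o
from h2o.estimators import H2ODeepLearningEstimator

h2o.init()
frame = h2o.import_file("numerai_training_data.csv")  # assumed file name
features = [c for c in frame.columns if c.startswith("feature")]
frame["target"] = frame["target"].asfactor()  # binary classification

model = H2ODeepLearningEstimator(activation="RectifierWithDropout",
                                 hidden=[64, 64],  # assumed layer sizes
                                 epochs=10,        # assumed
                                 nfolds=5)
model.train(x=features, y="target", training_frame=frame)
print(model.logloss(xval=True))  # cross-validated logloss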

 

Ridge Regression

Ridge regression was then used to train models on the original features and on the three expanded feature sets.

                       Original (21)   42 features   126 features   864 features
Lambda                 0.008277857     0.01942       0.09637935     0.226128
Accuracy on test set   0.5206971       0.521893      0.5221127      0.5237723

From these results, as the number of features increases, the regularization term (lambda) grows and the test accuracy improves as well. The logloss improves by about 0.001; compared with the difference between the top and bottom scores on the leaderboard, this roughly 5% improvement is significant.

 

About Authors

leizhang

He received his PhD in Physics from the City University of New York in 2013, and recently completed postdoctoral projects funded by the CDMRP (Congressionally Directed Medical Research Programs, Department of Defense) and the US Department of Energy,...