Predictions for Everyone with the Price Prediction App: An MLOps Platform
Introduction:
Deploying a machine learning classifier to predict the future price movement of financial assets can involve a lot of software and infrastructure setup to train models and visualize predictions correctly in real time. The Price Prediction App is designed to streamline this process for data scientists and machine learning engineers.
How it works:
The basic workflow of the Price Prediction App is an extension of a typical machine learning model development process. The app is broken down into three steps:
1) Label future price movement and configure training/test data into a CSV dataframe for offline EDA and training.
2) Upload and compare models and their train/test results.
3) Deploy and visualize live price predictions and model performance.
Inspiration and Development:
The motivation behind the Price Prediction App is to provide cloud-based MLOps infrastructure and an environment where model pipelines can quickly be experimented with and iterated on. It offers over 50 core macroeconomic input features, indexed hourly and dating back to 2017, for feature engineering, model training, and live deployment. The goal is to provide machine learning engineers with historical proxy input features and price movement for model development.
How Price Prediction App Works:
The application is broken down into three steps, as mentioned above, and the user interface is detailed below.
Step 1 (Configure Dataframe):
Configuring and using the on-platform dataframe to train the model is essential to ensure consistency among inputs once the model is uploaded and deployed. The application provides a user interface to select a price output (currently only the BTC/USD pair is served) and visually explore potential input features.
Next, the selected output needs to be labeled. There is a simple look-ahead binary labeling capability for the selected historical data. The basic idea is that if the chosen historical output price increases above the percent-change threshold within the lookahead timesteps, the row is classified as 1 (buy) at its original datetime index; otherwise, it is classified as 0 (hold). The labeling strategy aims to identify historically bullish and bearish price movements. Once the time range and labeling are complete, the graph updates with highlighted regions of bullish labeled data, and the dataset is ready for model development as a downloadable CSV file.
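The look-ahead labeling rule described above can be sketched in a few lines of Python. This is a simplified illustration; the function and parameter names are hypothetical, not the app's actual implementation:

```python
def label_lookahead(prices, lookahead, pct_threshold):
    """Label each row 1 (buy) if the price rises by at least
    pct_threshold percent within the next `lookahead` timesteps,
    else 0 (hold)."""
    labels = []
    for i, price in enumerate(prices):
        # Future window of up to `lookahead` prices after this row.
        window = prices[i + 1 : i + 1 + lookahead]
        hit = any((p - price) / price * 100.0 >= pct_threshold for p in window)
        labels.append(1 if hit else 0)
    return labels

# A 5% jump within 2 steps of the first row labels it a buy; later rows hold.
print(label_lookahead([100.0, 105.0, 103.0, 100.0, 99.0],
                      lookahead=2, pct_threshold=2.0))  # → [1, 0, 0, 0, 0]
```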
Step 2 (Model Comparison):
Once the model is developed locally, train/test results are saved in a Price Prediction App-specific database and an MLflow instance, and the model object is uploaded to Google Cloud Storage for retrieval during live predictions. Trained models are then accessible by the associated label ID from Step 1, which keeps models grouped by similar lookahead/percent-change thresholds. These models can be accessed from the "Available Labels" dropdown; information on lookahead values/thresholds and the number of trained models appears below, along with an updated timeframe on the price profile graph showing where the selected labeling applies.
Next, the user can explore available trained models in the "Trained Models" dropdown. Basic model information, a confusion matrix, and an ROC curve on the test set will appear below.
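As a rough sketch of how test-set results like the confusion matrix can be computed from the binary buy/hold labels (a generic stand-in for illustration, not the app's actual code):

```python
def confusion_counts(y_true, y_pred):
    """Confusion-matrix counts for binary labels (1 = buy, 0 = hold)."""
    pairs = list(zip(y_true, y_pred))
    return {
        "tp": sum(1 for t, p in pairs if t == 1 and p == 1),  # correct buys
        "fp": sum(1 for t, p in pairs if t == 0 and p == 1),  # false buy signals
        "fn": sum(1 for t, p in pairs if t == 1 and p == 0),  # missed buys
        "tn": sum(1 for t, p in pairs if t == 0 and p == 0),  # correct holds
    }

print(confusion_counts([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```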
Models can be activated/deployed for live predictions by checking "activate" in the model details. Live predictions can then be viewed on the next page by clicking "view here" or the "3) Live Predictions" link in the navigation bar at the top of the page.
Step 3 (Live Predictions):
Once a model is activated/deployed, the prediction service pulls the model object, buffers the database with recent predictions, and schedules subsequent predictions hourly. The model remains scheduled until it is deactivated via the checkmark in Step 2. Users can view these predictions by clicking on the model ID and viewing the provided price trace or the tabulated predictions.
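One way to sketch the hourly scheduling is to sleep until the next top of the hour before committing each prediction. The helper below is hypothetical, not the service's actual scheduler:

```python
from datetime import datetime, timedelta

def seconds_until_next_hour(now):
    """Seconds from `now` until the next top of the hour."""
    next_hour = now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    return (next_hour - now).total_seconds()

# In a service loop, something like:
#   while model_is_active:
#       time.sleep(seconds_until_next_hour(datetime.utcnow()))
#       commit_prediction()  # hypothetical: pull features, predict, write to DB

print(seconds_until_next_hour(datetime(2024, 1, 1, 10, 15)))  # → 2700.0
```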
Once the model is activated and prediction history is available, model performance can be visualized via the "model performance" link. This provides model-specific performance metrics based on recent price prediction history: a running accuracy difference trace, a confusion matrix, and a results table.
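A running accuracy trace of the kind shown on the performance page can be computed from the prediction history roughly like this (a simplified sketch, not the app's implementation):

```python
def running_accuracy(y_true, y_pred):
    """Cumulative accuracy after each prediction in the history."""
    correct = 0
    trace = []
    for i, (t, p) in enumerate(zip(y_true, y_pred), start=1):
        correct += int(t == p)
        trace.append(correct / i)  # accuracy over the first i predictions
    return trace

# Accuracy after each of four hourly predictions.
print(running_accuracy([1, 0, 1, 1], [1, 1, 1, 0]))
```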
Finally, an embedded Grafana dashboard is available via the "System Info" link, where cloud infrastructure observability can be accessed. Price Prediction App is built on Google Kubernetes Engine, and the system design is discussed in more detail next.
System and High Level Technical Details:
The system design for the Price Prediction App had several high-level requirements for maintainability, consistent data processing, and cloud infrastructure. So that different areas of the application can be maintained and updated without interrupting other services, the system was designed as a set of containerized services connected to various data sources and coordinated in a Kubernetes cluster.
System Design:
The application contains two databases and three services: the feature, prediction, and frontend services. The feature service connects to the Polygon.io and Alpha Vantage APIs to pull historical and current data every minute for over 50 macroeconomic features and the BTC/USD price, then commits them to a feature-service-specific Postgres database that backs the application's feature store. This service includes functions to initialize data and fetch current price data/features, plus a DAG folder ready to be used with an Airflow/Cloud Composer instance.
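The per-minute commit into the feature store amounts to an upsert keyed on (timestamp, feature), so repeated API polls never duplicate rows. A minimal sketch, using the stdlib sqlite3 module as a stand-in for the Postgres feature store (table and column names here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE features (
        ts TEXT NOT NULL,       -- timestamp index
        feature TEXT NOT NULL,  -- e.g. a macroeconomic series or btc_usd
        value REAL,
        PRIMARY KEY (ts, feature))"""
)

def commit_features(conn, ts, feature_values):
    """Upsert one timestamp's feature values into the store."""
    conn.executemany(
        "INSERT OR REPLACE INTO features (ts, feature, value) VALUES (?, ?, ?)",
        [(ts, name, value) for name, value in feature_values.items()],
    )
    conn.commit()

commit_features(conn, "2024-01-01T00:00:00Z", {"dxy": 102.5, "btc_usd": 42000.0})
commit_features(conn, "2024-01-01T00:00:00Z", {"dxy": 102.6})  # re-poll updates in place
```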
The prediction service has a model directory database and connects to Google Cloud Storage to store and retrieve trained Scikit-learn model objects. It also connects to the feature store via Kubernetes volumes to access a consistent CSV dataframe for the prediction models. The primary function samples for changes in the activated-model list from Step 2 and commits predictions hourly for active models, with the corresponding price lookahead/threshold values. There is also a local ETL/training script to upload locally trained models to the model directory and the MLflow database.
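Sampling the activated-model list for changes can be sketched as a set difference over model IDs between two polls (a hypothetical helper for illustration, not the service's actual code):

```python
def diff_active_models(previous, current):
    """Return (newly_activated, deactivated) model IDs between two
    samples of the activated-model list."""
    prev, curr = set(previous), set(current)
    return sorted(curr - prev), sorted(prev - curr)

# Model 3 was activated and model 1 deactivated between samples.
print(diff_active_models([1, 2], [2, 3]))  # → ([3], [1])
```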
The frontend service connects to the feature store and the prediction service database to give users access to data sources, model information, and prediction values. It is powered by the Plotly Dash framework and includes an MLflow entry-point link and an embedded Grafana dashboard.
Cloud Infrastructure:
The application is deployed on Google Cloud and uses Cloud SQL for Postgres database management, Kubernetes Engine for container orchestration, and Cloud Storage for model artifact storage. The cluster is configured to run the three services above in separate deployments and pods. The frontend dashboard pod is exposed through an external load balancer, giving users access to the web interface. The Grafana and MLflow services are also maintained on this cluster with external endpoints.
Conclusion:
Price Prediction App delivers an end-to-end solution for streamlining the ML modeling workflow and deploying live predictions for financial assets. It was designed with MLOps/DevOps best practices and built on Google Cloud infrastructure. This is an ongoing project, ready for the addition of new features and asset pairs. Happy price predicting!