Data Analysis on NBA Success

Posted on Dec 11, 2019
The skills the author demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.


Over the last decade, the NBA seems to have been dominated by the same franchises. Most recently, the Golden State Warriors went to 5 straight Finals, winning 3 championships. As a lifelong fan of a not-so-successful franchise (I’m from New York), I wanted to explore what makes teams successful. I sought to answer three questions in this data project:

  1. Can teams’ wins and losses be successfully predicted?
  2. What kind of players go on to successful teams?
  3. Given salary constraints, how can a competitive roster be constructed more efficiently?


All data was scraped from the NBA’s stats website, covering individual player and team stats for the past 25 years. Because the site serves its tables dynamically, I decided to use Selenium to scrape the data: Selenium can drive through each web page and interact with the dynamic tables.
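The post does not include the scraping code itself, but the approach described can be sketched roughly as below. The URL, the CSS selector, and the chromedriver setup are illustrative assumptions, not the author’s actual script.

```python
# Rough sketch of the Selenium scraping approach described above.
def table_to_records(headers, cell_texts):
    """Group a flat list of scraped cell strings into one dict per row."""
    n = len(headers)
    return [dict(zip(headers, cell_texts[i:i + n]))
            for i in range(0, len(cell_texts), n)]

def scrape_team_stats(season="2018-19"):
    # Imported here so the parsing helper above works without Selenium installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # assumes chromedriver is on your PATH
    try:
        # Hypothetical stats page URL; the real site paginates and filters.
        driver.get(f"https://stats.nba.com/teams/traditional/?Season={season}")
        table = driver.find_element(By.CSS_SELECTOR, "table")
        headers = [th.text for th in table.find_elements(By.TAG_NAME, "th")]
        cells = [td.text for td in table.find_elements(By.TAG_NAME, "td")]
        return table_to_records(headers, cells)
    finally:
        driver.quit()
```

Separating the table-to-records step from the browser driving keeps the parsing logic testable without launching a browser.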


Data Analysis

The three-point shot has become the hallmark of a competitive team in the NBA. One of the largest trends of the past decade has been the increase in average three-point shots taken per game. The reason is simple: a three is worth more than a two, so players have abandoned the mid-range game in favor of taking more threes. The graph below shows that average three-point attempts have more than doubled for the league as a whole since the 2006 season.

However, teams have not gotten much better at shooting threes: their three-point percentage has remained relatively stable. The increase in threes has instead shown up in effective field goal percentage (eFG%), an important metric for players and teams because it accounts for the extra value of a made three. The growth in threes taken has led to a jump of several percentage points in eFG% over the past several years, as the second graph shows.


NBA 4 Factor Data Model

The first question I wanted to answer was which factors help determine team wins and losses. To do this, I used the 4 factor model developed by Dean Oliver. As a quick introduction, the 4 factors determining success in the NBA are straightforward:

  1. Scoring

     Measured by the eFG% mentioned earlier, which gives a sense of points per field goal attempt by a team and adjusts for the extra value of made threes.
  2. Rebounding percentage

     Somewhat intuitive: if you don’t score, you want to grab the rebound. The higher the percentage of available rebounds you secure, the better.
  3. Free throw rate

     A sense of how well a team gets to the free throw line per field goal attempted. In basketball, free scoring opportunities are a good thing.
  4. Turnover percentage

     Don’t turn the ball over! You don’t want to lose precious scoring opportunities and give the other team extra chances.

One more thing to mention: the 4 factor model is really 8 factors. Teams play both offense and defense, so each of the 4 factors also has a defensive equivalent.
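The four offensive factors can be computed directly from box-score totals. The sketch below follows the commonly used formulas (including the usual 0.44 weight on free throw attempts in the possession estimate); the defensive versions are the same formulas applied to the opponent’s totals.

```python
def four_factors(fg, fga, fg3, ft, fta, orb, tov, opp_drb):
    """Offensive four factors from team box-score totals (Dean Oliver)."""
    return {
        # Scoring: points per attempt, crediting threes at 1.5x.
        "eFG%":   (fg + 0.5 * fg3) / fga,
        # Turnovers per possession; 0.44 * FTA estimates FT possessions.
        "TOV%":   tov / (fga + 0.44 * fta + tov),
        # Share of available offensive rebounds grabbed.
        "ORB%":   orb / (orb + opp_drb),
        # Free throws made per field goal attempt.
        "FT/FGA": ft / fga,
    }
```

For the defensive side, the same function would be called with the opponent’s shooting and turnover totals (and the team’s own defensive rebounds in the denominator of ORB%).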

Plotting the correlation between the 8 factors and number of wins for the season shows that all of these factors are strongly correlated with wins, and in the directions we would expect. For example, scoring is important: eFG% and opponents’ eFG% both correlate strongly with wins (greater than 0.8 in magnitude), with eFG% related positively and opponents’ eFG% negatively.

I wanted to use the 4 factor model to see how well these factors predict teams’ wins and losses. To test this, I ran a linear regression with wins as the dependent variable and the 8 factors as predictors. The coefficients were significant and in the directions we would expect, and the R-squared was over 95%.
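As a toy illustration of the regression step, here is a single-predictor version in plain Python: wins regressed on one factor by ordinary least squares, with an R-squared helper. The author’s actual model uses all eight factors (and presumably a library implementation); the closed-form fit below is just the one-variable special case.

```python
def ols_fit(x, y):
    """Slope and intercept of y on x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def r_squared(x, y, slope, intercept):
    """Share of variance in y explained by the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```

With eight factors the same idea generalizes to multiple regression, where each coefficient estimates the win impact of one factor holding the others fixed.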

Predicted Wins vs Actual Wins

To get a sense of how accurately this model could predict wins for the season I plotted the predicted wins against the actual wins for each team.

The predicted wins were very close to the actual wins, and all predictions were within 1 standard deviation of the actual results. 2015 was an interesting year: it had both the largest gap between expected and actual wins for a team (the Philadelphia 76ers) and the highest expected wins for a team (the Golden State Warriors).

That year, the 76ers were the worst team in the league and were widely believed to be throwing the season in order to get the number 1 draft pick. Perhaps this kind of prediction could be used to search for teams that were intentionally bad, tanking the season to chase a high draft pick. 2015 was also the year the Golden State Warriors put up the best regular season record in NBA history, and the expected wins captured this: the model gave the Warriors the highest expected wins of the past 15 seasons. Overall, these factors do quite well at predicting expected wins.
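One way to operationalize that tanking-detection idea is to flag teams whose actual wins fall more than one standard deviation below the model’s prediction. A minimal sketch, with invented win totals:

```python
def flag_underperformers(predicted, actual, n_std=1.0):
    """Indices of teams whose residual (actual - predicted wins) falls more
    than n_std standard deviations below the mean residual."""
    residuals = [a - p for p, a in zip(predicted, actual)]
    mean = sum(residuals) / len(residuals)
    std = (sum((r - mean) ** 2 for r in residuals) / len(residuals)) ** 0.5
    return [i for i, r in enumerate(residuals) if r < mean - n_std * std]

# A team predicted for 20 wins that only manages 10 stands out against
# three teams that land within a game or two of their predictions:
# flag_underperformers([50, 48, 52, 20], [49, 47, 53, 10]) -> [3]
```

A large negative residual alone does not prove tanking, of course; it just identifies candidates worth a closer look.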

Additional Work

As next steps, I would like to explore the factors alongside individual player contributions, which could give a sense of how to construct a team that improves each factor. Additionally, traditional positions are somewhat restrictive and outdated: player skills have evolved, and many players are multi-positional. I would like to apply cluster analysis to group players into more natural position groups.
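That clustering idea could be sketched with a minimal k-means over per-player stat vectors. The player vectors below are invented (just [3PA, REB] pairs); a real run would use the scraped stats, more features, and a library implementation such as scikit-learn’s KMeans.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means with deterministic initialization (first k points)."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # Assign each point to its nearest centroid by squared distance.
        clusters = [[] for _ in range(k)]
        for p in points:
            best = min(range(k),
                       key=lambda i: sum((a - b) ** 2
                                         for a, b in zip(p, centroids[i])))
            clusters[best].append(p)
        # Move each centroid to the mean of its cluster.
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = [sum(col) / len(c) for col in zip(*c)]
    return centroids, clusters

# Two shooters ([8, 3], [9, 4]) and two rebounders ([1, 10], [2, 11])
# separate into natural groups regardless of listed position:
# kmeans([[8, 3], [1, 10], [9, 4], [2, 11]], 2)[0]
# -> [[8.5, 3.5], [1.5, 10.5]]
```

Grouping by stat profile rather than listed position is exactly what makes the resulting clusters read as "natural" position groups.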

About Author

Tomás Nivón

Tomás is currently a data science fellow at NYC Data Science Academy. He has several years’ experience in Finance and Consulting. He holds a Master of Engineering in Financial Engineering from Cornell University. Tomás is passionate about applying...