Web-scraping NBA player statistics (Sumanth Reddy & Joseph Russo)

Posted on Jul 24, 2015



During the recent NBA playoffs, the Cleveland Cavaliers suffered key injuries to two of their star players, Kevin Love and Kyrie Irving.

With a roster that is very top-heavy in talent, the task of carrying the team fell to Lebron James. James is the best player in the NBA, and he immediately saw a large uptick in traditional counting statistics.

Even so, we were curious whether James' efficiency changed, since it seems logical to expect an adverse effect from losing the presence of great teammates.

Before we could dive into the analysis, we had to scrape the data from Basketball-Reference, a website that stores historical statistics for the NBA.

Using Python, one is able to access the underlying data inside the HTML’s Document Object Model (DOM) and eventually store this information as a comma-separated value (CSV) file.


Importing Libraries

A common first step is to load useful libraries; for web-scraping in particular, the BeautifulSoup library is very handy. In addition, the Pandas and Seaborn libraries were imported for data analysis and visualization, respectively.

from bs4 import BeautifulSoup
import requests

import pandas as pd
import numpy as np

#import matplotlib.pyplot as plt
import seaborn as sns

#needed to convert unicode to numeric
import unicodedata

from IPython.display import Image

%pylab inline



Upon investigating the DOM, one finds various tags in the tree structure; in particular, the "thead" and "tbody" elements hold the information of interest. The "trick" was to use these objects' methods to form the column and row objects, which are eventually used to build a friendly Pandas DataFrame.


#regular season data
leb = 'http://www.basketball-reference.com/players/j/jamesle01/gamelog/2015/'

player_string_key = leb
req = requests.get(player_string_key)
text = BeautifulSoup(req.text, "html.parser")
stats = text.find('table', {'id': 'pgl_basic'})

# find the schema
cols = [i.get_text() for i in stats.thead.find_all('th')]  

# convert from unicode to string
cols = [x.encode('UTF8') for x in cols]                    

#these are schema with empty string names

# get rows
rows = [i.get_text().split('\n') for i in stats.tbody.find_all('tr')]

# convert rows to strings
for i in range(len(rows)):
    rows[i] = [x.encode('UTF8') for x in rows[i]]                          


short = []

# track rows that are missing the full stat line (e.g. games the player missed)
for i in range(len(rows)):
    if len(rows[i]) < 31:
        short.append(i)

new_rows = []

# keep only the complete rows
for i in range(len(rows)):
    if i in short:
        continue
    new_rows.append(rows[i])

l = range(len(new_rows))

#change df name for each player (lebron, love, etc....needs to be part of automation for multi-player)
lebron = pd.DataFrame(columns=cols, index=l)    # create dataframe with schema

# fill the dataframe one row at a time
for i in l:
    # the split function was adding an empty string to the front and end of each row, needed to be removed
    try:
        lebron.loc[i] = new_rows[i][1:-1]
    except ValueError:
        # skip any row whose length does not match the schema
        pass


Data Clean Up

Once the HTML data was stored in a Pandas DataFrame object, a quick inspection shows that there are duplicate header rows (the schema is repeated every few rows), columns with missing names, and that the numeric values are not actually numeric but strings (which are much more difficult to analyze statistically).
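The duplicate header rows and the string-to-number conversion are handled by the code below. The blank column names can be fixed separately by assigning names positionally; a minimal sketch, assuming the home/away indicator and the game result sit at positions 5 and 7 of the game log, mirroring the 'Unnamed: 5' and 'Unnamed: 7' columns of the validation CSV further down:

#give the two blank-named columns descriptive names (positions assumed from the game-log layout)
new_cols = list(lebron.columns)
new_cols[5] = 'home_away'
new_cols[7] = 'win-loss'
lebron.columns = new_cols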

#Note: raw scraped data is not numeric.

#since the entire schema is replicated every N rows, we can just check one of the columns for the match
validRow_boolVector = lebron['Rk'] != 'Rk' #and (lebron['MP'] != ('Did Not Play' or 'Inactive')))

#use the boolean vector as a mask for only keeping True values
lebron = lebron[validRow_boolVector]


#Note: the head function returns a formatted view of the dataframe to make it "pretty"
#saveHead = blah.head() isn't a good idea, as the head function is meant for display, not for saving the original data

#This would overwrite the dataframe with its .dtypes Series rather than the converted data....AVOID!
#lebron = lebron.convert_objects(convert_numeric=True).dtypes

#Convert "objects" in the dataframe to numeric, float, etc
lebron = lebron.convert_objects(convert_numeric=True)

#test that these columns are now actually numeric
#print lebron.FG[0] + 1

print lebron[1:25]
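At this point the cleaned game log can also be written out as the CSV file mentioned in the introduction. A minimal sketch; the filename is illustrative:

#save the cleaned game log to disk (filename is illustrative)
lebron.to_csv('lebron_2015_gamelog.csv', index=False)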


Data Validation

Once the data had been cleaned, it was checked against the CSV file provided by the website. The two sets of values matched closely, confirming that the values generated by the HTML web-scraping were accurate.

#load CSV from website
lebron_validate = pd.read_table('Lebron_CSV_validatedSetFromWebsite.txt', sep=',')

#rename empty schema with appropriate names
lebron_validate.rename(columns={'Unnamed: 5': 'home_away'}, inplace=True)
lebron_validate.rename(columns={'Unnamed: 7': 'win-loss'}, inplace=True)

#quick glance at validation set
#print lebron_validate[1:10]

##validate we got the dataframe
#print lebron_validate.head(10)

#since the entire schema is replicated every N rows, we can just check one of the columns for the match
validate_validRow_boolVector = lebron_validate['Rk'] != 'Rk' #and (lebron_validate['MP'] != ('Did Not Play' or 'Inactive')))
#validate_validRow_boolVector = (lebron_validate['Rk'] != 'Rk' and (lebron_validate['MP'] != ('Did Not Play' or 'Inactive')))

#use the boolean vector as a mask for only keeping True values
lebron_validate = lebron_validate[validate_validRow_boolVector]

##Are these numeric?
#print type(lebron_validate.STL[0])

#Convert "objects" in the dataframe to numeric, float, etc
lebron_validate = lebron_validate.convert_objects(convert_numeric=True)

##Check conversion
#print type(lebron_validate.STL[0])
#print lebron_validate.FG[0] + 1

#Change NaNs to empty string
#glancing at downloaded .txt file....the empty values between commas were replaced with NaN
#lebron_validate = lebron_validate.fillna(' ', inplace=True)
lebron_validate = lebron_validate.fillna('')

##validate NaNs are replaced with empty string
#print lebron_validate.head(10)

print lebron_validate.head(25)
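Beyond eyeballing the two tables, a quick programmatic spot check can compare a few shared numeric columns. This is a sketch, assuming both frames use the same column names and row order:

#spot-check a few numeric columns between the scraped frame and the downloaded CSV
for col in ['PTS', 'AST', 'TRB']:
    scraped = lebron[col].reset_index(drop=True)
    downloaded = lebron_validate[col].reset_index(drop=True)
    print col, (scraped == downloaded).all()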


Visualization using Seaborn

Each NBA player has a handful of key statistics that summarize their per-game performance. Seaborn provides a direct way to summarize and visualize these key statistics using a box-plot.

#set the SeaBorn plotting parameters
sns.set(context='talk', style='white', palette='deep', font='sans-serif', font_scale=2, rc=None)

#choose columns of lebron dataframe for plotting
plotColumns = ['PTS','AST','STL','BLK','FG','FGA','TRB','TOV']

#set the parent figure size (figure() comes from %pylab inline; size values are illustrative)
figure(figsize=(14, 8))
#grab handle to boxPlot object
lebron_boxPlot = sns.boxplot(lebron[plotColumns])

#get axes child handle
lebron_boxPlot_axes = lebron_boxPlot.axes

#set the title
lebron_boxPlot_axes.set_title('Lebron: 2014-2015 - Regular Season')

#set the range so the BLK feature is not flush with the bottom of the boxPlot (limits are illustrative)
lebron_boxPlot_axes.set_ylim(-2, 40)
#another way: call set_ylim on the boxplot handle directly
#lebron_boxPlot.set_ylim(-2, 40)

#set current axes labels (label text is illustrative)
lebron_boxPlot_axes.set_xlabel('Statistic')
lebron_boxPlot_axes.set_ylabel('Per-game value')



Visualization in R

For a preliminary analysis, various box-plots were created in R to visualize the impact of the presence of his Cavalier teammates on Lebron James's performance.


As seen from this first box-plot, the data was not evenly distributed. Out of 20 playoff games, only three featured Lebron, Kevin, and Kyrie in the same lineup.

During the regular season, however, all three players participated in almost 90% of the games together. We felt this provided a more consistent baseline of performance; Lebron's average regular-season statistics are represented by the blue line.

Even without considering teammates, Lebron clearly played a lot more minutes in the playoffs relative to the regular season.

That said, when both Kevin and Kyrie were hurt, Lebron played more minutes in every game than his averages from any other situation.
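The minutes comparison requires one extra step: in the scraped game log, minutes played (MP) come through as "MM:SS" strings, so they need to be converted to decimal minutes before averaging. A small sketch in Python, assuming that format:

#convert 'MM:SS' strings to decimal minutes (format assumed from the Basketball-Reference game log)
def mp_to_minutes(mp):
    minutes, seconds = mp.split(':')
    return int(minutes) + int(seconds) / 60.0

lebron['minutes'] = lebron['MP'].apply(mp_to_minutes)
print lebron['minutes'].mean()   #regular-season average, the baseline used in the plots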



The second box-plot is easier to interpret: as star teammates got hurt, Lebron's usage exhibited large jumps.

Trying to gauge how often Lebron had the ball in his hands, we defined usage as the summation of shots attempted, assists, and turnovers.

This appears to support the hypothesis that NBA teams constantly funnel the ball through their star players.
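For reference, that usage proxy can be computed directly from the scraped game log (shown here in Python for consistency with the earlier code). A minimal sketch, using the Basketball-Reference column names for field-goal attempts, assists, and turnovers:

#usage proxy as defined above: shots attempted + assists + turnovers
lebron['usage'] = lebron['FGA'] + lebron['AST'] + lebron['TOV']
print lebron['usage'].describe()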



The final box-plot tested our main question: How did Lebron's efficiency change as his usage went up?

While the downward trend is quite clear, it is worth noting that Kevin and Kyrie got hurt at different points in the playoffs, and that the Cavaliers faced progressively tougher teams as the playoffs went on.

It would be natural to assume that Lebron's efficiency fell dramatically as his teammates got hurt, but it is also possible that the increasing strength of his opponents was just as significant a factor, if not more so.



  • The Python library BeautifulSoup provides a direct way to access the underlying HTML elements of a web page, i.e. to "scrape" it.
  • The Pandas and Seaborn libraries provide convenient storage, manipulation, and visualization of data.
  • When star NBA players get injured, their allotted minutes and usage are compensated for by the most prominent remaining player(s) on that team.
  • Box-plots have limitations when it comes to small sample sizes. It is difficult to reach conclusions about the distribution of your data without performing more intensive analysis.

Future Goals

  • Scrape game logs for the entire NBA over the last 10 years
  • Apply machine learning algorithms to predict future NBA playoff and championship outcomes

