NBA player statistics (Sumanth Reddy & Joseph Russo)
Introduction
During the 2015 NBA playoffs, the Cleveland Cavaliers lost two of their star players, Kevin Love and Kyrie Irving, to injury.
With a roster that was very top-heavy in terms of talent, Lebron James was left with the challenge of carrying the team. James is the best player in the NBA, and he immediately saw a large uptick in traditional NBA counting statistics.
Still, we were curious whether James' efficiency changed, since it seems logical to expect an adverse effect when a player loses the presence of great teammates.
Before we could dive into the analysis, we had to scrape the data from Basketball-Reference, a website that stores historical data on the NBA.
Using Python, one is able to access the underlying data inside the HTML’s Document Object Model (DOM) and eventually store this information as a comma-separated value (CSV) file.
Importing Libraries
A common first step is to load the necessary libraries; for web-scraping in particular, the BeautifulSoup library is very handy. In addition, the Pandas and Seaborn libraries were imported for data analysis and visualization, respectively.
from bs4 import BeautifulSoup
import requests
import pandas as pd
import numpy as np
#import matplotlib.pyplot as plt
import seaborn as sns
#needed to convert unicode to numeric
import unicodedata
from IPython.display import Image
%pylab inline
Web-Scraping
Investigating the DOM reveals various tags in the tree structure; in particular, the "thead" and "tbody" elements hold the information of interest. The "trick" was to use these elements' methods to build the row and column objects, which are eventually used to construct a friendly Pandas DataFrame.
#regular season data
leb = 'http://www.basketball-reference.com/players/j/jamesle01/gamelog/2015/'

#######
#lebron
#######
player_string_key = leb
req = requests.get(player_string_key)
text = BeautifulSoup(req.text, "html.parser")
stats = text.find('table', {'id': 'pgl_basic'})

#find the schema
cols = [i.get_text() for i in stats.thead.find_all('th')]
#convert from unicode to string
cols = [x.encode('UTF8') for x in cols]
#these are schema columns with empty string names
cols[5] = 'home_away'
cols[7] = 'win-loss'

#get rows (each row's cell text is separated by newlines)
rows = [i.get_text().split('\n') for i in stats.tbody.find_all('tr')]
#convert rows to strings
for i in range(len(rows)):
    rows[i] = [x.encode('UTF8') for x in rows[i]]
rows = rows[1:-1]

#collect indices of short rows (games with no stats, e.g. "Did Not Play")
short = []
for i in range(len(rows)):
    if len(rows[i]) < 31:
        short.append(i)

#keep only the full-length rows
new_rows = []
for i in range(len(rows)):
    if i in short:
        continue
    else:
        new_rows.append(rows[i])

l = range(len(new_rows))

######
#change df name for each player (lebron, love, etc....needs to be part of automation for multi-player)
######
#create dataframe with schema
lebron = pd.DataFrame(columns=cols, index=l)

for i in l:
    try:
        #the split function adds an empty string to the front and end of each row; remove them
        new_rows[i] = new_rows[i][1:-1]
    except ValueError:
        continue

for i in l:
    try:
        lebron.loc[i] = new_rows[i]
    except ValueError:
        continue
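With the game log in a DataFrame, the final step mentioned in the introduction, writing it out to a CSV file, is a one-liner (the file name below is just an illustrative choice):

#persist the scraped game log for later analysis (file name is arbitrary)
lebron.to_csv('lebron_2014_2015_regular_season.csv', index=False)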
Data Clean Up
Once the HTML data was stored in a Pandas DataFrame, a quick inspection showed duplicate header rows, columns with missing names, and numeric values that were actually stored as strings (which are much more difficult to analyze statistically).
#Note: raw scraped data is not numeric.

#since the entire schema is replicated every N rows, we can just check one of the columns for the match
validRow_boolVector = lebron['Rk'] != 'Rk' #and (lebron['MP'] != ('Did Not Play' or 'Inactive'))

#use the boolean vector as a mask for only keeping True values
lebron = lebron[validRow_boolVector]

#######
#2015-06-23
#the head function returns a formatted string meant for display,
#so saveHead = lebron.head() is not a good way to save the original data
#unicodedata.numeric(lebron.FG[:,])
#This would overwrite the dataframe with the converted object dtypes....AVOID!
#lebron = lebron.convert_objects(convert_numeric=True).dtypes
#######

#Convert "objects" in the dataframe to numeric, float, etc
lebron = lebron.convert_objects(convert_numeric=True)

#test that these columns are now actually numeric
#print lebron.FG[0] + 1
print lebron[1:25]
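Note that convert_objects has since been deprecated in newer pandas releases; a minimal modern equivalent, assuming the same lebron DataFrame, might look like this:

#pd.to_numeric is the modern replacement for convert_objects(convert_numeric=True);
#errors='ignore' leaves genuinely non-numeric columns (e.g. Date, Opp) as strings
lebron = lebron.apply(pd.to_numeric, errors='ignore')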
Data Validation
Once the data had been cleaned, it was checked against the CSV file provided by the website. The two sets of values matched closely, confirming that the data generated by the HTML web-scraping was accurate.
#load CSV from website
lebron_validate = pd.read_table('Lebron_CSV_validatedSetFromWebsite.txt', sep=',')

#rename empty schema columns with appropriate names
lebron_validate.rename(columns={'Unnamed: 5': 'home_away'}, inplace=True)
lebron_validate.rename(columns={'Unnamed: 7': 'win-loss'}, inplace=True)

#quick glance at validation set
#print lebron_validate[1:10]
#print lebron_validate.head(10)

#since the entire schema is replicated every N rows, we can just check one of the columns for the match
validate_validRow_boolVector = lebron_validate['Rk'] != 'Rk' #and (lebron_validate['MP'] != ('Did Not Play' or 'Inactive'))

#use the boolean vector as a mask for only keeping True values
lebron_validate = lebron_validate[validate_validRow_boolVector]

##Are these numeric?
#print type(lebron_validate.STL[0])

#Convert "objects" in the dataframe to numeric, float, etc
lebron_validate = lebron_validate.convert_objects(convert_numeric=True)

##Check conversion
#print type(lebron_validate.STL[0])
#print lebron_validate.FG[0] + 1

#Change NaNs to empty string (in the downloaded .txt file, empty values between commas were read in as NaN)
#note: reassigning the result of fillna(..., inplace=True) would destroy the dataframe (it returns None)
#lebron_validate = lebron_validate.fillna(' ', inplace=True)
lebron_validate = lebron_validate.fillna('')

##validate NaNs are replaced with empty string
print lebron_validate.head(25)
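Beyond eyeballing the two tables, a quick programmatic check is also possible. A minimal sketch, assuming both DataFrames cover the same games in the same order, might be:

#compare shapes and a few shared numeric columns between the scraped and downloaded data
print lebron.shape, lebron_validate.shape
for col in ['PTS', 'AST', 'TRB', 'FG', 'FGA']:
    #reset_index so the row positions line up before comparing element-wise
    same = (lebron[col].reset_index(drop=True) == lebron_validate[col].reset_index(drop=True)).all()
    print col, 'matches:', same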
Visualization using Seaborn
Each NBA player has key per-game statistics that summarize his performance. Seaborn provides a direct way to summarize and visualize these statistics using a box-plot.
#set the Seaborn plotting parameters
sns.set(context='talk', style='white', palette='deep', font='sans-serif', font_scale=2, rc=None)

#choose columns of lebron dataframe for plotting
plotColumns = ['PTS','AST','STL','BLK','FG','FGA','TRB','TOV']

#set the parent figure size
figsize(16,10)

#grab handle to boxPlot object
lebron_boxPlot = sns.boxplot(lebron[plotColumns])

#get axes child handle
lebron_boxPlot_axes = lebron_boxPlot.axes

#set the title
lebron_boxPlot_axes.set_title('Lebron: 2014-2015 - Regular Season')

#set the range so the BLK feature is not flush with the bottom of the boxPlot
lebron_boxPlot_axes.set_ylim(-10,50)
#another way
#sns.set_ylim

#set current axes labels
sns.axlabel("Features","Number")
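Several of these calls rely on older seaborn/pylab conventions (passing a DataFrame directly to boxplot, figsize from %pylab, sns.axlabel). A rough equivalent with current seaborn and matplotlib, assuming the same lebron DataFrame, might look like:

import matplotlib.pyplot as plt

#create the figure/axes explicitly instead of relying on %pylab's figsize
fig, ax = plt.subplots(figsize=(16, 10))
#newer seaborn versions take the DataFrame through the data= keyword
sns.boxplot(data=lebron[plotColumns], ax=ax)
ax.set_title('Lebron: 2014-2015 - Regular Season')
ax.set_ylim(-10, 50)   #keep BLK from sitting flush with the bottom
ax.set_xlabel('Features')
ax.set_ylabel('Number')
plt.show()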
Visualization in R
For a preliminary analysis, various box-plots were created in R to visualize the impact of the presence (or absence) of his Cavaliers teammates on Lebron James's performance.
As seen from the first box-plot, the data was not evenly distributed: out of 20 playoff games, only three featured Lebron, Kevin, and Kyrie in the same lineup.
During the regular season, however, all three players appeared together in almost 90% of the games. We felt this provided a more consistent baseline of performance, and Lebron's average regular-season statistics are represented by the blue line.
Even without considering teammates, Lebron clearly played a lot more minutes in the playoffs relative to the regular season.
That said, when both Kevin and Kyrie were hurt, Lebron played more minutes in every game than his averages from any other situation.
The second box-plot is easier to interpret: as star teammates got hurt, Lebron's usage exhibited large jumps.
To gauge how often Lebron had the ball in his hands, we defined usage as the sum of shots attempted, assists, and turnovers (a rough sketch of this calculation follows this paragraph).
This appears to support the hypothesis that NBA teams constantly funnel the ball through their star players.
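As a concrete illustration, the usage metric defined above can be computed directly from the scraped game log; this is only a sketch, assuming the Basketball-Reference column names used earlier:

#usage as defined in the post: shots attempted + assists + turnovers
lebron['usage'] = lebron['FGA'] + lebron['AST'] + lebron['TOV']
print lebron['usage'].describe()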
The final box-plot tested our main question: How did Lebron's efficiency change as his usage went up?
While the downward trend is quite clear, it is worth noting that Kevin and Kyrie got hurt at different points in the playoffs, and that the Cavaliers faced progressively tougher teams as the playoffs went on.
While it would be natural to assume that Lebron's efficiency fell dramatically as his teammates got hurt, it is also possible that the increasing strength of his opponents was just as significant of a factor, if not more.
Conclusions
- The Python library BeautifulSoup provides a direct way to access the underlying HTML elements of, or "scrape", a web-page.
- The Pandas and Seaborn libraries provide convenient storage, manipulation, and visualization of data.
- When star NBA players get injured, their minutes and usage are absorbed by the most prominent remaining player(s) on that team.
- Box-plots have limitations when it comes to small sample sizes. It is difficult to reach conclusions about the distribution of your data without performing more intensive analysis.
Future Goals
- Scrape data for the entire NBA over the last 10 years
- Apply machine learning algorithms to predict the outcomes of future NBA playoffs and championships