Visualizing Professional Tennis Upsets: ATP 2012-2014 Men's Singles Matches
Contributed by Tyler Knutson. He is currently enrolled in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program taking place from July 5th to September 23rd, 2016. Code can be viewed in its entirety on GitHub.
Context
Men's professional tennis is unique in that despite the dominance of a select few competitors at the top of the ATP world rankings, upsets do occur regularly. How dominant are these top players? Consider this: over the last 10 years, just seven players have won the 39 available grand slam titles (2016 US Open still pending at the time of this writing). Of these players, three have won all but seven (82%) of the grand slams: Rafael Nadal (12), Novak Djokovic (12), and Roger Federer (8).
Furthermore, as a player enjoys success and accrues wins, he earns valuable ATP points. These points are critical as they set the player's world ranking, which in turn typically determines his seeding in each tournament. Given the single elimination format of most tournaments, the highest ranked players will face the easiest competition in early rounds and maximize their chances to advance in the tournament, thus helping them earn additional ATP points. Put differently: the rich get richer.
Current ATP World Rankings (as of 7/20/16, source: ESPN)
If these top competitors are so dominant, how then, at Wimbledon in 2012, did Rafael Nadal, the #2 seed, lose to Lukas Rosol in the second round? Rosol was ranked 100th in the world at the time and was listed as high as a 29:1 underdog at UK sportsbooks. Nadal was upset again the following year, again at Wimbledon, losing to Steve Darcis, a 26:1 underdog ranked 135th in the world. And it is not just Nadal: other top players have found themselves upset on the biggest stages in tennis. In 2013, Roger Federer lost at Wimbledon to Sergiy Stakhovsky in the second round, his earliest exit from the tournament in over a decade. Nadal, Federer, Murray, Djokovic, and many other top players have all lost matches in which they were clear favorites, sometimes with short odds of only pennies on the dollar.
This raises a few questions:
- Just how unlikely are upsets?
- Does odds data from sportsbooks tell us anything interesting about upsets? Can we potentially predict upsets based on odds alone?
- What performance metrics best explain how an upset occurred? Did the favorite have an off day, or did the underdog play exceptionally well?
The data
Two main data sources were used to support this analysis:
- Odds data
- Source: http://www.tennis-data.co.uk
- Shows odds listed by five reputable UK sportsbooks for both players for each match
- Data exists for 2001 - 2016; only 2012 - 2014 was used for this analysis
- Filtered to only "completed" matches, removing any that ended in retirement (typically an in-match injury) or walkover (see the loading sketch after this list)
- Charted statistics data
- Source: https://github.com/JeffSackmann/tennis_atp
- Contains detailed point-by-point results of over 1,400 matches, charted manually by various contributors
- Overall match statistics calculated and used for this analysis (e.g., each player's winners to unforced error ratio per match)
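To make these prep steps concrete, here is a minimal pandas sketch of loading and filtering the odds data. The local filenames are hypothetical and the "Comment" column name is an assumption based on the tennis-data.co.uk layout; this is an illustration rather than the exact code behind the analysis.

```python
import pandas as pd

# Load the three seasons of odds data (hypothetical local filenames) and
# keep only matches that finished normally, dropping retirements and walkovers.
years = [2012, 2013, 2014]
odds = pd.concat(
    [pd.read_csv(f"atp_{y}.csv") for y in years],
    ignore_index=True,
)

odds = odds[odds["Comment"] == "Completed"].copy()
print(len(odds), "completed matches")  # ~7,509 for 2012-2014
```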
So, just how rare are upsets?
Our odds data contains 7,509 matches over the 2012-2014 seasons. Of these, players expected to win (favorites) do indeed win 72% of their matches. The remaining 28% are considered upsets, of which roughly half appear to be "Common" upsets, where the two players' odds are relatively close. Just 2% of matches result in a "Major Upset," where an underdog with odds greater than 5.0 defeats the favorite.
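As a rough sketch of how this split can be computed: assuming the prepared frame carries each match's average decimal odds across books for the winner and loser in columns named AvgW and AvgL (an assumption about the dataset), the favorite is simply the side with the lower average odds.

```python
# Favorite = the side with the lower average decimal odds across books.
# AvgW / AvgL column names are an assumption about the prepared data.
odds["favorite_won"] = odds["AvgW"] < odds["AvgL"]

print(round(odds["favorite_won"].mean(), 2))     # ~0.72: favorites win
print(round((~odds["favorite_won"]).mean(), 2))  # ~0.28: upsets
```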
How do odds work?
We can use sportsbetting odds data to determine not only who the favored player is in a match, but also the degree to which he is favored on a numeric scale. For purposes of this analysis I have used the decimal odds format listed by the UK sportsbooks in the dataset, which puts "even money" at 2.0. Odds greater than 2.0 denote an underdog, while odds less than 2.0 represent the favorite.
To better understand what this number means, consider the following example: a bettor places a $100 wager on a favorite listed at odds of 1.9. Assuming the bet wins, the bettor is returned $190 (1.9 * $100), representing his original $100 stake along with $90 in profit. That same $100 placed on a 3.0 underdog would return $300 if the bet were to win. The general rule of thumb is that the higher (longer) the odds, the lower the probability of winning the bet; a classic risk/reward tradeoff.
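The arithmetic is simple enough to capture in two one-line helpers; the implied-probability conversion (one divided by the decimal odds, ignoring the bookmaker's margin) is standard, though the analysis below does not rely on it directly.

```python
def payout(stake, decimal_odds):
    """Total returned on a winning bet: original stake plus profit."""
    return stake * decimal_odds

def implied_probability(decimal_odds):
    """Break-even win probability implied by decimal odds (ignores the book's margin)."""
    return 1.0 / decimal_odds

print(payout(100, 1.9))                    # 190.0 -> $90 profit on the 1.9 favorite
print(payout(100, 3.0))                    # 300.0 -> $200 profit on the 3.0 underdog
print(round(implied_probability(1.9), 3))  # 0.526
print(round(implied_probability(3.0), 3))  # 0.333
```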
I've used odds data to classify the favorite and underdog in each match into a discrete grouping per the table below:
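Since the table is shown as an image, the sketch below uses placeholder cut points and labels; only the 5.0 threshold for a "Major Upset" comes from the discussion above, so treat the rest as illustrative.

```python
def classify_match(winner_avg_odds, loser_avg_odds):
    """Bucket a completed match by how big an underdog the winner was."""
    if winner_avg_odds < loser_avg_odds:
        return "Favorite won"
    if winner_avg_odds > 5.0:       # threshold from the text
        return "Major Upset"
    if winner_avg_odds > 2.5:       # placeholder boundary
        return "Moderate Upset"     # placeholder label
    return "Common Upset"

odds["result_group"] = [
    classify_match(w, l) for w, l in zip(odds["AvgW"], odds["AvgL"])
]
print(odds["result_group"].value_counts(normalize=True).round(3))
```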
Relationship between odds and variance
One interesting finding in our odds dataset is that sportsbooks may list different odds on the same match. This could either indicate different betting patterns in each book (since odds will adjust based on how many bets are placed on one player vs the other), or potentially differing assumptions about the drivers of the match by the book's handicapper. It would be interesting to see how much variance between books is typical and whether this deviates significantly for matches that resulted in an upset. In theory, if odds variance is consistently higher or lower for upsets we might be able to use this fact as a predictive factor.
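One way to quantify this, continuing from the frame prepared earlier, is to compute each underdog's average odds and standard deviation across the five books for every match. The bookmaker column prefixes below are an assumption about the raw files.

```python
# Bookmaker column prefixes are an assumption about the raw odds files.
books = ["B365", "EX", "LB", "PS", "SJ"]

def underdog_odds_stats(row):
    w_odds = row[[f"{b}W" for b in books]].astype(float)
    l_odds = row[[f"{b}L" for b in books]].astype(float)
    # The underdog is whichever side carries the higher average odds
    dog = w_odds if w_odds.mean() > l_odds.mean() else l_odds
    return pd.Series({"dog_avg_odds": dog.mean(), "dog_odds_sd": dog.std()})

odds = odds.join(odds.apply(underdog_odds_stats, axis=1))
odds["upset"] = ~odds["favorite_won"]
```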
In the chart above, odds are listed along the x-axis and the standard deviation of the odds across books is on the y-axis. Blue dots represent matches in which the favored player won, while black dots represent upsets. Unsurprisingly, the further to the right we move, the fewer black dots we see, since the probability of an upset decreases as the underdog's odds increase. It is difficult to be sure from this chart alone, but the typical standard deviation appears nearly identical for upsets and non-upsets. However, there does appear to be a clear relationship between a player's odds (i.e., how big of an underdog he is) and the standard deviation. To be sure, we can drill down to where the majority of match volume exists, at odds below 5.0.
Here we can clearly see the relationship between underdog size (average odds) and variance (standard deviation). Unfortunately, it seems we cannot use the variance alone as a predictor of upsets, as the typical standard deviation is the same for upsets and non-upsets (i.e., the blue and black lines overlap). One practical use of these findings could be for bettors who tend to wager on underdogs: given that odds vary more across books the larger the underdog, it stands to reason that one might need to be very selective in choosing a sportsbook. It is not uncommon for one book to list an underdog at 12.0 while another lists him at 10.0.
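For reference, here is a rough matplotlib re-creation of the drill-down view, built on the columns added in the previous snippet (the original charts were likely produced with different tooling).

```python
import matplotlib.pyplot as plt

# Underdog odds vs. across-book standard deviation, restricted to odds < 5.0,
# colored by match outcome as in the original chart.
subset = odds[odds["dog_avg_odds"] < 5.0]

fig, ax = plt.subplots()
for is_upset, grp in subset.groupby("upset"):
    ax.scatter(grp["dog_avg_odds"], grp["dog_odds_sd"], s=8, alpha=0.4,
               c="black" if is_upset else "blue",
               label="Upset" if is_upset else "Favorite won")
ax.set_xlabel("Underdog average odds")
ax.set_ylabel("Std. dev. of odds across books")
ax.legend()
plt.show()
```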
Why does this relationship exist between odds and standard deviation across bookmakers? One possibility could be how sportsbooks prioritize handicapping resources; given limited time, handicappers may choose to focus the bulk of their energy on setting the "correct" odds on the matches where the outcome is less certain (i.e. player odds are closer together). Whether an underdog should truly be listed at 20:1 vs 18:1 likely won't matter much to the casino since the probability of that player actually winning is so low.
Another possibility could be differing betting patterns for sportsbooks. For example, if two casinos list a favorite at 1.3 and an underdog at 3.5, but one casino sees significantly more wagers on the favorite, it may choose to adjust the odds to 1.2 and 4.5 in order to encourage more underdog betting to reach a balance.
How did he do it?
Our charted match data presents a number of interesting statistics for each match. Using this information we can establish a baseline across each metric and then understand, in cases of upsets (particularly extreme upsets), how each competitor varied from the typical expected performance. What metrics are included in the data?
Across these nine metrics, it seems that in upsets the greatest deviation from baseline rates manifests itself in two statistics: break points faced, and winners to unforced errors ratio.
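A sketch of how that comparison might be computed, assuming the charted data has been reshaped into a per-player, per-match frame called stats with an is_favorite flag, an upset flag, and one column per metric; the frame and the metric column names are assumptions, not the original code.

```python
# `stats`, `is_favorite`, `upset`, and the metric column names are assumptions
# about how the charted match data was reshaped for this comparison.
metrics = ["break_points_faced", "winners_to_ue_ratio"]  # ... plus the other seven

baseline      = stats[~stats["upset"]].groupby("is_favorite")[metrics].mean()
during_upsets = stats[stats["upset"]].groupby("is_favorite")[metrics].mean()

pct_deviation = (during_upsets - baseline) / baseline * 100
print(pct_deviation.round(1))
```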
Visualizing performance during upsets
Now that we've identified the top two metrics that vary the most during upsets we can look at their density distributions:
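For example, a kernel-density plot of the winners to unforced errors ratio, split by whether the match ended in an upset, could be produced as follows, using the same assumed stats frame as above.

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for is_upset, grp in stats.groupby("upset"):
    # Series.plot.kde requires scipy for the density estimate
    grp["winners_to_ue_ratio"].plot.kde(
        ax=ax, label="Upset" if is_upset else "No upset"
    )
ax.set_xlabel("Winners to unforced errors ratio")
ax.legend()
plt.show()
```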
It is interesting to note that the number of break points faced changes dramatically for both players in the event of an upset. What this likely tells us is that both players experience performance changes since this statistic only applies to each player's service games. Specifically, if an underdog faces fewer break points than average on his serve, this does not technically have any direct bearing on the number of break points his opponent faces (since the favorite's service games do not overlap with the underdog's service games). Yet we do in fact see this same shift in the favorite's performance, which would imply either subpar serving by the favorite, or an improved return game by the underdog.
The winners to unforced errors ratio is likely the most quoted statistic in tennis. Pundits often look for a ratio of at least 1.0, which appears reasonable given an average of 1.18 for favorites in our dataset. Underdogs do not typically fare as well, with a ratio of 0.83, meaning they commit more unforced errors than they hit winners. This statistic is particularly telling since winners and errors constitute approximately 70% of all points played (according to tennisabstract.com). It is also unlike the other eight metrics in that it is a good measure of a player's overall game, not just how well he serves or returns.
Underdogs tend to improve their winners to unforced errors ratio by a remarkable 48% during upsets, which outweighs the corresponding 28% drop in performance by favorites. This supports the hypothesis that underdogs need to play particularly well in order to upset their competitors, rather than merely hoping for an aberrational "bad day" from the favorite.
Next steps
These datasets are extremely rich and filled with detailed match information, including true point-by-point results (e.g., how many backhands, forehands, volleys, smashes were hit during a point, how deep was the service return, whether the serve went out wide / up the T / to the body, etc.). This visualization of upsets has just scratched the surface of a potential abundance of unique insights that are likely buried in this data.
Given more time, there are a number of additional questions I would like to explore:
- What do these same visualizations look like for women's tennis (WTA) as well as lower levels of men's tennis (challenger, futures tours)?
- What bookmaker-specific insights might exist? E.g., do certain books predict upsets more frequently?
- Do underdogs deviate from their standard tactics in order to upset much better players? E.g., increased net points, more serves out wide
Conclusion
This data has shown a clear relationship between odds and variance, with the standard deviation across bookmakers increasing as the "severity" of the underdog increases. We do not see any change in variance for upsets vs. non-upsets, suggesting we cannot use the standard deviation of odds as a predictor of match results.
In addition, we were able to establish baseline performance metrics for underdogs and favorites and understand how they change during upsets. Break points faced and the winners to unforced errors ratio appear to be the most telling statistics, as measured performance for both underdog and favorite changes the most across these metrics.