NCAA Tournament Prediction Model 2013

(You might be interested to see the new and improved 2014 model. Click here to check it out.)

I wrote a computer program to rate the strength of every NCAA men’s college basketball team based on the Iterative Strength Rating algorithm. In my last post I previewed it, and now I am presenting my picks for the 2013 NCAA Tournament.


Peter Wolfe at UCLA has graciously provided scores for every single college basketball game (over 21,000), found here. I used this information to produce a rating for each team. I then produced a bracket by simply choosing the team with the higher rating.

My complete bracket is below: click to enlarge. Check my progress or lack thereof once the tournament starts by clicking here.


My Final Four is:

  • Midwest: Duke beats Louisville
  • West: New Mexico beats Gonzaga
  • South: Georgetown beats Kansas
  • East: Indiana beats Miami FL

with Duke beating Georgetown in the championship game. Notable upsets include Boise State defeating Arizona and Wisconsin, Bucknell defeating Butler, and Minnesota beating UCLA. The bracket is interesting in that it looks reasonable, yet the higher seed is not always selected.


Now, the gory details. I’ve based my rating on the Iterative Strength Rating by Boyd Nation. Here’s how ISR works. First, give each team an equal rating, say 1.0. Next, go through each game and give each team some points. The winning team gets the rating of the losing team plus a “winning bonus” of 0.25. The losing team gets the rating of the winning team minus a penalty of 0.25. Once all of the games have been scored, we update each team’s rating by dividing its total score by the number of games it played. Now we can rescore the games using the updated ratings, again and again, until the ratings stabilize. The Net Prophet blog shows that this is a pretty good way to rate teams. (By the way: I highly recommend this blog. Scott Turner has done an amazing job evaluating a number of different approaches, all using freely available software. Kudos Scott!)
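The iteration described above can be sketched in a few lines of Python. This is my own minimal reconstruction of plain ISR, not the author’s actual 140-line program; the fixed iteration count stands in for a proper convergence check:

```python
from collections import defaultdict

def isr(games, bonus=0.25, iterations=100):
    """Plain Iterative Strength Rating. games: list of (winner, loser) pairs."""
    # Start every team at an equal rating.
    teams = {t for g in games for t in g}
    ratings = {t: 1.0 for t in teams}
    for _ in range(iterations):
        totals = defaultdict(float)
        counts = defaultdict(int)
        for winner, loser in games:
            # Winner is credited the loser's rating plus the winning bonus;
            # loser is credited the winner's rating minus the same penalty.
            totals[winner] += ratings[loser] + bonus
            totals[loser] += ratings[winner] - bonus
            counts[winner] += 1
            counts[loser] += 1
        # New rating = total score / games played.
        ratings = {t: totals[t] / counts[t] for t in teams}
    return ratings

# Tiny example: Duke beats both, Kansas splits, UNC loses both.
r = isr([("Duke", "UNC"), ("Duke", "Kansas"), ("Kansas", "UNC")])
```

On this toy schedule the ratings order the teams as you would expect: Duke above Kansas above UNC.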

This year, I created my own variant of ISR. There are two main modifications. First, I am accounting for margin of victory. How? In a 2006 paper by Paul Kvam and Joel Sokol, the authors derive an expression for the probability that Team A will defeat Team B, given that Team A beat Team B by x points on Team A’s home court:

RH(x) = exp(0.292 x - 0.6228) / (1 + exp(0.292 x - 0.6228))

This function levels off as the margin gets higher: the values for x=21 and x=20 are almost identical and close to 1. This function is also an indirect measure of strength of victory. Given the score of a game, and taking into account the home floor, we can evaluate this function and scale the “winning bonus” – so a large margin of victory will result in a winning bonus greater than 0.25 and a smaller margin of victory will result in a smaller winning bonus.
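Here is the function in Python, along with one possible way to scale the winning bonus by margin of victory. The `winning_bonus` scaling below is my own assumption for illustration (the post doesn’t give the exact formula the author used); it just arranges for large margins to earn more than 0.25 and small margins less:

```python
import math

def rh(x):
    """Kvam & Sokol (2006) logistic fit: probability the home team wins a
    rematch, given it won the first game by x points at home."""
    z = 0.292 * x - 0.6228
    return math.exp(z) / (1.0 + math.exp(z))

# The curve levels off as the margin grows: rh(20) and rh(21)
# are nearly identical and both close to 1.

def winning_bonus(margin, base=0.25):
    """Margin-aware winning bonus (my hypothetical scaling, not the
    author's): rh(margin) is in (0, 1), so 2 * base * rh(margin) is
    above `base` for big wins and below it for narrow ones."""
    return 2 * base * rh(margin)
```

A close game (say a 1-point home win) then earns a bonus below 0.25, while a blowout earns close to 0.5.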

The second variation is to weight games differently. I divide the season into three segments:

  • The first 10 games,
  • The next 10 games,
  • The rest of the season.

The segments have weights: [0.8, 1.0, 1.2]. Why? Because I felt like it: games later in the season are probably a better predictor. A better approach is to find optimized weights based on tournament predictive power. After each modified ISR iteration I renormalize team ratings so they are in the range [0, 1]. Effectively this means I compute three scores for each team instead of one, but I don’t think this screws up the predictive power of the model too much given the number of observations per team (around 30).
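The segment weighting and the per-iteration renormalization can be sketched as follows. This is a guess at a simple implementation consistent with the post, not the author’s code; `game_index` is the game’s position in that team’s season:

```python
def game_weight(game_index):
    """Weight a game by where it falls in a team's season:
    first 10 games, next 10 games, rest of the season."""
    if game_index < 10:
        return 0.8
    elif game_index < 20:
        return 1.0
    return 1.2

def renormalize(ratings):
    """Rescale all team ratings into [0, 1] after each modified ISR pass."""
    lo, hi = min(ratings.values()), max(ratings.values())
    return {t: (r - lo) / (hi - lo) for t, r in ratings.items()}
```

In a weighted ISR pass, each game’s contribution to a team’s total would be multiplied by `game_weight`, and the total divided by the sum of weights rather than the raw game count.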

I ran the algorithm on the complete set of 2012–2013 college basketball games, found here courtesy of Peter Wolfe of UCLA. This list is exhaustive and includes NAIA schools, Canadian schools, exhibition games, the Washington Generals, cats and dogs living together, etc. I’m not sure the teams are fully connected, so I do a single pass through all of the games, excluding exhibitions, to identify a cluster of top-tier teams (presumably all Division I and II). The algorithm is about 140 lines of Python including the code to read the data. No fancy stuff. I will post the code later.
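That connectivity pass amounts to finding the largest connected component of the game graph. Here is one way it might look (my sketch, not the promised code), treating teams as nodes and non-exhibition games as edges:

```python
from collections import defaultdict, deque

def largest_component(games):
    """Return the largest cluster of teams linked by games. Teams outside
    this cluster (NAIA schools, exhibitions-only opponents, ...) get dropped."""
    graph = defaultdict(set)
    for a, b in games:
        graph[a].add(b)
        graph[b].add(a)
    seen, best = set(), set()
    for start in graph:
        if start in seen:
            continue
        # Breadth-first search from an unvisited team.
        component, queue = set(), deque([start])
        while queue:
            t = queue.popleft()
            if t in component:
                continue
            component.add(t)
            queue.extend(graph[t] - component)
        seen |= component
        if len(component) > len(best):
            best = component
    return best
```

Rating only the teams in this component avoids the pathologies of rating islands of teams that never play anyone in the main cluster.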

I have been doing little NCAA models like this for a few years now, and this is the first one I am proud of. We’ll see how it does. The main difference, of course, is that I am looking at individual games rather than aggregate team statistics over a season. A colleague of mine sometimes quotes the Papa John’s slogan “Better Ingredients, Better Pizza” when referring to the use of more granular data in models. I hope this year’s pizza tastes as good as it smells. (No endorsement implied…)


Author: natebrix

Follow me on twitter at @natebrix.

3 thoughts on “NCAA Tournament Prediction Model 2013”

  1. Doh! I was hoping to crib from your picks but ESPN says “Other entries are only viewable after the second round of the NCAA tournament starts”.

  2. Very interesting, Nate. Thanks for posting some details. I’m currently on a plane headed to San Jose to watch games and the wifi is terrible, so I’ll save a more detailed comment for later. Right now I’ll just mention that Kvam and Sokol 2010 backed off that probability model from the 2006 paper and ended up putting the home court advantage closer to 5 than 10. So you might want to tweak RH so that it is close to one when X=10 and see how that affects your predictions.
