Hoop dreams: Yale professor Ed Kaplan scores with OR-aided predictions for NCAA Basketball Tournament pools.
By Peter R. Horner
The college basketball season traditionally culminates in a frenzy of nail-biting competition known to hoops junkies everywhere as March Madness. Forget the NCAA tournament. We're talking about the office, bar or Internet pool, that rite of early spring when butchers, bakers and candlestick makers, and yes, even operations researchers, suddenly develop insight into the relative strengths of 64 men's college basketball teams.
Amazingly, millions of these instant experts are so confident in their perceived prognostication power that they dare to predict the exact outcome of a 64-team tournament, which, if math serves us correctly, has 2^63, or something like 9.22 quintillion (that's billion billion for those of you keeping score at home), possible outcomes. Utter madness. Perhaps even more mind-boggling, these same people are more than willing to put their money where their madness is.
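The arithmetic is easy to check: each of the 63 games has two possible winners, so a filled-out bracket is one of 2^63 possible outcomes. A quick sanity check in Python:

```python
# Each of the 63 single-elimination games has two possible winners,
# so a completed bracket is one of 2**63 possible outcomes.
outcomes = 2 ** 63
print(outcomes)           # 9223372036854775808
print(f"{outcomes:.2e}")  # 9.22e+18, i.e., about 9.22 quintillion
```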
The tournament is single-elimination; once a team loses, it's out. Teams are seeded No. 1 through No. 16 and evenly divided into four regions (see box). Since each game eliminates one team, a total of 63 games are played to determine the champion.
Operations researchers may not know anything about college basketball, but they can certainly appreciate the difficulty of correctly predicting the outcome of a random event such as a 64-team tournament involving teams of young men of varying basketball skills. Operations researchers do, however, know quite a bit about probability and optimization and how those methodologies can improve decision-making. You might say that when it comes to NCAA basketball office pools, operations researchers have a mathematical method to their March Madness.
Edward Kaplan of Yale University certainly does. Stricken with March Madness, Kaplan, a professor of management sciences and public health at Yale's School of Management, teamed up with colleague and deputy school dean Stanley J. Garstka to develop a series of tournament-specific probability and optimization models. Their stated goal: to demonstrate the value and power of operations research in a way that millions of Americans could truly appreciate. Their ulterior motive: to make a big splash in March Madness pools and garner any perks that went along with it (free beers at neighborhood bars, pats on the back, bragging rights, etc.)
For the record, Kaplan says the only thing he knows about college hoops is that the ball is round, "just like my head." Kaplan's "day job" involves far more serious work, such as his internationally acclaimed modeling of HIV and AIDS policies (see page 26). In fact, Kaplan first worked on the March Madness models in hotel rooms during breaks at overseas AIDS conferences. As for Garstka, he's a well-respected scholar and a leading expert on, appropriately enough, bankruptcy.
One could employ several strategies in attacking the tournament problem. For example, one could simply toss a coin for each of the 63 games. Heads, take the first team listed to win; tails, take the second team. Using such a strategy, one could expect to get 50 percent of the first-round games correct and 33 percent of the 63 tournament games correct. (For some downstream games, of course, you would have no chance of a correct pick, since none of your pre-tournament selections would have advanced to that particular game.)
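Those coin-flip figures can be verified in a few lines: when every game is a 50/50 toss-up, your pick for a round-r game is correct only if that team wins r straight coin flips, and round r contains 2^(6-r) games.

```python
# Expected correct picks in a 64-team bracket when every game is a coin flip.
# Round r has 2**(6 - r) games, and a round-r pick is correct only if the
# chosen team wins r straight 50/50 games: probability (1/2)**r.
expected = sum(2 ** (6 - r) * 0.5 ** r for r in range(1, 7))
print(expected)       # 21.328125
print(expected / 63)  # ~0.34 -- roughly the one-in-three figure above
```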
Flipping coins is analogous to trying to predict a theoretical tournament in which every team has an equal chance of winning every game. Of course, that's never the case in the NCAA. Duke is a much better team than Lamar University, and anyone who would pick Lamar to beat Duke (the two teams played last year) would be laughed out of every bar in America.
Any system has to at least beat blind chance. Otherwise, what's the point? The most common office pool strategy is to simply pick the higher-seeded team to win each match-up. Such a strategy, Kaplan says, would have produced an overall success rate of 56 percent over the last three years of NCAA and NIT tournaments, a significant improvement over pure chance. In the 63-game NCAA tournament, for example, one would expect to get about 36 games correct by picking the higher-seeded team in every match-up before the tournament began. (The NIT is a separate tournament composed of 32 teams that didn't qualify for the NCAAs. It's a long story, but trust me, TV revenue is behind it. From Kaplan's viewpoint, the NIT serves one purpose: it provides more data.)
Picking the highest seeds, however, doesn't give you an advantage over other participants, since the seedings are known to all pool players in advance. The NCAA selection committee has told you and everyone else who the favorites are in every game.
Still, questions arise. Do you think the selection committee has made mistakes in a few places? Just because four teams are seeded No. 3, are they all equal in strength? The $64,000 question, of course, is: Can a mathematical model employing OR techniques improve your chances of winning a March Madness pool? We thought you'd never ask. The answer is, yes and no.
If you're talking about a pool in which the objective is to correctly predict the winner of as many tournament games as possible, the answer is no. Kaplan's model produces a success rate of 59 percent (based on 188 tournament games played in the NCAA and NIT in 1998 and 1999), only slightly better than you would get from simply picking the highest seeds. "That's to be expected since the optimization model basically tells you to pick the favorites when the objective is to maximize the number of games correctly predicted," Kaplan says.
However, if you're playing one of the March Madness pools that offer bonus points for correctly predicting upsets, as some office pools and many of the Internet pools do, then Kaplan's model offers a significant advantage. In the CBS Sportsline Pool and the Packard No. 2 Pool, for example, the objective isn't to correctly call as many winners as you can; the objective is to get as many points as you can. In upset pools like CBS and Packard, you get more points for predicting an upset winner than you do for predicting a favorite to win a given game. In other words, you can call fewer games correctly and still outscore someone who called more games correctly but had fewer upset winners. In the CBS pool, if you correctly pick a No. 1 seed to win in round four, you get eight points. If you correctly pick a No. 8 seed to win in round four, you get 64 points. Clearly, there are incentives to abandon the highest-seed strategy and ferret out some likely upset winners, but which "underdogs" do you pick to win, and in what round do you pick them?
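The two CBS figures quoted above are consistent with a simple rule: a base value that doubles each round, multiplied by the seed of the winner you pick. The official CBS formula isn't spelled out here, so treat this sketch as an illustration of upset scoring, not the actual rule:

```python
def upset_pool_points(round_number: int, seed: int) -> int:
    """Points for correctly picking `seed` to win a game in `round_number`.

    Assumed rule: a base value doubling each round (1, 2, 4, 8, ...) times
    the winner's seed. This reproduces the two CBS examples in the text
    (No. 1 seed in round four -> 8 points; No. 8 seed in round four -> 64)
    but is otherwise an illustrative guess, not the official formula.
    """
    return 2 ** (round_number - 1) * seed

print(upset_pool_points(4, 1))  # 8
print(upset_pool_points(4, 8))  # 64
```

Under a rule like this, a deep run by a middling seed is worth far more than the same run by a favorite, which is exactly the incentive to hunt for plausible upsets.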
That's where Kaplan's models and operations research techniques earn their keep. College basketball is a data-rich environment, and Kaplan says the tournament structure, from a mathematical point of view, is "beautiful to model," so right away you've found an operations researcher's field of dreams. Everything, of course, starts with data: won-loss records, schedule strength, conference strength, conference standings, head-to-head meetings, margin of victory, home record, road record, offensive and defensive scoring averages, etc.
In addition, you've got several polls and sports rating services that incorporate some or all of these data to produce rankings. There are coaches' polls, media polls and computer polls. The computer-generated RPI system, for example, produces a specific "power rating" number for all 317 Division I men's basketball teams. Schedule strength weighs heavily in the RPI ratings, which explains how an 11-7 Georgia team (with the toughest schedule in the country) was rated No. 10 by RPI in mid-January, while a 17-1 Georgetown team (with one of the softest schedules around) couldn't crack the top-25.
Las Vegas oddsmakers chip in with additional data in the form of point spreads (the number of points by which the favorite is expected to beat the underdog in a head-to-head meeting) and over-under point totals (the expected combined point total for a given match-up of two teams).
Kaplan downloads this mountain of data into a series of four probability models. The first focuses on won-loss records of the 64 teams in the NCAA and the 32 teams in the NIT. Kaplan then computes a composite team, or Mega Team as he calls it, based on the results of all the other Division I non-tournament teams lumped together.
"The reason you do this is to get connectivity," Kaplan explains. "Let's say you have two teams, Team A and Team F, in the tournament that haven't played each other. But I know A played B, and B played C, and C played D, and D played E, and E played F, so there is a path. I end up creating several paths. I want to take advantage of all that connectivity."
The second model is based on Las Vegas odds, specifically the point spreads and over-under lines for the first round of the NCAA tournament. "From those numbers I can infer what the market estimate is for the scoring rates of these different teams against specific competition," Kaplan says. "The difference in scoring rates is the point spread. The sum is the over-under line."
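Those two relations pin down the implied scoring rates exactly: the spread is the difference of the two rates and the over-under line is their sum, so grade-school algebra recovers both. A minimal sketch (the sample numbers are invented):

```python
def implied_scoring_rates(spread: float, total: float) -> tuple[float, float]:
    """Invert the two relations Kaplan describes:
       favorite - underdog = spread,  favorite + underdog = total.
    Returns (favorite_rate, underdog_rate)."""
    favorite = (total + spread) / 2
    underdog = (total - spread) / 2
    return favorite, underdog

# e.g., a 7-point favorite with an over-under line of 145:
print(implied_scoring_rates(7, 145))  # (76.0, 69.0)
```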
The third and fourth models are based on the sports rating services run by Jeff Sagarin and Ken Massey. Both services provide computerized ratings and scoring rates for each team. The probability that one team beats another is the probability that the first team scores more points than the second. Kaplan borrows some ideas from Poisson processes to refine the model.
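That win probability can be sketched with a plain Poisson model: treat each team's score as a Poisson random variable at its estimated scoring rate and compute the chance the first outscores the second. This is only the flavor of the idea, with invented rates; Kaplan's actual refinement differs in its details.

```python
from math import exp

def poisson_pmf_table(lam, max_k):
    """Poisson pmf values P(X = 0..max_k), computed iteratively
    to avoid the overflow of lam**k / k! for large k."""
    table = [exp(-lam)]
    for k in range(1, max_k + 1):
        table.append(table[-1] * lam / k)
    return table

def beat_probability(lam_a, lam_b, max_pts=250):
    """P(team A outscores team B) with independent Poisson scores.
    Ties are split by renormalizing over decided games."""
    pa = poisson_pmf_table(lam_a, max_pts)
    pb = poisson_pmf_table(lam_b, max_pts)
    cdf_b = [0.0]                       # cdf_b[k] = P(B < k)
    for k in range(max_pts):
        cdf_b.append(cdf_b[-1] + pb[k])
    p_gt = sum(pa[k] * cdf_b[k] for k in range(1, max_pts + 1))
    p_tie = sum(pa[k] * pb[k] for k in range(max_pts + 1))
    return p_gt / (1.0 - p_tie)

# A team expected to score 76 against one expected to score 69:
print(round(beat_probability(76, 69), 3))
```

Note how modest the edge is: a 7-point favorite at these scoring rates still loses a substantial share of the time, which is why single-game upsets are so common.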
Next, Kaplan takes the results from the four probability models and feeds them into another probability model that determines the likelihood that any given team wins whatever game it could play in any tournament round. This provides the input for an optimization model. (For those interested in the math behind the models, see Kaplan and Garstka.) For our purposes, suffice to say that dynamic programming ultimately produces slates of tournament predictions.
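In miniature, the optimization looks something like the following four-team sketch. Everything here (teams, seeds, win probabilities, and the doubling upset-scoring rule) is invented for illustration, and the real model works over 64 teams with backward induction rather than brute-force enumeration; but the key structural point survives even at this scale: a pick's value averages over every opponent it might face.

```python
# Toy bracket optimizer for a 4-team field: A plays B and C plays D in
# round 1; the winners meet in round 2. All numbers are invented.

seed = {"A": 1, "B": 4, "C": 2, "D": 3}

p = {}  # p[(x, y)] = probability that x beats y
def set_p(x, y, prob):
    p[(x, y)], p[(y, x)] = prob, 1 - prob
set_p("A", "B", 0.75); set_p("C", "D", 0.60)
set_p("A", "C", 0.55); set_p("A", "D", 0.65)
set_p("B", "C", 0.40); set_p("B", "D", 0.50)

def pts(team, rnd):
    """Upset scoring: seed times a base value that doubles each round."""
    return seed[team] * 2 ** (rnd - 1)

# w[t][r] = unconditional probability that t wins its round-r game.
w = {t: {} for t in seed}
first_opponent = {"A": "B", "B": "A", "C": "D", "D": "C"}
other_half = {"A": "CD", "B": "CD", "C": "AB", "D": "AB"}
for t in seed:
    w[t][1] = p[(t, first_opponent[t])]
for t in seed:
    # Round 2 averages over every opponent t might face; a pick's value
    # never assumes an earlier pick was correct.
    w[t][2] = w[t][1] * sum(w[o][1] * p[(t, o)] for o in other_half[t])

# Enumerate consistent slates (s1 wins semi 1, s2 wins semi 2, the champion
# is one of them) and keep the one with the best expected pool score.
best, best_slate = -1.0, None
for s1 in "AB":
    for s2 in "CD":
        for champ in (s1, s2):
            value = (w[s1][1] * pts(s1, 1) + w[s2][1] * pts(s2, 1)
                     + w[champ][2] * pts(champ, 2))
            if value > best:
                best, best_slate = value, (s1, s2, champ)

print(best_slate, round(best, 2))  # favors underdog B early, C as champion
```

Even in this toy example the optimizer abandons the favorite in one semifinal because the upset bonus outweighs the lower win probability, which is exactly the behavior the article describes.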
So how did Kaplan do? Again, we thought you'd never ask. Last March in its first live test, Kaplan's system outscored 99.97 percent of the competition in the CBS Sportsline Pool and beat 92.5 percent of the entries in the Packard No. 2 Pool. He beat every one of the system entries in the Packard Pool, and came close to winning both pools outright. The results reflect Kaplan's best-of-four slates entered in upset pools that awarded bonus points.
In contrast, in the ESPN pool that didn't award upset points, Kaplan's best pick could only beat 71 percent of the entries, which Kaplan admits is worse than chance. With four different entries, chance would have them beat 80 percent, 60 percent, 40 percent and 20 percent, respectively, of the competition. The ESPN pool drew 580,000 entries compared to 95,000 for the CBS pool. The Packard No. 2 Pool, a sort of World Series for March Madness system junkies conducted by a college math professor, had 40 tough entries.
Different models produce different slates for different pool objectives. For example, in the ESPN pool, Kaplan's "Massey Model" produced a "boring" Final Four composed of the four No. 1 seeds Michigan State, Duke, Stanford and Arizona. In the CBS pool, the "Massey Model" produced a Final Four composed of a No. 1 seed (MSU), a No. 6 seed (Indiana), a No. 7 seed (Tulsa) and a No. 8 seed (Wisconsin). When Kaplan used the "regular season record" model to enter the CBS pool, he had a Final Four of two No. 4 seeds (LSU and Syracuse), a No. 5 seed (Florida) and a No. 7 seed (Tulsa).
For the record, Michigan State (No. 1 seed) won the championship, beating Florida in the final. Wisconsin and North Carolina (a No. 8 seed) completed the Final Four. Tulsa lost to North Carolina in the regional final.
"Did the Massey Model believe that Wisconsin and Tulsa really had the best probability of making it to the Final Four?" Kaplan asks. "No. But what it did believe was that when you took into account probability and the upset points, the best expected reward came from picking those teams."
Feel the Power
Kaplan credits his strong showing in the upset pools to the power of operations research and the fact that his models take advantage of far more data than other systems do, and then optimize. "When our model picks a team to win in the second round, it looks at all the possible competition, not just the team it picked to win the other first-round game in that bracket," Kaplan says. "What the model says is, 'Which second-round picks enable the greatest expected office pool score overall considering all possible opponents?' "
Most individuals and systems, on the other hand, make downstream game predictions based on the assumption that their early round predictions are correct. In other words, they assume a particular match-up when that match-up may never materialize. In so doing, they leave a lot of valuable information on the table. Thanks to dynamic programming and optimization techniques, Kaplan's models leave nothing to, well, chance.
Kaplan scores two final points at the buzzer: Point No. 1: OR is powerful stuff. "We did very well in the upset pools, better than anyone expected," he says. "The real story here isn't basketball. It's the power of OR modeling and the sense of confidence that if it works here, it's going to work in other applications that are far more important."
Point No. 2: March Madness can make you a little crazy. One of Kaplan's models called for Northern Arizona (a No. 15 seed) to beat St. John's (No. 2 seed) and St. Bonaventure (No. 12) to beat Kentucky (No. 5) in the first round for an upset pool. "People were laughing at us, but it almost happened," Kaplan says. "Both of those games were very close."
On the other hand, none of Kaplan's models predicted Pepperdine (No. 11 seed) to beat traditional college powerhouse Indiana (No. 6) in the first round, but Kaplan's six-year-old daughter did. "That's because the cat that lives across the street from us is named Pepper," Kaplan laughs.
Note to self: Find out the name of my neighbor's cat.
Peter R. Horner is the editor of OR/MS Today.
OR/MS Today copyright © 2001 by the Institute for Operations Research and the Management Sciences. All rights reserved.