FEB 10

Super Bowl 50: Expecting the Unexpected

Photo by thelittleone417. Licensed under Creative Commons.

We never expected the Broncos to win. The Panthers were practically universal favorites. In fact, out of 74 ESPN football contributors, only 19 expected the Broncos to win. Our own office discussion ran heavily in favor of Carolina going home with the Super Bowl trophy.

WSO2 Machine Learner seemed to agree: 57.4% Panthers. Sold.

Other companies using predictive analytics also sided with the Panthers. Microsoft, which brought Cortana/Bing into play, projected a 64% chance for the Panthers to win. Electronic Arts, which has been predicting games for a while by pitching teams against each other in its Madden NFL videogame, also believed the Panthers would nail the game.

Instead, what we got was an incredible win from the Broncos. Denver opened up with a 10-0 lead that the Panthers never recovered from. Peyton Manning was stellar. The Broncos made more plays and ran a better defense than the Panthers did. Result: a 24-10 Super Bowl win for the Broncos.

Well played, guys!

So what does that mean for predictive analytics?

The type of analysis used to predict the Super Bowl is known as probabilistic prediction. And it’s just that, a probability. A 57.4% chance for the Panthers is a 42.6% chance that the Broncos will win. If those two teams played each other ten times, you’d expect the Broncos to win roughly four of those games.
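
As a quick back-of-the-envelope check, here’s that arithmetic spelled out (illustrative numbers only, in Python):

```python
# Win probability our model assigned to the Panthers
p_panthers = 0.574

# By definition, the Broncos get the complement
p_broncos = 1 - p_panthers              # 0.426

# Expected Broncos wins over a hypothetical 10-game series
expected_broncos_wins = 10 * p_broncos
print(p_broncos, expected_broncos_wins)  # ~0.426, ~4.3 games - roughly four of ten
```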

Our home-grown experiment compared favorably with other probabilistic predictions. Notably, Neil Paine of FiveThirtyEight ran a superb, detailed analysis on the Panthers versus the Broncos defense, where he pointed out that this was one of the most even matches he’d seen; FiveThirtyEight called it at 59%-41% in the Panthers’ favor. Our 57.4% to 42.6% prediction was very close, even running with only a fraction of the data FiveThirtyEight has. And we came up with closer numbers than Cortana’s 64% Panthers versus 36% Broncos.

The Wild Cards

Ironically, the weirdest analyses seem to have won the day. RiseSmart, which has something to do with the job market, predicted that the Broncos would win - based on the theory that states with lower unemployment rates had a better chance. No, we’re not entirely sure what to make of that, either.

eBay, which used consumers’ purchase data, also indicated a Broncos win. Since it was the fans doing the buying, we’d say this is a fantastic example of a prediction market. It’s likely the fans had information that we didn’t factor in. Maybe Peyton Manning’s presence carried the day. Who knows?

Scoring some great lessons

Looking back, we’re pretty proud of BigDataGame. After all, it’s not every day you get to beat Microsoft’s poster girl.

More importantly, it’s been an incredible learning experience, especially since none of us knew anything about sports prediction when we started. We now have a much better idea of what it takes, including algorithms; with a bit more thought, and a lot of fine-tuning, we might even be looking at a whole host of other sports in the future.

If you’d like to know more of the science and tech behind BigDataGame (and if you can help us figure out how we can do this better), check out our webinar on the subject. We’re planning to talk about everything - how we gathered the data, the algorithms behind the prediction, and more. We’d love to know what you think.

JAN 25

Here's to the Panthers!

By Yudhanjaya Wijeratne

January 24th saw the Panthers smash the Cardinals and the Patriots lose to the Broncos.

Prediction is a tricky business. A true Delphic-Oracle-style prediction would be binary in nature. One wins, the other doesn’t - end of story.

Most computational models - well, most accurate computational models - don’t work this way. Especially in sports. Big blogs like FiveThirtyEight (and ours) display probabilities, trying to figure out how much of a chance one particular team has of winning. Because of the nature of probability, there’s no guarantee that a given team will win a given match. After all, a 70% probability means that if you repeated the experiment ten times, you’d expect the underdog to win about three of those ten games. There’s no telling which of those potential ten matches is the one playing tomorrow. That requires a human eye.

Unfortunately, there was no better reminder of this for us than the Jan 24 games. We gave the Panthers a 50.83% chance against the Cardinals. That’s barely an edge, considering that a coin has a 50% chance of turning up heads. You never really know.

The Panthers crushed the Cardinals that game, 49-15. They drew first blood and were leading 17-0 by the second quarter. Despite a brief spurt of activity in the second quarter, the Cardinals were bowled over by the end of the game.

Our Patriots vs Broncos prediction, however, was 56.59% to 43.41% in the Patriots’ favor. That’s a larger gap. And yet the Broncos won by 2 points. Excellent play from Von Miller, a great scramble from Peyton Manning and some remarkably puzzling decisions from the Patriots saw the Broncos through to the finals.

Based on this data, we ran the numbers again. With the finals now set as the Panthers versus the Broncos, we put the Panthers’ probability of winning at 57.40%. Lay your bets, everyone.

And someone please get the popcorn.

JAN 18

Postmortem, Jan 17 predictions: Why the Broncos won, and what that means for us

By Yudhanjaya Wijeratne

We’ve had a great week here with BigDataGame.

We predicted that the Patriots had a higher chance of winning than the Chiefs. On January 16th, the Patriots beat the Chiefs.

We predicted that the Cardinals would beat the Packers, and that the Panthers stood a slightly better chance of beating the Seahawks than vice versa. Both held true.

One prediction, however, failed. BigDataGame showed the Steelers having a much better chance than the Broncos. That one turned out to be wrong. Well, not technically wrong - since these are probabilities we’re talking about, not simple statements - but it definitely threw us a bit.

What happened?

We looked at Twitter for an answer. What better way to understand something than to examine it through a thousand eyes?

Fitzgerald Toussaint and the injuries

Mistakes happen in sports. Toussaint’s was one of them. In the fourth quarter, Bradley Roby hit him hard enough to knock the ball loose. Unfortunately for the Steelers, that fumble set the Broncos up with an excellent position to play from for the rest of the game.

Two other things that swung the play were injuries. The Steelers’ Antonio Brown took a pretty brutal hit during the previous Bengals game. The subsequent concussion ruled him out of this game. The Steelers had no one to fall back on save for Ben Roethlisberger, who had just recently suffered heavy damage to his throwing shoulder.

In sports, nobody can predict mistakes or injuries. Human analysis is capable of factoring in injuries - and while we could, say, build a point-based system to reduce winning percentages based on player injuries, it’s quite a time-consuming task: we’d need many years’ worth of data, performance and probability analysis at the level of individual players, and a way to correlate injuries with subsequent game performance.
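
We haven’t built any of this, but a minimal sketch of what such a point-based offset could look like is below. The positional weights, the cap and the renormalization are all hypothetical, purely for illustration:

```python
# Hypothetical, illustrative injury offset - not part of BigDataGame.
# Each injured player costs his team a few percentage points of win
# probability, capped so one long injury report can't zero a team out.

INJURY_WEIGHTS = {   # assumed positional weights, in percentage points
    "QB": 8.0,
    "WR": 3.0,
    "RB": 2.5,
}

def adjust_for_injuries(p_team, injured_positions, cap=15.0):
    """Shift win probability away from a team based on its injury list."""
    penalty = min(sum(INJURY_WEIGHTS.get(pos, 1.0) for pos in injured_positions), cap)
    adjusted = max(p_team - penalty, 1.0)
    # Keep the two teams' probabilities summing to 100%.
    return adjusted, 100.0 - adjusted

# Example: a team predicted at 55.76% loses its top receiver and
# fields a banged-up quarterback.
print(adjust_for_injuries(55.76, ["WR", "QB"]))   # roughly (44.76, 55.24)
```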

ESPN’s prediction graph

ESPN, being close to the game, does a more detailed analysis. In their blog post titled ‘Broncos didn’t get control until late, but that was enough’, they discussed the winning probabilities for both teams and the moments that changed them. Here’s their win probability chart over the course of the game:

It’s interesting to note how close their initial predictions for the match were to ours. Our estimate ran as follows: Pittsburgh Steelers with a 55.76% win probability, Denver Broncos with 44.24%. Their numbers were 59% and 41%, respectively. It must have been quite a surprise when the Broncos won.

JAN 10

Bengals Playoffs: [almost] called it!

By Yudhanjaya Wijeratne

As we explained on the BigDataGame page, we’ve been testing our prediction machine for a while. Nevertheless, January 9’s games were its first real test at predicting upcoming matches. On Saturday, we saw the Chiefs go up against the Texans and the Bengals pit themselves against the Steelers. Three days prior to the matches, BigDataGame had given us the following numbers:

Saturday
  KCC 58.33% vs HOU 41.67%
  CIN 56.87% vs PIT 43.13%

Sunday
  SEA 52.08% vs MIN 47.92%
  GB 51.89% vs WAS 48.11%

Big Game winning chance predictions:

  • NE 9.86%
  • CIN 9.86%
  • CAR 9.52%
  • KC 9.52%
  • ARI 9.18%
  • SEA 8.50%
  • DEN 7.99%
  • MIN 7.82%
  • PIT 7.48%
  • GB 6.97%
  • HOU 6.81%
  • WAS 6.46%

Those were some pretty close numbers, but the final prediction was that the Chiefs, Bengals, Seahawks and Packers would win.

We were almost entirely correct. KCC went 30-0 against HOU. Despite a 13-0 lead for the Chiefs, the teams played fairly evenly - until the Texans’ J.J. Watt took a hit and went off in the third quarter after a tackle from Eric Fisher (Watt, who had previously been dealing with a groin injury and a broken hand, is apparently now in need of groin surgery).

Following this, the Chiefs quickly extended their lead to 27-0 and subsequently a clean 30-0 win, though not without injuries of their own: Jeremy Maclin went down with what most people feared was a torn ACL, but which is now confirmed to be a sprained ankle. His participation in the upcoming Patriots match is very much up in the air at the moment.

Our CIN vs PIT prediction failed. Everyone knows what happened - a brutal back-and-forth with two strange fouls early on and one of the most incredible catches ever seen, made by Martavis Bryant. For a moment there it felt like the Olympic gymnasts had invaded the game. We gave the Bengals a 56.87% chance of winning. The Steelers clinched it, with Roethlisberger walking off with a shoulder injury.

The other two were on point. The Seahawks took the game 10-9 in a closely fought match with the Vikings. And despite a weak start, the Packers absolutely destroyed the Redskins, winning 35-18. Adams’ knee injury was revealed to be minor, so we’ll be seeing him again.

Final tally: we got 3 out of 4 predictions correct on our first run - a 75% success rate. Still, it’s too early to tell who’ll go on to take the Big Game.

This set of matches highlighted a flaw in our (and indeed, most) prediction systems: injuries. While injuries cannot be predicted, we should have some method of including them in our pre-game calculations - perhaps as a mathematical offset that acts as a filter on the predictions. So, while the players ready themselves for another clash, we’re working on factoring in broken bones and surgeries for an even more accurate result.

To make your own predictions, visit WSO2 BigDataGame. The program is online and freely available.

To understand how BigDataGame works, click here.

JAN 9

WSO2 BigDataGame: How This Works

By Yudhanjaya Wijeratne

“Here's a crazy idea...a really crazy idea in fact…”

It all started when Saliya fired off an email about American football’s Big Game.

In true WSO2 fashion, it had graphs. It had charts. It had heaps and heaps of data. And at the end of it was a question: given everything we can find on football teams and the Big Game, can we use our software to predict the winners?

While this was happening, one team at WSO2 was working on one of our newest products - WSO2 Machine Learner, meant for pretty much the same purpose. While not specialized for sports, Machine Learner is all about generating predictive models.

Right, said Nirmal and Thamali of the Machine Learner team. Let’s do this.

The result is WSO2 BigDataGame, your friendly neighborhood football predictions machine. We’ve had to drop some trademarks (and change the team names) due to legal reasons, but that doesn’t impact the quality of the predictions in any way.

One: finding the data

American football basically has three seasons: preseason, regular season and playoffs. The first question was: how do we find the data, and what do we use? Kern (perhaps the only one of us who actually paid attention to American football) stepped in at this point, pointing out that:

  1. Pre-season data should not be considered. Preseason games are mostly used to determine which players will make the team for the coming season, so the best players don’t play much and coaches don’t really care about winning; they’re more interested in seeing who’s a better fit for the team. That’s one huge chunk written off, then. Regular season data it is.
  2. Injuries are very common and really skew the data, especially if it’s a quarterback who gets hurt. And apparently the reverse happens: an average team may be revived by the return of a hitherto injured hero. This is a problem, because injuries don’t make it into most datasets.
  3. Teams that have won the Big Game have usually had a great defense.
  4. Some teams start off the season slow and then begin playing better to make the playoffs.

Armed with this insight, we started hunting data.

Now, there’s no lack of sites hosting data on football games, but most don’t have historical data for all teams. We wanted everything. Eventually, we came across pro-football-reference.com, which had data on all the teams for many years.

If you’ve used that site, you’ll know that it looks like a giant set of spreadsheets put together. Getting the data into the format we needed required a bit of filtering, but eventually we had a nice CSV (Comma-Separated Values) file with 2012, 2013 and 2014 regular-season historical data.
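
The exact cleanup steps aren’t terribly interesting, but to give a feel for the filtering involved, here’s a rough sketch in Python. The column names and file paths are placeholders we invented for illustration, not the real schema:

```python
import pandas as pd

# Illustrative sketch - column names and paths are placeholders, not the
# actual schema used for BigDataGame.
frames = []
for year in (2012, 2013, 2014):
    season = pd.read_csv(f"raw/{year}_games.csv")
    # Keep regular-season games only; preseason results are dropped because
    # starters barely play and coaches aren't trying to win those games.
    regular = season[season["week_type"] == "regular"].assign(season=year)
    frames.append(regular)

pd.concat(frames, ignore_index=True).to_csv(
    "nfl_regular_seasons_2012_2014.csv", index=False
)
```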

Now the question was, what do we use to predict the future?

Two: the right algorithm

The next task was to pick the right algorithm for the job. Or, as Thamali puts it, to build an accurate model to predict the winning probability (that’s the politically correct way of saying it). Machine Learner works by taking a chunk of data and generating a model that can be used to accurately predict the next chunk; by tweaking hyperparameters, we can manually tune the accuracy of the model. The goal was to build something that could take data from 2012, 2013 and 2014 and predict the winning probabilities for the 2015 regular season.

The first model we built used linear regression. It turned out to be only 50% accurate. Pass.

After a couple of tries, we hit on using Random Forests - generally one of the best classes of classification algorithms out there. We paired this with stacked autoencoders - neural networks built by stacking autoencoder layers, where each layer learns a compressed representation of the previous layer’s output and feeds it to the next.

This worked. Comparing its output against actual results showed the model was highly accurate: 96.7%, to be exact. Unfortunately, it only generated 1s and 0s; if both teams had the same value (1 or 0), we were simply unable to make the prediction.

Things went on in this vein for a while. The logistic regression algorithm, minus the encoding, was okay (96% success rate), but only gave us boolean outcomes. We mucked around with the features for a while, including trying out multi-class classification algorithms, which were, unfortunately, not as accurate as we wanted for this exercise.

We’d reached the limit of what we could do with our current crop of Machine Learner algorithms.

This wasn’t surprising, because everything in Machine Learner was geared more towards business and enterprise data prediction than, well, sports. But rather than let it die, we decided to build a pure Apache Spark program for the random forest regression algorithm we needed. We then ran that to try and predict the 2015 dataset.
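
To give a flavor of what that Spark step involves, here’s a stripped-down sketch using Spark’s MLlib random forest regressor. The feature columns, labels and file paths are placeholders of ours; the real program has far more plumbing around feature engineering and tuning:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import RandomForestRegressor

# Illustrative sketch - column names and paths are placeholders.
spark = SparkSession.builder.appName("BigDataGameSketch").getOrCreate()

# 2012-2014 regular-season history as the training set.
train = spark.read.csv("nfl_regular_seasons_2012_2014.csv",
                       header=True, inferSchema=True)

features = ["points_for", "points_against", "yards_per_game", "turnover_margin"]
assembler = VectorAssembler(inputCols=features, outputCol="features")

# Regression rather than classification: the model emits a continuous score
# per team that can be turned into a winning percentage, instead of a bare 1/0.
rf = RandomForestRegressor(featuresCol="features", labelCol="win_ratio",
                           numTrees=100)
model = rf.fit(assembler.transform(train))

# Score the 2015 matchups the same way.
test = spark.read.csv("nfl_2015_matchups.csv", header=True, inferSchema=True)
model.transform(assembler.transform(test)).select("team", "prediction").show()
```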

By remarkable coincidence, we were smack in the middle of rolling out a new WSO2 Machine Learner release. We threw in the algorithm, cranked the right handles, and voila!

Team 1 | Team 2 | Correct (1) / Wrong (0) | Predicted W/L Team 1 | Predicted W/L Team 2 | Win % Team 1 | Win % Team 2 | Predicted | Actual
MIN    | GB     | 0                       | 0.4378               | 0.6814               | 39.12        | 60.88        | GB        | MIN
TB     | CAR    | 1                       | 0.5628               | 0.7252               | 43.70        | 56.30        | CAR       | CAR
STL    | SF     | 0                       | 0.2814               | 0.2630               | 51.69        | 48.31        | STL       | SF
SEA    | ARI    | 1                       | 0.7126               | 0.6752               | 51.35        | 48.65        | SEA       | SEA
SD     | DEN    | 1                       | 0.4064               | 0.6378               | 38.92        | 61.08        | DEN       | DEN
OAK    | KC     | 1                       | 0.5002               | 0.7250               | 40.83        | 59.17        | KC        | KC
WSH    | DAL    | 1                       | 0.4626               | 0.4064               | 53.23        | 46.77        | WSH       | WSH
TEN    | IND    | 1                       | 0.3626               | 0.4502               | 44.61        | 55.39        | IND       | IND
PIT    | CLE    | 1                       | 0.6252               | 0.2626               | 70.42        | 29.58        | PIT       | PIT
PHI    | NYG    | 0                       | 0.3626               | 0.5628               | 39.18        | 60.82        | NYG       | PHI

Accuracy: 70%

This is what we got for the last games of the 2015 season, held on the 3rd of January, 2016. As you can see, it works. We did a little more calculation and arrived at an overall accuracy rate of 76.5%. While not as high as some of the previous models, this one isn’t binary; wins are predicted as percentages. That’s very important, because this system of odds is how humans also think about sports. It gives us very high hopes for the model.
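
For reference, the 70% figure in the table above is just the fraction of correct calls:

```python
# The "Correct (1) / Wrong (0)" column from the table above
correct = [0, 1, 0, 1, 1, 1, 1, 1, 1, 1]

accuracy = sum(correct) / len(correct)
print(f"{accuracy:.0%}")   # 70%
```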

Three: The Final Word

Now, this program isn’t perfect. It’s very much a hobbyist project at WSO2, and we’re still in the process of tweaking it. Right now, we’re basically predicting probabilities of success - and while we have faith in our product, there’s a whole lot of things that are impossible to account for, injuries in particular. There’s also no predicting the effects of morale on a team; that stuff is sorcery.

The math, however, is sound. Stay tuned - we’re keeping this project updated. Before every game, we’ll be running our predictions and blogging about what we think will happen next. After every game, we’ll be taking the match data, casting that into our match history page and adding it to the BigDataGame dataset so that our predictions constantly use the latest data.

And of course, all through, you’re free to run any two teams you want against each other and see which one stands the best chance of winning. To steal a line from the Hunger Games - may the odds be ever in your favor!