Election 2016: A Real-Life Example of the Importance of Understanding Your Model

Zach Brogadir | November 22, 2016

The 2016 Presidential Election was the most divisive of my lifetime. The candidates presented very different outlooks, sparking fierce debate among American citizens. Interestingly, it wasn’t just voters who were divided by this election. Predictive modelers were likewise involved in their own election-driven debate.

Is it possible to predict the results of a presidential election? Many statisticians attempted to do so by building models that take polling data as input and output a probability of each candidate winning. Two models with large followings were FiveThirtyEight.com’s Polls-only forecast (538) and the Princeton Election Consortium (PEC). Though these models had the same polling data at their disposal, they arrived at very different conclusions.

After analyzing hundreds of polls, both models agreed on one thing: Hillary Clinton was the favorite to win the election, with an expected overall vote margin of 3-4%. Of course, polls are not equivalent to votes and cannot perfectly predict the outcome of an election. Polls can be off for many reasons. Perhaps an individual poll over-sampled a certain demographic or did a poor job of identifying actual voters. The key question is: how likely is it that the polls in aggregate are systematically biased in one direction or the other? In other words, what is the standard error of the aggregated polling results? This is where the two models began to diverge significantly.
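
To make that question concrete, here is a minimal sketch, using hypothetical numbers rather than actual 2016 polling data, of why aggregation alone cannot answer it: averaging many polls shrinks each poll's independent sampling noise, but a bias shared by every poll passes through the average untouched.

```python
# Minimal sketch with hypothetical numbers: aggregation shrinks independent
# sampling noise but cannot remove a bias shared by every poll.
import numpy as np

rng = np.random.default_rng(2016)

true_margin = 3.5     # hypothetical "true" national margin, in points
shared_bias = -2.0    # hypothetical systematic bias hitting all polls alike
poll_noise_sd = 3.0   # independent sampling error of a single poll, in points
n_polls = 200

polls = true_margin + shared_bias + rng.normal(0.0, poll_noise_sd, n_polls)

print(f"Aggregate of {n_polls} polls: {polls.mean():+.2f} points")
print(f"Naive standard error of the mean: {poll_noise_sd / np.sqrt(n_polls):.2f}")
# The naive standard error (~0.2 points) looks reassuringly small, yet the
# aggregate still misses the true margin by the full 2-point shared bias.
```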

PEC analyzed past elections, noted that the expected outcome from polling aggregates was consistently within 1-2% of the actual outcome, and therefore used a distribution with a low standard error for 2016. In contrast, the 538 model reflected an increased risk of divergence from the polling aggregate this year, due to the higher-than-usual share of undecided and third-party supporters in the polls. With little historical data on how undecided and third-party supporters actually vote on Election Day, there is much more uncertainty. Accordingly, 538’s model fit a distribution with a much fatter tail. This issue was debated in the statistical community prior to the election, and a reasonable case could be made for both sides.
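
Here is a minimal sketch of how that one assumption drives the answer. The scale and degrees-of-freedom parameters are purely illustrative, and the real models also simulate the race state by state through the Electoral College, so these outputs will not match the published probabilities.

```python
# Minimal sketch: same expected margin, two different assumptions about the
# tail of the aggregate polling error. All parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(538)
n_sims = 1_000_000
expected_margin = 3.5  # expected Clinton margin in points, from the 3-4% range above

# PEC-like assumption: history says aggregates land within 1-2% of the
# result, so model the error as thin-tailed normal with a small scale.
thin_tail = expected_margin + rng.normal(0.0, 2.0, n_sims)

# 538-like assumption: many undecided/third-party voters add uncertainty,
# so model the error as a fatter-tailed Student's t with a larger scale.
fat_tail = expected_margin + 3.0 * rng.standard_t(df=4, size=n_sims)

print(f"P(upset), thin-tailed error: {np.mean(thin_tail < 0):.1%}")
print(f"P(upset), fat-tailed error:  {np.mean(fat_tail < 0):.1%}")
```

The point is directional: with an identical expected margin, the fat-tailed error model assigns the trailing candidate several times the upset probability of the thin-tailed one.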

The win probabilities of the two models reflected this difference in approach. Based on their final predictions published on Election Day, PEC gave Trump about a 7% chance of winning, while 538 put the probability at about 29%. This difference led to two very different conclusions. The 538 model showed a significant probability of Trump winning, roughly equal to the probability of a baseball player getting a hit in his next at-bat. The PEC model, on the other hand, showed Clinton winning with over 90% probability.

As we now know, Donald Trump won the election. It may also be a victory for 538: that model seems to have done a better job of reflecting the increased volatility of this election. Though many pundits and media outlets treated Clinton’s victory as a “sure thing,” followers of the 538 model should not have been surprised by this outcome. It’s important to note that we can’t definitively determine the best model based on just one outcome. Under the PEC model, this result is an outlier, not an impossibility.

We regularly use models to make projections and to discuss with our clients the likelihood of certain events occurring. Our clients rely on these projections in order to make informed decisions. It’s crucial to think critically about the assumptions used in a model and the potential impact those assumptions can have on our projections and our clients.

Zach Brogadir is a Consulting Actuary with Pinnacle Actuarial Resources, Inc. in the Chicago, Illinois, office and has over six years of experience in the property/casualty industry. He has considerable experience with the analysis of unpaid claim liabilities and ratemaking/funding indications for a variety of personal and commercial lines of business. Zach is a Fellow of the Casualty Actuarial Society and a Member of the American Academy of Actuaries. He serves on the CAS Examination Committee, CAS University Engagement Committee and is the Vice President of the Midwest Actuarial Forum.