Be skeptical of data and models

"All models are wrong, but some are useful," as the statistician George Box put it. This is a concept I have come to appreciate after studying Statistics for five years. For those with little exposure to any form of statistics, it can be easy to treat predictions made from models as fact. I would not blame anyone for adopting this view; that is how models come across in popular media. But we need to be more skeptical of data and the models built from them.

The 2016 U.S. presidential election demonstrated this to a tee. In the lead-up to voting day, major news outlets put Trump at as little as a 1% chance of winning, which meant Hillary had a 99% chance of victory. These figures were blasted across the airwaves in the weeks leading up to the election, and the entire country felt like it was a sure thing. Of course, we now know that the pollsters at the different news stations were mistaken. Trump won in a somewhat controversial manner, but that does not matter here.

What is more interesting is how shocked the country was at the Republican's victory. 99-1 seemed like good odds. But if we present those odds in another context, our attitudes quickly change. Imagine you received the following email from your airline of choice: "Dear Sir/Madam, We are writing to inform you that, due to an unexpected change in the composition of Earth's atmosphere, the risk of crashing has increased on all planes. Our best models suggest that for every 8 hours of flight time, there is a 1% risk of the engines malfunctioning …"
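To see why such a figure is alarming, consider how the risk compounds over repeated flights. Here is a quick back-of-the-envelope sketch in Python, assuming the email's hypothetical 1% figure applies per flight and that flights fail independently:

```python
# Chance of at least one engine malfunction over repeated flights,
# assuming the hypothetical 1% risk per flight and independence
# between flights.
risk_per_flight = 0.01

for n_flights in (1, 10, 50, 100):
    p_at_least_one = 1 - (1 - risk_per_flight) ** n_flights
    print(f"{n_flights:>3} flights: {p_at_least_one:.1%} chance of a malfunction")
```

Over 100 flights, the chance of experiencing at least one malfunction climbs to roughly 63%.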

Would you ever fly again? Probably not. So why the shock when Trump won? There are two reasons that could explain it.

1. The models that predicted a 99% vs 1% chance of winning for Hillary and Trump respectively were inaccurate or flawed, i.e. the models were wrong. People's expectations therefore diverged from reality, contributing to the feeling of shock and disbelief.

2. If we assume the predictions were accurate, it suggests people are unable to comprehend that a 1% chance of winning is still a chance of winning. Many likely treated that 1% prediction as practically 0%, contributing to the ensuing disbelief.

The first point exemplifies the idea that all models are wrong, but some are useful. What I have intentionally left out until now is that there was one source that gave Trump a fighting chance: FiveThirtyEight. You can find their predictions for the 2016 election here. Nate Silver and his team were laughed at when their forecast was first published, mainly because of the degree to which it diverged from every other mainstream broadcaster. He was vindicated when Trump won, and offered an explanation of why his model gave Trump a better chance. The full reason is complex and involves diving into the core of their model, but in essence it modelled the Electoral College better than its competitors, which leaned towards modelling the popular vote (which Hillary won by a significant margin).
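To make that difference concrete, here is a toy Monte Carlo sketch (my own illustration, not FiveThirtyEight's actual model). It compares two forecasts of an election decided state by state: one that treats polling errors in each state as independent, and one that adds a shared error that shifts every state together, the way a nationwide polling miss would:

```python
import random

# Toy illustration: why correlated polling errors across states give
# the trailing candidate a better chance. Ten equally weighted
# "states", with the leader polling at 52% in each of them.
N_SIMS = 100_000
N_STATES = 10
POLLED_SHARE = 0.52  # leader's polled vote share in every state

def underdog_wins(shared_sd, state_sd):
    """Simulate one election; True if the underdog takes a majority of states."""
    shared_error = random.gauss(0, shared_sd)  # error common to all states
    flipped = 0
    for _ in range(N_STATES):
        leader_share = POLLED_SHARE + shared_error + random.gauss(0, state_sd)
        if leader_share < 0.5:
            flipped += 1
    return flipped > N_STATES // 2

for label, shared_sd, state_sd in [("independent errors", 0.00, 0.03),
                                   ("correlated errors ", 0.03, 0.03)]:
    upsets = sum(underdog_wins(shared_sd, state_sd) for _ in range(N_SIMS))
    print(f"{label}: underdog wins {upsets / N_SIMS:.1%} of simulations")
```

With independent errors the underdog rarely wins, because ten separate polling misses would all have to break the same way; with a shared error, a single nationwide polling miss flips many states at once. That, broadly, is why a forecast that allows for correlated errors across the Electoral College is far less certain than one that does not.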

The second point is not so much an issue with the modelling, but with how people interpret it. It is easy to treat 1% as 0% when you are only concerned with one game, e.g. one election. But what if you scale that up and are now looking at 1 million games, e.g. 1 million elections? With that 1% chance, Trump would win 10,000 of them. Covid-19 has helped the population appreciate that 1% of something can be significant. When given a 1% chance in a single game, though, it is very easy to just take your chances.
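That scaled-up arithmetic is easy to verify with a quick simulation (purely illustrative, assuming independent repetitions):

```python
import random

# Simulate 1,000,000 independent "games" in which the underdog
# has a 1% chance of winning each one.
N_GAMES = 1_000_000
wins = sum(random.random() < 0.01 for _ in range(N_GAMES))
print(f"Underdog wins {wins:,} of {N_GAMES:,} games")  # ~10,000
```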

What does this mean for the everyday consumer of statistics? Be skeptical, don't take what the big news stations say at face value, and reserve judgement until there is a consensus amongst experts and/or academics.

