Once I learned enough statistics, I realised that something being true for most of the population doesn't mean it's true for everyone. Most algorithms are approximately wrong. Put these two statements together, and everything we know could be false. Improbable, but not impossible.
Scientists have tried to account for everything that could have ended our survival, the solar storms the earth has faced, the asteroid that killed the dinosaurs, the wars we somehow survived, to estimate the probability of our existence. The chance of us being alive today is one in four hundred trillion.1 That is, 1:400,000,000,000,000, with fourteen zeros. A perfect example of improbable but not impossible.
Nassim Nicholas Taleb calls these black swan events: events that are nearly impossible to predict. Everyone thought there were no black swans until someone caught them sunbathing in Australia. Algorithms are even worse at catching them. Over-reliance on prediction accuracy steals attention from how likely such events actually are, and from how much confidence our models deserve. In his thesis work with Marvin Minsky, Patrick Winston concluded that the difficulty of machine learning is that a system can only learn something it nearly already knows.2
Then there are issues of reproducibility as well. Every person is different from every other, and that's a universal fact. In groups, however, behaviour is much easier to model. I can't say whether Sarah will eat at McDonald's today, but I know at least sixty million people will.3 Statistics and the central limit theorem are great friends.
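A toy simulation makes the point. The 20% probability and group sizes below are invented for illustration, not real McDonald's data; the idea is only that one person's choice is a coin flip, while a large group's aggregate is remarkably stable.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def fraction_choosing(n_people, p=0.2):
    """Fraction of n_people who independently make a choice with probability p."""
    return sum(random.random() < p for _ in range(n_people)) / n_people

individual = fraction_choosing(1)           # all or nothing: 0.0 or 1.0
small_group = fraction_choosing(100)        # noisy, could easily be 0.13 or 0.27
large_group = fraction_choosing(1_000_000)  # hugs 0.20 very tightly
```

The standard deviation of the group fraction shrinks like 1/sqrt(n), which is why Sarah is unpredictable but sixty million people are not.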
What about individual behaviour? That's too wild to predict. Or is it? Internet companies seem to do it well: Facebook and Google have personalised services just for me. But even that is based on group patterns. The system looks for users like me and tries to make me like them.
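That "users like me" step can be sketched as nearest-neighbour matching over preference vectors. This is a minimal illustration, not how Facebook or Google actually do it; the user names and ratings are made up.

```python
import math

# Invented rating vectors: each position is one item, each value a rating.
ratings = {
    "me":    [5, 3, 0, 1],
    "alice": [4, 3, 0, 1],
    "bob":   [1, 0, 5, 4],
}

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# The most similar other user is the "group pattern" I get folded into.
me = ratings["me"]
neighbour = max((name for name in ratings if name != "me"),
                key=lambda name: cosine(me, ratings[name]))
```

Here `neighbour` comes out as `"alice"`, so the system would nudge me toward whatever Alice liked, which is exactly the group-pattern shortcut, and exactly where it fails for anyone who doesn't resemble their neighbours.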
The real trouble is when we forget how inaccurate they are; when we fail to acknowledge that they have the intelligence of a carrot and over-rely on them. Situations like these result in false positives and fatal casualties. We need to provoke future statisticians and engineers on these human biases: what do they think about them, and why? The biggest lesson of education is not how to think but what to think about.
The good thing is there is a solution. Just be a little more aware. How? Paul Graham has an idea.4
You can also take more explicit measures to prevent yourself from automatically adopting conventional opinions. The most general is to cultivate an attitude of skepticism. When you hear someone say something, stop and ask yourself “Is that true?” Don’t say it out loud. I’m not suggesting that you impose on everyone who talks to you the burden of proving what they say, but rather that you take upon yourself the burden of evaluating what they say.
He further adds:
Treat it as a puzzle. You know that some accepted ideas will later turn out to be wrong. See if you can guess which. The end goal is not to find flaws in the things you’re told, but to find the new ideas that had been concealed by the broken ones. So this game should be an exciting quest for novelty, not a boring protocol for intellectual hygiene. And you’ll be surprised, when you start asking “Is this true?”, how often the answer is not an immediate yes. If you have any imagination, you’re more likely to have too many leads to follow than too few.
The general goal is to understand the limits of what we know and how confident we are in it; and where we are not confident, to have the maturity to consider being wrong as a possibility. Building up from that maturity is much easier.
Someday, when I teach the future stalwarts, a thought-provoking ethical question will be part of the exam. Everyone will get full credit, but they will have to answer thoughtfully.