Prediction is a big, big business these days, and even those of us who aren’t explicitly in the prediction business probably do all we can to make sense of the future. For example:
- Does your company do marketing research? (If it’s a business of any size and sophistication, the answer is probably yes.)
- Do you track the financial pages?
- Do you keep abreast of the latest innovations in your industry (or any industry, for that matter)?
- Have you factored in economic considerations when trying to decide whether or not to buy a house?
- If you have an IRA, have you factored in where you think the damned economy is going in making fund decisions?
- If you don’t have an IRA, is it possible that your view of the market was so dire that you decided to dump all your money into savings (or hide it under the mattress)?
- Have you ever moved to a particular city in part because it had a better job market than another city?
- Did you make (or are you now making, if you’re still in school) curriculum/major/grad school decisions based on your expectations of what the job market was/is going to be like?
- If you’ve been lucky enough to have a choice of job offers, did you spend some time evaluating the prospects of the competing businesses (and your prospects in them) before accepting an offer?
If so – and most of you probably answered yes to at least one of these questions – then you're part of what I'm calling the prediction business. In some cases we're talking about companies that are directly in the business of predicting, and in others we're personally making decisions based on our ability to predict – an ability that often hinges on data produced by those companies. In all cases, the more we know about the future (or, put more precisely, the better our information on the factors that will shape future events, and the more accomplished our faculties for evaluating that information), the more likely we are to make decisions that succeed now and later – and we all want that.
So, how good are we at predicting? How much of what we think we know is accurate, and how reliable are our techniques for predicting? Perhaps not as good as we’d hope.
So Close, Yet So Far Away
Consider a recent BBC story on efforts to detect terrorists, which was forwarded along by my colleague Whythawk. It starts out with an intriguing premise: what if you had a method that was 90% effective at telling whether or not someone was a terrorist? Not bad, right? But then the analysis takes a nasty left turn.
You’re in the Houses of Parliament demonstrating the device to MPs when you receive urgent information from MI5 that a potential attacker is in the building. Security teams seal every exit and all 3,000 people inside are rounded up to be tested.
The first 30 pass. Then, dramatically, a man in a mac fails. Police pounce, guns point.
How sure are you that this person is a terrorist?
The answer, it turns out, is about 0.3%.
The article goes on to explain the math:
If 3,000 people are tested, and the test is 90% accurate, it is also 10% wrong. So it will probably identify 301 terrorists – about 300 by mistake and 1 correctly. You won’t know from the test which is the real terrorist. So the chance that our man in the mac is the real thing is 1 in 301.
My guess is that very few readers came anywhere near that low – I know I didn't – and the fact that most of us aren't in the terrorist-hunting business is no solace. The problem is that unless we're serious math types, we probably rely, at least occasionally, on techniques that are far less effective than we think they are.
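The article's arithmetic is just Bayes' rule applied to a rare event, and it's easy to sketch in a few lines. This is a minimal illustration, assuming (as the article does) a single test whose hit rate and false-positive rate are both 90%/10%, and exactly 1 terrorist in a crowd of 3,000; the function name is my own.

```python
# A sketch of the article's arithmetic: a "90% accurate" test applied
# to a population where the condition being tested for is very rare.

def posterior(true_positives_pool, population, accuracy):
    """Probability that a flagged person really is a terrorist."""
    innocents = population - true_positives_pool
    true_positives = true_positives_pool * accuracy    # 1 * 0.9  = 0.9
    false_positives = innocents * (1 - accuracy)       # 2999 * 0.1 ≈ 300
    return true_positives / (true_positives + false_positives)

p = posterior(true_positives_pool=1, population=3000, accuracy=0.9)
print(f"{p:.1%}")  # roughly 0.3%, matching the article's 1-in-301
```

The lesson is that "90% accurate" tells you almost nothing until you know the base rate: when the thing you're hunting for is rare, false positives swamp true ones.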
Our Pathological Need to Know
One of the hottest business books out there right now is Nassim Nicholas Taleb’s The Black Swan. Taleb, who is equal parts philosopher, math whiz and trading savant, wreaks havoc with the world of financial analysis, and in light of our current economic condition and the factors that helped us get here, you can imagine how a book of this nature might strike a nerve.
Taleb’s central thesis is that a small number of unexpected events – the black swans – explain much of what matters in the world. His refrain: we need to understand just how much we will never understand. ‘The world we live in,’ he likes to say, ‘is vastly different from the world we think we live in.’
When it comes to finance, collective wisdom has shown itself to be close to astrology – based on nothing. But according to Taleb, unpredictable events – 9/11, the dotcom bubble, the current financial implosion – are much more common than we think.
He spends a lot of time, for obvious reasons, on finance, but the sum total of Taleb’s thesis is much broader: our need to know blinds us and leads us to rely on tools that can’t be trusted.
The Butterfly Effect
Toward the end of the book we discover that Taleb was a disciple of Benoit Mandelbrot, the father of fractal geometry and the man who introduced the principle of sensitivity to initial conditions – better known as the “butterfly effect.” Stated simply, this principle says that even very small changes in a system’s starting state can lead to huge changes in its results, and the implications for most kinds of research and modeling are enormous. Hence the popular assertion that a butterfly flapping its wings here today can lead to a hurricane next year in China.
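Sensitivity to initial conditions is easy to demonstrate numerically. Here's a toy sketch (not from Mandelbrot's own work) using the logistic map, a textbook chaotic system: two starting values that differ by one part in a billion soon produce trajectories that bear no resemblance to each other.

```python
# A toy illustration of sensitivity to initial conditions, using the
# logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).

def logistic_trajectory(x, r=4.0, steps=50):
    """Iterate the logistic map from x and return the trajectory."""
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)  # a "butterfly flap" of a difference

# Early on the runs agree to many decimal places...
print(abs(a[0] - b[0]))   # on the order of 1e-9
# ...but within a few dozen iterations they have completely diverged.
print(abs(a[49] - b[49]))
```

If your model of a system is even slightly wrong about where things stand today, this is why its long-range forecasts can be worthless.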
Much research – did I say “much”? How about nearly all – assumes that we can control for non-relevant factors. A variety of sampling methods (randomization perhaps being the most popular) are used to ensure that the only difference between test groups is the factor being tested, but Mandelbrot’s work calls that assumption into question. We may assume that we have controlled for external factors, but we cannot demonstrate it as fact. (I’m mostly beating up on research into humans and human systems here – research in the hard sciences is far more precise.)
It is far, far harder to predict than we might suspect, and this goes for those in the business of selling prediction, as well.
How Can We Improve Our Chances of Getting it Right?
So, if we can’t know or predict anything, what can we do? Pack it up and go home?
Not exactly. I’m not here to suggest that the task is hopeless, that it’s impossible to know or predict anything. A few strategies can be recommended to those who’d like to nudge their confidence levels up a bit, though. Taleb offers some very useful advice – and again, read the book. In addition, here are a few more ideas to think about.
First, there’s value in diversifying your sources. If you rely on one tool, one model, one expert, one information source, well, that’s like going to Vegas and putting your life savings on 32. We’re not talking about predicting anymore; we’re talking about praying for blind luck.
Second, there’s value in diversifying the type of source. We’re a culture with a rage for quantification – we believe that numbers don’t lie and that the only way to measure and evaluate is with statistics. To be sure, stats can tell us a lot, but the knowledge they yield tends to be a mile wide and an inch deep (and that assumes you’ve managed to construct a reliable quantitative instrument – note the observation above about these sorts of assumptions). The most effective research programs in my world (marketing) also rely on qualitative methods – focus groups, interviews, observation, case histories and so on. The value of using multiple techniques is twofold. You get a much richer picture of the reality surrounding your research question, and if one instrument is a little off, the more independent tools you’re using, the better your chance of catching the error.
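The statistical case for diversification can be sketched with a quick simulation. This is a hypothetical illustration with invented numbers: several independent forecasters each estimate the same quantity with some random error, and averaging their estimates usually lands closer to the truth than trusting any single one.

```python
# Hypothetical illustration: 5 independent forecasters, each estimating
# a true value of 100 with normally distributed error (std dev 10).
# Compare relying on one forecaster vs. averaging all five.
import random
import statistics

random.seed(42)
TRUTH = 100.0
FORECASTERS = 5
TRIALS = 10_000

single_errors, ensemble_errors = [], []
for _ in range(TRIALS):
    estimates = [random.gauss(TRUTH, 10.0) for _ in range(FORECASTERS)]
    single_errors.append(abs(estimates[0] - TRUTH))                  # one source
    ensemble_errors.append(abs(statistics.mean(estimates) - TRUTH))  # diversified

print(round(statistics.mean(single_errors), 2))    # typical single-source error
print(round(statistics.mean(ensemble_errors), 2))  # typical ensemble error
```

The caveat, of course, is that the forecasters must be genuinely independent; if they all lean on the same model or the same wire report, averaging them buys you nothing.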
Finally, there’s no substitute for a critical mind. Never accept any claim or data point at face value, and be as rigorous in your assessment of methodology as you are of results. Above all, go in fear of people who are married to one method. All too often, as Taleb demonstrates, these people are ideologues who value the beauty and symmetry of their theories above the messiness of reality.
So we probably don’t know as much as we think we do, but if we approach the task of learning and predicting critically, we have a lot better shot.
I like science (and its accompanying language, mathematics). I can’t say that i really do much of either, so i suppose that i’m a mere believer…like the difference between a monk and someone who goes to church on Sunday.
Most of us, minus the whack-job religious fundamentalists, believe in science…or more specifically, we believe in technology, the tangible product of science. But i’ve been disturbed by a facet of this for a long time. While we believe in science, the majority of us have an 18th or early 19th century understanding of it. How many people can explain the theory of relativity? Never mind quantum mechanics.
Statistical proof and predictions are common to the point of being ubiquitous, and they’re accepted like the revealed word of God (except when we disagree with them, which generally leads to producing contradictory statistical proofs). But how many modern Americans even know that a sensitivity to initial conditions exists? Of those, how many factor it into the thought process of examining statistics or predictions?
We like to think that we’ve entered a secular age, but i’m not so sure. Too often it looks like we’ve simply supplanted one set of terminology with another.
Lex: I’ve written elsewhere about a big part of the problem. You’re certainly right about how badly we understand science and stats, and that puts us at the mercy of those who’d misrepresent them to us. It’s also true that our colleges are doing a horrific (nay, non-existent) job of training future reporters in this area. As best I can tell, even a lot of science reporters can’t tell the difference between an r-square and a T-square. Confidence intervals? Fuhgeddaboutit. They basically write down whatever the person pitching the story tells them, and this results in some ridiculously bad reporting.
When is no reporting at all the better option?
Edit: I meant 19th or early 20th century science.
A fascinating take, Sam. Thanks.
It’s interesting that your advice on what needs to happen to “get it right” is precisely the problem in our cut-rate journalism business model these days.
Thousands of reporters have been bought out or laid off. It’s hard to diversify sources (and types of sources) if fewer reporters are chasing more stories and leaning on one source for each instead of several. That minimizes points of view. Reporting that shallow cannot be considered “critical thinking” either.
At least we have Web tools that let us sample more broadly, for what that’s worth. There isn’t one source I feel safe trusting, but since my aggregator brings me dozens of sources (hundreds, maybe) I can at least get a broader view than I might otherwise.
Of course, given how many of those sources are pulling wire reports, it’s not REALLY dozens of sources, I suppose…
This is a very nice synopsis of the challenges we confront in a highly complex world. In the spirit of broadening the discussion, I’ll add another complicating factor that inhibits our ability to make successful predictions about the future – our conceptual models for how the world works are constructed through past experiences and, therefore, are ill-suited to the task of projecting into a future that diverges in any significant way from the past.
Consider the way energy policy analysts develop “energy solutions” for the future. By and large, the proposals they come up with are scarcely more than a rehashing of past solutions. While it is important to build on past success, there remains an inherent blindness to significant systemic change. So, for example, most solutions posed by energy policy “experts” these days (with the folks at the Post Carbon Institute as noteworthy exceptions) treat our economy as though it will remain basically the same, with one energy source simply swapped for another. In truth, as the era of cheap oil comes to an end, our entire civilizational structure will become profoundly unstable and go through massive transformation… with one possible trajectory being global societal collapse (which we hope to avoid!).
This inability to envision alternative systems (and system changes) makes predictions much more difficult to make reliably.
Director, Cognitive Policy Works
Good point, Joe. It’s sort of analogous to the military preparing to fight the last war.
As a guy who has tried to build a career around legitimate innovation (and I won’t waste a lot of photons tooting my own horn here, but it’s more than true that I’ve wasted years by being “too far ahead of the curve”), let me add that our systems are simply not constructed to encourage and act upon new thinking. On the contrary. If it’s something they haven’t seen and something that doesn’t privilege the old models, it’s DOA.
Which sucks, in more ways than I’m prepared to rant about right now….
A few responses and observations:
Sam and Denny: I think I once told both of you that I created a training product to teach journalists the basics of social statistics and how they could be duped by not knowing how statistics work, or even how to ask the right questions about stats. This was during a time when newspapers were awash in money. I got not a single taker. I was told, again and again, that the job of journalists is not to place a critical eye on statistics, but to report that a particular study saying a particular thing was released. THAT, it appears, is news. Any critical thinking and questioning is “commentary,” I suppose.
Joe — Yep. I agree entirely. You beat me to it, using the words “conceptual models.” I was going to try to revive the word “paradigm,” that got so badly discredited by being overused and underdefined during the late 80s and early 90s. I’ve seen so many examples of people’s bending reality to fit preconceived notions. If I were to coin a new phrase, it would be “caught remachining the orrery.”
Sam — The military fighting the last war is a great analogy I will certainly steal from you ;-). Billy Mitchell proved conclusively that naval aircraft would function as extremely long-range guided artillery, doing the same thing to enemy ships as deck-mounted guns once did — only better. His contention that modern battleships, built along the same principles as gunpowder ships had always been, would become obsolete as aircraft far outranged their guns wasn’t accepted until, really, Pearl Harbor. And this DESPITE the evidence of the brass’s own eyes as Mitchell sank a ship for them.
People will willfully ignore evidence that would require a change in conceptual models, as consistency theory has demonstrated again and again, though no one seems to listen ;-).
And right on cue, we have an example in the news. It appears that the Schlaflys are at it again. This time, Andrew (and what a mama’s boy HE must be) is spearheading a “retranslation” of the Bible to remove all that liberal bias pinko commie fag James I had put in there. Seems it would be difficult for Andrew to change his belief system based on … well … his belief system, so the one he loves less has to change.
Too good. Really. http://www.usnews.com/blogs/god-and-country/2009/10/06/conservapedia-launches-effort-to-translate-the-bible-into-conservativespeak.html
I’m not going to say that education is the only problem here, but it’s certainly one of them. Basic statistics (which is all you need to understand how badly most climate disruption deniers are screwing up the math and data), observational skills, critical thinking, and even relativity and the basics of quantum mechanics can be made understandable and taught to high school students. But it takes a pretty bright teacher to figure out how to teach them, and most bright people who could teach quantum mechanics or relativity to a high school student aren’t attracted to teaching.
It also doesn’t help that younger kids aren’t always being taught the skills they need to be able to understand quantum mechanics when they hit high school. These days, some aren’t even being taught the skills required to learn quantum mechanics in college. “Math is too hard” and “when will I ever use this?” are two phrases that need never again be uttered in public schools.
Yeah, like that’ll happen….
Being a close personal friend of Taleb and having a more than nodding acquaintance with Mandelbrot, I have gotten them both to agree that there is no such thing as market predictability that is accurate and complete. Both are very fascinating people.
I’ve taught quantum field theory to young children… and they LOVE it! I remember when I was active at the Center for Complex Systems Research at UIUC that one of the most enjoyable things was to demonstrate complex pattern formation with “table top” experiments that a child can tinker with and understand. Understanding complexity, ironically, is not really all that hard. It’s just not taught in our schools.
A paradigm shift in thinking about causality is key. Only when educational policymakers realize that learning itself is a complex system will they see that simple “direct causation” metaphors for learning – and the “standard metrics” that supposedly capture them – are off the mark. Learning is nonlinear and dynamic, so the metrics for assessing and evaluating it must be too.
JS: i saw that. First it made me laugh and then i puked a little in my mouth. As i understand it, the project doesn’t really have a problem with James I; it’s the New International Version that the project sees as the problem.
And from what i’ve seen of it, “translating” is a bit of a stretch unless we’re considering “modern conservative” its own language. It’s basically cleaning up the KJV and making it more modern/easier to read (i.e. dumbing it down). And of course the bits about removing all the liberal bias, which so far means taking difficult concepts and making them propaganda simple and politicizing them heavily.
I believe it, Joe. Children are the most amazing beasts. My experience is in how well they pick up language; it’s astounding. Europeans aren’t multilingual because they’re smarter; they’re multilingual because they start learning new languages at a very young age (and the experience gives them a lifetime ability to pick up new languages).
We could easily develop children to be capable of making the necessary paradigm shift in thinking (whatever it may be) when they reach adulthood. Teach them languages and quantum theory and as much as their sponge-like consciousness will hold, and then let them loose.
Here’s a good essay and video from my friend. It’s from my favorite website and is very illuminating.