Complexity Wins Again

Today, I want to use the election to illustrate just how complex a seemingly simple situation can be. This matters not just politically, but economically as well.

Assumptions and Guesswork

The first challenge is our complex method for electing a president. We don’t have a single national election. Each state gets a certain number of Electoral College votes, which (in all but two states) go winner-take-all to whoever wins that state’s popular vote. This means the national popular vote, while interesting, has no direct bearing on who becomes president. Pollsters still measure it, though, because people are curious and it’s relatively straightforward.

Except, it’s not straightforward at all. You can ask people how they will vote, and they may tell you, but it won’t matter unless they actually vote. Pollsters try to control for this with “likely voter” models. Sometimes they assume you will vote this time if you voted last time (which they can confirm from public records). Or they may ask questions to gauge your intent. Regardless, these models are still guesswork.
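To make the guesswork concrete, here is a minimal sketch of the weighting idea behind a likely-voter model. Everything in it (the respondents, the turnout formula, the 50/50 blend of history and intent) is hypothetical, not any pollster’s actual method:

```python
# Sketch of a "likely voter" adjustment (hypothetical data and weights):
# weight each respondent's stated choice by an estimated probability
# that they will actually vote.

respondents = [
    # (stated_choice, voted_last_time, self_reported_intent_0_to_1)
    ("A", True,  0.9),
    ("B", True,  0.8),
    ("A", False, 0.3),
    ("B", False, 0.6),
    ("B", True,  0.7),
]

def turnout_probability(voted_last_time, intent):
    """Toy model: past behavior and stated intent, equally weighted."""
    history = 0.8 if voted_last_time else 0.3
    return 0.5 * history + 0.5 * intent

# Raw (unweighted) share for candidate A
raw_a = sum(1 for c, _, _ in respondents if c == "A") / len(respondents)

# Turnout-weighted share for candidate A
weights = [turnout_probability(v, i) for _, v, i in respondents]
weighted_a = sum(w for (c, _, _), w in zip(respondents, weights)
                 if c == "A") / sum(weights)

print(f"raw A share:      {raw_a:.1%}")
print(f"weighted A share: {weighted_a:.1%}")
```

Note that the adjusted share moves simply because the model assumes past voters are more likely to show up. A different assumption gives a different “result” from the exact same raw answers, which is the sense in which these models are guesswork.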

State officials introduced yet more complexity with new voting methods, expanded mail-in voting, and so on. These vary by state and, in some cases, changed at the last minute. No one knew, or could know, how this would affect turnout. The fact that we had the largest turnout (percentage-wise) since 1900, with a significant number of first-time voters, skewed the polls even more.

We all now know about Florida. Biden led there by an average of seven points, yet lost by more than three. Susan Collins in Maine is another skewed example: she trailed in basically every poll, by anywhere from a few points to 10+. So even if we did have a single national election, measuring it was going to be tougher than ever this year. And in fact, you have to multiply this complexity 50+ times to account for the state-level silos in which the elections are held.

Then there’s the geographic element. Pollsters try to measure choice and turnout by area, both because it’s important to local races, and because the electoral college system forces them to. It doesn’t really matter if a candidate wins a state by 50.1% or 75%. They get the same number of electoral votes. So high turnout in one state can’t offset low turnout somewhere else. You have to estimate it separately everywhere.

But pollsters have a more basic problem. Before any of the above matters, they need people to answer the phone.

"The Tiniest Inconsistency"

As you are no doubt aware, technology has changed the way we communicate. Does your home still have a landline phone? If so, do you answer it? Particularly in campaign season? If you have an iPhone, have you activated the “Silence Unknown Callers” feature? Mine now happily tells me about spam risk, likely sales calls, and so forth.

These are serious problems for pollsters. They need to reach a certain number of people, and it’s getting harder. I saw an interesting article in The Sydney Morning Herald this week.

In the age of the mobile phone, very few people answer calls from unlisted numbers, and even fewer want to talk to a pollster — who, for all they know, may be a fraud in disguise. The Pew Research Center reports that its response rates have plummeted from 36% two decades ago to just 6% now. And Pew is a not-for-profit outfit that doggedly attempts to contact every sampled phone number at least seven times. Commercial polling firms don't have that luxury.

No major commercial polling company is brave enough to reveal its response rate. Rumors are that they're down to about 3%. That's a very thin foundation on which to predict a presidential election. The tiniest inconsistency between the characteristics of that 3% and those of the electorate as a whole could invalidate the entire industry.

The pollsters do their heroic best to model the likely behavior of the masses from the self-reports of a few phone-answerers, but all such models are approximations. They inevitably introduce error. Model error may be even bigger than the sampling error that goes into calculating the "error margins" that are often reported alongside polling data. Or it may not be. No one knows but the pollsters, and they're not saying.

He also talks about “social desirability bias,” which is basically a reluctance to reveal your vote choice to a stranger. We heard stories before the election of “shy Trump voters.” I imagine there were also shy Biden voters. It’s understandable in a nation so polarized and bitter. But it makes accurate polling even more difficult.

But zero in on this line: “The tiniest inconsistency between the characteristics of that 3% and those of the electorate as a whole could invalidate the entire industry.” We have now seen in successive elections inconsistencies well beyond tiny. What other industry could survive such failures? I can think of at least one.
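For context on what those reported error margins do and don’t cover: the standard margin-of-error formula measures only sampling noise, under textbook assumptions (simple random sampling, 95% confidence). A quick sketch:

```python
# The reported "margin of error" covers only sampling error -- the
# statistical noise from polling n people instead of everyone.
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll showing a 50/50 race:
moe = margin_of_error(0.50, 1000)
print(f"sampling margin of error: +/-{moe:.1%}")  # roughly +/-3.1%
```

Notice what this number does not include: nonresponse bias. If the 3% who answer the phone differ systematically from the 97% who don’t, no sample size fixes it. That is exactly the inconsistency the quote is warning about.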

Moving Parts

If measuring voters is complex, measuring the economy is even more so. Think of all the moving parts just in the US. Millions of companies, hundreds of millions of workers and consumers, buying and selling billions of different goods and services under sharply different conditions in different places, and all of this subject to change at any time.

Just think of jobs data. How many Americans are unemployed? It is certainly a big number. Those of us fortunate enough to still be employed all have jobless friends and family. But is it 10 million, 20 million, 50 million? As of this week, over 21 million people are still claiming some type of weekly unemployment insurance.

The numbers we have come from surveys, not unlike the political surveys, and with similar limitations. When a stranger calls you on the phone to inquire about your job status, will you take the call? And if you do, will you answer honestly? And even if you do, exactly what does it mean to be “unemployed” now? Maybe you lost your full-time job, but you spent a few hours last week helping someone move. You made $100 and you will count as “employed,” but your situation is not remotely like it was when you went to an office every day.

Apply that same level of complexity to all the other economic numbers: trade flows, retail sales, savings rates, manufacturing output, real estate, bank lending, and everything else. Much of it is questionable at best. Yet economists still feed it into models that are themselves filled with assumptions about the relationships between the various inputs. They show their models to government leaders, CEOs, and central bankers, who then use them to make important decisions that affect everyone else.

Is this good? That’s also unclear. I’ve told the story of the World War II weather forecaster (an officer named Kenneth Arrow, who later became one of the most famous Nobel laureates of the last century) who knew his forecasts were error-prone, and worried the generals would rely too much on them. But the generals knew this. They demanded forecasts anyway. Why? Because they needed something, even if it was wrong.

Every writer knows that “blank page” feeling. Getting started is the hardest part. I may end up deleting that first paragraph I struggled to write, but it was still useful. I suppose these models have similar value to decision-makers.

On the other hand, there are limits. If the weather forecast said partly cloudy and you got a thunderstorm instead, it may ruin your day. But it matters a lot more when you expect partly cloudy and you get a Category-5 hurricane.

The real problem, with both political polls and economic models, comes when users rely on them too much. They give us an (often false) feeling that we know the future, which is comforting. We can see the margin of error, but on some level we want to believe. That’s human nature, and it’s hard to avoid, especially when the models tell us something we already want to believe. This is confirmation bias, one of the heaviest pieces of emotional baggage we bring to our investment decisions.

The word “presuppositionalism” typically refers to a particular theology, but I use it in a broader context. We all start out from a beginning point in our thinking. We believe our eyes see the real world. We believe some of what we read and what we hear from friends. These shape our thought patterns and what we presuppose to be true. Without these presuppositions, it would be extremely difficult to communicate with other people.

Here’s the problem. Every person who creates a model does so with specific presuppositions in their head. You can try to get those presuppositions out of your models, but it is very difficult.

The Bias in Models

Let’s first talk about mainstream economic models from governments and major investment companies. I talked about a CBO model a few weeks ago; I have used and abused such models over the years. For one thing, they never predict a recession. And it is not just the CBO. I wrote the following some seven years ago, and nothing has changed since.

In one of the broadest studies of whether economists can predict recessions and financial crises, Prakash Loungani of the International Monetary Fund wrote very starkly, "The record of failure to predict recessions is virtually unblemished." He found this to be true not only for official organizations like the IMF, the World Bank, and government agencies, but for private forecasters as well. Loungani concluded that the "inability to predict recessions is a ubiquitous feature of growth forecasts." Most economists were not even able to recognize recessions once they had already started.

In plain English, economists don't have a clue about the future.

Take the record of Wall Street strategists. The consensus average of blue-chip economists always predicts a positive year for the S&P 500. Federal Reserve economists are basically 0 for 300 on their predictions about the direction of the economy, and only slightly better on interest rates. The record has improved somewhat now that the Fed plans not to raise rates for any reason, even inflation. That makes rate predictions a great deal easier.

Retirement Meltdown

This problem with models and predictions may be personal, too. Your retirement likely depends on some kind of model. It tells your financial planner how much you can withdraw from your savings without running out too soon. But that advice depends on questionable presuppositions like “stocks always go up over time.”

My friend Ed Easterling at Crestmont Research notes there have been numerous 20-year periods where stock market returns were below zero, especially when taking inflation into account. Ed’s website has one of the best data treasure troves anywhere:

"A number of advocates and studies provide for 5% withdrawal rates: 'I only want $50,000 from my million dollars' and have it last for 30 years. The calculated success rate for that rate of withdrawal is 73%. Pretty good odds -- except when we consider the impact of valuation."

“SWR” stands for Safe Withdrawal Rate, and the safe amount varies considerably depending on market valuations when you start. The table shows that if you start in the top 25% of valuations and withdraw 5% of your $1 million retirement savings each year (to generate $50,000 for living expenses), you run out of money 53% of the time. On average, you get less than 21 years of retirement before the money is gone.
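The mechanics behind numbers like these are simple to sketch. The code below is illustrative only; the return sequence is hypothetical, and this is not Crestmont’s data or methodology. It withdraws $50,000 a year from a $1 million portfolio and counts how many years the money lasts:

```python
# Withdrawal-rate mechanics (illustrative only -- hypothetical returns,
# not Crestmont's data): take $50,000 per year from a $1M portfolio and
# count years until depletion.

def years_until_depleted(balance, withdrawal, annual_returns):
    """Withdraw at the start of each year, then apply that year's return.
    Returns the number of withdrawals the portfolio supported."""
    years = 0
    for r in annual_returns:
        if balance < withdrawal:
            break
        balance = (balance - withdrawal) * (1 + r)
        years += 1
    return years

# A hypothetical 30-year sequence starting with a bear market, as might
# follow a period of high valuations:
bad_start = [-0.15, -0.10, -0.05] + [0.07] * 27

print(years_until_depleted(1_000_000, 50_000, bad_start))
```

With a steady 7% every year, the same plan survives all 30 years. Front-load even a modest three-year bear market and the portfolio runs dry around year 25. The sequence you retire into matters as much as the average return.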

With valuations in the top 10%, like they are today? It is even worse. If your financial planner says you can take out 5% per year “safely” based on a 60/40 (stocks to bonds) portfolio, then you should walk out the door.

Furthermore, many planners use a total return model that starts in the 1920s and shows that, over time, markets will give you an 8–9% return. They simply plug in that 8–9% number for each and every future year, assuming time will wash out the effects of bear markets and recessions. That is probably true if you have 80 or 90 years. If, however, you are retiring when markets are at very high valuations, like now, your model will likely give you badly mistimed advice.
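Sequence-of-returns risk is the reason a flat 8–9% assumption misleads. A small sketch with hypothetical numbers: two retirees earn exactly the same set of annual returns, just in opposite order, while each withdraws $50,000 a year:

```python
# Why plugging in a flat average return every year can mislead
# (hypothetical numbers): two portfolios earn the SAME returns in a
# different order while withdrawing $50,000 a year.

def final_balance(balance, withdrawal, annual_returns):
    for r in annual_returns:
        balance = max(balance - withdrawal, 0) * (1 + r)
    return balance

returns = [-0.20, -0.10, 0.05, 0.15, 0.25, 0.30]  # mean = 7.5%

bear_first = returns                  # bad years first (retire at a peak)
bull_first = list(reversed(returns))  # good years first

print(f"bear first: ${final_balance(1_000_000, 50_000, bear_first):,.0f}")
print(f"bull first: ${final_balance(1_000_000, 50_000, bull_first):,.0f}")
```

Same returns, same average, yet over $200,000 difference in ending balance, purely from the order in which the returns arrive. A model that plugs in a constant 8–9% cannot see this at all.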

Pension funds are going to get devastated in this decade. So are many retirees. And it all comes from bad models on top of more bad models. It’s a big problem. But maybe technology has a solution.

AI to the Rescue?

Last week I mentioned I had been thinking a lot about the artificial intelligence field. This election gave me even more food for thought. The latest AI systems, paired with powerful supercomputers, can process massive data sets that are incomprehensible to humans. Complexity doesn’t scare them. They can dig in and make sense of it.

Now, combine that thought with our political polling and economic modeling challenges. Could AI be the answer? Can the machines process these complex data sets well enough to make them not just a starting point, but useful and accurate? AI will contribute to radical changes in the way we make important decisions. This may be the biggest technology trend of our lifetimes.
