Is Financial Research A Sham?

Whoever said research was glamorous? OK, maybe it wasn’t you. It was probably a geek like me. But people can get Nobel Prizes for it, at least if they come up with something interesting enough. Lately, though, glory and honors seem to have been taking back seats to concerns about whether this stuff is worthwhile, or even worse, whether it’s trustworthy.

This applies to all kinds of research – medical, psychological, scientific, etc. – with particular focus on studies published in high-end journals, which tend to prefer submissions that claim to discover new ideas (as opposed to refuting hypotheses), often submitted by researchers who need to get published if they expect to advance professionally.

Finance is very much part of the controversy. On August 17, 2015, the web site Advisor Perspectives published an article entitled "Why You Shouldn't Trust Most Financial Research," and in its Summer 2015 issue, the Journal of Portfolio Management published a set of recommended improvements to enhance the credibility of empirical financial research.

It’s Important To Everybody

Obviously, anything that purports to teach investors how to invest more successfully is crucial, and good research does that. The 1920s-30s is a great example of what the financial markets can look like if participants just wing it, and 2000-02 reminds us what can happen if we allow ourselves to feel too hip to worry about what we can learn from the ivory-tower crowd. On the other hand, 2008 stands as a sad reminder of what can happen if we go to the other extreme and act as if all research is good research.

Now, with venture capital pouring billions into robo-investing (expecting, obviously, fees paid by you to supply their returns) and with robo marketers trotting out big-pedigree research names and fancy “white papers” to demonstrate the credibility of their offerings, you have an especially important stake in knowing how to differentiate the terrific from the inept. But don’t fret. Even if you don’t have advanced degrees in this field, you’ll still be able to recognize what’s what.

Your Polygraph

Self-protection hinges on two oft-repeated but incredibly valuable platitudes:

  •  If it sounds too good to be true, it probably is.
  •  Past performance does not assure future outcomes.

I know these sound clichéd, especially the past-performance thing that's been repeated constantly, often under the it's-not-on-me sections of documents labeled "Disclaimers." Isn't it wonderful how lawyers make excuses for providers before things even get started, much less before things actually go wrong? I get it. I'm also turned off by the boilerplate. But as irritating as it sounds, it's really needed, and I've written it many times. It's for real. So please, please, please don't get so jaded that you gloss over statements like these. To give you incentive to brush past the annoyance and take these sayings seriously, consider this: The common element in bad research, the stuff you should avoid, is that it violates these principles.

Too Good to be True

We don’t know the future. Investing is risky. Sometimes we’re going to lose money. Period. If anybody even hints at anything to the contrary, that’s a clue you’re dealing with a lemon.

Nowadays, you're most likely to encounter we-got-this type rhetoric that uses words, acronyms and names like efficient frontier, optimization, MVO, asset allocation, Markowitz, Black-Litterman, covariance matrix, volatility, correlation, and so forth. This is heady stuff. The goal, a very worthy goal, is to offer you a portfolio, tailor-made for you, that will give you the most bang for the buck while costing you as little sleep as possible. In financial jargon, this means balancing asset classes and/or securities in such a way as to deliver the maximum expected return given your personal tolerable standard deviation (risk). The robos that offer this tend to do so via portfolios of ETFs, with the percentage of money allocated to each determined by their models.
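If you're curious what that machinery actually looks like, here's a minimal sketch of mean-variance optimization: pick the weights that maximize expected return subject to a volatility cap. The expected returns, covariance matrix, and risk cap below are made-up illustrative numbers, not anyone's actual model.

```python
# A minimal mean-variance optimization (MVO) sketch: maximize expected
# return subject to a cap on portfolio volatility. All inputs are
# illustrative assumptions, not real forecasts.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.07, 0.04, 0.06])        # assumed expected returns: stocks, bonds, REITs
cov = np.array([[0.040, 0.002, 0.015],   # assumed covariance matrix of annual returns
                [0.002, 0.010, 0.004],
                [0.015, 0.004, 0.030]])
risk_cap = 0.12                          # the investor's tolerable standard deviation

cons = [{"type": "eq",   "fun": lambda w: w.sum() - 1.0},                    # fully invested
        {"type": "ineq", "fun": lambda w: risk_cap - np.sqrt(w @ cov @ w)}]  # volatility <= cap
res = minimize(lambda w: -(w @ mu),      # maximizing return = minimizing its negative
               np.ones(3) / 3, bounds=[(0.0, 1.0)] * 3, constraints=cons)

w = res.x
print("weights:", w.round(3))
print("expected return: %.4f  volatility: %.4f" % (w @ mu, np.sqrt(w @ cov @ w)))
```

A dozen lines of code and you get precise-looking weights; keep an eye on where those inputs come from.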

It's great theory. It taught us how to respect diversification. It taught us the ins and outs of the risk-reward tradeoff. It got Harry Markowitz a Nobel Prize for work he did back in the 1950s. And it got me an MBA thesis topic back in 19-oh-whatever-you-don't-really-need-to-know.

One problem: It doesn't work in the real world. (By the way, I dissed it pretty heavily in my thesis, and nothing I've seen since I left school makes me question what I wrote. In fact, I'm jealous of the researchers who, years later, have been describing it as an "error maximization" model.) Indeed, the robos I've seen that addressed the topic aren't even trying to implement the basic Markowitz model. (However hip they want to be, they know better.) The Black-Litterman approach I mentioned above was created to try to fix Markowitz's biggest problems, and I've seen reference to that. But I'm unconvinced it's such a great solution, and robos rightfully seem ambivalent about it too.
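To see where "error maximization" comes from, here's a hedged illustration: give the optimizer two deliberately near-identical, highly correlated assets, then nudge one expected-return forecast by two-tenths of a percentage point (far less than any realistic estimation error) and watch the allocation flip. Everything below is invented for illustration.

```python
# An "error maximization" sketch: with near-identical correlated assets,
# a tiny change in one (unknowable) expected-return input swings the
# optimized weights wildly. All numbers are invented.
import numpy as np
from scipy.optimize import minimize

cov = np.array([[0.0400, 0.0396, 0.0020],   # assets 1 and 2: 20% vol, 0.99 correlation
                [0.0396, 0.0400, 0.0020],
                [0.0020, 0.0020, 0.0100]])  # asset 3: a 10%-vol "bond"

def optimal_weights(mu, risk_cap=0.15):
    cons = [{"type": "eq",   "fun": lambda w: w.sum() - 1.0},
            {"type": "ineq", "fun": lambda w: risk_cap - np.sqrt(w @ cov @ w)}]
    return minimize(lambda w: -(w @ mu), np.ones(3) / 3,
                    bounds=[(0.0, 1.0)] * 3, constraints=cons).x

mu = np.array([0.070, 0.071, 0.040])        # baseline forecasts
print("base forecasts:    ", optimal_weights(mu).round(2))
bump = np.array([0.002, 0.0, 0.0])          # asset 1's forecast up by 0.2 points
print("asset 1 bumped 0.2%:", optimal_weights(mu + bump).round(2))
```

The bulk of the equity allocation jumps from one twin to the other on a forecast change no analyst could ever claim to detect.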

I'll address this more at another time. For now, suffice it to say that these too-good-to-be-true risk-controlled asset allocations come from models that violate the second of the above-quoted platitudes, to wit, the past-performance future-outcomes caution. They require assumptions (inputs) that are impossible to estimate reliably without a whole lot of spit and chewing gum, or historic data that is not likely to persist on trend into the future. And that brings us to the granddaddy of legal slogans.

Past Performance

I love James O'Shaughnessy and the contribution he made to practical, useable financial research. Unlike many in the field, he worked with real ideas, such as value, growth, financial strength, return on capital, momentum . . . things we can see, things we can measure, things that are very relevant to identifying more promising stocks, and which make for a far more sensible approach than listening to gossip or watching TV.

But I absolutely positively hate the title he gave to his now-classic several-times-updated book that demonstrates all of this: “What Works on Wall Street.”

If you look at the title (as presumably everyone who uses the book can't help but do somewhere along the line) and skim the text, impatient as so many are to see the good stuff (the tables and summaries in which he actually tells readers "what works"), you might get the idea that research is a treasure hunt, a wide-ranging search for data characteristics shared by stocks that did well in the past.

In other words, if stocks that did well in the past had low PEs, then low PE “works” and you should look for stocks that have low PEs. If shares of companies that had high returns on equity (ROE) did well in the past, then high ROE “works” and you should look for shares of companies that had high ROEs. If stocks whose tickers begin with a vowel did well in the past, then TSWV (Tickers Starting With Vowels) works and you should look for stocks with TSWV.

The first two ideas, low PE and high ROE, sound pretty reasonable. Most knowledgeable investors recognize this, so research based on such items need not plague editors with fluff describing how they connect to the ideal that equates the fair price of a stock with the present value of expected future dividends. Researchers assume readers understand why such factors make sense, and know there's a vast array of investor-education content available to coach novices who need assistance. In other words, when you see research talking about things like PE or ROE, you immediately get it, or you know you can easily get up to speed any time you wish.

So what about TSWV? Does that make sense? Do you really need an MBA, MS, or PhD to recognize that it's idiotic?

But what if TSWV "works"? What if it can be demonstrated to have worked during the entire course of a very long study period, and also in a whole bunch of smaller sub-sets of the complete period? What if the statistical work is checked carefully and, based on well-established tests, the model is blessed as being "robust"? Guess what. Nothing changes. The idea is still dumb.
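In case it seems far-fetched that a meaningless factor could pass statistical muster, here's a small simulation sketch. It backtests a pile of purely random scores (stand-ins for TSWV) against pure-noise stock returns and reports the best in-sample t-statistic; with enough tries, "significance" appears by construction. Everything here is simulated.

```python
# Data-mining in miniature: test enough random factors against random
# returns and the best one will look statistically "robust."
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_months, n_factors = 500, 120, 200
returns = rng.normal(0.0, 0.05, (n_months, n_stocks))  # pure-noise monthly returns

best_t = 0.0
for _ in range(n_factors):
    factor = rng.normal(size=n_stocks)                 # a meaningless score, like TSWV
    longs = factor >= np.quantile(factor, 0.8)         # "buy" the top quintile
    shorts = factor <= np.quantile(factor, 0.2)        # "short" the bottom quintile
    spread = returns[:, longs].mean(axis=1) - returns[:, shorts].mean(axis=1)
    t_stat = spread.mean() / (spread.std(ddof=1) / np.sqrt(n_months))
    best_t = max(best_t, abs(t_stat))

print(f"best |t| among {n_factors} random factors: {best_t:.2f}")
# Almost always well past the usual ~2.0 significance bar -- pure luck by design.
```

Run a couple of hundred junk ideas through a backtester and the winner will clear conventional significance hurdles; that's the whole scam in twenty lines.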

There's one tiny piece of "What Works on Wall Street" many readers seem to have missed. It's at Kindle Locations 1254-55 (I gave up books; I'm modern). That's where O'Shaughnessy says "if there is no sound theoretical, economic, or intuitive, common sense reason for the relationship, it's most likely a chance occurrence."

This is why O'Shaughnessy never bothered to test TSWV. There is no "sound theoretical, economic or intuitive common sense reason" why equity returns should be influenced by it. So even if wonderfully robust statistical analysis proves that it did work in the past (who's to say it didn't; the world is full of coincidences), he wouldn't care. He would reject the past because in this case, there is no logical reason to assume what happened before will continue into the future.

Don’t laugh at TSWV. While I haven’t actually seen anybody use that specific item, I’ve seen plenty of people who use items that make no sense at all and are justified entirely by “empirical research,” which is a fancy label for a study of the past. That’s the sort of thing being (rightfully) denounced by those who are bashing financial research. And that’s what makes asset-allocation models, even the most thoroughly optimized robust ones, so dangerous.

Take the big asset-allocation question: fixed income versus equities. What about those 60%/40% allocations that produce such wonderful performance charts? Trash them. We've had more than thirty years of steadily, often dramatically falling interest rates. Going forward, the best we can expect of rates is that they stay as is, but eventually, we need to consider what our portfolios would look like if they rise. What historic data could anybody possibly be using that would support any reasonable steps that should be taken by an investor now?
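A back-of-envelope calculation shows why the historic record flatters bonds. A bond fund's one-year return can be roughly approximated as its starting yield minus duration times the change in yield; the duration and yield figures below are illustrative, not a forecast.

```python
# Why falling-rate history can't be extrapolated: the same illustrative
# bond fund under falling, flat, and rising rates, using the standard
# first-order approximation: return ~= yield - duration * yield_change.
duration = 7.0        # years, typical of an intermediate-term bond fund
start_yield = 0.025   # 2.5% starting yield (illustrative)

for dy in (-0.01, 0.0, +0.01):
    approx_return = start_yield - duration * dy
    print(f"rates move {dy:+.1%}: approx one-year return {approx_return:+.1%}")
```

Rates down a point gives roughly +9.5%; rates up a point, roughly -4.5%. Three decades dominated by the first case is what those beautiful 60/40 backtests are made of.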

This is the most glaring example, but there are lots more. For example, so much standard equity risk measurement is calculated with reference to historic stock pricing data, which cannot be projected forward because such calculations don't take into account the continuance, or lack thereof, of the factors that caused the price to behave as it did. Ditto correlation. (Historically, commodities were poorly correlated with U.S. stocks, suggesting they should be part of any reasonably diversified portfolio. But is this going to persist into the future? We're already seeing that correlations are rising as more of the world industrializes, causing demand for commodities to track more closely to general economic activity.)
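To see how fragile a single correlation number can be, here's a sketch on simulated data in which the "commodity" series gradually loads more heavily on a common global factor, the way the parenthetical above describes. The five-year rolling correlation climbs right along with it; any model anchored to the early number would be badly misled.

```python
# Rolling correlation drift on simulated data: the "commodity" series
# tracks the common factor more closely over time, so the historic
# correlation you measure depends entirely on when you measure it.
import numpy as np

rng = np.random.default_rng(1)
n = 480                                    # 40 years of monthly observations
common = rng.normal(0, 0.03, n)            # a shared "global activity" factor
loading = np.linspace(0.2, 0.9, n)         # commodities load on it more over time
stocks = common + rng.normal(0, 0.02, n)
commodities = loading * common + rng.normal(0, 0.03, n)

window = 60                                # 5-year rolling window
for start in range(0, n - window + 1, 120):
    corr = np.corrcoef(stocks[start:start + window],
                       commodities[start:start + window])[0, 1]
    print(f"months {start:3d}-{start + window:3d}: correlation {corr:+.2f}")
```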

Domain Knowledge

So yes, there is a lot of bad stuff out there. Still, I took strong issue with the Advisor Perspectives article and posted a vigorous rebuttal (reprinted here). Michael Edesess, the lead author, and I were in full agreement on the methodology problems (known in the field as data-mining) discussed above. The gist of my objection was the portrayal of such so-called research as the mainstream of financial research. It isn't. It's what I referred to as a "dark disreputable dysfunctional corner of the field."

The challenge for the uninitiated is that the bad research is, to quote the author in his reply to me, "not of overriding importance but it is unwarrantedly regarded as sophisticated, just because it is so full of math." Along similar lines, in the aforementioned Journal of Portfolio Management paper, Marcos López de Prado, Senior Managing Director at Guggenheim Partners, urged financial researchers to "overcome physics envy." He pointed out that "the scientific method was devised to study immutable laws of nature. It was not devised to study the mutable phenomena of human institutions."

The rise of the quant (the quantitative researcher) can be a good – actually a great – thing in finance in general and the investment portion of finance in particular. I consider myself a quant. But I do so not because I know a lot of heavy-duty mathematics and statistics (I know what I need to know, but I'm nowhere close to being in a league with the engineers, physicists, mathematicians, etc., who've entered the field in recent decades) but because I use quantitative techniques to analyze and take action based on the core fundamentals at the heart of what we do.

I start with theoretically sensible ideas that I believe have the potential to cross from a historic time period into the future. Personally, I revere the likes of Albert Einstein and Stephen Hawking. Professionally, however, I learn more from Graham and Dodd, Warren Buffett, and, in academia, Charles M.C. Lee, S. Ramu Thiagarajan, Baruch Lev, Messod Beneish, Richard Frankel, Rafael La Porta, Andrea Frazzini, Clifford Asness, Bhaskaran Swaminathan, William Gebhardt, Partha Mohanram, Rob Arnott, Joseph Piotroski, Richard Sloan, et al. They may be quants (several are actually professors of Accounting rather than Finance), but all are top-notch in their understanding of and reverence for financial theory. In other words, they all have "domain knowledge" in investing.

The Good News

So there’s a lot of terrific work being done in financial research. I’ll cover plenty of it here as warranted by the topics I’m addressing. You can also Google the above names.

We’ve actually been getting a lot better at equity research (the real kind, not the circa 2000 Sell-Side marketing machines). I already provided one example of my work in the form of the Smart Alpha Cherry-picking the Blue Chips model described here. Also, beneath its silly marketing-oriented label, Smart Beta is an important product of the right kind of financial research. So, too, is the emerging, promising and potentially very important area of low-volatility ETFs.

There’s plenty more out there and a lot of it is likely to find its way into robo-investing, particularly on the equity side. In theory, this could all fade (become “arbitraged away”) if everybody catches on and does the same thing. Realistically, though, anyone who thinks that will happen any time soon needs to get out of the house or office more.

When it comes to stock selection, I expect success to come from using the machines for what they do best (gathering, crunching, analyzing and generating ideas from huge amounts of data), freeing humans from the potentially money-burning consequences of emotion, bias, gossip, hype, or just-plain T.M.I. (too much information) and leaving us free to do what we do best: come up with and evaluate ideas.

Asset allocation (among stocks, fixed income, commodities and real estate) is likely to be a much tougher nut to crack (implementation problems with MVO et al. diminish the extent to which robos can supplant humans). In this area, be careful of pitches that suggest the computers have solved it.

At the end of the day, it’s about each side doing what it does best and pooling efforts. To determine what’s legit, look at the explanation: Look for recommendations and conclusions that focus on understandable common-sense fundamentals, not obscure math.

Disclosure: None.
