Crowd Sourcing Versus Playing The Crowd

Last week, I was tough on the CrowdInvest Wisdom ETF (WIZE), which invests in stocks based on votes from Apple fan-boys who somehow manage to stumble onto the sponsor’s iOS app (no Droid, no Windows – an interesting approach to sampling) and who are sufficiently non-apathetic to vote bull or bear. I don’t feel at all guilty for having bashed the idea. Actually, I’m doubling down. But I can’t help thinking there has to be a more sensible way to detect, and invest based on, what the crowd is really saying.

I Know, I Know, The Rabble and All That

I made it clear last week that I’m fully aware of the dangers of following an ignorant Wall Street mob; in fact, I have dedicated much of my career to finding stocks the crowd isn’t seeing, or is seeing but ignoring. But the day I stop learning is the day I stop living, so I’m always on alert for indications that what I once believed may no longer be quite as ironclad as it once seemed.

The world of the stock market is a world of supply and demand, so even if the reasons for surging demand are dumb, we don’t necessarily help ourselves by being blind to them. And the fact that you’re reading these on-line words, and might even share them via Twitter (hint, hint), is just one of the ways information is revolutionizing the investment process to the point, perhaps, where one might dare suggest that Mr. Market could be innocent until proven guilty. (I love you, Warren B, but come on, we really do have to wonder whether Ben Graham would have at least adapted his iconic tale of the archetypal manic-depressive investment idiot known to us as Mr. Market had he been aware of what the information age has become.)

That said, I part company with the folks at WIZE because I refuse to naively accept that anybody should ever buy any stock because some unknown (possibly wise, possibly stupid, possibly expert, possibly clueless, possibly honest, possibly crooked, possibly diligent, possibly frivolous) folks vote. My intent, however, is not merely to whine. I want to identify more precisely the shortcomings of the naïve crowd-voting approach and see what we can do about moving toward solutions.

Reimagining Crowd Sourcing

Let’s start with the frivolity problem. We don’t know whether any of the votes on the CrowdInvest app are for real. I know the three I cast were bogus: the first two involved unfamiliar tickers as I indulged my desire to learn the app, and the third was an erroneous short vote on Apple, cast because my finger went to the wrong spot and I couldn’t figure out how to remove the bad vote. What about the other voters?

After I published last week, CrowdInvest founder Martin Mikus and CEO Annie Wyatt (who may or may not have seen my post) told another Forbes contributor, Ky Trang Ho, their objections to market-cap-weighted indexes and active managers. That’s all well and good. I’m with them on that. (Although, about that active-manager thing: What if the sub-microscopic number of voters whose votes generated the actual WIZE portfolio consisted entirely or largely of tech-savvy, curious active equity managers? In that case, the WIZE portfolio would be attributable to the very same kinds of people you told Ky Trang Ho investors should steer clear of.)

The real question isn’t whether alternatives are needed – I agree that they are – but what separates sensible alternatives from bad ones, or in this case, potentially effective from nonsensical ways to measure and harness the investment opinion of the crowd. At the very least, I believe CrowdInvest should withdraw its index and liquidate WIZE pending a re-launch after development, introduction and user adoption of Droid and Windows apps. If you talk about the crowd, then measure the crowd – if not the full crowd (I’m not convinced any app approach can truly do that), then at least an arguably less narrow and less unrepresentative sliver of it.

Ultimately, though, rather than using apps, I’d prefer to ignore what the crowd “says” (talk is cheap, and tapping on smartphone screens is cheaper) and focus on what it actually “does” with real money, something that can be measured using price and volume data.

The other crowd-information problems – involving knowledge, experience, ethics, etc. – can all be lumped together under the heading of credibility: whether the opinion, once effectively identified, is worthy of being taken seriously.

It’s critical that we do this. I’d have thought that after the turn-of-the-century Wall Street conflict-of-interest scandals, it would now be common practice to vet anyone who delivers investment opinions. Obviously, though, that is not the case. I’ve sadly come to recognize that many investors continue to crave ideas without showing any interest in vetting the legitimacy of the speaker. (Hence the idea for my in-progress novel, in which a Madoff-like character does something I believe would have kept the real-life guy out of prison: disclose the details of the Ponzi scheme in a formally filed registration document. My guess is few clients, if any, would have read it or cared.)

I believe this challenge is likewise manageable. If we understand how stocks are priced and understand the dynamics of the various factors that go into pricing, we can independently assess the likelihood that the crowd has, or lacks, legitimate reasons for doing as it does. And fortunately, in this regard, we know exactly how stocks are ideally priced (the present value of the cash flows expected to be received as an incident of ownership), and we have many proxies and workarounds that allow us to work with this ivory-tower concept.
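For readers who haven’t worked with it, the ivory-tower concept itself fits in a few lines. This is just a generic discounted-cash-flow sketch (the cash-flow stream and discount rate are hypothetical inputs), not any particular valuation model I use:

```python
def present_value(cash_flows, discount_rate):
    """Present value of a stream of expected future cash flows.

    cash_flows: expected cash flow in year 1, year 2, ...
    discount_rate: required annual rate of return (0.10 = 10%).
    """
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# e.g., $5 per year for three years, discounted at 10%
print(round(present_value([5, 5, 5], 0.10), 2))  # 12.43
```

Everything real-world analysts do with earnings, cash flow and growth estimates is, one way or another, a proxy for that sum.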

Let’s Try Building a Model to Play The Crowd

First things first: Let’s make sure the crowd is really focused. I’m going to confine my efforts to stocks that are S&P 500 constituents. Yes, there’s an obvious large-cap bias. Yes, I’m forfeiting the opportunity to discover obscure names. But if you want the crowd to tell you what to buy, those shortcomings come with the territory.

Next, I created a simple Portfolio123 ranking system that I named “Crowd Love” in order to measure, you guessed it, the crowd’s love for a particular stock as gauged by objective data on what it has actually been doing. There are countless ways this can be done. Here are the four (equally weighted) factors I chose to consider on this occasion:

  • The number of Wall Street analysts covering a stock (higher is better – more eyeballs belonging to people who make the big bucks by genuflecting to important segments of the crowd)
  • Share returns achieved in the past three months (higher is better – demand outstripping supply)
  • Change in short interest in the past month (lower is better – indicates the haters are chickening out)
  • Increased trading activity, measured as dollars traded as a percent of market capitalization, latest month versus the previous month (higher is better – the crowd is trading more aggressively which, combined with the second and third factors, suggests it’s the buyers who are more motivated)

I’m going to build my portfolio from up to 20 stocks that earn the highest scores under this ranking system.
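For the mechanically inclined, here’s a minimal sketch of how an equal-weighted percentile rank across those four factors could be computed. This is not Portfolio123’s internal implementation, and the factor field names (‘analysts’, ‘ret_3m’, ‘short_int_chg’, ‘turnover_chg’) are hypothetical:

```python
def crowd_love_scores(stocks):
    """Equal-weighted 'Crowd Love' rank across four factors.

    stocks: dict of ticker -> dict with hypothetical keys
      'analysts'       (higher is better)
      'ret_3m'         (higher is better)
      'short_int_chg'  (lower is better)
      'turnover_chg'   (higher is better)
    Returns dict of ticker -> average percentile score (0-100).
    """
    factors = [('analysts', True), ('ret_3m', True),
               ('short_int_chg', False), ('turnover_chg', True)]
    scores = {t: 0.0 for t in stocks}
    n = len(stocks)
    for key, higher_is_better in factors:
        # Sort worst-to-best for this factor, so position maps to percentile.
        ranked = sorted(stocks, key=lambda t: stocks[t][key],
                        reverse=not higher_is_better)
        for i, t in enumerate(ranked):
            scores[t] += 100.0 * i / max(n - 1, 1)
    return {t: s / len(factors) for t, s in scores.items()}

def top_n(scores, n=20):
    """Tickers with the highest average percentile scores."""
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

In practice the universe fed into this would be the S&P 500 constituents, and `top_n(scores, 20)` would yield the portfolio candidates.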

But we’re not yet finished. We need to address the credibility issue. Again, there are countless ways to do it. For this exercise, I’m going to use the noise-measurement model I discussed here on 3/3/16. It’s based on the notion that in the stock market, P (price) is not equal to V (value). Instead, P = V + N (price equals value plus noise). You can refer to the 3/3/16 article for a discussion of how I quantify noise, expressed as a percent of market cap or of the stock price. The less noise there is, the more appealing the stock price, since we can succeed not only if valuation rises (e.g., due to rising earnings, cash flows, etc.) but also if, as so often happens, the market gets excited and increasing levels of noise drive the stock higher.
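The arithmetic behind the screen, once you have a value estimate in hand, is trivial. Here’s a sketch using the thresholds described below; how V itself is estimated is the hard part and is covered in the 3/3/16 article, so the `value_estimate` input here is an assumed given:

```python
def noise_fraction(price, value_estimate):
    """Noise as a percent of the stock price, per P = V + N."""
    noise = price - value_estimate  # N = P - V
    return 100.0 * noise / price

def crowd_bucket(price, value_estimate):
    """Classify a stock by noise level: 'smart' if noise is 25%
    of price or less, 'dumb' if 50% or more, else 'neither'."""
    f = noise_fraction(price, value_estimate)
    if f <= 25.0:
        return 'smart'
    if f >= 50.0:
        return 'dumb'
    return 'neither'
```

So a $100 stock carrying an $80 value estimate (20% noise) lands in the low-noise bucket, while one carrying a $40 estimate (60% noise) lands in the high-noise bucket.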

Table 1 summarizes the results of simulations I ran on Portfolio123 for two portfolios. One, which I call the Smart Crowd portfolio, consists of the top 20 stocks (per my Crowd Love ranking system) drawn from among S&P 500 constituents for which noise accounts for 25% or less of the stock price. The other, the Dumb Crowd portfolio, consists of top-ranked stocks for which noise accounts for 50% or more of the stock price.

Table 1

% Returns           Smart Crowd   Dumb Crowd
2006                      16.72         1.27
2007                      11.79         1.31
2008                     -34.15       -37.71
2009                      33.65        21.64
2010                      20.44        19.15
2011                       8.23         4.86
2012                      15.41         5.83
2013                      31.90        19.12
2014                      16.25        13.16
2015                     -12.70         3.48
YTD 2016                  22.93         7.35

10 Yr Annualized           8.93         3.88
10 Yr Stan. Dev.          15.58        16.04
10 Yr Annl. Alpha*        +3.82        -0.85

* relative to S&P 500 Equal Weight Index
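If you want to sanity-check an annualized figure like that yourself, the geometric-mean arithmetic is simple. Running it on the annual numbers above lands near the reported 8.93 (the small gap presumably reflects the platform compounding at a finer granularity than yearly):

```python
def annualized(yearly_pct_returns):
    """Geometric average annual return from yearly % returns."""
    growth = 1.0
    for r in yearly_pct_returns:
        growth *= 1.0 + r / 100.0
    return 100.0 * (growth ** (1.0 / len(yearly_pct_returns)) - 1.0)

smart = [16.72, 11.79, -34.15, 33.65, 20.44,
         8.23, 15.41, 31.90, 16.25, -12.70]
print(round(annualized(smart), 2))  # roughly 8.7
```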

Well, what do you know! The Smart Crowd portfolio beat the daylights out of the Dumb Crowd portfolio over the full test period. The only year in which the Dumb Crowd portfolio was significantly better was 2015. That had nothing to do with crowd measurement but, instead, with my having chosen to gauge the reasonableness of what the crowd was doing based on valuation ideas. We all know value had a rough time last year (but the Smart Crowd portfolio has come roaring back more recently).

Given the mind-numbing variety of ways in which one can measure crowd activity and the wisdom thereof, this is not by any means the last word on the topic. I’ll continue to track the Smart Crowd approach quietly on Portfolio123’s Smart Alpha platform and won’t decide until after the expiration of the new-model “incubation” period whether it’s worthy of public release. Whether it is or isn’t, one thing seems clear: This approach to crowdsourced investing seems a lot more promising than what we’ve seen elsewhere so far. If and as I develop other models, I’ll share them here.

The Stocks

Tables 2 and 3 show the stocks in the Smart Crowd and Dumb Crowd portfolios (each of which is equally weighted). Note that only 18 stocks qualify at this time for the Dumb Crowd portfolio. And, by the way, there’s no reason to feel flustered at seeing two seemingly contrary crowd portfolios. By definition, the true crowd is always neutral (shares bought always equal shares sold). That’s one more reason why app voting can only sound good if you say it fast enough. We can’t fool ourselves into thinking we’ve legitimately measured the sentiment of “the” crowd. All we can ever do is choose which sub-crowd we want to use as our source.

Table 2 – Smart Crowd Portfolio

Ticker Company
AES AES Corporation
AVY Avery Dennison
BBBY Bed Bath & Beyond
BBY Best Buy
BWA BorgWarner
CSX CSX
EMR Emerson Electric
FLR Fluor
FLS Flowserve
FOXA Twenty-First Century Fox
GWW Grainger (W W)
HOG Harley-Davidson
PH Parker-Hannifin
PHM PulteGroup
PWR Quanta Services
R Ryder System
UHS Universal Health Services
URBN Urban Outfitters
URI United Rentals
YUM YUM! Brands

Table 3 – Dumb Crowd Portfolio

Ticker Company
ADP Automatic Data Processing
CERN Cerner
EFX Equifax
EL Estee Lauder Companies
FAST Fastenal
FDX FedEx
HP Helmerich & Payne
HRL Hormel Foods
KHC Kraft Heinz Co
MNST Monster Beverage
ORLY O’Reilly Automotive
PAYX Paychex
PWR Quanta Services
RAI Reynolds American
REGN Regeneron Pharmaceuticals
ULTA Ulta Salon Cosmetics & Fragrance
V Visa
VMC Vulcan Materials

As suggested above, I’m not quite ready to buy the stocks in Table 2 or sell the ones in Table 3. But it should be instructive, and perhaps fun, to look at them and see if they make sense as messages from the crowd.

That is something we need to be doing. Crowd sourcing (especially crowd sourcing sans naïve fan-boy apps) is in its early days, so let’s consider this an ongoing process rather than an accomplishment. (So assuming I continue to experiment, I may have to rename the Smart Crowd portfolio Smart Crowd Quiet Value, or something like that. Maybe I can even imitate CrowdInvest and change Smart Crowd to SmartCrowd. Who knows!)

Disclosure: None.
