## E Quantum Algorithms: The New Black Box in Portfolio Optimization. What Connects the Dots?

Quantum algorithms are a fun new field emerging in the world of finance. The field capitalizes on artificial intelligence (AI) software to spot arbitrage opportunities in portfolio optimization, and it is gaining wide popularity among asset managers, especially hedge funds. Many firms already use quantum algorithms in stock and bond trading. They consider these algorithms the firm's most valuable black box, and they treat the recipe behind an algorithm's success as strictly confidential. But why are algorithms so important in portfolio optimization? What connects the dots?

“Creativity is intelligence having FUN” – Albert Einstein

The quick and simple answer: they save transaction costs and time. But wait, do they? In practice, that depends on the statistical and mathematical framework applied in the trading algorithm. In other words, it depends on the data sample size and on the formula embedded in the algorithm to calculate the optimal return/risk bundle in the investor's portfolio.

Theoretically, Mean-Variance (MV) is the most popular framework for calculating the optimal return/risk trade-off. MV theory assumes that stock returns and variances are normally distributed, that preferences are quadratic utility functions, and that investors are homogeneous, rational, and risk-averse. Generally speaking, a theory loses its relevance when its assumptions are violated in real-life scenarios. In today's fast-paced markets, most stock returns and variances exhibit non-normal distributions, and investors' preferences are heterogeneous: investors can be categorized as risk-averse, risk-neutral, or risk-seeking.

It is true that AI software can process samples of millions, or even billions, of observations of stock returns and variances, which helps mitigate the violation of the normality assumption. Yet this AI pro can become a con from a statistical perspective. Bigger is not always better: very large samples can introduce statistical biases, particularly p-value problems, leading to rejecting a null hypothesis that is genuinely true, or accepting one that is genuinely false. Consequently, processing huge volumes of data for stock prediction can end up being time-consuming and expensive, instead of reducing costs and time.
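To make the MV return/risk calculation concrete, here is a minimal sketch of the closed-form global minimum-variance portfolio, one point on the MV trade-off. The return and covariance figures are hypothetical, chosen purely for illustration:

```python
import numpy as np

# Hypothetical annualized expected returns for three assets.
mu = np.array([0.08, 0.12, 0.10])
# Hypothetical annualized covariance matrix for the same assets.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])

ones = np.ones(len(mu))
Sigma_inv = np.linalg.inv(Sigma)

# Closed-form global minimum-variance weights: w = (Sigma^-1 1) / (1' Sigma^-1 1).
w_min_var = Sigma_inv @ ones / (ones @ Sigma_inv @ ones)

port_return = w_min_var @ mu
port_vol = np.sqrt(w_min_var @ Sigma @ w_min_var)
print("weights:", np.round(w_min_var, 3))
print(f"expected return: {port_return:.4f}, volatility: {port_vol:.4f}")
```

Because every single-asset portfolio is itself a feasible fully-invested portfolio, the minimum-variance solution can never be more volatile than the least volatile individual asset, which is exactly the diversification benefit MV theory formalizes.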
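The p-value problem with very large samples can also be sketched directly. In this illustrative example (the numbers are invented, not from any real market data), the same negligible drift in simulated returns is undetectable in a small sample but comes out as highly "significant" in a huge one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tiny_drift = 0.01  # an economically negligible deviation from the null mean of 0

small = rng.normal(tiny_drift, 1.0, size=100)        # small sample
large = rng.normal(tiny_drift, 1.0, size=1_000_000)  # "big data" sample

# One-sample t-tests against a null mean of zero.
_, p_small = stats.ttest_1samp(small, popmean=0.0)
_, p_large = stats.ttest_1samp(large, popmean=0.0)

print(f"n=100:       p = {p_small:.3f}")
print(f"n=1,000,000: p = {p_large:.2e}")
# At n=1,000,000 the p-value is astronomically small, so the null is rejected
# even though the underlying effect is trivially tiny; at n=100 the same effect
# typically goes undetected.
```

This is why sheer sample size is not a free lunch: statistical significance at massive scale says little about economic significance, and acting on such signals can add cost rather than remove it.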
