
New Tools Predict Markets with 99.9% certainty


Predicting financial markets is a relatively new field of research. It is cross-disciplinary and difficult, requiring some insight into trading, computational linguistics, behavioral finance, pattern recognition, and learning models.



By Lars Hamberg.

Tools produce long/short signals across asset classes

Live predictive signals have been published in an ongoing public prospective study: whatismonitor.com

As of 2016-01-15:
3634 trades and counting.
Hit ratio: 52.6%
Average profit trade: 3.3%
Average loss trade: -2.4%
Return: 34%

Some lessons learned:
i) Vast streams of online language data contain a convincing source of alpha.
ii) Generally speaking, positive and negative online sentiments do not work as inputs to predictive models for financial assets.
iii) The statistically predictive property of an online-sentiment signal decays very rapidly.

All lines in the ongoing public prospective prediction study run on different models. What do all models in the study have in common?
- They operate on massive amounts of text data and learn and improve with more data.
- They don’t rely on the same basic assumptions as other models out there. That makes them unique and that’s why they work.

The performance of the prediction study speaks for itself: less than a 1/1000 probability of achieving equal or better results by chance.
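That 1/1000 figure is easy to sanity-check with a one-sided binomial test against a coin-flip null. A minimal sketch in Python (my own check using SciPy, with the trade counts reported in the study):

```python
from scipy.stats import binom

# Trade counts reported in the study as of 2016-01-15.
n_trades = 3634
n_wins = 1911          # profitable trades, i.e. a 52.6% hit ratio

# One-sided binomial test: probability of at least n_wins profitable
# trades by pure chance, under a 50/50 null hypothesis.
p_value = binom.sf(n_wins - 1, n_trades, 0.5)

print(f"Hit ratio: {n_wins / n_trades:.1%}")            # 52.6%
print(f"P(>= {n_wins} wins by chance): {p_value:.4f}")  # ~0.0009, i.e. < 1/1000
```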

Here are the impressive results of the live predictions you have received in your own mailbox, and the performance of those predictions had you traded on them on the subsequent close, as indicated. This is an over-hyped field of research with a lot of noise, yet the results that keep being produced in this ongoing public study are second to none, in my view. The performance of the prediction study speaks for itself. As of 2016-01-15:

- 3634 trades in total, of which 1911 profitable, including transaction costs
- Less than a 1/1000 probability of achieving equal or better results by chance
- Average profit trade: 3.4%
- Average loss trade: -2.5%

VIX stands out in terms of profitability. Note that in the case of VIX, the simulated trading is based on the index and not on the futures. Excluding the VIX line: average profit trade 3.3%, average loss trade -2.4%, and an average return of 34% over the whole period.

In January 2015, it was pointed out that the signals did not perform on US benchmark equity indices. Over 2015, a solution to that problem was validated, and the new models for US equities have been live in the study from the start of 2016.

What is a prediction?
A prudent definition of the term prediction is reflected in the testing methods. Many attempts are stuck in the rut of testing whether market direction is systematic under the null hypothesis that market direction is random, which may surprise many practitioners in quantitative finance. It runs contrary to reason, but this methodological mistake is prevalent - even in the most famous study in this field of research. Under a more prudent definition of the term, the accuracy in the world's most famous prediction study could have been as low as 47% (7 out of 15) instead of 87% (13 out of 15). An accuracy rate of 47% would not have produced worldwide media attention and more than 1600 academic citations, in my view. See my critique here: https://www.kdnuggets.com/2016/01/sentiment-analysis-predictive-analytics-trading-mistake.html
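To see how sensitive small-sample accuracy claims are to the choice of null hypothesis, here is a minimal sketch (my own illustration, not the cited study's code); the 60% up-day base rate is an assumed figure for the sake of the example:

```python
from scipy.stats import binom

n_days = 15          # test-set size in the famous study
n_correct = 13       # claimed correct directional calls (87%)

# Null 1: market direction is a fair coin flip (the common, flawed test).
p_fair = binom.sf(n_correct - 1, n_days, 0.5)

# Null 2: the market has a drift, e.g. 60% up-days (assumed base rate);
# a naive strategy of always predicting "up" already beats a coin flip.
p_drift = binom.sf(n_correct - 1, n_days, 0.6)

print(f"P(>= {n_correct}/{n_days} | p=0.5): {p_fair:.4f}")   # ~0.0037
print(f"P(>= {n_correct}/{n_days} | p=0.6): {p_drift:.4f}")  # ~0.027, much weaker evidence
```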

Prediction models vs. trading models
There is a difference between prediction accuracy and profitability, and the two are not necessarily related under real-life conditions. More importantly, you can't build a trading model on the assumption that they are related. Prediction models - let alone models for directional prediction - and trading models are two different things altogether: there are aspects of risk, tradability, liquidity and transaction costs. The study is strict: it is the change in the signal that suggests whether you go long or short, and it is assumed that you can open and close trades on the next official (Bloomberg) close, on average. This is the appropriate method for studying a raft of different models for assets that are traded in different time zones. Daily mails containing live predictions continuously go out to all followers of the study, in order to avoid discussions about whether the results are "too good to be true".
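As a rough sketch of that trading rule (my own illustration; the column names and the exact execution lag are assumptions, not the study's actual code):

```python
import pandas as pd

def signal_change_backtest(df: pd.DataFrame) -> pd.DataFrame:
    """Toy backtest of the rule described above. `df` is assumed to hold
    one row per day with columns 'close' (official daily close) and
    'signal' (+1 long / -1 short)."""
    df = df.copy()
    # A position can only be taken after the signal is known, so lag it
    # by one day: a signal observed today is traded on the next close.
    df["position"] = df["signal"].shift(1)
    # Daily P&L: the position held, times the close-to-close return.
    df["pnl"] = df["position"] * df["close"].pct_change()
    # A "trade" in the study's sense starts whenever the signal flips.
    df["new_trade"] = df["signal"].ne(df["signal"].shift(1))
    return df

# Usage with your own data:
# result = signal_change_backtest(df)
# print(result["pnl"].sum(), result["new_trade"].sum())
```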

What is the task at hand?
Using advanced analytics to predict price formation for financial risk assets is a tickling idea for many people. I am no exception. The general idea: to make money on some credible alpha source in the big data stack. The hypothesis is that by reading more information, and by understanding it better and faster than others, you may get an information advantage that you can trade on. Since 2009, I have been experimenting with state-of-the-art tools - some even in their earliest laboratory versions. I was lucky to come across - and to start playing with - new technologies for mass monitoring of language data. One of the emerging technologies was used for profiling and convicting a famous serial killer. While that was impressive and fascinating, my area of interest was of course the financial markets. The tools in question read - and have read - vast amounts of unstructured language data, using modern language technology and unsupervised learning to capture meaning and signal out of the vast streams of unstructured text in different languages, which constitute the large - and rapidly growing - bulk of what is called Big Data.

A non-trivial task with special challenges
At an early stage it was evident - from my perspective - that most attempts in the field of online sentiment and predictive analytics for risk assets were going nowhere. Why? The models in common use were based on a number of questionable assumptions, and they didn't work. One such assumption had to do with causality: proxy populations were modeled and - successfully - used to predict parliamentary elections, sales of consumer products, churn rates, distribution of TV-viewer votes, and so on. Numerous attempts were made, using similar approaches, to predict short-term price formation (or direction) in traded financial assets, without realizing that the causality between the proxy population (tweeters), the target population (investors/market operators) and the outcome (price formation) is fundamentally different, since the price of a traded financial asset is - needless to point out - capital weighted. In that sense, the price of a traded financial asset is very different from the price of a mobile subscription, a book or a cinema ticket. This inherent difference - and the particular challenges it poses in terms of causality and modeling - has still not been widely understood, in my view.
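A toy numerical illustration of the point (my own, with made-up numbers): headcount-weighted sentiment, the kind that works for election-style proxy models, can point in the opposite direction from the capital-weighted consensus that actually moves a price.

```python
# 99 bullish small accounts vs. 1 bearish large fund:
# (label, sentiment, capital in USD)
participants = [("retail", +1, 1_000)] * 99
participants += [("fund", -1, 500_000)]

# Headcount-weighted sentiment, as in election-style proxy models:
headcount = sum(s for _, s, _ in participants) / len(participants)

# Capital-weighted sentiment, closer to how prices actually form:
capital = sum(s * c for _, s, c in participants) / sum(c for _, _, c in participants)

print(f"Headcount-weighted: {headcount:+.2f}")  # +0.98 -> strongly bullish
print(f"Capital-weighted:   {capital:+.2f}")    # -0.67 -> bearish
```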

Other lessons that have been learned over the last 7 years
This is a relatively new field of research: it is cross-disciplinary, it is difficult, and it requires some insight into trading, computational linguistics, behavioral finance, pattern recognition, learning models and so on. Extremely rapid technology development adds to the complexity. The buzzwords, the noise and the hype are additional distractors. Computer scientists are venturing into the world of trading, traders are venturing into the world of computational semantics, and big quant outfits are working with bad sentiment data and getting nowhere.
Among other things, the ongoing study reconfirms that:
i) Online language data contains a convincing source of alpha.
ii) Generally speaking, raw frequency of co-occurrences of terms is not a viable approach for financial assets.
iii) Positive and negative sentiments from pre-defined wordlists are not a viable approach; generally speaking, simple positive/negative sentiment does not work in predictive models for financial assets.

Moreover, there is a trade-off between the time horizon of the statistical prediction and the time resolution, i.e. the minimum amount of sentiment data per observation. As a consequence, there is - with a few rare exceptions - not yet enough useful sentiment data generated online to do meaningful sentiment analysis on single stocks. Also, the statistically predictive property of an online-sentiment signal decays rapidly and, generally speaking, reaches zero within 48 hours (a sketch of how such decay can be measured follows below). Having said that, reading signals continuously (on a rolling basis) leads to a new trading position every three weeks on average, with large variations between assets: in the study, EURUSD has traded 224 times, while XAU/Gold has traded 19 times over the same period.

Lastly, all lines in the study run on different models. What do all models in the study have in common? They operate on massive amounts of data. They use very granular and rich sentiments that are tailor-made for the task. They don't rely on the same assumptions as other models out there. These models learn, and they improve with more data and with more activity in the learning process.

With a rigorous definition of prediction, there is a 52.6% accuracy rate in terms of profitability and an average return of 34%. The probability of achieving an accuracy of at least 52.6% by chance is just 0.09% (less than one in a thousand) for the large sample size in the study (3634 trades, of which 1911 profitable). Still, the distribution of profitability between profit trades (3.2%) and loss trades (-2.4%) is more important and even more impressive, in my view.
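One simple way to measure that decay (a sketch of my own, not the study's method) is to correlate the sentiment signal with forward returns at increasing lags and watch the correlation fall toward zero:

```python
import numpy as np
import pandas as pd

def decay_profile(signal: pd.Series, returns: pd.Series, max_lag: int = 5) -> pd.Series:
    """Correlation between a daily sentiment signal and forward returns at
    increasing lags. A profile that hits ~0 by lag 2 would match the
    48-hour decay figure reported above."""
    return pd.Series(
        {lag: signal.corr(returns.shift(-lag)) for lag in range(1, max_lag + 1)},
        name="signal/forward-return correlation",
    )

# Synthetic check: a signal whose edge exists only one day ahead.
rng = np.random.default_rng(0)
sig = pd.Series(rng.standard_normal(1000))
ret = 0.3 * sig.shift(1).fillna(0) + pd.Series(rng.standard_normal(1000))
print(decay_profile(sig, ret))  # lag 1 clearly positive, lags 2+ near zero
```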

What’s next?
The stream of online language data contains a convincing alpha source, and the ongoing study is the convincing proof. The number of observations keeps piling up as the live predictions pile up in your mailbox. It is important to know what to look for, how to measure it, and how to build a trading model around it. This study has increased our understanding of causality in this field. It's been a truly fascinating journey and it's still just the beginning. My prediction is that this area of research will have a fundamental impact across financial services. The recent emergence of machines that are learning to read and understand human language on an Internet scale is a game changer that will allow us to leverage patterns far beyond human cognition. Read more here: http://www.campaignasia.com/Article/404827,2016+A+giant+leap+for+big+data.aspx
