With Julien Messias, Founder, Head of Research & Development, Quantology Capital Management
Briefly describe your firm and your own professional background.
Quantology Capital Management is a leading French asset manager specializing in quantitative finance. We manage three listed equity-based strategies; our investment philosophy is focused on capturing outperforming stocks by analyzing investors’ decision-making processes.
Our aim is to exploit behavioral biases (over- or under-reactions of prices to corporate events) in a systematic way in order to generate alpha. Our trading/R&D desk is composed of four experienced people with engineering and actuarial science backgrounds.
I am a Fellow of the French Institute of Actuaries, and I run the R&D/trading team at Quantology. Previously I ran vanilla and light-exotic equity derivatives trading books at ING Financial Markets.
How does Quantology use machine learning?
The purpose of machine learning at Quantology Capital Management is to improve our strategies in a “non-intuitive” way, i.e., to test their dependency on new factors or to uncover new high-frequency execution patterns.
It is important to note that cleaning the data takes up 80% of data scientists’ time. This process requires four steps. First, one needs to ensure the data is clean and complete.
Second, the dataset must be debiased: the information filtration must be respected, so that a model only ever sees what was actually known at the time, which is why we use exclusively point-in-time or real-time market data. We create and feed our own databases continuously. Third, both kinds of data must be integrated: quantitative data, which is usually structured, and, more recently, qualitative alternative data, which is usually unstructured. Finally, we must ensure that the data is easily available and readable.
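As a minimal illustration of the point-in-time principle, the sketch below shows a filter that avoids look-ahead bias. It is a sketch only; the pandas schema and column names such as available_at are our own assumptions, not Quantology’s actual pipeline.

```python
import pandas as pd

def point_in_time_view(df: pd.DataFrame, as_of: pd.Timestamp) -> pd.DataFrame:
    """Return only the rows that were actually known at `as_of`.

    Filtering on `available_at` (when a figure was published) rather than
    `period_end` (the period it refers to) avoids look-ahead bias:
    restatements and late-arriving data never leak into the past.
    """
    return df[df["available_at"] <= as_of]

# Two records for the same fiscal period: the original release and a
# later restatement (hypothetical data for illustration).
earnings = pd.DataFrame({
    "ticker": ["AAA", "AAA"],
    "period_end": pd.to_datetime(["2023-12-31", "2023-12-31"]),
    "available_at": pd.to_datetime(["2024-02-15", "2024-04-30"]),
    "eps": [1.10, 0.95],
})

# As of 2024-03-01, only the original release was known.
print(point_in_time_view(earnings, pd.Timestamp("2024-03-01")))
```

The design point is that the filter keys on the publication date, not the reporting period: a backtest run “as of” March 2024 sees the original figure only, exactly as a live strategy would have.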
This process enables Quantology Capital Management to construct the best possible proxy of the “collective intelligence” of the market, which is one of the strong principles that we rely on. For that, the more data, the better; but the more data, the messier as well. It is a perpetual trade-off between the quantity of the data and its precision.
What are the challenges of implementing AI/ML on a trading desk?
When running a hedge fund, on the one hand you must be continually focused on applying new techniques and using new data. On the other hand, you must maintain steady investment principles and axioms, which are at the heart of success.
That said, you cannot have your whole business, from A to Z, rely only on ML. One of the most well-known issues is overfitting. This denotes a situation in which a model fits particular observations (putting too much weight on outliers, for example) rather than a general structure based on certain parameters. The resulting recommendations lead to losses when, consciously or subconsciously, the results are not sufficiently challenged.
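A toy illustration of the point (ours, not Quantology’s models): fitting a high-degree polynomial to a small noisy sample where the true relation is linear. The flexible model chases the noise and typically does worse out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(-1, 1, 20))
y_train = x_train + 0.3 * rng.normal(size=20)    # true relation is linear
x_test = np.sort(rng.uniform(-1, 1, 200))
y_test = x_test + 0.3 * rng.normal(size=200)

for degree in (1, 9):
    coefs = np.polyfit(x_train, y_train, degree)
    mse_train = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    mse_test = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {mse_train:.3f}, test MSE {mse_test:.3f}")

# The degree-9 fit typically shows a lower train MSE but a higher test
# MSE than the linear fit: it memorizes observations, not structure.
```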
How can machine learning be a competitive advantage for a hedge fund?
Machine learning is a wonderful basket of tools that can be used to sharpen your trading, which can be a significant competitive advantage.
Today we see initiatives along several avenues. You have the “explorers,” researchers focused on grabbing more and more data, versus the “technicians,” people who work on traditional market data and try to improve current processes. The latter group evolves in a well-known environment, eager to apply new techniques to their traditional structured datasets.
How does Quantology work with technology solutions providers?
The infrastructure complexity has to be handled properly. To achieve that, one must focus on the business relationship one creates with technology solution providers. It takes a lot of time for an asset management firm to deal with such partners, as the consistency, accuracy and format of the data have to be constantly challenged. A provider has to be much more than a data vendor: it must think as a long-term partner interested in its client’s success, and it must learn from its users’ feedback.
What are future threats to machine learning and artificial intelligence processes?
Quantitative and systematic strategies are commonly criticized for suffering from “time-decay,” to borrow an option trader’s term. They are challenged as well for a perceived lack of adaptability.
The main drawback of machine learning is how much it suffers in unstable financial markets. It is very challenging to find a strategy that is “all-road” or “all-weather,” a strategy that is sample-independent.
The best way to address this is to split the database into three subsets: one dedicated to training, the second to testing, and the third to validation, as in the sketch below.
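A minimal sketch of such a three-way split. The 60/20/20 ratio and the helper name are illustrative assumptions, not Quantology’s figures; the split is chronological rather than randomly shuffled, which for market data respects the point-in-time discipline described earlier.

```python
import numpy as np

def chronological_split(X, y, train=0.6, test=0.2):
    """Split time-ordered arrays into train / test / validation blocks.

    Keeping the blocks contiguous in time prevents future observations
    from leaking into the training set.
    """
    n = len(X)
    i, j = int(n * train), int(n * (train + test))
    return (X[:i], y[:i]), (X[i:j], y[i:j]), (X[j:], y[j:])

X = np.arange(1000).reshape(-1, 1)   # placeholder time-ordered features
y = np.arange(1000, dtype=float)     # placeholder targets
(X_tr, y_tr), (X_te, y_te), (X_va, y_va) = chronological_split(X, y)
print(len(X_tr), len(X_te), len(X_va))  # 600 200 200
```

The validation block, the most recent data, is touched only once, after all model choices have been made on the training and testing blocks.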
More than in the algos themselves, innovation is happening in data storage, with data lakes and data warehouses that enable researchers to gather data from different sources and in different formats of corporate data. The issue with such solutions is the cost of computation when retrieving the data, as it is raw and unsorted; the resulting lack of visibility into the dataset makes it unsuitable for high-frequency decisions.

In the near term, all asset managers, from the smallest boutiques to the biggest firms, will include standard machine learning tools in their processes. Thus, obtaining alpha from machine learning will require ever more investment, capabilities and unique datasets. Having said that, we have noticed that recent efforts bear less on the algos, which become public sooner, and more on the datasets. The algo can be considered the engine and the data the fuel: in the long run, which is more expensive? The industry needs to answer that question.
This article first appeared in the Q2 issue of GlobalTrading, a Markets Media Group publication.