Lioba Heimbach, Eric Schertenleib, Roger Wattenhofer
Trade execution on Decentralized Exchanges (DEXes) is automatic and does not
require individual buy and sell orders to be matched. Instead, liquidity
aggregated in pools from individual liquidity providers enables trading between
cryptocurrencies. The largest DEX measured by trading volume, Uniswap V3,
promises a DEX design optimized for capital efficiency. However, Uniswap V3
requires far more decisions from liquidity providers than previous DEX designs.
In this work, we develop a theoretical model to illustrate the choices faced
by Uniswap V3 liquidity providers and their implications. Our model suggests
that providing liquidity on Uniswap V3 is highly complex and requires many
considerations from a user. Our supporting data analysis of the risks and
returns of real Uniswap V3 liquidity providers underlines that liquidity
provision on Uniswap V3 is demanding in practice and that performance varies
widely. While there are simple and profitable strategies for liquidity
providers in liquidity pools characterized by negligible price volatilities,
these strategies only yield modest returns. Instead, significant returns can
only be obtained by accepting increased financial risks and at the cost of
active management. Thus, with the introduction of Uniswap V3, providing
liquidity has become a game reserved for sophisticated players, in which
retail traders do not stand a chance.
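The decisions the abstract alludes to stem from Uniswap V3's concentrated-liquidity design, where a liquidity provider must choose a price range [pa, pb] and only earns fees while the price stays inside it. A minimal sketch of the standard whitepaper formulas for the token amounts backing a position (function name and all numerical values are illustrative, not from the paper):

```python
import math

def v3_position_amounts(L, p, pa, pb):
    """Token amounts backing a Uniswap V3 position with liquidity L
    over the price range [pa, pb], at current price p (token1 per token0)."""
    sp, sa, sb = math.sqrt(p), math.sqrt(pa), math.sqrt(pb)
    if p <= pa:          # price below range: position is all token0
        return L * (1 / sa - 1 / sb), 0.0
    if p >= pb:          # price above range: position is all token1
        return 0.0, L * (sb - sa)
    # price inside the range: position holds both tokens and earns fees
    return L * (1 / sp - 1 / sb), L * (sp - sa)

# Symmetric range around p = 1 (note 0.8 = 1/1.25): holdings are balanced
x, y = v3_position_amounts(L=1000.0, p=1.0, pa=0.8, pb=1.25)
```

Narrowing [pa, pb] concentrates the same capital into more effective liquidity, which is exactly the fee-versus-rebalancing trade-off the paper analyzes.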
Market Making via Reinforcement Learning in China Commodity Market
Junshu Jiang, Thomas Dierckx, Duxiang Xiao, Wim Schoutens
Market making is an important role in financial markets. A successful market
maker must control inventory risk and adverse selection risk while providing
liquidity to the market. Reinforcement Learning (RL), an important methodology
for control problems, has the advantages of being data-driven and requiring
less rigid assumptions, and it has received great attention in the
market-making field since 2018. However, although the China commodity market
has the largest trading volume in agricultural products, nonferrous metals and
some other sectors, studies applying RL to market making in the China market
are still rare. In this thesis, we try to fill that gap. We develop an
automatic trading system and verify the feasibility of applying Reinforcement
Learning in the China commodity market. We also probe the agent's behavior by
analyzing how it reacts to different environments.
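As a rough illustration of the control problem described above (not the thesis's actual system), here is a toy tabular Q-learning market maker whose reward trades spread capture against a quadratic inventory penalty; the fill model and all constants are invented for the sketch:

```python
import random
random.seed(0)

# Toy environment: each step the agent picks a half-spread in ticks; tighter
# quotes fill more often (earning the spread) but build inventory, which is
# penalized to mimic inventory-risk control.
ACTIONS = [1, 2, 3]   # half-spread choices
PHI = 0.1             # inventory penalty weight (assumed)

def step(inv, spread):
    filled_buy = random.random() < 1.0 / spread    # tighter quote, likelier fill
    filled_sell = random.random() < 1.0 / spread
    inv += int(filled_buy) - int(filled_sell)
    pnl = spread * (int(filled_buy) + int(filled_sell))
    reward = pnl - PHI * inv * inv                 # spread capture minus inventory risk
    return inv, reward

# Tabular Q-learning over (sign of inventory) x (half-spread action)
Q = {(s, a): 0.0 for s in (-1, 0, 1) for a in ACTIONS}
inv = 0
for t in range(20000):
    s = (inv > 0) - (inv < 0)
    if random.random() < 0.1:                      # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
    inv, r = step(inv, a)
    s2 = (inv > 0) - (inv < 0)
    Q[(s, a)] += 0.05 * (r + 0.95 * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
```

Inspecting how the learned Q-values differ across the long, flat and short inventory states is the same kind of behavioral probe the thesis describes.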
Carole Bernard, Silvana M. Pesenti, Steven Vanduffel
The robustness of risk measures to changes in underlying loss distributions
(distributional uncertainty) is of crucial importance in making well-informed
decisions. In this paper, we quantify, for the class of distortion risk
measures with an absolutely continuous distortion function, its robustness to
distributional uncertainty by deriving its largest (smallest) value when the
underlying loss distribution has a known mean and variance and, furthermore,
lies within a ball - specified through the Wasserstein distance - around a
reference distribution. We employ the technique of isotonic projections to
provide for these distortion risk measures a complete characterisation of sharp
bounds on their value, and we obtain quasi-explicit bounds in the case of
Value-at-Risk and Range-Value-at-Risk. We extend our results to account for
uncertainty in the first two moments and provide applications to portfolio
optimisation and to model risk assessment.
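For context, Value-at-Risk and Range-Value-at-Risk (the measures for which the paper obtains quasi-explicit bounds) have simple empirical estimators. A sketch on an assumed standard-normal loss sample; this computes the plain sample estimators only, not the paper's Wasserstein-ball bounds:

```python
import random, statistics
random.seed(1)

def var(losses, alpha):
    """Empirical Value-at-Risk: the alpha-quantile of the loss sample."""
    xs = sorted(losses)
    return xs[min(int(alpha * len(xs)), len(xs) - 1)]

def rvar(losses, alpha, beta):
    """Empirical Range-Value-at-Risk: average of VaR_u over u in (alpha, beta)."""
    xs = sorted(losses)
    lo, hi = int(alpha * len(xs)), int(beta * len(xs))
    return statistics.mean(xs[lo:hi])

losses = [random.gauss(0.0, 1.0) for _ in range(100000)]
v = var(losses, 0.95)        # close to the normal quantile 1.645
r = rvar(losses, 0.95, 0.99) # between VaR_0.95 and VaR_0.99
```

RVaR interpolates between VaR (beta -> alpha) and Expected Shortfall (beta -> 1), which is why the paper can treat both through one distortion family.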
Mack-Net model: Blending Mack's model with Recurrent Neural Networks
Eduardo Ramos-Pérez, Pablo J. Alonso-González, José Javier Núñez-Velázquez
In general insurance companies, a correct estimation of liabilities plays a
key role due to its impact on management and investing decisions. Since the
Financial Crisis of 2007-2008 and the strengthening of regulation, the focus is
not only on the total reserve but also on its variability, which is an
indicator of the risk assumed by the company. Thus, measures that relate
profitability with risk are crucial in order to understand the financial
position of insurance firms. Taking advantage of the increasing computational
power, this paper introduces a stochastic reserving model whose aim is to
improve the performance of the traditional Mack's reserving model by applying
an ensemble of Recurrent Neural Networks. The results demonstrate that blending
traditional reserving models with deep and machine learning techniques leads to
a more accurate assessment of general insurance liabilities.
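The deterministic core that Mack's model builds on is the chain-ladder algorithm. A minimal sketch on an invented 3x3 cumulative run-off triangle (the paper's Mack-Net augments this classical procedure with recurrent networks; the numbers here are purely illustrative):

```python
# Toy cumulative run-off triangle (rows: accident years, columns: development
# years); None marks cells not yet observed.
triangle = [
    [100.0, 150.0, 165.0],
    [110.0, 160.0, None],
    [120.0, None,  None],
]
n = len(triangle)
latest = [row[n - 1 - i] for i, row in enumerate(triangle)]  # observed diagonal

# Chain-ladder development factors f_k = sum_i C[i][k+1] / sum_i C[i][k]
factors = []
for k in range(n - 1):
    rows = [row for row in triangle if row[k + 1] is not None]
    factors.append(sum(r[k + 1] for r in rows) / sum(r[k] for r in rows))

# Complete the lower triangle by applying the factors
for row in triangle:
    for k in range(n - 1):
        if row[k + 1] is None:
            row[k + 1] = row[k] * factors[k]

# Reserve = projected ultimates minus what has already been paid/observed
reserve = sum(row[-1] for row in triangle) - sum(latest)
```

Mack's contribution is a distribution-free standard error around this point estimate; the reserve's variability is exactly what the paper's ensemble targets.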
Hey there! I recently completed a tutorial on training a BERT model to predict movie review scores from a written review. You can find the tutorial and interactive demo by following the link! (r/DataScience)
Prices are very, very far from (geometric) Brownian motions. Return distributions are fat-tailed, volatility is "rough", i.e. clustered in a scale-invariant way, negative returns increase future volatility, and trends, either up or down, also increase future volatility. Some mathematical models, such as the recent Multifractal Random Walk or the Quadratic Rough Heston Model, come close to reproducing all these stylized facts. But another, data-driven approach is also possible. Look below at the wavelet representation of the S&P500 in the years 2000-2018. Both intriguing and beautiful, no? Wavelet transforms are the appropriate tool to study scale-invariant objects, such as turbulent flows, fracture surfaces, galaxies and financial time series. Using only low moments of non-linear correlations between wavelets, Rudy Morel has come up with a parsimonious representation (relying on scale invariance) which requires only log^2(T) coefficients for a time series of length T. Such a compressed representation can then be used to generate synthetic time series that faithfully capture all the above stylized facts -- see our recent preprint: https://lnkd.in/gJKzae6Z Wavelet analysis can be seen as an educated way to deep-learn the data, in a way that leverages a priori knowledge -- here about financial time series. The next step will be to compare with brute-force deep learning techniques. #wavelets #volatility #syntheticdata #scaling #stockmarket
Capital Fund Management | Chairman and Head of research
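A toy illustration of the scale-by-scale view the post above describes: a plain Haar wavelet decomposition, which splits a length-2^J signal into averages and details at log2(T) scales. (The representation in the preprint is built from moments of such coefficients; this sketch is only the transform itself.)

```python
# One full Haar wavelet decomposition: at each scale the signal splits into
# pairwise averages (smooth part) and pairwise differences (detail part),
# giving the scale-by-scale view used to study scale-invariant series.
def haar_transform(x):
    x = list(x)
    details = []
    while len(x) > 1:
        approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
        detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
        details.append(detail)
        x = approx
    return x[0], details  # overall mean, details from finest to coarsest scale

mean, details = haar_transform([4.0, 2.0, 5.0, 5.0, 7.0, 1.0, 3.0, 3.0])
```

A length-8 signal yields 3 detail scales; the growth in the number of scales is logarithmic in T, which is where compressed log-based representations start from.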
*** Market Microstructure - Quantitative Research *** I am happy to share my latest paper on market impact, which completes the previous ones that I published on the equity and options markets over the last few years (links in comments). This third paper completes the two previous ones and closes the market impact trilogy I started 6 years ago. In this paper, we propose a theory of the market impact of metaorders based on a coarse-grained approach where the microscopic details of supply and demand are replaced by a single parameter ρ ∈ [0, +∞] shaping the supply-demand equilibrium and the market impact process during the execution of the metaorder. Our model provides a unified explanation of most of the empirical observations that have been reported and establishes a strong connection between the excess volatility puzzle and the order-driven view of the markets through the square-root law. I would like to thank Marcos Lopez de Prado and Prof. Alexander Lipton for their comments on the preliminary version of this paper. I am also particularly grateful to Charles-Albert Lehalle for his careful reading, comments and the many interesting discussions we had about it.
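For readers unfamiliar with the square-root law mentioned above: the expected impact of a metaorder scales like the square root of its participation in daily volume. A sketch with placeholder numbers (Y, sigma and V are illustrative, not values from the paper):

```python
import math

# Square-root impact law: I(Q) ~ Y * sigma * sqrt(Q / V), with Y an
# order-one constant, sigma the daily volatility, V the daily volume.
def sqrt_law_impact(Q, sigma=0.02, V=1e6, Y=0.5):
    return Y * sigma * math.sqrt(Q / V)

# Doubling the order size multiplies impact by sqrt(2), not 2:
i1 = sqrt_law_impact(10_000)
i2 = sqrt_law_impact(20_000)
```

This concavity in Q, rather than linear impact, is the empirical regularity the coarse-grained theory is built to reproduce.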
Come discuss systematic investment strategies in Paris on Tuesday, May 31, with Marcos Lopez de Prado, Gautier Marti, Marie Brière, Hugues Langlois, Jeroen VK Rombouts and yours truly at this workshop organized by the ILB. On the program: Open Problems in Quantitative Asset Management, Alternative Data and Machine Learning.
Abu Dhabi Investment Authority (ADIA) | Quantitative R&D Lead
Continuing my previous post, I'm happy to present the latest episode of Quantcast with the quant finance editor at Risk.net, Mauro Cesa: https://lnkd.in/dYp4hA2r. In addition to stablecoins, we also discussed my recent paper with Artur Sepp of Sygnum Bank on automated market-making in FX: https://lnkd.in/ddVw7RnR. It describes "how central bank digital currencies or stablecoins can be exchanged through a smart contract on the blockchain while retaining pricing consistent with a traditional centralized market." I also argue that "the approach would make FX markets more transparent, retaining the monetary incentives for market-makers while improving efficiency for other players." It also allows direct exchange of relatively illiquid currencies without the need for US dollar transactions.
Index:
00:00 Introduction
05:00 Automated FX market-making
09:10 On-chain and off-chain interaction
11:25 Applications of the framework
12:25 The problems with algorithmic stablecoins
20:00 The collapse of TerraUSD
22:55 Viable blockchain applications in finance
29:30 The limits of DeFi
34:35 Non-financial applications
#blockchain #stablecoins #fx
In 1972, the future Nobel Prize winner Philip Anderson published an article in "Science" entitled "More is Different", in which he explained the concept of emergence in a remarkably clear manner: the behaviour of assemblies of interacting particles cannot be understood as a simple extrapolation of the behaviour of isolated particles. On the contrary, original and surprising behaviours can appear, and their understanding requires specific concepts and new tools. The whole is not greater than the sum of its parts, it is different. Anderson had in mind, in particular, the surprising collective behaviours in condensed matter systems such as superfluidity, which does not exist at the atomic level, and can only appear at the macroscopic level. This phenomenon of emergence concerns many fields outside physics: collective behaviour of neurons (memory, consciousness), starling murmurations, social unrest, economic crises, financial panics... I thought that a conference on this theme would be a perfect epilogue to my course on the links between statistical physics - which is precisely the science of emergence - and the social sciences. My hope is to bring together, on June 2nd and 3rd 2022 at the Collège de France, physicists, economists and mathematicians specialised in these ideas, to allow, who knows, the emergence of new research directions. https://lnkd.in/eG4Y4_9E #research #collective #emergence #crises #panic #collegedefrance
Capital Fund Management | Chairman and Head of research
I will talk about modeling the dynamics of implied volatility surfaces of crypto assets (#btc, #eth) at Imperial College London on Wednesday, the 18th of May. I will present an arbitrage-free framework equipped with a stochastic volatility model for the arbitrage-free dynamics and valuation of different types of crypto options. I will illustrate applications to calibrating and modeling Bitcoin and Ether volatility surfaces using Deribit options exchange data. This talk is based on joint work with Parviz Rakhmonov.
Sygnum Bank | Head Systematic Solutions and Portfolio Construction
Come and join us in Paris on Tuesday, the 31st of May, for this workshop organized with the Finance and Insurance Rebooted (FaIR) initiative of the Institut Louis Bachelier. It will be great to discuss scientific investment with Marie Brière, Marcos Lopez de Prado, Gautier Marti, Thierry Foucault and Jeroen VK Rombouts. A very interesting Paris + Abu Dhabi meeting!
Abu Dhabi Investment Authority (ADIA) | Quantitative Research & Development Lead
Tuesday, May 24, 5:30PM-6:45PM (Eastern Time): Join us for Igor Halperin's online talk "Combining Reinforcement Learning and Inverse Reinforcement Learning for Asset Allocation Recommendations" in NYU Courant's Mathematical Finance & Financial Data Science Seminar. This online event is open to the public, but requires registration. Registration & more details: https://lnkd.in/dzF832km Abstract: We suggest a simple practical method to combine human and artificial intelligence to both learn the best investment practices of fund managers and provide recommendations to improve them. Our approach is based on a combination of Inverse Reinforcement Learning (IRL) and RL. First, the IRL component learns the intent of fund managers as suggested by their trading history, and recovers their implied reward function. In the second step, this reward function is used by a direct RL algorithm to optimize asset allocation decisions. We show that our method is able to improve over the performance of individual fund managers. #nyucourant #nyu New York University Courant Institute of Mathematical Sciences NYU Courant Institute of Mathematical Sciences M.S. in Mathematics in Finance, NYU Courant New York University #reinforcementlearning #machinelearning #trading #inversereinforcementlearning
I am very happy that three years after the preprint, our paper "Transaction cost analytics for corporate bonds" with Renyuan Xu and Xin Guo is now published in Quantitative Finance. We started this work long ago at UC Berkeley, convinced that corporate bonds deserve systematic and quantitative transaction cost analysis at the level of equities. Nowadays, with the low-yield environment, controlling the cost of buying and selling them is more important than ever.
Abu Dhabi Investment Authority (ADIA) | Quantitative R&D Lead
I am looking forward to presenting at the Machine Learning and Quantitative Finance Workshop on June 1 at the Oxford-Man Institute of Quantitative Finance, University of Oxford. More info here:https://lnkd.in/gr9NhMBk#nyu #nyucourant #machinelearning #financialdatascience #trading
Courant Institute of Mathematical Sciences | Clinical Full Professor of Mathematics
Our white paper on default modeling is on https://lnkd.in/eUMAPYHU. Nick Costanzino, Albert Cohen and I found that looking at stopping times in terms of their conditional survival curves yields a general framework for default modeling that encompasses both structural and reduced form models. Moreover, whether the stopping time is predictable or totally inaccessible can be directly read off of the survival curves.#default #creditrisk #stoppingtime #predictable #inaccessible #whitepaper
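As a toy instance of the survival-curve viewpoint: in the simplest reduced-form model with a constant hazard rate, the conditional survival curve is exponential and the default time is totally inaccessible. The white paper's framework is far more general; this is only the textbook special case:

```python
import math

# Constant-hazard reduced-form default: survival curve S(t) = exp(-lam * t),
# default probability over [0, t] is 1 - S(t). The smooth, strictly
# decreasing S here corresponds to a totally inaccessible stopping time
# (no predictable "announcement" of default).
def survival(t, lam):
    return math.exp(-lam * t)

def default_prob(t, lam):
    return 1.0 - survival(t, lam)

p5 = default_prob(5.0, 0.02)  # 5-year default probability at a 2% hazard
```

In the structural (predictable) case, by contrast, the survival curve develops a very different local behaviour near the default barrier, which is the readable signature the post refers to.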
ML-Quant now has 6 additional web scrapers, for a total of 30 scrapers working 24/7 to 'find' trending content in the quant finance and machine learning space. Today, I added a script to find the top general ML papers as per their social media metrics (twitter, news, YouTube etc.) https://www.ml-quant.com/
For me personally, one of the holy grails is to tie machine learning and finance to climate change - that is, using ML in finance to tackle some of the most challenging problems of our time. In addition to ESG investing, providing financial protection against climate risk is another key ingredient - we need to protect society while figuring out our sustainability transition plan. Natural disasters could otherwise wipe out budgets set aside for R&D; e.g., Hurricane Katrina devastated New Orleans. That, perversely, over time gives fossil fuel companies the upper hand. Destruction -> poverty -> lack of education -> poor personal sustainability hygiene and lack of local opposition to fossil fuel companies (fossil fuel jobs > perceived environmental damage). While not a climate solution, there is a growing contingent-claim product area for managing climate change and natural disaster risk, and one area where richer datasets (e.g. satellite imagery) and ML can ultimately make a significant difference in ensuring fair pricing and more robust uncertainty quantification. Our latest technical paper on embedding clustering into a unified Hierarchical Bayesian modeling framework for catastrophe and interest rate risk premia adjusted CAT bond pricing is available on arXiv: https://lnkd.in/dQivqGXW This is joint work with co-authors Chatterjee and Domfeh. #climaterisk #machinelearning #environmentalfinance #bayesianstatistics #insuretech #quantitativefinance #weatherforecasting Peter Adriaens Dixon Domfeh Morton Lane Runhuan Feng, PhD, FSA, CERA Todd Ringler Swami Sethuraman Joydeep Lahiri Maura Feddersen Larry Eisenberg Dr. Sebastian Rath Richard Matsui Lawrence Habahbeh
Coinbase said in its earnings report Tuesday that it holds $256 billion in both fiat currencies and cryptocurrencies on behalf of its customers. Yet the exchange noted that in the event it ever declared bankruptcy, "the crypto assets we hold in custody on behalf of our customers could be subject to bankruptcy proceedings." Coinbase users would become "general unsecured creditors," meaning they have no right to claim any specific property from the exchange in proceedings. Their funds would become inaccessible. Coinbase is down 70% YTD. FUD?
Intech Investment Management LLC | Senior Vice President. Chief Data Scientist.
Section 8.4.5 of my and Adrien Treccani, Ph.D.'s book Blockchain And Distributed Ledgers: Mathematics, Technology, And Economics, https://lnkd.in/dPuCJjxf, published in 2021, is called Dynamically Stabilized Coins. It is exclusively devoted to critiquing dynamic stabilization schemes, even as VCs keep throwing money at them. (Why not simply make an effort to read the book instead of believing an army of salespeople and techies without the slightest idea of the whole concept?) We clearly stated, "This stabilization scheme represents yet another pure theoretical construct that cannot and does not work in practice and will collapse..." We even compared these algorithms to Baron Munchausen, who famously described to the trusting crowd how he pulled himself and his horse out of a mire by his hair. To those who did not want to listen: enjoy the Terra ride now! In the interest of the community, please see this section in full. #blockchain #economics #dynamicallystabilizedcoins #terra
We released a new paper, "Deep Signature Models for Financial Equity Time Series Prediction", along with Sonam Srivastava and Himanshu Agrawal. We explore in this paper the use of deep signature models to predict equity financial time series returns. First, we use signature transformations to model the underlying shape of the input equity returns; further assuming the underlying shape remains the same, we predict future values based on that shape. Finally, different neural networks are used to process the output from the signature transformation to predict equity returns: Long Short-Term Memory networks, a Signet model, and a Deep Signature model. Feeding signature transformations to a neural network brings significant improvement in prediction. Using signature transformations with Long Short-Term Memory networks proves to be the best-performing model in accuracy and precision. In contrast, in RMSE terms, all three models offer very comparable performance. You can download the paper here: https://lnkd.in/gWqHeBn5 www.aifinanceinstitute.com AIFI - Artificial Intelligence Finance Institute
Artificial Intelligence Finance Institute - AIFI | Founder at Artificial Intelligence Finance Institute
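A minimal sketch of the signature transformation such models feed to their networks: the level-1 and level-2 signature of a piecewise-linear, time-augmented price path. This is the generic textbook definition, not the paper's exact feature pipeline:

```python
# Truncated (level-2) path signature of a piecewise-linear d-dimensional path:
# level 1 collects the total increments, level 2 the iterated integrals
# S^{ij}. The price series is augmented with time so the signature is not
# degenerate for a 1-D signal.
def signature_level2(path):
    d = len(path[0])
    inc = [[path[k + 1][i] - path[k][i] for i in range(d)]
           for k in range(len(path) - 1)]
    s1 = [sum(dx[i] for dx in inc) for i in range(d)]
    s2 = [[0.0] * d for _ in range(d)]
    run = [0.0] * d  # running level-1 signature while appending segments
    for dx in inc:
        for i in range(d):
            for j in range(d):
                s2[i][j] += run[i] * dx[j] + 0.5 * dx[i] * dx[j]
        for i in range(d):
            run[i] += dx[i]
    return s1, s2

prices = [1.0, 1.2, 1.1, 1.4]
path = [(float(t), p) for t, p in enumerate(prices)]  # time-augmented path
s1, s2 = signature_level2(path)
```

A quick sanity check is the shuffle identity S^{ij} + S^{ji} = S^i * S^j, which the discrete computation satisfies; these low-level terms summarize the "shape" of the input window.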
I am not sure I know enough about blockchains to say anything relevant at all. But sometimes silly ideas turn out to be useful, so here we go; bear with my naiveté, as I know I am out of my depth here. One of the main problems of the proof-of-work protocol is the uncanny amount of energy used to solve (by brute-force computation) difficult but totally pointless mathematical riddles. On the other hand, scientific computation for a purpose (academic or industrial) also requires tremendous amounts of CPU time. So would it be possible to create a kind of market where snippets of real computation tasks of different complexity are submitted and "solved" by miners, thereby *simultaneously* validating transactions and performing something else that is useful? Of course, the difficulty would be to make sure that the task has actually been performed, which is easy when the solution of the riddle is already known. One would have to come up with a way to embed an easy auxiliary task that can be quickly checked but that can only be completed when the main (useful) task is done. OK, perhaps this is totally impractical, and other protocols with (much) less carbon footprint may turn out to solve the problem. But if proof of work is somehow here to stay, I would find some comfort in thinking that all this computing power isn't only burnt on trivia. #blockchain #bitcoin #computing #carbonfootprint #scientificresearch
Capital Fund Management | Chairman and Head of research
For the latest Quantcast, I’m joined by Prof. Alexander Lipton, global head of quantitative R&D at ADIA. He talks about his latest work with Artur Sepp on their automated market making framework for currencies that uses smart contracts and central bank digital currencies or stablecoins. From stablecoins, the conversation inevitably moves to the flaws of algorithmically stabilised coins like Terra UST, and why their design cannot work.
“In this period of dramatic structural change, I do not care about technology. Tech is the least important part of the next 20 years.” Paul Donovan, Chief Economist at UBS Global Wealth Management, said at Calcalist’s Meet & Tech event. “The economic transformation, the economic change, does not come from technology, but how we use it.”
New paper "Algorithmic Collusion in Electronic Markets: The Impact of Tick Size". This is joint work with Patrick Chang (Oxford-Man Institute) and Jose Penalva (Universidad Carlos III de Madrid & Associate member of the Oxford-Man Institute). All comments welcome. #algorithmictrading #machinelearning #machinelearningalgorithms #microstructure #regulation #reinforcementlearning #gametheory #quantitativefinance #artificialintelligence
If you enjoy doing quantitative financial research, have an advanced degree in a highly quantitative field, and have proven experience working with fundamental, market, analytics, and/or alternative data, come and join ADIA and its formidable quant team at https://lnkd.in/eYtHTVuq #quantitativeresearch #datascience
Is volatility "rough"? A series of papers in the last few years have suggested that volatility is best modeled using fractional processes with Hurst exponent H < 0.5, so with paths 'rougher' than Brownian motion. Prior to that, many in the 1990s and early 2000s had advocated modeling volatility with fractional processes exhibiting long-range dependence, so with Hurst exponent H > 0.5. A slightly confusing situation, to say the least. We revisit these claims by applying a nonparametric, model-free method for estimating the roughness of a signal. Using detailed simulation experiments in rough and non-rough stochastic volatility models, as well as high-frequency market data (in fact, the same data sets used in 'rough volatility' studies), we find that the origin of the apparent 'roughness' observed in realized volatility time series lies in the estimation error rather than the volatility process itself. A subtle point is that volatility is not a directly observed quantity and needs to be estimated, and the estimation error turns out to be anything but nicely behaved. #volatility #quantitativefinance
Financial Risk and Engineering, NYU School of Engineering | Adjunct Professor
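A bare-bones version of the kind of scaling-based roughness estimate discussed above: regress the log of the mean absolute increment on the log of the lag; the slope estimates the Hurst exponent H. (The nonparametric method in the post is more refined; this sketch just illustrates the idea on a simulated Brownian path, where H should be near 0.5.)

```python
import math, random
random.seed(42)

def hurst_estimate(x, lags=(1, 2, 4, 8, 16)):
    """Slope of log E|X_{t+k} - X_t| versus log k, a crude estimate of H."""
    pts = []
    for k in lags:
        m = sum(abs(x[i + k] - x[i]) for i in range(len(x) - k)) / (len(x) - k)
        pts.append((math.log(k), math.log(m)))
    n = len(pts)
    mx = sum(a for a, _ in pts) / n
    my = sum(b for _, b in pts) / n
    return (sum((a - mx) * (b - my) for a, b in pts)
            / sum((a - mx) ** 2 for a, _ in pts))

# Sanity check on a simulated Brownian path (H = 0.5 by construction)
bm = [0.0]
for _ in range(50000):
    bm.append(bm[-1] + random.gauss(0.0, 1.0))
H = hurst_estimate(bm)
```

The subtlety flagged in the post is that applying such an estimator to a *noisy proxy* of volatility (realized volatility) can bias H downward, mimicking roughness.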
I might have long forgotten what a time-changed Brownian motion is, but I still remember the image of an energetic scholar riding his bike on campus; and I still remember his last words to a graduating class of students who would inevitably enter the capitalist world: that we should not forget, we owe more to society than to the company we work for. This, I will not forget. Goodbye.
Micro-prices as a better estimator of price dynamics in a limit order book, within an order book imbalance framework #lob #obi
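A minimal sketch of the simplest imbalance-weighted micro-price, which leans the mid-price toward the side with less resting depth (the micro-price of the literature refines this with a Markov-chain adjustment; all numbers here are illustrative):

```python
# Imbalance-weighted mid: with heavy bid-side depth, the next price move is
# more likely upward, so the estimate sits nearer the ask.
def weighted_mid(bid, ask, bid_size, ask_size):
    imbalance = bid_size / (bid_size + ask_size)
    return imbalance * ask + (1.0 - imbalance) * bid

# Heavy bid side pushes the estimate toward the ask
p = weighted_mid(bid=99.0, ask=101.0, bid_size=900.0, ask_size=100.0)
```

With balanced depth the formula reduces to the plain mid-price, which is why the weighted mid is a strictly more informative baseline.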
One of my favorite papers, of the few that changed my views, is Amromin and Sharpe 2005. They show that households hold exactly the -opposite- view on expected returns and recessions than the one I was taught in my Ph.D. This paper is also the saddest example of the Matthew effect I know of. https://t.co/at3CQ3YCWH
"Sentiment Analysis of Economic Text: We use [a] dictionary to construct a measure of economic pessimism...It captures the business cycle and correlates with economic and financial uncertainty." https://t.co/AV6Nhe48O5 https://t.co/UGodTEt5dI
"While investors demand a premium to hold stocks with high illiquidity...they underreact to stock-level liquidity shocks... A long-short [idiosyncratic] liquidity shocks strategy earns significantly high returns during abnormal market states." https://t.co/SCslAvJyN5 https://t.co/EHwNgPIlp1
1/2 Shakespeare on diversification (Merchant of Venice): “My ventures are not in one bottom trusted, nor to one place; nor is my whole estate upon the fortune of this present year. Therefore my merchandise makes me not sad”
"We...review recent advancements in research on predictive models of [equity] earnings and returns...[including] statistical, econometric, and machine learning advancements." https://t.co/TBljeOfBjE https://t.co/riavSpZixA
The paper applies signature transformations to model the underlying shape of the input equity returns; further assuming the underlying shape remains the same, it predicts future values based on that shape.
"A Machine Learning Framework for Asset Pricing": "Building on [mathematical] representations of asset prices…we develop a solution strategy using neural networks and further machine learning techniques." https://t.co/PRi9bVza7k https://t.co/1JAajXFkm0
2/2 Though funnily enough there’s also a lesson about the dangers of apparent diversification. All of Antonio’s supposedly uncorrelated ventures failed at the same time (too much exposure to the shipwreck factor?) and he almost got his heart cut out as a result!