Happy New Year 2023! Yesterday's post on using ChatGPT to write a primer on #quant #trading using #machinelearning got an unexpected reception: 250,000 views, with lots of engagement as well! To continue on the same theme, here are some nerdy rap lines on #quant #trading by #ChatGPT:

Verse 1:
I'm all about the numbers, trading with precision
Modeling market conditions, with mathematical precision
I leave the emotions at the door
My strategy is based on algorithms and more

Chorus:
I'm a quant, trading with my mind
Data-driven, leaving the emotions behind
I let the numbers do the talking
My portfolio always on the rise, never stalling

Verse 2:
I analyze market trends and patterns
Using advanced techniques, my returns are never flattenin'
I'm constantly learning, always improving
My portfolio growing, my strategies always moving

Chorus:
I'm a quant, trading with my mind
Data-driven, leaving the emotions behind
I let the numbers do the talking
My portfolio always on the rise, never stalling

Bridge:
Some may call it cold and calculated
But to me, it's just how I navigated
The markets, with a data-driven approach
My success speaks for itself, beyond reproach

Chorus:
I'm a quant, trading with my mind
Data-driven, leaving the emotions behind
I let the numbers do the talking
My portfolio always on the rise, never stalling

Mikołaj Bińkowski How is your text-to-speech #AI progressing? Can it turn that into music?
The #NFT for the MIT Sloan School of Management Quant & AI Conference, created by the awesome Samson Qian! One last huge thanks to the many brilliant speakers who made it such a special experience!! Igor Halperin Matthew Dixon Gary Marcus Ruchir Puri Carson Boneck, CFA Tony Berkman Raphael Douady Gordon Ritter Alfred Spector Jim Kyung-Soo Liew, Ph.D. Sondra Campanelli Lisa Huang Gilbert Haddad Joseph Simonian, Ph.D. Saeed Amen Christina Qi Yevgeniy Vahlis Philip Ndikum Carol Alexander Dhagash Mehta, Ph.D. Nikita Fadeev Gregory Pelts Bart Baesens Jeff Adams Swagato Acharjee George Mylnikov Larry Tabb Mika Kastenholz Mike Agne Kathryn Zhao Arik Ben Dor Peng Cheng Daniel Newman Han Cui Gene Ekster Divya Narendra Joe Marenda
How much does ChatGPT know about quant trading? Let's figure it out! All of the attached book content was generated by ChatGPT, from the title to the chapters, sections, and sub-sections, and the illustrative Python/ML/Quant code as well! Even this abstract:

"Machine learning has revolutionized the field of quantitative trading, enabling traders to develop and implement sophisticated trading strategies that leverage large amounts of data and advanced modeling techniques. In this book, we provide a comprehensive overview of machine learning for quantitative trading, covering the fundamental concepts, techniques, and applications of machine learning in the financial industry.

We start by introducing the key concepts and challenges of machine learning for quantitative trading, including feature engineering, model selection, and backtesting. We then delve into the various machine learning approaches that are commonly used in quantitative trading, including supervised learning, unsupervised learning, and reinforcement learning. We also discuss the challenges and best practices of implementing machine learning models in the live market, including the role of data quality, the importance of risk management, and the need for ongoing model monitoring and validation.

Throughout the book, we provide numerous examples and case studies to illustrate the concepts and techniques discussed, and we also include practical tips and resources to help traders and practitioners get started with machine learning for quantitative trading. This book is an essential resource for anyone looking to gain a deeper understanding of how machine learning is transforming the world of finance."

#quant #trading #chatgpt #book #ai #ml #machinelearning #learning #finance #data
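The "backtesting" step the abstract mentions can be made concrete in a few lines. Below is a minimal, hypothetical sketch (my own, not taken from the generated book): a moving-average signal backtested on synthetic prices, with the signal evaluated one bar before the trade to avoid lookahead bias. All names and parameters here are illustrative assumptions.

```python
# Toy backtest of a moving-average signal on synthetic prices.
import random

def moving_average(xs, n):
    # Trailing n-bar average, one value per bar from bar n-1 onward.
    return [sum(xs[i - n + 1:i + 1]) / n for i in range(n - 1, len(xs))]

def backtest(prices, n=5):
    ma = moving_average(prices, n)
    pnl = 0.0
    # The signal at bar i uses data only up to bar i; the position
    # earns the price change over the *next* bar (no lookahead).
    for i in range(n - 1, len(prices) - 1):
        signal = 1 if prices[i] > ma[i - (n - 1)] else -1
        pnl += signal * (prices[i + 1] - prices[i])
    return pnl

random.seed(0)
prices = [100.0]
for _ in range(250):
    prices.append(prices[-1] + random.gauss(0, 1))
print(round(backtest(prices), 2))
```

On a pure random walk like this, the expected PnL is zero; the point is only the mechanics of lagging the signal, which is exactly the kind of pitfall a feature-engineering/backtesting chapter warns about.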
Really excited to share this work with Bruno Dupire on functional expansions. Bonne lecture! Abstract: Path dependence is omnipresent in social science, engineering, and finance. It reflects the influence of the past on the future, often expressed through functionals. However, non-Markovian problems are often infinite-dimensional, thus challenging from a conceptual and computational perspective. In this work, we shed light on expansions of functionals. First, we treat static expansions made around paths of fixed length and propose a generalization of the Wiener series: the intrinsic value expansion (IVE). In the dynamic case, we revisit the functional Taylor expansion (FTE). The latter connects the functional Itô calculus with the signature to quantify the effect in a functional when a "perturbation" path is concatenated with the source path. In particular, the FTE elegantly separates the functional from future trajectories. The notions of real analyticity and radius of convergence are also extended to the path space. We discuss other dynamic expansions arising from Hilbert projections and the Wiener chaos, and finally show financial applications of the FTE to the pricing and hedging of exotic contingent claims. #quantitativefinance #mathematics
As an #NLP and #AI enthusiast, I am always eager to learn about the latest developments in large language models (#LLM) like #chatgpt. I already interviewed #GPT3 (#davinci-002) about #climatechange last September. Now #openaccess at #Finance Research Letters: https://lnkd.in/eMGC-aXQ But we must remember what exactly these models do and how they work. Sometimes, it reminds me of my time as a student before the big exams, when we had to memorize tons of material (admittedly) without having the time to understand everything. I guess it's kind of like that with LLMs. Also, certain classification tasks can't (yet) be solved as well with LLMs as with, e.g., a fine-tuned BERT model. So I'm excited to see where LLMs will take us in the world of NLP, and I look forward to exploring their potential even further in the future. Happy New Year! #research #future #work #language #climatechange University of Zurich - Department of Banking and Finance SFI Swiss Finance Institute
just making my way through Housing is the Business Cycle by Edward Leamer, and thanks for the recommendation, Francois Trahan. I will reproduce a few results from the paper over the weekend. Attached is a quick chart showing residential investment as a percent of GDP, with recessions shaded pink. The time series goes back to 1950 and tends to peak in advance of recessions and trough at the end of recessions. If the current trend continues downward, it's further confirmation of a recession in 2023. This chart wasn't on my radar until now. Highly recommend the Leamer paper!
We are doing #AI the wrong way. It is a dead-end road. Terminators are not coming anytime soon. Having learned about the basics of the human brain (#neuroscience) and its mechanisms, I strongly believe that what we know about Artificial Intelligence is all wrong. I also believe that digital computing is not the way; the way forward is probably an unknown (but not digital) technology. Clearly, we are far behind the technology that would be required (probably by thousands of years). Computing AI and Deep Learning could be way faster than a human being's brain... but by no means smarter. This is not a recommendation, it is just my humble opinion. #technology #artificialintelligence #deeplearning #computing #digital Tesla
My paper "Portfolio Optimization Based on Neural Networks Sensitivities from Assets Dynamics Respect Common Drivers" has been accepted for publication in the Machine Learning with Applications (MLWA) Journal, almost a year after the first version, called Hierarchical Sensitivity Parity, was posted on arXiv. The paper will be open access soon, together with the code+datasets capsule on Code Ocean. We developed a new method for portfolio optimization that allows us to incorporate asset and portfolio dynamics, in terms of causality via sensitivity information, into the search for new sources of diversification. Portfolio constituents' causal dynamics are approximated by neural networks, and sensitivities are obtained with AAD. Optimal portfolio drivers are shown to be the most commonly selected in terms of causality and persistence for all constituents, following Reichenbach's Common Cause Principle, the only case in which correlation could be used to approximate causality. The causal geometric space formed by the sensitivities of portfolio constituents with respect to their most common causal drivers allows a conformal map that retains most of the idiosyncratic diversification from the traditional mean-variance setup while adding systematic diversification. We can replace statistical factors with publicly listed common drivers that form this causal space, reducing computational cost and expense. We also add a directionality component to the search for diversification, not seen before, by means of the causal sensitivity dynamics' metric space in which we minimize risks. We show outperformance with respect to other methods out-of-sample, and the published version will soon include an interactive code capsule to run multiple experiments across different datasets, market conditions, and methods to learn more about our method and its performance. Special thanks to #miraltabank
Vincent Zoonekynd and I released a new paper exploring the use of machine learning models in an equity factor set-up, one of the most important problems in finance. You can download the paper here:
https://lnkd.in/d8KmMETG
AIFI - Artificial Intelligence Finance Institute
Not proven yet, but Oxford-Man Institute of Quantitative Finance, University of Oxford #quant teams under the leadership of Prof. Álvaro Cartea use #gametheory to investigate if and how #algos work together (not by design, but by accident) to inflate prices. The algorithms consistently learned to charge elevated prices without communicating with one another. Critically, they learned to push prices up, but not so far as to crash their own market. For algorithmic collusion to take hold, experts say, the algos must learn to punish each other for breaking the collaboration. Losers: pension funds, insurers, hedge funds, and wealth funds paying higher trading costs or misallocating portfolios/capital. Winners: market-makers. Thank you, Nora Szabo, for flagging this fascinating article (by Faye Kilburn, Risk.net) to me!
hello #rstats and #macroeconomics friends, let's close out 2022 with some daily updates. today, inflation and a possibly unpopular opinion about the magical 2% inflation number! Last week we got the numbers on Personal Consumption Expenditures (PCE), which underpins the preferred inflation measure of the Federal Reserve. In one of Chair Powell's recent speeches (link in comments), he spoke extensively about core PCE, goods versus services, and housing. Our first chart shows the historical trend in year-on-year percent change in PCE and core PCE. The accompanying table shows CPI, PCE and core CPI/PCE year-on-year changes since August of 2021. The chart shows us that prior to 2021, core PCE year-on-year change hadn't exceeded 3% since 1992. We've had 30 years of price stability by this measure. The most recent reading on core PCE was 4.68% in November, while the headline-grabbing (but not part of the Fed's decision function) CPI was at 7.12% for November. That 4.68% is the lowest core PCE reading since October of 2021. Things are trending how the Fed wants, and the labor market is still very tight (we'll do a labor market deep dive later this week). Let's dig into PCE components a bit, just as Powell does in his speech. The next table shows PCE components and YoY changes. Durable goods came in at 2.72% for November. It's services that remain stubbornly high at 5.25%. We can break services down into several categories, with housing being the largest. The next table shows the quarterly change in PCE service components, with Q3 housing at 8.41%. As Lance Lambert has pointed out, housing costs are forecasted to fall and continue falling through 2023. In short, core PCE indicates that the Fed is doing its job (don't worry, we'll cover the Fed more tomorrow) and bringing inflation under control. Not much controversy there. My potentially unpopular opinion is that the Fed and real
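The year-on-year figures quoted above are all the same simple calculation: the latest index level divided by the level twelve months earlier, minus one. A quick sketch (the index levels below are made up for illustration, not actual PCE data):

```python
# Year-on-year percent change for a monthly series of index levels.
def yoy_pct_change(levels):
    """One YoY reading per month, starting from month 13."""
    return [round((levels[i] / levels[i - 12] - 1) * 100, 2)
            for i in range(12, len(levels))]

# Hypothetical price index: 100 a year ago, 104.68 today -> 4.68% YoY,
# matching the shape of the core PCE reading discussed above.
index = [100.0] * 12 + [104.68]
print(yoy_pct_change(index))  # → [4.68]
```

The same helper works for CPI, PCE, or any of the component series in the tables; only the input levels change.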
From the last NeurIPS conference: Why do 𝘁𝗿𝗲𝗲-𝗯𝗮𝘀𝗲𝗱 𝗺𝗼𝗱𝗲𝗹𝘀 still 𝗼𝘂𝘁𝗽𝗲𝗿𝗳𝗼𝗿𝗺 deep learning on typical tabular data? First interesting point: 𝘄𝗵𝘆, not if. Practitioners know that tree-based models are still the way to go for tabular datasets. We don't know why, and we don't know for how much longer. The authors create a benchmark of 𝟰𝟱 𝗱𝗮𝘁𝗮𝘀𝗲𝘁𝘀, involving both regression and classification tasks. They use tree-based models and deep learning models, including ResNet and basic transformers, adopting extensive 𝗵𝘆𝗽𝗲𝗿𝗽𝗮𝗿𝗮𝗺𝗲𝘁𝗲𝗿 𝘁𝘂𝗻𝗶𝗻𝗴. They found that tree-based models have the upper hand, by a 𝗹𝗮𝗿𝗴𝗲 𝗺𝗮𝗿𝗴𝗶𝗻. Why? Two main reasons. Deep learning models tend to create prediction functions which are 𝘁𝗼𝗼 𝘀𝗺𝗼𝗼𝘁𝗵, while the real relationship between the features and the target in tabular datasets is often 𝗶𝗿𝗿𝗲𝗴𝘂𝗹𝗮𝗿. And deep learning models are very sensitive to 𝘂𝗻𝗶𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝘃𝗲 𝗳𝗲𝗮𝘁𝘂𝗿𝗲𝘀, while tree-based models are more robust. Interesting as it is, the study leaves many questions unanswered. How can we improve deep learning models to make them competitive? How does the size of the dataset influence the effectiveness of the models? Can't wait for follow-up research. #machinelearning #deeplearning #research
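The "smoothness" point can be illustrated in miniature (my own toy sketch, not the authors' benchmark): an irregular step-shaped target is captured exactly by a single tree split, but poorly by a maximally smooth predictor such as a least-squares line.

```python
# Depth-1 regression tree (a "stump") vs ordinary least squares on a step target.
def stump_mse(xs, ys):
    # Try every axis-aligned split; predict the mean on each side.
    best = float("inf")
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        sse = 0.0
        for side in (left, right):
            if side:
                m = sum(side) / len(side)
                sse += sum((y - m) ** 2 for y in side)
        best = min(best, sse / len(ys))
    return best

def linear_mse(xs, ys):
    # Ordinary least-squares line: a very smooth prediction function.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / n

xs = [i / 20 for i in range(21)]
ys = [0.0 if x < 0.5 else 1.0 for x in xs]  # irregular step target
print(stump_mse(xs, ys), linear_mse(xs, ys))  # stump fits exactly, line does not
```

The stump reaches zero error because one split matches the discontinuity; the smooth line cannot, no matter how it is tuned, which is the intuition behind the paper's first reason.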
Lie Groups Based Machine Learning
- Lie Group Forced Variational Integrator Networks for Learning and Control of Robot Systems: https://lnkd.in/e-r-ChbX
- Structure preserving deep learning: https://lnkd.in/gPghU6x
- Lie Group Cohomology and (Multi)Symplectic Integrators: New Geometric Tools for Lie Group Machine Learning Based on Souriau Geometric Statistical Mechanics: https://lnkd.in/dCPaWBU
More information at SEE GSI'23: https://gsi2023.org/
Christmas Reading List:
1. Finish the brilliant Advances in Financial Machine Learning by Marcos Lopez de Prado.
2. Microprediction: Building an Open AI Network by Peter Cotton.
3. Shantaram by Gregory David Roberts, a novel about a convict who escapes to India and finds enlightenment.
Merry Christmas and Happy New Year!
ChatGPT query: "Why do people share the output from ChatGPT on LinkedIn?"
Answer: It is possible that people may share the output from chatbots such as ChatGPT on LinkedIn for a variety of reasons. Some possible reasons could include:
- As a form of entertainment or to engage with others on the platform.
- To demonstrate the capabilities of the chatbot and showcase them to others.
- As a way to spark discussion or conversation about artificial intelligence and chatbots.
- To share examples of the chatbot's output as part of a larger conversation or discussion about AI or chatbots.
It is also possible that people may share the output from chatbots on LinkedIn for other reasons that are specific to their individual motivations and goals.
#ai #machinelearning #stupidity
Why spend time writing API docs if the API doesn’t follow them? When I talk to someone in support and they say, “Oh, ya. It hasn’t worked that way in over a year,” or “We deprecated that in 2021.”It takes such a small amount of effort to keep the documentation current, and the number of Fortune 500 companies and unicorn startups that don’t is hard to believe.It’s like they’re punishing their customers for subscribing to their services or trying to implement solutions using their products. I’m paying you money. Why don’t you like me?All I want for Christmas is better API docs. Please?#datascience #analytics
happy holidays to our #macroeconomics and #rstats friends! let's give ourselves some holiday cheer, data visualization style Sudheer Chava and I close out our interest rate section with a look at valuation, via the Shiller PE or CAPE, an acronym that stands for cyclically adjusted PE ratio. The CAPE is the price per share of a stock or index divided by the inflation-adjusted average of the past ten years' worth of earnings per share (EPS). The inflation adjustment is to correct for earnings increases that might simply be the product of inflation, and the ten-year averaging is an attempt to capture a complete business cycle (and avoid using data from a particularly good or bad time period). In short, we can view the CAPE as a measure of valuation. When it's high, valuations are high. The first chart below shows the history of the CAPE for the SP500 with shading for bear markets, and we can see that the all-time high was reached during the dot com bubble and the second highest CAPE was reached in 2021. To bring this back to returns, our last chart shows the CAPE versus the 5-year future returns on the SP500. Buying at peak CAPE ratios isn't great, so we guess the good news is that eventually a plummeting CAPE rewards investors who are ready to deploy. To tie this back to interest rates, we highly recommend a look at the recent essay "Sea Change" by Howard Marks (link in the comments), which walks through a brief history of interest rates and how their level affects, among other things, asset valuation. Here is one snippet: "[low interest rates] increase the fair value of assets...as interest rates fall, valuation parameters such as p/e ratios and enterprise values rise..." The eponymous Sea Change is occurring as we move from an era of low/declining rates that lasted from 1980-2021 to an era of increasing rates from 2022 - ???? (there's more to it than that obviously, go rea
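The CAPE definition above translates directly into code. A small sketch (the EPS and CPI numbers below are made up for illustration, not actual S&P 500 or Shiller data):

```python
# Cyclically adjusted PE: price divided by the 10-year average of
# inflation-adjusted (real) earnings per share.
def cape(price, eps_history, cpi_history):
    """eps_history and cpi_history are the last 10 annual values,
    oldest first; each EPS is restated to today's dollars via CPI."""
    assert len(eps_history) == len(cpi_history) == 10
    cpi_now = cpi_history[-1]
    real_eps = [e * cpi_now / c for e, c in zip(eps_history, cpi_history)]
    return price / (sum(real_eps) / len(real_eps))

# Hypothetical inputs: nominal EPS rising from 100 to 127 while CPI
# rises from 250 to 295, against an index level of 4000.
eps = [100 + 3 * i for i in range(10)]
cpi = [250 + 5 * i for i in range(10)]
print(round(cape(4000, eps, cpi), 1))
```

With flat real earnings the CAPE collapses to a plain PE, which is a handy sanity check: a price of 1000 against a constant real EPS of 100 gives exactly 10.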
A simple solution to the joint SPX/VIX calibration problem? Since the financial crisis of 2008, derivatives on volatility have become increasingly popular for hedging purposes and for directional trading, especially when combined with the underlying stock index. On the US market, the VIX index introduced by the CBOE became one of the most widely followed volatility indices and reflects the volatility of the S&P 500 (SPX). By the joint SPX-VIX calibration problem, we mean the calibration of a model across several maturities to European call/put options on SPX and VIX together with VIX futures. Such joint calibration turns out to be quite challenging, with a multitude of instruments to be calibrated across several maturities. Together with Camille Illand and Shaun Li, we propose a very simple solution based on a quintic Ornstein-Uhlenbeck volatility model. The dynamics of the stochastic volatility process are given by a polynomial function of degree five of a single Ornstein-Uhlenbeck process with fast mean reversion and large vol-of-vol. The model is able to achieve remarkable joint fits of the SPX-VIX smiles with 6 effective parameters and an input curve that allows it to match certain term structures. Even better, the model remains very simple and tractable for pricing and calibration: the VIX squared is again polynomial in the Ornstein-Uhlenbeck process, leading to efficient VIX derivative pricing by a simple integration against a Gaussian density; simulation of the volatility process is exact; and pricing SPX products can be done efficiently and accurately by standard Monte Carlo techniques with suitable antithetic and control variates. ⚽️ Although it is challenging (but possible) for another model to achieve similar fits, it would be very difficult to do so with a simpler continuous model than our quintic Ornstein-Uhlenbeck volatility model
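Two of the ingredients mentioned, exact simulation of the OU process and a degree-five polynomial of its state, are easy to sketch generically. This is illustrative code with placeholder parameters, not the paper's calibrated model. For dX_t = -kappa X_t dt + sigma dW_t, the transition density is Gaussian, so the path can be sampled with no discretisation error: X_{t+h} = X_t e^{-kappa h} + sigma sqrt((1 - e^{-2 kappa h}) / (2 kappa)) Z.

```python
# Exact simulation of an Ornstein-Uhlenbeck process, plus a quintic
# polynomial of the state as a stand-in for the volatility map.
import math, random

def simulate_ou(x0, kappa, sigma, h, n, rng):
    decay = math.exp(-kappa * h)
    scale = sigma * math.sqrt((1 - math.exp(-2 * kappa * h)) / (2 * kappa))
    path, x = [x0], x0
    for _ in range(n):
        x = x * decay + scale * rng.gauss(0, 1)  # exact Gaussian transition
        path.append(x)
    return path

def quintic_vol(x, coeffs):
    # Volatility as a degree-five polynomial of the OU state.
    return sum(c * x ** k for k, c in enumerate(coeffs))

rng = random.Random(42)
# Fast mean reversion (kappa=25) and large vol-of-vol, one year of daily steps.
path = simulate_ou(0.0, kappa=25.0, sigma=3.0, h=1 / 252, n=252, rng=rng)
vols = [quintic_vol(x, [0.2, 0.1, 0.05, 0.0, 0.0, 0.01]) for x in path]
print(len(vols))
```

Because the transition is sampled exactly, step size affects only resolution, not accuracy, which is what makes the Monte Carlo pricing step in the paper cheap.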
Employing a fully quantitative (systematic) approach to managing credit portfolios has gained significant traction in recent years. The interest comes from both quant equity shops looking to replicate their offering in credit and from discretionary credit investors who want to adopt a more systematic approach to managing their portfolios. The rise in electronic trading and portfolio trades in credit markets has facilitated the execution of systematic credit strategies based on quantitative signals - many of which are based on data from other markets such as Equities. The recent 2022 Special Issue of the Journal of Fixed Income celebrating its 30th Anniversary, includes an article that provides a detailed overview of many relevant findings and techniques that investors interested in the topic will find useful.#systematicinvesting #quantitativeresearch #quantitativecredit #JFI #electronictrading #journalfixedincome #creditportfolios
On the Future of #quantitativeinvesting: an interview (in German) with David Blitz (Robeco), Dr. Michael Heldmann, CFA (Allianz Global Investors), Andreas Gintschel (Perpetual Investors GmbH), Jan Anton van Zanten (Robeco), and myself: https://lnkd.in/eADZWFPP University of Zurich - Department of Banking and Finance SFI Swiss Finance Institute #finance #banking #quantitativefinance #quant #investing #ai #nlp
As 2022 comes to a close, I can't help but think about the #quant community's enormous loss in the passing of New York University's Peter Carr. Peter was incredibly kind to me for decades, and I couldn't have respected him more than I did.
My working paper "Autoencoder Market Models for Interest Rates" has been posted to SSRN at https://lnkd.in/d5E5KT8f. It develops highly efficient VAE training to historical curve shapes and proposes to use its latent variables as IR model state variables. VAE-based counterparts to multi-factor forward rate and short rate models in Q-measure and dynamic Nelson-Siegel and dual-measure models in P-measure are described.By aggressively eliminating invalid curve shapes from its latent space, VAE prevents them from appearing within the model without intricate constraints on the stochastic process used by the classical models for the same purpose. This makes VAE-based models more robust and simplifies their calibration.A curious byproduct of VAE training is a "world map of latent space" where neighbors have similar curve shapes, and distant lands have disparate curve shapes. Each arrow in the image below is a Q-measure path in that space, with color representing the currency.#aemm #compatibl #ml #machinelearning #quant #quantitativefinance #interestrates #vae
Unfortunately, even after their total World Cup failure, Oxford University is still not able to accept the simple fact that predicting the future from past data is in principle not possible in an open and evolving world. Even after the complete #WorldCup failure of their model, and of the updated version before the Round of Eight. It is not about good or bad models. These #statistical models have failed since World Cups began... take, e.g., a look at the "model of models" for 2018. It is simply not possible... there are no #crystal balls, not math-based ones, and not even when made in Oxford and claimed to be math- or AI-based. Tragic to find the #Maths Department of #Oxford #University in "Camp #Superstition"...
EDIT From the discussion below: 1) My original criticism was published weeks ago, i.a. here on LinkedIn. 2) The important point is indeed "Predicting the future from past data is impossible in an open and evolving world." Of course we can, e.g., calculate the Newtonian equations for not-too-complex systems and be certain what will happen, e.g., to the apple falling down... But the case of World Cup predictions is totally different. Many things change compared to the past data: players change, participating nations change, circumstances change, referees are different, the location changes, and this World Cup was even hosted in the winter season. 3) I've been a big Messi fan for years... and will always be, regardless of who wins today's final.
As the author of "How to Measure Anything," I am always on the lookout for new tools and techniques that can help organizations to make better decisions and allocate resources more effectively. Recently, I had the opportunity to learn about ChatGPT, an artificial intelligence language model developed by OpenAI, and I was impressed by its potential to improve decision-making in a wide range of contexts.One of the key challenges in decision-making is dealing with uncertainty and complexity. ChatGPT is particularly well-suited to addressing these challenges, as it is able to process and generate text based on patterns learned from a large dataset of human language. This enables it to provide insights and recommendations that would be difficult for humans to generate on their own.In my work, I have found that ChatGPT can be a valuable tool for generating scenarios, analyzing trade-offs, and developing contingency plans. It can also be used to help organizations identify key drivers of success and assess the potential risks and opportunities associated with different courses of action.Overall, I believe that ChatGPT has the potential to be a powerful tool for improving decision-making in a wide range of contexts. Its ability to process and generate text based on patterns learned from human language makes it a valuable resource for organizations looking to make better decisions in the face of uncertainty and complexity.Note: Everything above this sentence was written by ChatGPT. I told ChatGPT "Write an article about ChatGPT as if it were written by Doug Hubbard, the author of How to Measure Anything." It is not how I would have written it. I haven't used it in my work, as the response states. Compared to what I think I would write, this seems like hand waving, salesy, cheesy prose without much substance. But could you tell? Actually, I found it to be remarkably coher
Our book “Trades, Quotes & Prices” came out in print almost 5 years ago. Our aim was to discuss empirical facts of financial markets and introduce a wide range of models, from the micro-scale mechanics of individual order arrivals to the emergent, macro-scale issues of market stability. We rooted all our discussions in the empirical behaviour of real markets, with the hope of providing a new perspective on topics as diverse as optimal trading, price impact, the fragile nature of liquidity, and most importantly, why and how prices move and why people trade at all. The widespread availability of high-quality, high-frequency data has indeed revolutionised the study of financial markets. By describing not only asset prices, but also market participants' actions and interactions, this wealth of information offers a new window into the inner workings of the financial ecosystem. Just as the atomic hypothesis allowed Maxwell and Boltzmann, 150 years ago, to understand how the macroscopic world is described by thermodynamics, trades and quotes are the elementary units from which price dynamics emerges. I am actually particularly pleased with the way the ideas developed in our book perfectly dovetail with the recent 'Inelastic Market Hypothesis' of Xavier Gabaix and Ralf Koijen, which is in my view one of the most interesting and disruptive pieces of work in financial economics. Several of my past posts already touched upon this topic, but I cannot resist mentioning it again, not least because the IMH provides a clear and convincing interpretation for a host of interesting "anomalies" about which Efficient Market Theory is totally clueless. The IMH is the long-term epitome of what we called in our book the order-driven theory (ODT) of markets: prices move primarily because people trade, whatever the reason they are trading, and much less because of unexpected news that change the elu
Very happy (and proud!) that my long fight for a better and fairer format for the 2026 World Cup is finally paying off! I contacted FIFA several years ago to alert them about the problems posed by groups of 3, and thank God they now seem ready to move to a better format.My 2018 New York Times piece is here:https://lnkd.in/eyG42-bJThank you The Athletic for a great article and for citing my Journal of Sports Analytics article: https://lnkd.in/epzzGYtT#sportsanalytics #worldcup2026 #fifa #football #soccer #fairnessinsports
a lot of talk about the #yieldcurve these days, let's fire up the #rstats machine and dig in To set the stage, the yield curve is a snapshot of the interest rate on various Treasury instruments, so we can visually compare them. Sudheer Chava and I build a function in our book that lets us plug in a date and get back the yield curve as of that date. For example, `yield_curve_plot("2005-08-01")` produces the first chart below, the yield curve as of August 2005. Notice how it's upward sloping, which is considered normal: the rates paid on shorter term Treasuries are lower than those paid on longer term Treasuries. The next plot is from a call to `yield_curve_plot("2022-12-01")` - the yield curve as of now, December of 2022. This curve is inverted: one-year and two-year yields are higher than ten- and thirty-year yields. That is traditionally taken as a sign of a recession to come, under the following logic. When a recession occurs, GDP growth goes negative and the economy contracts. When the economy contracts, the Fed typically responds by lowering interest rates to stimulate lending, credit creation and economic growth. If we see 10-year rates below 2-year rates, the market is anticipating lower rates in the future, meaning the market is anticipating the Fed to lower rates to counterbalance an economic slowdown. In a previous post we got the same signal from the copper gold ratio (see link in comments) - that ten-year rates *should* be headed lower. The next chart shows us the historical trend of just one piece of the yield curve: the two-year ten-year differential. When that goes below zero, it's considered inverted. We can add on shading for recessionary periods and see how this tends to go negative ahead of recessions - and it's currently quite negative (no recession shading currently because the NBER hasn't declared us in recession yet - yet). Our final chart is a
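The two-year ten-year differential is a one-liner once you have the yields. A small sketch (the yields below are roughly illustrative of the two dates discussed, not exact Treasury quotes):

```python
# Flag dates where the 2s10s spread (10y minus 2y) is negative, i.e. inverted.
def inverted_dates(series):
    """series maps date -> (two_year_yield, ten_year_yield), in percent."""
    return [d for d, (y2, y10) in sorted(series.items()) if y10 - y2 < 0]

curve = {
    "2005-08-01": (4.04, 4.32),  # upward sloping: normal
    "2022-12-01": (4.38, 3.60),  # 10y below 2y: inverted
}
print(inverted_dates(curve))  # → ['2022-12-01']
```

Running this over a full daily history of 2y and 10y yields would reproduce the shaded-inversion chart described in the post.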
I am excited to announce that my liveProject series Math for Machine Learning has just been published: https://lnkd.in/eFZkBYi2 During the development of the liveProject series, it was a pleasure to work with the amazing team at Manning Publications Co., who made my life so much easier. It was a pleasure working with you, Troy Dreier, Erika Hankins-Prantil, Isidora Isakov, Mihaela Batinic, and Valeria Szabo. Many thanks for your assistance and support! I would also like to thank Cass Petrus and the other testers for their input, which helped shape the liveProject series into what it is today. #math #machinelearning #deeplearning #nlp #datascience
It was a pleasure speaking at the MIT Sloan School of Management along with the brilliant minds Tony Berkman, Ernest Chan, Lisa Huang and Saeed Amen on the panel. A big thanks to Alexander Fleiss, Samson Qian, Rebellion Research and the MIT Quant Finance Club for organizing a wonderful event. I had the pleasure of learning from the best: Alfred Spector Gordon Ritter Igor Halperin Matthew Dixon Carson Boneck, CFA Larry Tabb Jim Kyung-Soo Liew, Ph.D. Arun Verma William J. Kelly, CAIA Gary Marcus Ranjan Bhaduri Divya Narendra Gilbert Haddad. To all the quant enthusiasts out there, I highly recommend attending this event next year; you'll meet some wonderful folks, learn about the latest research areas, and gain a new perspective on applications of AI in the investment industry. #MITQuantandAIConference
A closed-form solution for optimal mean-reverting trading strategies
“When prices reflect all available information, they oscillate around an equilibrium level. This oscillation is the result of the temporary market impact caused by waves of buyers and sellers. The resulting price behavior can be approximated through an O-U process. The parameters of the process might be estimated using historical data.”
“Market makers provide liquidity in an attempt to monetize this oscillation. They enter a long position when a security is priced below its estimated equilibrium level, and they enter a short position when a security is priced above its estimated equilibrium level. They hold that position until one of three outcomes occurs: (A) they achieve a targeted profit; (B) they experience a maximum tolerated loss; (C) the position is held beyond a maximum tolerated horizon.”
By Prof. Alexander Lipton & Marcos Lopez de Prado
Link: https://lnkd.in/dzMmDx-5
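The step quoted above, "the parameters of the process might be estimated using historical data," is commonly done with an AR(1) regression. This is a generic sketch of that standard technique, not the closed-form solution from the Lipton / Lopez de Prado paper: fit X_{t+1} = a + b X_t + eps by least squares, then map b to the mean-reversion speed kappa = -ln(b)/h and the equilibrium level mu = a/(1-b).

```python
# Estimate OU parameters (speed kappa, equilibrium mu) from a sampled path.
import math

def estimate_ou(xs, h):
    """Least-squares fit of X_{t+1} = a + b*X_t, mapped to OU parameters."""
    x, y = xs[:-1], xs[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    kappa = -math.log(b) / h   # continuous-time mean-reversion speed
    mu = a / (1 - b)           # long-run equilibrium level
    return kappa, mu

# Noise-free check: a path decaying geometrically toward 100 with b=0.9
# should recover kappa = -ln(0.9) and mu = 100.
xs = [100.0 + 10.0 * (0.9 ** i) for i in range(50)]
kappa, mu = estimate_ou(xs, h=1.0)
print(round(kappa, 4), round(mu, 2))  # → 0.1054 100.0
```

On real prices the same fit gives noisy estimates of the equilibrium level around which the long/short rules quoted above are set; the paper's contribution is the closed-form treatment of the exit rules (A)-(C), which this sketch does not attempt.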
Such a wonderful day. I would use the word enlightening. To be in the company of such brilliance all day was truly a life experience! Huge thank you to the superwoman Christina Qi, such an impressive & brilliant mind, and to my Co-Chair & brother Samson Qian!! Huge thank you for the help from one of my best friends, Han Cui. Also a huge thank you to the other speakers today, who were just so wonderful and honestly so very special. I felt blessed! Lisa Huang Sahana Athreya Gary Marcus Tony Berkman Carson Boneck, CFA Daniel Newman Gilbert Haddad Igor Halperin Gordon Ritter Alfred Spector Raphael Douady Matthew Dixon Ruchir Puri William J. Kelly, CAIA Kathryn Zhao Sondra Campanelli Jim Kyung-Soo Liew, Ph.D. Divya Narendra Larry Tabb Mika Kastenholz Sudip Gupta Zachary Squire Ranjan Bhaduri James Koutoulas Mike Agne Gene Ekster sebastien bossu Riti Samanta Kathryn Kaminski, PhD, CAIA Yevgeniy Vahlis Qaisar Hasan Jeff Adams David Murdock, CFA Jimei Shen Paul Krueger Hamza Khan Qian Pan
“If you want to impress someone with your ML model, just choose a stupid benchmark.” What an amazing and insightful conference! Learnt about the future of AI in investing, the use of alternative data, augmenting quantitative strategies, algorithms, data & quant engineering, the importance of fitting a neural network within a financial framework such as no-arbitrage, the future of Reinforcement Learning in finance, and much more. Thank you Alexander Fleiss for the opportunity to listen to and interact with the smartest minds in the industry! #ai #ml #finance #investing #quant #conference #mit
Thanks to everyone who came to hear me speak about how Turnleaf Analytics forecasts inflation using machine learning, on an AI panel at the Massachusetts Institute of Technology Sloan Quant Club Conference Rebellion Research!
Amid the rising cost of living and higher taxes, top hedge fund talent is moving away from financial centres like London and New York. "Unless drastic tax changes happen in the UK and EU, talent will be driven towards the Middle East and North Africa," says Austen Smart. "Every big player in the hedge fund industry will have an office in Dubai by 2025," Seán Sweeney told Financial News.
Data moves communities forward! As we advance the underlying technology, we can bring more credit invisibles into the system! Looking back on 2022, the TechBridge, Inc panel on financial inclusion was one of the most meaningful dialogues I had the honor of joining. A shout-out to my fellow panelists who are driving change across #atlanta and beyond: Reginald M. of Google, Jessica Washington of the Federal Reserve Bank of Atlanta, Adam Roseman of Steady and Sudheer Chava of Georgia Institute of Technology. This is such an important issue. Check out the recap, and share your perspectives. #financialinclusion #data #cloudnative
Research paper on Automated Market Makers by Bergault, Bertucci, Bouba and Guéant. The authors analyze the performance of liquidity providers (#LPs) providing liquidity to different types of Automated Market Makers (#AMMs). This analysis is carried out from a mean/standard-deviation viewpoint à la Markowitz, based on the PnL of LPs compared to that of agents holding coins outside of AMMs. They show that LPs tend to perform poorly in a wide variety of CFMMs under realistic market conditions. They then explore an alternative AMM design in which an #oracle feeds the current market exchange rate to the AMM, which then quotes a bid/ask spread. This allows them to define an efficient frontier for the performance of LPs in an idealized world with perfect information, and to show that the smart use of oracles greatly improves LPs’ risk/return profile, even in the case of a lagged oracle.
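The LP underperformance versus simply holding can be illustrated with the standard constant-product (x·y = k) case. The sketch below is a minimal fee-free illustration of that baseline effect (often called divergence or impermanent loss), not a reproduction of the paper's mean/standard-deviation analysis:

```python
import math

def lp_vs_hold_cpmm(p0, p1, v0=1.0):
    """Value of a constant-product (x*y=k) LP position vs. simply holding
    the same initial 50/50 deposit, ignoring fees, when the price of the
    risky coin moves from p0 to p1. Returns (lp_value, hold_value)."""
    r = p1 / p0
    lp = v0 * math.sqrt(r)      # pool value scales with sqrt(price ratio)
    hold = v0 * (1 + r) / 2     # half in the numeraire, half in the coin
    return lp, hold

def divergence_loss(p0, p1):
    """Relative underperformance of the LP vs. holding; <= 0 for any move."""
    r = p1 / p0
    return 2 * math.sqrt(r) / (1 + r) - 1

print(divergence_loss(100, 150))  # about -2%: LP trails simply holding
```

Fees partially compensate for this drag, but the paper's point is that under realistic conditions the compensation is often insufficient, which is what motivates the oracle-fed design.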
The head of the QUANT centre, Prof. Guillaume Coqueret, hosted a seminar by Dr. Huihui Chi at emlyon business school on Dec 8th. The topic of her presentation was “#Self-#cannibalization #effect in #Sharing #Economy”. EMLYON professors, Ph.D. students, and researchers attended the seminar and exchanged with the speaker in both on-site and online formats. Participants found the conclusions about different competition scenarios in value-depreciation and value-appreciation markets interesting. Huihui is a post-doc researcher at emlyon and has published papers in journals such as European Journal of Operational Research and Decision Support Systems. Her research interests are IT-enabled operations and sustainable development.
Topic: Self-cannibalization of Durable Goods with Sharing: A Provider Perspective
Presenter: Huihui Chi
Host: Dr Guillaume Coqueret
Time: 13h15, 8 December
Place: Amphi 1076 (Building B, first floor), EMLYON, Écully
(Edited Yeming (Yale) Gong) Frederic Delmar, Dr Tessa Melkonian, Nicolas Péjout, PhD, Bruno Tallent, Aurelien Baillon, Anne-Laure SCHWARTZ, Elsa Opitz, Emilie Guichard, Renaud HENRION, Gaël CHAUVEL DER HAGOBIAN, Yacine Rekik, Bruno Versaevel, Imène BRIGUI, PhD., Benoit Loeillet, Eric André, Myriam Lyagoubi, Ivan Guitart, Clement Levallois, Monica Grosso, rousset isabelle, Ajay Kumar, Dr. Benoît Chevalier-Roignant, Moritz Jörling, Jean Louis MAGAKIAN, Ruiqi W., Vincent Opoix, Romain Bayout, Marylou Boissely, Géraldine Hottegindre, Julie Guillot, Valérie Jobard, Céline FALLOT BUCLET, Delphine Billouard-Fuentes, Franck JAOTOMBO, Gaetan Bakalli, Imen Ben Mohamed, Rodreck David
McKinsey’s survey found that the adoption of AI has dropped from a high of 58% in 2019 to this year’s 50%, and provides little evidence that “AI leaders are pulling ahead,” despite the title saying so. It surveyed 1,492 participants representing the ‘full range of regions, industries, company sizes, functional specialties, and tenures’. Of those respondents, 744 said their organizations had adopted AI in at least one function and were asked about their organizations’ use of AI. Since the first survey in 2017, McKinsey has seen AI adoption increase from 20% of respondents to the 58% high in 2019 before dropping to about 50%. The functions using #AI the most have changed from manufacturing and risk in 2018 to others in 2022: marketing and sales, product and service development, and strategy and corporate finance, and respondents report the highest cost benefits from AI in supply chain management. The number of organizations devoting more than 5% of their digital budgets to AI has risen from 40% five years ago to more than 50% in 2022. 5% is not a big share of a digital budget, and some consultants tell me that this figure itself suggests AI is not diffusing rapidly. The survey’s evidence of “AI leaders pulling ahead” is weak. The article defines these ‘AI high performers’ as organizations seeing the biggest bottom-line impact from AI adoption, that is, 20% or more of EBIT from AI use. The proportion of respondents falling into that group has remained steady at approximately 8%. Other recent surveys also show disappointing results. Accenture’s survey reports only 12% of AI adopters are currently using AI "to outpace their competitors," while 63% are still in the experimentation phase, "barely scratching the surface of AI’s potential." Even more troubling, automotive was listed as the second most successful industry after pharmaceuticals, probably because the industry
An Introduction to the Sequencer World. Great article by Aiden Pestell. A sequencer is a single-threaded UDP multicast middleware messaging bus that stamps every message with a sequence number and multicasts the sequenced stream to all nodes. The architecture is fully deterministic: every node sees all messages in the same order, and all events can be replayed from a single stream, reproducing the exact same state through finite-state machines. It is a simple design, yet even though it is over 20 years old, it is still the fastest architecture that firms go for today. #softwareengineer #softwarearchitecture #infrastructureinvestment #investing #electronictrading #computerscience #cpp #java #javaprogramming #lowlatency
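The core idea (a single stream of sequence-stamped messages driving deterministic state machines) can be sketched in a few lines. This is a toy illustration only: the `Sequencer` and `OrderBookNode` classes and the message schema are invented for the example, and the UDP multicast is replaced by an in-memory log.

```python
from dataclasses import dataclass, field

@dataclass
class Sequencer:
    """Toy sequencer: stamps every inbound message with a monotonically
    increasing sequence number. Downstream nodes apply messages in this
    order, so replaying the stream reproduces the exact same state."""
    seq: int = 0
    log: list = field(default_factory=list)

    def publish(self, msg):
        self.seq += 1
        stamped = (self.seq, msg)
        self.log.append(stamped)  # in a real system: UDP multicast to all nodes
        return stamped

class OrderBookNode:
    """Deterministic finite-state machine driven purely by the sequenced stream."""
    def __init__(self):
        self.open_orders = {}

    def apply(self, stamped):
        _seq, msg = stamped
        if msg["type"] == "new":
            self.open_orders[msg["id"]] = msg["qty"]
        elif msg["type"] == "cancel":
            self.open_orders.pop(msg["id"], None)

sequencer = Sequencer()
stream = [sequencer.publish(m) for m in (
    {"type": "new", "id": 1, "qty": 100},
    {"type": "new", "id": 2, "qty": 50},
    {"type": "cancel", "id": 1},
)]

# Two independent nodes replaying the same stream end in identical state.
a, b = OrderBookNode(), OrderBookNode()
for s in stream:
    a.apply(s)
    b.apply(s)
print(a.open_orders == b.open_orders)  # True
```

Because every node's state is a pure function of the sequenced stream, recovery and debugging reduce to replaying the log, which is a large part of why the design has endured.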