It's hard to stay up to date with the daily deluge of quant finance research. Top institutional firms filter before they read. With quants being paid upwards of (you know what), their time can't be wasted trawling Twitter and LinkedIn. A reasonably good filtering mechanism is conferences and internal seminars.
My first experience with this was in 2019, when Cubist Systematic invited me to a seminar after I published my first paper, on predicting earnings surprises, on SSRN.
"We run a regular semi-monthly seminar series at which professors present recent research to our team of Portfolio Managers and analysts. The format is a 1-1.5 hour interactive session followed by dinner... we are happy to cover travel costs if you would like to make a special trip to NYC and we will of course accommodate your schedule."
It doesn't always involve external speakers; almost every firm I have worked with has some form of internal seminar, sometimes weekly, sometimes monthly, where an employee has to present a new topic of interest.
However, this is not the only format anymore. A lot of these discussions have morphed into podcasts like Jane Street’s Signals and Threads or Putnam’s Active Insights. Podcasts recoup their costs more effectively because of their marketing appeal.
More recently, quant funds have become top sponsors at prestigious machine learning conferences like NeurIPS. The list includes firms like DE Shaw, PDT Partners, HRT, Two Sigma, Jane Street, and others. Again, this is not just an opportunity for employees to obtain complimentary tickets to hear state-of-the-art research; it is also a great recruitment drive. It signals, “we are great, join us”.
The line between the search for alpha and recruitment intent is getting blurred. For example, it is commonly believed among Kaggle participants in data science challenges set up by Winton, Two Sigma, Jane Street, and G-Research that the firms are there to mine the collective crowd-sourced alpha. In fact, no: having been part of developing such a challenge, I can say the purpose is almost purely recruitment.
Some firms have taken a further step: they have concluded that, in addition to giving money to other conferences, they might as well set up their own. There are countless examples; more recently, see the G-Research Distinguished Speaker Series or The Discovery: Two Sigma PhD Symposium.
If I had to rate each of these on the recruitment-to-alpha continuum, the order would probably be: (1) data science competitions, (2) podcasts, (3) conference sponsorship and attendance, (4) conference development, and (5) internal seminars.
Of course, the list doesn’t stop there; the most interesting part of the continuum starts at 6. There is a small industry dedicated solely to the capture, curation, and internal dissemination of public research.
It's an open secret in the quantitative finance community: the volume of research and discussion generated daily is both a treasure trove and a potential time sink.
When powerhouse names like Acadian Asset Management expound on harnessing the disposition effect for a momentum strategy, or AQR delves into the intricate dance of deep learning for identifying optimal lags, it isn't just their direct audience that perks up. Portfolio managers at other funds are equally, if not more, invested in these insights.
So, the question stands: How do they achieve this level of efficient information assimilation?
Here's how I approach it for ML-Quant:
Preemptive Filtering: Before anything even reaches a quant's desk, it's passed through layers of filters. These aren't just keyword-based, but often employ sophisticated algorithms that understand context, ensuring only the most relevant pieces make the cut.
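A minimal sketch of what such two-stage filtering can look like: a cheap keyword pass followed by a crude "context" check based on term co-occurrence. The keyword and context vocabularies here are hypothetical, and real pipelines would use far richer models; this only illustrates the shape of the idea.

```python
# Illustrative two-stage filter: stage 1 is a cheap keyword screen,
# stage 2 scores how finance-relevant the surrounding context is.
# All term lists and the threshold are made-up for illustration.

KEYWORDS = {"momentum", "alpha", "volatility", "factor"}
CONTEXT_TERMS = {"portfolio", "returns", "signal", "backtest"}

def keyword_pass(text: str) -> bool:
    """Stage 1: does the piece mention any watched keyword at all?"""
    return bool(set(text.lower().split()) & KEYWORDS)

def context_score(text: str) -> float:
    """Stage 2: fraction of context terms present -- a crude proxy for
    whether the keyword appears in a finance-relevant discussion."""
    words = set(text.lower().split())
    return len(words & CONTEXT_TERMS) / len(CONTEXT_TERMS)

def filter_items(items: list[str], threshold: float = 0.25) -> list[str]:
    """Keep only items that pass both stages."""
    return [t for t in items
            if keyword_pass(t) and context_score(t) >= threshold]

articles = [
    "momentum in sports rankings",                   # keyword, wrong context
    "a momentum signal improves portfolio returns",  # keyword + context
    "weather patterns over the atlantic",            # no keyword
]
print(filter_items(articles))  # only the second article survives
```

The point of the split is economics: the keyword screen discards most of the stream almost for free, so the (relatively) expensive context model only ever sees a small residue.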
Tooling & Infrastructure: The digital age has blessed us with a suite of tools designed to curate and present information. For instance, libraries like Scrapy, BS4, and Selenium form the vanguard of data extraction. These are not run on traditional setups but on serverless infrastructures, optimizing for both speed and cost.
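In the same spirit as the BS4/Scrapy pipelines above, here is a stripped-down extraction pass using only the standard library's `html.parser`, so it runs anywhere, including inside a short-lived serverless function. The HTML snippet, the `paper` CSS class, and the URLs are invented for illustration.

```python
from html.parser import HTMLParser

class TitleLinkExtractor(HTMLParser):
    """Collect (title, href) pairs from <a class="paper"> anchors only."""

    def __init__(self):
        super().__init__()
        self.results = []
        self._href = None   # href of the anchor we are currently inside
        self._buf = []      # text fragments of that anchor

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and a.get("class") == "paper":
            self._href = a.get("href")
            self._buf = []

    def handle_data(self, data):
        if self._href is not None:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.results.append(("".join(self._buf).strip(), self._href))
            self._href = None

# Hypothetical listing page -- in practice this would be fetched over HTTP.
html = """
<div>
  <a class="paper" href="/p/123">Deep Learning for Optimal Lags</a>
  <a class="nav" href="/about">About</a>
  <a class="paper" href="/p/456">Disposition Effect and Momentum</a>
</div>
"""

parser = TitleLinkExtractor()
parser.feed(html)
print(parser.results)
```

Heavier tools earn their keep when pages are JavaScript-rendered (Selenium) or when crawling at scale (Scrapy); for a static listing page, something this small is often enough.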
Hidden Treasures: Not everything requires the heavy machinery of web scraping. Often, a hidden API or even an RSS feed can provide a direct line to the insights. For the discerning quant, this is akin to stumbling upon a gold mine, ensuring real-time updates without the overhead of web crawlers.
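When a site does expose an RSS feed, a few lines of stdlib XML parsing replace an entire crawler. The feed below is a hypothetical sample (real code would fetch it over HTTP); the titles and URLs are made up.

```python
import xml.etree.ElementTree as ET

# Hypothetical RSS 2.0 feed, inlined so the example is self-contained.
RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Quant Research Feed</title>
    <item>
      <title>Harnessing the Disposition Effect</title>
      <link>https://example.com/disposition</link>
      <pubDate>Mon, 02 Oct 2023 09:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Optimal Lags with Deep Learning</title>
      <link>https://example.com/lags</link>
      <pubDate>Tue, 03 Oct 2023 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

# Each <item> carries a title and link -- no HTML parsing required.
root = ET.fromstring(RSS)
entries = [
    (item.findtext("title"), item.findtext("link"))
    for item in root.iter("item")
]
for title, link in entries:
    print(f"{title} -> {link}")
```

This is why a discovered feed or hidden JSON API beats scraping: the data arrives already structured, and polling it is far cheaper than re-crawling pages.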
In essence, the world of quantitative finance has evolved. It's no longer just about devising the most sophisticated model or algorithm but ensuring that the pipeline of information feeding into these models is both relevant and efficient. In a world where milliseconds can mean millions, can we really afford to be anything less than optimal?