
Marcos Lopez de Prado

Likes: 219
Date: 2022/09/11
ML Score: 3
Job: Abu Dhabi Investment Authority (ADIA) | Global Head - Quantitative Research & Development
Content:
From the Twitterverse: Recent results by DeepMind show that OpenAI's 2020 scaling results for massive LLMs were wrong. OpenAI argued that there was a sweet spot in parameter count for each dataset. It turns out the real story is that whoever gets the largest high-quality text dataset wins.

Why did OpenAI make this error? In the original 2020 paper, OpenAI used a fixed learning-rate schedule for all models. DeepMind found that the LR schedule should scale with the dataset size. (OpenAI should have used weightwatcher.)

Read more about this here: https://lnkd.in/ghnHsQmc
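A minimal numeric sketch of the disagreement (not from the post): the question is how fast the compute-optimal model size should grow with the training budget. Assuming the standard C ≈ 6·N·D FLOP approximation, the roughly 20-tokens-per-parameter heuristic from the DeepMind (Chinchilla) paper, and the published exponents (OpenAI/Kaplan et al. 2020: N_opt ∝ C^0.73; DeepMind/Hoffmann et al. 2022: N_opt ∝ C^0.5), the Python below illustrates how differently the two fits split a fixed budget between parameters and data. Function names and example budgets are illustrative, not from either paper.

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Compute-optimal parameters N and tokens D, assuming C = 6*N*D and D = k*N."""
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

def optimal_size_ratio(c_small: float, c_large: float, exponent: float) -> float:
    """Growth in optimal model size when compute grows, given N_opt proportional to C^exponent."""
    return (c_large / c_small) ** exponent

if __name__ == "__main__":
    c1, c2 = 1e21, 1e23  # two hypothetical training budgets in FLOPs
    n, d = chinchilla_optimal(c2)
    print(f"Chinchilla-optimal at {c2:.0e} FLOPs: ~{n:.2e} params, ~{d:.2e} tokens")
    # Kaplan et al. 2020:   N_opt ~ C^0.73 -> grow the model much faster than the data
    # Hoffmann et al. 2022: N_opt ~ C^0.50 -> grow model and data in equal proportion
    print("Model-size growth for 100x more compute:")
    print(f"  Kaplan exponent 0.73:     x{optimal_size_ratio(c1, c2, 0.73):.0f}")
    print(f"  Chinchilla exponent 0.50: x{optimal_size_ratio(c1, c2, 0.50):.0f}")

Under the DeepMind fit, most of the extra compute goes into more training tokens rather than a bigger model, which is why the largest high-quality text dataset becomes the deciding factor.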
Comments: 12