Alexis Marchal

Likes
116
Date
2022/10/27
Image Link
https://media-exp1.licdn.com/dms/image/C4D22AQGMuG1fIkpEUQ/feedshare-shrink_2048_1536/0/1666728069827?e=1669852800&v=beta&t=DIxPQorOlLdzffgVlnt5WyUUHQOgMhz6KpKqwaMFtoE
ML Score
3
Job
SFI Swiss Finance Institute | PHD Candidate
Content
#machinelearning #machinelearningalgorithms #artificialintelligence #ai #deeplearning

You might not know it, but you are using dimensionality reduction: a lot! In fact, you are using it right now. When reading these words, you are probably looking only at the first and last letters of each word. If your brain chose to remember this post, it would first compress it to a few key concepts and ideas. Without knowing it, you deal with reality by creating a low-dimensional encoding and decoding it to get a representation. This method is known in #machinelearning as "auto-encoding."

Unlike you and me in everyday life, #datascientists use dimensionality reduction on purpose. But just like you and me, they have no idea why it works. Why does auto-encoding work? How does it create good data representations? And how does one build an efficient auto-encoder? This is just one of those things that we know works without truly knowing why. Or at least, it was!

In our paper, for the first time in almost forty years, we develop a theory of optimal autoencoders and prove that they work: predictions based on autoencoders work better than those based on the original data. In fact, for any machine learning problem, there exists an autoencoder that improves model performance. Want to know how to build it? Read our paper, https://lnkd.in/dK3xn-NM , and use our simple, explicit, theory-based algorithm to improve your models!
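The post's core idea can be illustrated with the simplest autoencoder of all. This is not the paper's algorithm, just a hypothetical sketch: a linear autoencoder is equivalent to PCA, so encoding into k dimensions with the top-k singular vectors and decoding back shows how compression trades away reconstruction error.

```python
import numpy as np

# Illustrative sketch (not from the paper): a linear autoencoder
# is equivalent to PCA. We encode data into k dimensions via the
# top-k right singular vectors, then decode back.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # 200 samples, 10 features
X = X - X.mean(axis=0)           # center the data first

# SVD: rows of Vt are the principal directions
U, S, Vt = np.linalg.svd(X, full_matrices=False)

def encode(X, k):
    """Project onto the top-k principal components (the 'code')."""
    return X @ Vt[:k].T

def decode(Z, k):
    """Map the k-dimensional code back to the original space."""
    return Z @ Vt[:k]

# Reconstruction error shrinks as the code dimension k grows
errs = [np.linalg.norm(X - decode(encode(X, k), k)) for k in (2, 5, 10)]
assert errs[0] > errs[1] > errs[2]                 # monotone improvement
assert np.allclose(decode(encode(X, 10), 10), X)   # full rank: lossless
```

The interesting question the post raises is why predicting from the low-dimensional code `encode(X, k)` can beat predicting from `X` itself, which is what the paper's theory addresses.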
Property
Integromat
Link
https://www.linkedin.com/feed/update/urn:li:activity:6990892929787523072
Comments
0
Type
this
Profile
https://www.linkedin.com/in/alexismarchal/