#machinelearning #machinelearningalgorithms #artificialintelligence #ai #deeplearning

You might not know it, but you use dimensionality reduction a lot. In fact, you are using it right now: when reading these words, you are probably looking only at the first and last letters of each word. If your brain chose to remember this post, it would first compress it to a few key concepts and ideas. Without knowing it, you deal with reality by creating a low-dimensional encoding and decoding it to get a representation. This method is known in #machinelearning as "auto-encoding."

Unlike you and me in everyday life, #datascientists use dimensionality reduction on purpose. But just like you and me, they have had no idea why it works. Why does auto-encoding work? How does it create good data representations? And how does one build an efficient auto-encoder? Auto-encoding was one of those things we knew worked without truly knowing why. At least, it was!

In our paper, for the first time in almost forty years, we develop a theory of optimal autoencoders and prove that they work: predictions based on autoencoders outperform those based on the original data. In fact, for any machine learning problem, there exists an autoencoder that improves model performance. Want to know how to build it? Read our paper, https://lnkd.in/dK3xn-NM, and use our simple, explicit, theory-based algorithm to improve your models!
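To make the encode/compress/decode idea concrete, here is a minimal sketch of a linear autoencoder in Python with NumPy. This is purely illustrative and is not the algorithm from the paper: it uses the well-known fact that the optimal linear autoencoder with a k-dimensional bottleneck is given by projecting onto the top-k principal directions (via SVD). The toy data, the choice k = 3, and the `encode`/`decode` helpers are all assumptions made for the example.

```python
import numpy as np

# Toy data: 200 samples in 10 dimensions that really live on a
# 3-dimensional subspace, plus a little noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

# Linear autoencoder with a k-dimensional bottleneck: the optimal linear
# encoder/decoder pair comes from the top-k right singular vectors of the
# centered data, so no iterative training is needed in this special case.
k = 3
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)

def encode(Z):
    # Compress: project centered data onto the k principal directions.
    return (Z - mean) @ Vt[:k].T

def decode(C):
    # Reconstruct: map the low-dimensional codes back to the original space.
    return C @ Vt[:k] + mean

codes = encode(X)                # low-dimensional representation, shape (200, 3)
X_hat = decode(codes)            # reconstruction in the original 10-dim space
reconstruction_error = np.mean((X - X_hat) ** 2)
print(codes.shape)               # (200, 3)
print(reconstruction_error)      # small, since the data is nearly 3-dimensional
```

Nonlinear autoencoders replace these two linear maps with neural networks trained to minimize the same kind of reconstruction error; the compress-then-reconstruct structure is identical.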