Eric J. Michaud
@ericjmichaud_
PhD student at MIT. Trying to make deep neural networks among the best understood objects in the universe. 💻🤖🧠👽🔭🚀
ID: 3013822602
http://ericjmichaud.com 09-02-2015 01:11:31
174 Tweets
1.1K Followers
876 Following
How does a model "choose" which representation to learn when many different ones are viable? In my paper with Max Tegmark's group, we formulate a "Survival of the Fittest" hypothesis and empirically examine it on toy models doing modular addition. A 🧵 (1/10):
David Deutsch In ML, we know that scaling laws hold across many orders of magnitude, but we don't know *why*. It would be amazing to have scale-invariant principles that explain them. The closest thing I've seen so far: