Eric J. Michaud (@ericjmichaud_)'s Twitter Profile
Eric J. Michaud

@ericjmichaud_

PhD student at MIT. Trying to make deep neural networks among the best understood objects in the universe. πŸ’»πŸ€–πŸ§ πŸ‘½πŸ”­πŸš€

ID: 3013822602

Link: http://ericjmichaud.com | Joined: 09-02-2015 01:11:31

174 Tweets

1.1K Followers

876 Following

Carl Guo (@carlguo866)'s Twitter Profile Photo

How does a model "choose" which representation to learn when many different ones are viable? In my paper with the Max Tegmark group, we formulate a "Survival of the Fittest" hypothesis and empirically examine it on toy models doing modular addition. A🧡(1/10):

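For readers unfamiliar with the setup referenced in the thread, below is a minimal sketch of what a toy modular-addition model typically looks like in this line of work: a small embedding plus MLP trained to predict (a + b) mod p. The architecture, modulus, and hyperparameters here are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a toy modular-addition setup (details are assumptions,
# not taken from the paper): an embedding + MLP trained on (a + b) mod p.
import torch
import torch.nn as nn

p = 59  # modulus for the toy task (hypothetical choice)

# Full dataset: every pair (a, b) with label (a + b) mod p.
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))  # (p*p, 2)
labels = (pairs[:, 0] + pairs[:, 1]) % p                        # (p*p,)

class ToyModAdd(nn.Module):
    def __init__(self, p, d_embed=128, d_hidden=256):
        super().__init__()
        self.embed = nn.Embedding(p, d_embed)
        self.mlp = nn.Sequential(
            nn.Linear(2 * d_embed, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, p),
        )

    def forward(self, ab):
        # ab: (batch, 2) integer tokens; concatenate the two embeddings.
        e = self.embed(ab)                        # (batch, 2, d_embed)
        return self.mlp(e.flatten(start_dim=1))   # (batch, p) logits

model = ToyModAdd(p)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

# Full-batch training; the learned embedding is where competing
# representations of the task can be inspected.
for step in range(5000):
    opt.zero_grad()
    loss = loss_fn(model(pairs), labels)
    loss.backward()
    opt.step()
```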
Richard Ngo (@richardmcngo)'s Twitter Profile Photo

Replying to David Deutsch: In ML, we know that scaling laws hold across many orders of magnitude, but we don’t know *why*. It’d be amazing to have scale-invariant principles which explain them. The closest thing I’ve seen so far:
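For context, the "scaling laws" referred to here are the empirical power-law fits of test loss against model size, dataset size, or compute; the standard form from Kaplan et al. (2020) is sketched below as background, not as a formula asserted in the tweet.

```latex
% Standard power-law form of neural scaling laws (Kaplan et al., 2020),
% shown for context only. Test loss falls as a power law in model size N
% and dataset size D, holding empirically over many orders of magnitude.
\[
  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
  \qquad
  L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
\]
```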