Mathias Gehrig (@mathiasgehrig)'s Twitter Profile
Mathias Gehrig

@mathiasgehrig

Skydio - Autonomy ML Engineer. PhD @UZH_en and @ETH_en in Zurich in CV and Robotics.

ID: 1130107535741132800

Link: https://magehrig.github.io · Joined: 19-05-2019 13:46:47

192 Tweets

261 Followers

282 Following

JFPuget 🇺🇦🇨🇦🇬🇱 (@jfpuget)'s Twitter Profile Photo

Every breakthrough in AI was in the US? Wasn't SGD a breakthrough? (from Léon Bottou in France) Weren't CNNs a breakthrough? (from Yann LeCun in France) Wasn't stable diffusion a breakthrough? (From Germany) Wasn't ViT a breakthrough? (from Switzerland, at least partly) Wasn't

Nicholas Drummond (@nicholadrummond)'s Twitter Profile Photo

This is the best summary of the current geopolitical situation I have seen. Sir Alex Younger was head of MI6 between 2014 and 2020. Really worth watching.

Xiaolong Wang (@xiaolonw)'s Twitter Profile Photo

Test-Time Training (TTT) is now on Video! And not just a 5-second video. We can generate a full 1-min video! TTT module is an RNN module that provides an explicit and efficient memory mechanism. It models the hidden state of an RNN with a machine learning model, which is updated
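The core idea in the tweet above — a hidden state that is itself a small learner, updated by gradient descent as the sequence streams in — can be illustrated with a toy sketch. This is a minimal, assumed interpretation, not the actual TTT implementation: the hidden state is a linear map `W`, "writing to memory" is one gradient step on a self-supervised reconstruction loss, and "reading" is applying `W` to the current input.

```python
import numpy as np

def ttt_step(W, x, lr=0.1):
    """One toy Test-Time Training step.

    The hidden state W is a linear model. For each token x, W is
    updated by one gradient-descent step on a self-supervised
    reconstruction loss ||W x - x||^2 (the memory "write"), then the
    updated W produces the output (the memory "read").
    """
    pred = W @ x
    grad = np.outer(pred - x, x)   # dL/dW for the squared loss
    W = W - lr * grad              # inner-loop update = memory write
    return W, W @ x                # updated state, output read

def ttt_sequence(xs, d):
    """Scan a sequence of d-dimensional tokens through the TTT state."""
    W = np.zeros((d, d))           # hidden state initialised to zero
    outputs = []
    for x in xs:
        W, y = ttt_step(W, x)
        outputs.append(y)
    return W, outputs
```

Because the state update is a learning step rather than a fixed recurrence, reconstruction error on repeated inputs keeps shrinking — the "memory" genuinely trains at test time, which is what distinguishes this from a conventional RNN cell.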

Nando de Freitas (@nandodf)'s Twitter Profile Photo

RL is not all you need, nor attention nor Bayesianism nor free energy minimisation, nor an age of first person experience. Such statements are propaganda. You need thousands of people working hard on data pipelines, scaling infrastructure, HPC, apps with feedback to drive

xjdr (@_xjdr)'s Twitter Profile Photo

you should legally be required to disclose what quantization level you are serving your current model at like it was a nutrition label. you should also be banned from dynamically adjusting quantization based on demand without notification. (you know who you are ...)

Yi Ma (@yimatweets)'s Twitter Profile Photo

I am starting to believe that there is some subtle difference between compression (common to all intelligence) and abstraction (unique to human intelligence). They are definitely related, but different in a fundamental way. This shall be our next major quest for AI.

Mathias Gehrig (@mathiasgehrig)'s Twitter Profile Photo

I think this is why the LLM people get confused by Sutton's argument about imitation learning! The action is **not** a hidden state.

Andrej Karpathy (@karpathy)'s Twitter Profile Photo

Finally had a chance to listen through this pod with Sutton, which was interesting and amusing. As background, Sutton's "The Bitter Lesson" has become a bit of a biblical text in frontier LLM circles. Researchers routinely talk about and ask whether this or that approach or idea

Phillip Isola (@phillip_isola)'s Twitter Profile Photo

Arxiv has been such a wonderful service but I think this is a step in the wrong direction. We have other venues for peer review. To me the value of arxiv lies precisely in its lack of excessive moderation. I'd prefer it as "github for science," rather than yet another journal.

Tony Z. Zhao (@tonyzzhao)'s Twitter Profile Photo

Today, we present a step-change in robotic AI Sunday. Introducing ACT-1: A frontier robot foundation model trained on zero robot data. - Ultra long-horizon tasks - Zero-shot generalization - Advanced dexterity 🧵->