Michael Eli Sander (@m_e_sander) 's Twitter Profile
Michael Eli Sander

@m_e_sander

Research Scientist at Google DeepMind

ID: 1362051327732441088

Link: https://michaelsdr.github.io/ · Joined: 17-02-2021 14:48:44

213 Tweets

2.2K Followers

191 Following

Tom Sander @NeurIPS (@rednastom) 's Twitter Profile Photo

You didn’t believe Differentially Private training could work for foundation models? We achieved the same performance as a non-private MAE trained on the same dataset, but with rigorous DP guarantees. Code is released: github.com/facebookresear…. Presenting tomorrow at ICML, 11:30 AM poster, #2313

Mathieu Blondel (@mblondel_ml) 's Twitter Profile Photo

We uploaded a v2 of our book draft "The Elements of Differentiable Programming" with many improvements (~70 pages of new content) and a new chapter on differentiable data structures (lists and dictionaries). arxiv.org/abs/2403.14606

Geert-Jan Huizing (@gjhuizing) 's Twitter Profile Photo

🎉 New preprint! biorxiv.org/content/10.110… STORIES learns a differentiation potential from spatial transcriptomics profiled at several time points using Fused Gromov-Wasserstein, an extension of Optimal Transport. Gabriel Peyré @LauCan88

Jérémie Kalfon (@jkobject) 's Twitter Profile Photo

🚨🚨 AI in Bio release 🧬 Very happy to share my work on a Large Cell Model for Gene Network Inference. For now it is just a preprint, with more to come. We are asking the question: “What can 50M cells tell us about gene networks?” ❓ Behind it, other questions arose, such as:

Gabriel Peyré (@gabrielpeyre) 's Twitter Profile Photo

"Transformers are Universal In-context Learners": in this paper, we show that deep transformers with a fixed embedding dimension are universal approximators for an arbitrarily large number of tokens. arxiv.org/abs/2408.01367

Jules Samaran (@julessamaran) 's Twitter Profile Photo

After a very constructive back and forth with the editors and reviewers of Nature Communications, scConfluence has now been published, @LauCan88 Gabriel Peyré! I'll present it this afternoon at the poster session of ECCB2026 in Geneva, Switzerland (P296). Published version: nature.com/articles/s4146…

Pierre Marion (@pierremari0n) 's Twitter Profile Photo

🚨New paper alert🚨: arxiv.org/abs/2410.01537 How does a Transformer retrieve information that is sparsely concentrated in a few tokens? E.g., the label can change by flipping a single word. To explain this, we introduce a new statistical task and show that attention solves it ⬇️

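As a toy illustration of the retrieval phenomenon described above (this is not the paper's construction, just a hypothetical NumPy sketch): when a query vector aligns with the direction carried by a single informative token, softmax attention concentrates almost all of its weight on that token.

```python
# Toy sketch: single-head attention picks out the one informative token.
import numpy as np

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

d = 4
tokens = np.zeros((6, d))
tokens[3, 0] = 5.0           # one "informative" token; the rest carry no signal

query = np.zeros(d)
query[0] = 5.0               # query aligned with the signal direction

weights = softmax(tokens @ query)   # attention scores over the 6 tokens
print(weights.argmax())             # → 3: attention concentrates on the informative token
```

With these (arbitrary) magnitudes, the winning token receives essentially all the attention mass, which is the sparse-retrieval behavior the tweet alludes to.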
Sibylle Marcotte (@sibyllemarcotte) 's Twitter Profile Photo

🏆Didn't get the Physics Nobel Prize this year, but I'm really excited to share that I've been named one of the #FWIS2024 Fondation L'Oréal-UNESCO French Young Talents, alongside 34 amazing young researchers! This award recognizes my research on deep learning theory #WomenInScience 👩‍💻

Fabian Pedregosa (@fpedregosa) 's Twitter Profile Photo

Six years at Google today! 🎉 From 🇨🇦 to 🇨🇭, optimizing everything in sight. Grateful for the incredible journey and amazing colleagues!

Tom Sander @NeurIPS (@rednastom) 's Twitter Profile Photo

☢️ Some news about radioactivity ☢️ - We got a Spotlight at NeurIPS! 🥳 We will be in Vancouver with Pierre Fernandez to present! - We have just released our code for radioactivity detection at github.com/facebookresear….

Tom Sander @NeurIPS (@rednastom) 's Twitter Profile Photo

🔒Image watermarking is promising for digital content protection. But images often undergo many modifications—spliced or altered by AI. Today at AI at Meta, we released Watermark Anything that answers not only "where does the image come from," but "what part comes from where." 🧵

Sibylle Marcotte (@sibyllemarcotte) 's Twitter Profile Photo

Thank you for the opportunity to talk about my research and my experiences! Thanks to my PhD advisors Gabriel Peyré and Rémi Gribonval for your supervision 😊

Sarah Perrin (@sarah_perrin_) 's Twitter Profile Photo

♟️Mastering Board Games by External and Internal Planning with Language Models♟️ I'm happy to finally share storage.googleapis.com/deepmind-media… TL;DR: In chess, our planning agents effectively reach grandmaster-level strength with a search budget comparable to that of human players!

Tom Sander @NeurIPS (@rednastom) 's Twitter Profile Photo

It's NeurIPS week :) Friday: presenting our spotlight work, Watermarking Makes LLMs Radioactive ☢️ (arxiv.org/abs/2402.14904). Sunday: speaking at the image watermarking workshop about our latest Watermark Anything work (arxiv.org/abs/2411.07231). DM me if you’d like to chat :)

Mathieu Blondel (@mblondel_ml) 's Twitter Profile Photo

Really proud of these two companion papers by our team at GDM: 1) Joint Learning of Energy-based Models and their Partition Function arxiv.org/abs/2501.18528 2) Loss Functions and Operators Generated by f-Divergences arxiv.org/abs/2501.18537 A thread.

Mathieu Blondel (@mblondel_ml) 's Twitter Profile Photo

Distillation is becoming a major paradigm for training LLMs but its success and failure modes remain quite mysterious. Our paper introduces the phenomenon of "teacher hacking" and studies how to mitigate it. arxiv.org/abs/2502.02671 More details in the thread below.