Stephane Deny (@stphtphsn1)'s Twitter Profile
Stephane Deny

@stphtphsn1

Neuroscience, ML, also other things.

ID: 1367912887

Joined: 20-04-2013 20:08:24

4.4K Tweets

3.3K Followers

5.5K Following

John J. Vastola (@johnjvastola)'s Twitter Profile Photo

Diffusion models generalize *really* well: if you give them a million pictures of cats, they'll learn to generate reasonable-looking cats no one's ever seen before. But the weird thing is that no one knows why they work! In a theory paper accepted to #ICLR2025, I dug into this.
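For context, here is the textbook continuous-time formulation such theory papers typically start from (a reference point only; the tweet doesn't specify the paper's notation). Data is noised by a forward SDE, and samples are generated by the corresponding reverse SDE, whose only learned ingredient is the score:

```latex
\text{forward: } \mathrm{d}x = f(x,t)\,\mathrm{d}t + g(t)\,\mathrm{d}w,
\qquad
\text{reverse: } \mathrm{d}x = \left[f(x,t) - g(t)^{2}\,\nabla_x \log p_t(x)\right]\mathrm{d}t + g(t)\,\mathrm{d}\bar{w}.
```

The generalization puzzle is that the score \nabla_x \log p_t is estimated from finitely many training images, yet integrating the reverse SDE produces images outside the training set.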

Martin Hebart (@martin_hebart)'s Twitter Profile Photo

Great work by Changde Du from Huiguang He's lab at the Chinese Academy of Sciences. How similar are visual and conceptual representations in (multimodal) large language models to those found in humans? It turns out quite similar! nature.com/articles/s4225
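Comparisons like this are typically quantified with representational similarity analysis (RSA): compute a representational dissimilarity matrix (RDM) for each system over the same stimuli, then correlate the two. A minimal sketch with random stand-in data (shapes and names are illustrative, not the paper's pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Stand-ins for real measurements: model embeddings for 50 stimuli,
# and a human dissimilarity matrix for the same 50 stimuli
# (e.g. derived from similarity judgments).
model_embeddings = rng.normal(size=(50, 512))
human_rdm = rng.uniform(size=(50, 50))
human_rdm = (human_rdm + human_rdm.T) / 2  # symmetrize
np.fill_diagonal(human_rdm, 0)

# Model RDM as pairwise correlation distance between embeddings;
# pdist returns the upper triangle in condensed (row-major) order.
model_rdm_vec = pdist(model_embeddings, metric="correlation")

# Compare upper triangles with Spearman's rho, the usual RSA statistic
# (rank-based, so it tolerates monotonic rescalings of either RDM).
iu = np.triu_indices(50, k=1)
rho, p = spearmanr(model_rdm_vec, human_rdm[iu])
print(f"model-human RSA: rho={rho:.3f} (p={p:.3g})")
```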


Stephane Deny (@stphtphsn1)'s Twitter Profile Photo

Two Assistant Professor positions are open at Aalto, one in "Neuroscience and Biomedical Engineering" and one in "Living State Systems", in a department I can warmly recommend. Feel free to reach out to me with questions. aalto.fi/en/open-positi


Kwang Moo Yi (@kwangmoo_yi)'s Twitter Profile Photo

Preprint of today: Beyer et al., "Highly Compressed Tokenizer Can Generate Without Training" -- github.com/lukaslaobeyer/

The latent space of tokenizers already provides a good enough abstraction to work with -- you don't have to use a diffusion model on top to inpaint, etc!
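A rough sketch of what inpainting directly in token space could look like, assuming a generic tokenizer object with `encode`/`decode` methods (the interface and all names here are hypothetical, not the repo's actual API):

```python
import torch

def inpaint_in_token_space(tokenizer, image, donor, mask):
    """Splice donor latents into the masked region, then decode.

    `tokenizer.encode` maps an image batch to a latent grid and
    `tokenizer.decode` maps it back; `mask` is a boolean tensor
    broadcastable to the latent grid. All names are hypothetical
    stand-ins for a generic compressed tokenizer.
    """
    with torch.no_grad():
        z_image = tokenizer.encode(image)
        z_donor = tokenizer.encode(donor)
        # The claim: a highly compressed latent space is abstract enough
        # that decoding the spliced grid already yields a coherent image,
        # with no diffusion model needed on top.
        z_mixed = torch.where(mask, z_donor, z_image)
        return tokenizer.decode(z_mixed)
```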

Li Zhaoping (@li_zhaoping)'s Twitter Profile Photo

As early as primary visual cortex (V1), neural activities are associated with target boosting and distractor suppression in guided visual search, reports this interesting paper (Duecker et al., 2025): shorturl.at/jcVN5

Stephane Deny (@stphtphsn1)'s Twitter Profile Photo

"Hidden in plain sight: VLMs overlook their visual representations" by Stephanie Fu, Tyler Bonnen, Devin Guillory, Trevor Darrell arxiv.org/abs/2506.08008

"Hidden in plain sight: VLMs overlook their visual representations" 
by Stephanie Fu, Tyler Bonnen, Devin Guillory, Trevor Darrell
arxiv.org/abs/2506.08008
Billy (@billykyle)'s Twitter Profile Photo

I use FSD for over 2 hours of travel every day around the Philadelphia area and have never experienced an issue as bad as this one. Any ideas why it chose the completely wrong lane? Has anyone ever experienced anything like this?

Mark Ibrahim (@marksibrahim)'s Twitter Profile Photo

A good language model should say “I don’t know” by reasoning about the limits of its knowledge. Our new work AbstentionBench carefully measures this overlooked skill in leading models in an open-codebase others can build on!

We find frontier reasoning degrades models’ ability to
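To make the measured skill concrete, here is a toy abstention check in the same spirit; this is not the AbstentionBench protocol, and `ask_model` plus the marker list are placeholders:

```python
# Toy abstention metric: fraction of unanswerable questions the model
# declines to answer. A simple keyword heuristic stands in for a real
# judge; `ask_model` is any callable mapping a prompt to a reply.
ABSTAIN_MARKERS = (
    "i don't know",
    "i do not know",
    "cannot be determined",
    "not enough information",
)

def abstained(answer: str) -> bool:
    answer = answer.lower()
    return any(marker in answer for marker in ABSTAIN_MARKERS)

def abstention_rate(ask_model, unanswerable_questions) -> float:
    hits = sum(abstained(ask_model(q)) for q in unanswerable_questions)
    return hits / len(unanswerable_questions)
```

A well-calibrated model should score high on unanswerable questions while still answering the answerable ones; measuring only one side would reward blanket refusal.
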
Kempner Institute at Harvard University (@kempnerinst)'s Twitter Profile Photo

The Kempner Institute's Frontiers in NeuroAI symposium showcased researchers from around the world who are pushing forward the new field of #NeuroAI. Learn about the event & find links to recordings here: bit.ly/3G5Urhv #NeuroAI2025 #AI Sham Kakade Bernardo Sabatini

Lenny van Dyck (@levandyck)'s Twitter Profile Photo

How is high-level visual cortex organized? In a new preprint with Martin Hebart & Katharina Dobs, we show that category-selective areas encode a rich, multidimensional feature space 🌈 biorxiv.org/content/10.110 🧵 1/n

Bao Pham (@baophamhq)'s Twitter Profile Photo

Diffusion models create novel images, but they can also memorize samples from the training set. How do they blend stored features to synthesize novel patterns? Our new work shows that diffusion models behave like Dense Associative Memory: in the low training data regime (number

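For reference, one standard form of the Dense Associative Memory energy (the modern continuous Hopfield formulation; whether the paper uses exactly this variant is our assumption), with stored patterns \xi_i and inverse temperature \beta:

```latex
E(x) = -\frac{1}{\beta}\,\log \sum_{i=1}^{N} \exp\!\left(\beta\,\xi_i^{\top} x\right) + \frac{1}{2}\,\lVert x \rVert^{2}
```

With few stored patterns, each \xi_i sits in its own energy minimum (pure memorization); as patterns accumulate, nearby minima merge and retrieval lands on blends of stored features, matching the memorize-then-blend behavior described above.
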
David Pfau (@pfau)'s Twitter Profile Photo

Proud to announce that my stealth team has made a breakthrough that will unlock the next leap: artificial HYPER intelligence, which will make ASI look like a weak and stupid baby. The $1tn seed round for our company, Safe Hyper Intelligent Technologies, is already oversubscribed.

Bo Zhao (@bozhao__)'s Twitter Profile Photo

When and why are neural network solutions connected by low-loss paths?

In our #ICML2025 paper, we show that mode connectivity often arises from symmetries—transformations of parameters that leave the network’s output unchanged.

Paper: arxiv.org/abs/2505.23681
(1/6)
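The symmetry claim is easy to verify directly: permuting the hidden units of an MLP, and un-permuting the next layer's weights to compensate, yields a different parameter vector with identical outputs and hence identical loss. A minimal check (illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer MLP: y = W2 @ relu(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)
relu = lambda z: np.maximum(z, 0)

def forward(W1, b1, W2, b2, x):
    return W2 @ relu(W1 @ x + b1) + b2

# Permute the hidden units: reorder rows of W1 and entries of b1,
# and reorder the columns of W2 to undo the permutation downstream.
perm = rng.permutation(16)
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=8)
assert np.allclose(forward(W1, b1, W2, b2, x),
                   forward(W1p, b1p, W2p, b2, x))
print("distinct parameters, identical outputs: a network symmetry")
```
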
Luca Ambrogioni (@lucaamb)'s Twitter Profile Photo

1/2) Happy to share the preprint of our workshop paper on using information theory to find class separation in diffusion models.

It generalizes previous models of speciation and symmetry breaking to generic class definitions.

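A guess at the flavor of quantity involved (our assumption; the preprint's exact definition may differ): track the mutual information between the class label c and the noisy state x_t along the diffusion trajectory,

```latex
I(c; x_t) = H(c) - H(c \mid x_t),
```

which falls from H(c) near the data to zero deep in the noise; the time at which it collapses marks where classes stop being distinguishable, and letting "class" mean any partition of the data, rather than only predefined labels, is what makes the definition generic.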