Julien Colin (@juliencolin_) 's Twitter Profile
Julien Colin

@juliencolin_

PhD student in Interpretability @ELLISAlicante / @tserre Lab at Brown University.
Keen interest in Deep Learning & Computational Cognitive Science.

ID: 1380861795003424769

Joined: 10-04-2021 12:35:01

68 Tweets

161 Followers

228 Following

Sunnie S. Y. Kim (@sunniesuhyoung) 's Twitter Profile Photo

What a pleasant surprise! Thanks for covering our work 🙈 "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction. Sunnie S. Y. Kim ☀️, Elizabeth Anne Watkins, PhD, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández. Learn more 👉 sunniesuhyoung.github.io/XAI_Trust/

Drew Linsley (@drewlinsley) 's Twitter Profile Photo


A long-held belief in Comp-Neuro is that as DNNs improve at object recognition, they will also become better models of object-selective inferotemporal (IT) cortex neurons.

In our new paper, we find that this is no longer the case. Read on to learn more!

serre-lab.github.io/neural_harmoni…
LoreGoetschalckx (@l_goetschalckx) 's Twitter Profile Photo

✨NEW PREPRINT✨ Visual cognition in the brain is dynamic. Time to consider time in models! We present a novel human-like reaction time metric computed from stable recurrent vision models and study temporal human-model alignment. Read on…🤓 arxiv.org/abs/2306.11582 1/n
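The preprint derives its metric from evidence accumulated across a recurrent network's timesteps; as a loose, hypothetical proxy (an entropy-threshold rule of my own, not the paper's definition), a "model reaction time" could look like this:

```python
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def reaction_time(logits_per_step, threshold=0.5):
    """First recurrent timestep whose softmax entropy drops below threshold.
    A toy proxy only; the paper computes its metric differently, from a
    stable recurrent vision model's accumulated evidence."""
    z = logits_per_step - logits_per_step.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    h = entropy(p)                       # uncertainty at each timestep
    below = np.nonzero(h < threshold)[0]
    return int(below[0]) if below.size else len(h)

# Logits that sharpen over time yield a small RT; flat logits never commit.
sharpening = np.array([[0.1 * t, 0.0] for t in range(20)])
assert reaction_time(sharpening) < 20
```

The threshold and the entropy readout are both assumptions for illustration.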

Thomas Fel (@napoolar) 's Twitter Profile Photo


📅 Last month, we presented EVA at #CVPR2023:
the first attribution method using Formal Methods!

We leverage recent advances in formal methods to propagate bounds through a neural network, exploring a potentially infinite number of perturbations.

🧵
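Bound propagation can be illustrated with plain interval arithmetic through one affine + ReLU layer (a generic sketch, not EVA's exact formulation): every input in a whole perturbation ball is covered by a single forward pass on bounds.

```python
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate elementwise bounds [l, u] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def interval_relu(l, u):
    # ReLU is monotone, so it maps bounds to bounds directly.
    return np.maximum(l, 0), np.maximum(u, 0)

# Toy layer: all inputs within [x - eps, x + eps] are handled at once.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x, eps = rng.normal(size=4), 0.1
l, u = interval_affine(x - eps, x + eps, W, b)
l, u = interval_relu(l, u)
assert np.all(l <= u)
```

Real verifiers chain this layer by layer and use tighter relaxations than plain intervals; this is only the core idea.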
Victor Boutin (@victorboutin) 's Twitter Profile Photo

I am at #ICML2023 to present my latest work. Is human performance better than that of diffusion models on the one-shot drawing task? Attend my oral presentation today to find out! More details below: x.com/VictorBoutin/s…

ELLIS (@ellisforeurope) 's Twitter Profile Photo


Mark your calendars: the application portal of our #ELLISPhD Program will open in October! Reach many top European #ML labs with a single application & benefit from exchanges in our network! #JoinELLISforEurope

More ➡️ ellis.eu/news/ellis-phd…

#PhD #AI #PhDProgram Elise AI
Thomas Fel (@napoolar) 's Twitter Profile Photo


👋 Explain big vision models with CRAFT 🪄🐰

A method that automatically extracts the most important concepts from your favorite pre-trained vision model.

e.g., we automatically discover the most important concepts a ResNet50 uses for rabbits: eyes, ears, fur.

🧶
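The full pipeline (including its importance ranking of concepts) is more involved; a hedged sketch of the core factorization step, non-negative matrix factorization of patch activations, on synthetic data:

```python
import numpy as np
from sklearn.decomposition import NMF

# Stand-in for ReLU activations of N image patches at some layer
# (channels are non-negative, which is what lets NMF apply).
rng = np.random.default_rng(0)
ground_truth = rng.random((5, 64))      # 5 hypothetical "concepts"
coeffs = rng.random((200, 5))
activations = coeffs @ ground_truth     # 200 patches x 64 channels

# Factor activations A ~= U @ W with U, W >= 0: rows of W act as
# concept directions, U gives per-patch concept presence.
nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
U = nmf.fit_transform(activations)
W = nmf.components_
assert U.shape == (200, 5) and W.shape == (5, 64)
```

In practice the number of concepts and the layer are choices, and ranking concepts by their effect on the output is a separate step omitted here.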
LoreGoetschalckx (@l_goetschalckx) 's Twitter Profile Photo

Enjoyed writing this Spotlight article for Trends in Cognitive Sciences w/ Claudia Damiano on @WilmaBrainbridge and Trent’s PNAS paper. We share our excitement for a collaboration between artists and AI to better understand memorability. authors.elsevier.com/a/1hk5N4sIRvPN…

LoreGoetschalckx (@l_goetschalckx) 's Twitter Profile Photo

Exciting news! Our paper was accepted as a **Spotlight** at #NeurIPS2023. We compute a human-like reaction time metric from stable recurrent vision models. Check out the 🧵 below! arxiv.org/abs/2306.11582 w/ Lakshmi Govindarajan, Alekh Karkada Ashok, Aarit Ahuja, David Sheinberg, and Thomas Serre

Thomas Fel (@napoolar) 's Twitter Profile Photo

👋👨‍🍳🍵 After a year of cooking up a secret project, I'm thrilled to officially reveal: The LENS Project. By combining modern tools of Explainable AI, how much of a ResNet50 can we explain? 🧶

Lucas Beyer (bl16) (@giffmana) 's Twitter Profile Photo


David Picard Caroline Petitjean Yann LeCun Yes. Check out MaCo, which specifically makes these visualizations work for ViTs: arxiv.org/abs/2306.06805

Also their accompanying website is pretty cool: serre-lab.github.io/Lens/

Thomas Fel Thomas Serre
Thomas Fel (@napoolar) 's Twitter Profile Photo

Woohoo! 🎉 It's not every day your work gets a shoutout from the one and only Lucas Beyer (bl16)! I must mention that our visualizations were particularly spectacular on FlexiViT (and other Big Vision models as well); they have a secret recipe... github.com/google-researc…

Thomas Fel (@napoolar) 's Twitter Profile Photo


🎭 Recent work shows that models’ inductive biases for 'simpler' features may lead to shortcut learning.

What do 'simple' vs 'complex' features look like? What roles do they play in generalization?

Our new paper explores these questions.
arxiv.org/pdf/2407.06076

#Neurips2024
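The shortcut phenomenon itself is easy to reproduce in a toy setting (entirely synthetic, not the paper's setup): offer a linear classifier both a noisy but truly predictive "complex" feature and a clean "simple" cue that is spurious at test time, and it latches onto the cue.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)

# "Complex" feature: noisy but genuinely predictive in train AND test.
core = y + rng.normal(0, 1.0, n)
# "Simple" shortcut: clean and perfectly correlated with y in training...
shortcut = y.astype(float)
X_train = np.column_stack([core, shortcut])

# ...but decorrelated at test time (e.g., a spurious background cue).
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([y_test + rng.normal(0, 1.0, n),
                          rng.integers(0, 2, n).astype(float)])

clf = LogisticRegression(max_iter=1000).fit(X_train, y)
train_acc = clf.score(X_train, y)      # near-perfect: the shortcut is exploited
test_acc = clf.score(X_test, y_test)   # drops sharply once the cue breaks
```

The feature names and data-generating process here are invented for illustration only.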
Victor Boutin (@victorboutin) 's Twitter Profile Photo

🚀🎉 Thrilled to share our #Neurips2024 paper: “Latent Representation Matters: Human-like Sketches in One-shot Drawing Tasks”! We pit humans vs. regularized Latent Diffusion Models in the one-shot drawing task. Who’s the best sketch master? 🖌️🤖 (1/5) arxiv.org/abs/2406.06079

Thomas Fel (@napoolar) 's Twitter Profile Photo

I’ll be at NeurIPS Conference this year, sharing some work on explainability and representations. If you’re attending and want to chat, feel free to reach out! 👋

Harry Thasarathan (@hthasarathan) 's Twitter Profile Photo


🌌🛰️ Wanna know which features are universal vs unique in your models, and how to find them? Excited to share our preprint: "Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment"!

arxiv.org/abs/2502.03714

(1/9)
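The preprint's contribution is aligning concepts across models; its single-model building block, a sparse autoencoder, can be sketched as an untrained skeleton (tied decoder weights and the penalty coefficient are my simplifications, not necessarily the paper's choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_dict, n = 32, 128, 512

# Overcomplete dictionary + ReLU encoder + L1 penalty: the standard
# sparse-autoencoder recipe for concept extraction.
W_enc = rng.normal(0, 0.1, (d_model, d_dict))
b_enc = np.zeros(d_dict)
W_dec = W_enc.T.copy()            # tied weights, a common simplification

def sae_forward(x):
    z = np.maximum(x @ W_enc + b_enc, 0)   # sparse non-negative codes
    return z, z @ W_dec                    # codes, reconstruction

x = rng.normal(size=(n, d_model))
z, x_hat = sae_forward(x)
mse = ((x - x_hat) ** 2).mean()
l1 = np.abs(z).mean()
loss = mse + 1e-3 * l1            # the training objective (sketch only)
```

Training the dictionary, and aligning dictionaries learned on different models, are the parts this sketch deliberately leaves out.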
Thomas Fel (@napoolar) 's Twitter Profile Photo


Phenomenology → principle → method.

From observed phenomena in representations (conditional orthogonality) we derive a natural instantiation.

And it turns out to be an old friend: Matching Pursuit!

📄 arxiv.org/abs/2506.03093

See you in San Diego, NeurIPS Conference 🎉
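Matching Pursuit is the classic greedy sparse-coding algorithm; a minimal sketch of plain MP over a random unit-norm dictionary (not the paper's specific variant):

```python
import numpy as np

def matching_pursuit(x, D, n_steps):
    """Greedy sparse coding: at each step pick the dictionary atom most
    correlated with the residual and subtract its contribution.
    Columns of D are assumed unit-norm."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_steps):
        scores = D.T @ residual
        k = int(np.argmax(np.abs(scores)))
        coeffs[k] += scores[k]
        residual = residual - scores[k] * D[:, k]
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 50))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x = 2.0 * D[:, 3] - 1.5 * D[:, 10]           # sparse ground truth
coeffs, residual = matching_pursuit(x, D, n_steps=10)
assert np.linalg.norm(residual) < np.linalg.norm(x)
```

Each step strictly shrinks the residual, so the representation is built up greedily, one atom at a time.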
Thomas Fel (@napoolar) 's Twitter Profile Photo

🕳️🐇 Into the Rabbit Hull – Part I (Part II tomorrow). An interpretability deep dive into DINOv2, one of vision’s most important foundation models. Today is Part I; buckle up, we're exploring some of its most charming features.

Remi Cadene (@remicadene) 's Twitter Profile Photo


I am starting a venture on top of LeRobot!

We’re at a pivotal time. AI is moving beyond the digital to the physical world. Embodied AI will change our surroundings in ways we can barely imagine. This technology holds the potential to empower everyone. It must not be controlled by
Thomas Fel (@napoolar) 's Twitter Profile Photo


🕳️🐇 Into the Rabbit Hull – Part II

Continuing our interpretation of DINOv2, the second part of our study concerns the geometry of concepts and the synthesis of our findings toward a new representational phenomenology: the Minkowski Representation Hypothesis.