
Julien Colin
@juliencolin_
PhD student in Interpretability @ELLISAlicante / @tserre Lab at Brown University.
Keen interest in Deep Learning & Computational Cognitive Science.
ID: 1380861795003424769
10-04-2021 12:35:01
68 Tweets
161 Followers
228 Following

What a pleasant surprise! Thanks for covering our work: "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction, Sunnie S. Y. Kim, Elizabeth Anne Watkins, PhD, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández. Learn more: sunniesuhyoung.github.io/XAI_Trust/

✨NEW PREPRINT✨ Visual cognition in the brain is dynamic. Time to consider time in models! We present a novel human-like reaction time metric computed from stable recurrent vision models and study temporal human-model alignment. Read on… arxiv.org/abs/2306.11582 1/n
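
For intuition, here is a toy sketch of what a reaction-time proxy from a recurrent classifier could look like: run the network for T steps and report the first step at which its prediction has settled. This is an illustrative stand-in only, not the paper's actual metric (see the preprint for the real formulation); the thresholded-entropy rule and all names below are assumptions.

```python
import torch
import torch.nn.functional as F

def rt_proxy(logits_over_time, threshold=0.5):
    """Toy reaction-time proxy: first recurrent step at which the
    predictive entropy of the readout drops below `threshold`.

    logits_over_time: (T, num_classes) tensor, one readout per step.
    NOTE: illustrative only; the paper derives its metric differently.
    """
    probs = F.softmax(logits_over_time, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)  # (T,)
    settled = (entropy < threshold).nonzero()
    return settled[0].item() if len(settled) else logits_over_time.shape[0]

# Fake trajectory: evidence for class 3 grows over 20 recurrent steps,
# so the entropy curve falls and the proxy fires partway through.
T, C = 20, 10
logits = torch.zeros(T, C)
logits[:, 3] = torch.linspace(0.0, 8.0, T)
print(rt_proxy(logits))  # step index at which the model "commits"
```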



Explain big vision models with CRAFT: a method that automatically extracts the most important concepts for your favorite pre-trained vision model. E.g., we automatically discover the most important concepts on a ResNet50 for rabbits: eyes, ears, fur. 🧶
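
To illustrate the general recipe behind concept-extraction methods of this kind, here is a minimal sketch: collect non-negative activations for images of one class and factorize them into a small concept dictionary with NMF. This is a simplified stand-in, not CRAFT itself (the real method works on image crops and scores concept importance with Sobol indices); the placeholder inputs and the mean-activation ranking below are assumptions.

```python
import torch
import torchvision.models as models
from sklearn.decomposition import NMF

# Placeholder inputs standing in for preprocessed images of one class.
images = torch.rand(64, 3, 224, 224)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

# Capture post-ReLU layer4 activations: non-negative, which is what
# makes an NMF-style factorization into concepts applicable.
feats = {}
model.layer4.register_forward_hook(
    lambda mod, inp, out: feats.update(
        a=torch.flatten(torch.nn.functional.adaptive_avg_pool2d(out, 1), 1)
    )
)
with torch.no_grad():
    model(images)

A = feats["a"].numpy()              # (n_images, 2048), non-negative
nmf = NMF(n_components=10, max_iter=500)
U = nmf.fit_transform(A)            # per-image concept coefficients
W = nmf.components_                 # concept directions in feature space

# Crude importance proxy (CRAFT itself uses Sobol indices instead):
print(U.mean(axis=0).argsort()[::-1])  # concepts ranked by mean activation
```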


Enjoyed writing this Spotlight article for Trends in Cognitive Sciences w/ Claudia Damiano on @WilmaBrainbridge and Trent's PNAS paper. We share our excitement for a collaboration between artists and AI to better understand memorability. authors.elsevier.com/a/1hk5N4sIRvPN…

Exciting news! Our paper was accepted as a **Spotlight** at #NeurIPS2023. We compute a human-like reaction time metric from stable recurrent vision models. Check out the 🧵 below! arxiv.org/abs/2306.11582 w/ Lakshmi Govindarajan, Alekh Karkada Ashok, Aarit Ahuja, David Sheinberg, and Thomas Serre


David Picard Caroline Petitjean Yann LeCun Yes. Check out MaCo, which specifically makes these visualizations work for ViTs: arxiv.org/abs/2306.06805. Also, their accompanying website is pretty cool: serre-lab.github.io/Lens/ Thomas Fel Thomas Serre


Woohoo! It's not every day your work gets a shoutout from the one and only Lucas Beyer (bl16)! I must mention that our visualizations were particularly spectacular on FlexiViT (and other Big Vision models as well); they have a secret recipe... github.com/google-researc…


I'll be at NeurIPS Conference this year, sharing some work on explainability and representations. If you're attending and want to chat, feel free to reach out!



Phenomenology → principle → method. From observed phenomena in representations (conditional orthogonality) we derive a natural instantiation. And it turns out to be an old friend: Matching Pursuit! arxiv.org/abs/2506.03093 See you in San Diego, NeurIPS Conference
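
For readers who have not met this old friend: Matching Pursuit is the classic greedy sparse-coding algorithm that repeatedly picks the dictionary atom most correlated with the current residual and peels off its contribution. A minimal NumPy sketch (the dictionary, signal, and step count are all made up for illustration; see the paper for how it is instantiated there):

```python
import numpy as np

def matching_pursuit(x, D, k):
    """Greedy sparse coding of x over dictionary D (unit-norm columns).

    At each step: pick the atom most correlated with the residual,
    accumulate its coefficient, and subtract its contribution.
    """
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        scores = D.T @ residual            # correlation with each atom
        j = np.argmax(np.abs(scores))      # best-matching atom
        coeffs[j] += scores[j]             # accumulate its coefficient
        residual -= scores[j] * D[:, j]    # peel it off the residual
    return coeffs, residual

# Toy usage: random unit-norm dictionary, random signal, 5 greedy steps.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)
x = rng.normal(size=64)
coeffs, r = matching_pursuit(x, D, k=5)
print(np.count_nonzero(coeffs), np.linalg.norm(r))  # sparsity, residual norm
```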



