Mor Geva
@megamor2
ID: 850356925535531009
https://mega002.github.io/ 07-04-2017 14:37:44
450 Tweets
1.1K Followers
509 Following
Hi ho! New work: arxiv.org/pdf/2503.14481 With amazing collabs Jacob Eisenstein Reza Aghajani Adam Fisch dheeru dua Fantine Huot ICLR 25 Mirella Lapata Vicky Zayats Some things are easier to learn in a social setting. We show agents can learn to faithfully express their beliefs (along... 1/3
Our Actionable Interpretability workshop has been accepted to #ICML2025! >> Follow Actionable Interpretability Workshop ICML2025 Tal Haklay Anja Reusch Marius Mosbach Sarah Wiegreffe Ian Tenney (@[email protected]) Mor Geva Paper submission deadline: May 9th!
Call for Papers is Out! The First Workshop on Actionable Interpretability will be held at ICML 2025 in Vancouver! Submission Deadline: May 9 Follow us >> Actionable Interpretability Workshop ICML2025 Topics of interest include:
Thrilled to announce our ICML25 paper: "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas"! We dive into the core reasons behind spatial reasoning difficulties for Vision-Language Models from an attention mechanism view. Paper:
Removing knowledge from LLMs is HARD. Yoav Gur Arieh proposes a powerful approach that disentangles the MLP parameters to edit them at high resolution and remove target concepts from the model. Check it out!