Andrea de Varda (@devarda_a)'s Twitter Profile
Andrea de Varda

@devarda_a

Postdoc at MIT BCS, interested in language(s) in humans and LMs

ID: 1506315259337973767

Joined: 22-03-2022 17:02:32

101 Tweets

340 Followers

466 Following

Byung-Doh Oh (@byungdoh)'s Twitter Profile Photo

Have reading time corpora been leaked into LM pre-training corpora? Should you be cautious about using pre-trained LM surprisal as a consequence? We identify the longest overlapping token sequences and conclude the leakage is mostly not severe. In Findings of #ACL2025 #ACL2025NLP

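The quantity at stake here is just the negative log probability a pre-trained LM assigns to each token given its left context. A minimal sketch of how such surprisal values are typically extracted (the model choice, gpt2, and the conversion to bits are illustrative assumptions, not the paper's setup):

```python
# Minimal sketch: per-token surprisal from a pre-trained causal LM.
# "gpt2" and the conversion to bits are illustrative choices, not the paper's setup.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str):
    """Return (token, surprisal in bits) for every token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    return [
        (tokenizer.decode(ids[0, t]),
         -log_probs[0, t - 1, ids[0, t]].item() / math.log(2))
        for t in range(1, ids.size(1))
    ]

print(token_surprisals("The horse raced past the barn fell."))
```

The leakage check itself then amounts to searching for the longest token sequences that the reading-time stimuli share verbatim with the pre-training corpora.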
Andrew Lampinen (@andrewlampinen)'s Twitter Profile Photo

Looking forward to attending CogSci this week! I'll be giving a talk (see below) at the Reasoning Across Minds and Machines workshop on Wednesday at 10:25 AM, and will be around most of the week — feel free to reach out if you'd like to meet up!

Thomas Hikaru Clark (@thomashikaru)'s Twitter Profile Photo

1/7 If you're at CogSci 2025, I'd love to see you at my talk on Friday 1pm PDT in Nob Hill A! I'll be talking about our work towards an implemented computational model of noisy-channel comprehension (with jacob lou hoo vigly ⍼, Ted Gibson, Language Lab MIT, and Roger Levy).

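For readers outside the area: noisy-channel comprehension treats understanding as Bayesian inference over what the speaker intended, P(intended | perceived) ∝ P(intended) · P(perceived | intended). A toy illustration of that inference (the candidate sentences, prior values, and per-edit noise rate below are made up for illustration; this is not the implemented model from the talk):

```python
# Toy sketch of the generic noisy-channel idea: infer the intended sentence s
# from the perceived string p via P(s | p) ∝ P(s) * P(p | s).
# Candidates, priors, and the noise rate are made-up illustrative numbers.

def edit_distance(a: list[str], b: list[str]) -> int:
    """Word-level Levenshtein distance."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def posterior(perceived: str, prior: dict[str, float], noise_rate: float = 0.05) -> dict[str, float]:
    """P(intended | perceived) over a fixed candidate set; each edit costs a factor noise_rate."""
    p_words = perceived.split()
    scores = {s: pr * noise_rate ** edit_distance(s.split(), p_words)
              for s, pr in prior.items()}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

prior = {
    "the mother gave the candle to the daughter": 1e-4,  # plausible event
    "the mother gave the daughter to the candle": 1e-7,  # implausible event
}
# With a strong enough prior, the comprehender "corrects" the implausible literal parse.
print(posterior("the mother gave the daughter to the candle", prior))
```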
Moshe Poliak (@moshepoliak)'s Twitter Profile Photo

(1) 💡NEW PUBLICATION💡 Word and construction probabilities explain the acceptability of certain long-distance dependency structures.

Work with Curtis Chen and Ted Gibson, Language Lab MIT.

Link to paper: tedlab.mit.edu/tedlab_website…

In memory of Curtis Chen.

Tom McCoy (@rtommccoy)'s Twitter Profile Photo

🤖🧠 NEW PAPER ON COGSCI & AI 🧠🤖

Recent neural networks capture properties long thought to require symbols: compositionality, productivity, rapid learning

So what role should symbols play in theories of the mind? For our answer...read on!

Paper: arxiv.org/abs/2508.05776

1/n
Isabel Papadimitriou (@isabelpapad)'s Twitter Profile Photo

Are there conceptual directions in VLMs that transcend modality? Check out our COLM spotlight🔦 paper! We analyze how linear concepts interact with multimodality in VLM embeddings using SAEs

with Chloe H. Su, @napoolar, Sham Kakade and Stephanie Gil
arxiv.org/abs/2504.11695
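For context, the SAE part is the by-now standard recipe: learn an overcomplete dictionary of sparsely activating features over the embeddings, then treat decoder directions as candidate concepts. A minimal generic sketch (dimensions, the L1 weight, and the training loop are illustrative assumptions, not the paper's setup):

```python
# Minimal sketch of a sparse autoencoder over embedding vectors.
# Dimensions, L1 weight, and training details are illustrative, not the paper's.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 768, d_hidden: int = 8192):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        z = torch.relu(self.encoder(x))  # sparse, non-negative feature activations
        return self.decoder(z), z

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_weight = 1e-3

def train_step(embeddings: torch.Tensor) -> float:
    """One optimisation step on a (batch, d_model) batch of embeddings."""
    recon, z = sae(embeddings)
    loss = torch.mean((recon - embeddings) ** 2) + l1_weight * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# e.g. a batch of pooled VLM embeddings (random stand-in here)
print(train_step(torch.randn(64, 768)))
```

Each decoder column is then a candidate linear concept direction, and one can ask whether it activates for text inputs, image inputs, or both.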
Yevgeni Berzak (@whylikethis_)'s Twitter Profile Photo

Check out our* new preprint on decoding open-ended information seeking goals from eye movements!

*Proud to say that my main contribution to this work is the banger model names: DalEye Llama and DalEye LLaVa!

arxiv.org/abs/2505.02872
Anna Ivanova (@neuranna)'s Twitter Profile Photo

As our lab started to build encoding 🧠 models, we were trying to figure out best practices in the field. So Taha Binhuraib 🦉 built a library to easily compare design choices & model features across datasets! We hope it will be useful to the community & plan to keep expanding it! 1/
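The basic pipeline such a library has to standardise is: extract model features for each stimulus, map them to brain responses (typically with cross-validated ridge regression), and score held-out predictions per voxel or electrode. A generic sketch of that pipeline with random stand-in data (this is not the library's API):

```python
# Generic encoding-model sketch: ridge-regress brain responses onto model
# features and score held-out predictions. Random stand-in data; not the library's API.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 300))   # stimuli x model-feature dimensions
Y = rng.standard_normal((500, 1000))  # stimuli x voxels / electrodes

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)
pred = model.predict(X_te)

# per-voxel Pearson correlation between predicted and observed responses
scores = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
print(f"median held-out voxel correlation: {np.median(scores):.3f}")
```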

Kanishka Misra 🌊 (@kanishkamisra)'s Twitter Profile Photo

The compling group at UT Austin (sites.utexas.edu/compling/) is looking for PhD students! 

Come join me, Kyle Mahowald, and Jessy Li as we tackle interesting research questions at the intersection of ling, cogsci, and ai!

Some topics I am particularly interested in:
Thomas Hikaru Clark (@thomashikaru)'s Twitter Profile Photo

What makes some sentences more memorable than others? Our new paper gathers memorability norms for 2500 sentences using a recognition paradigm, building on past work in visual and word memorability.
Greta Tuckute Bryan Medina Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦